Zangs3011 committed on
Commit
6465873
1 Parent(s): e39fee7

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +284 -160
README.md CHANGED
@@ -1,199 +1,323 @@
  ---
- library_name: transformers
- tags: []
  ---
-
- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
-
  ## Model Details
-
- ### Model Description
-
- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations
-
- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]
-
- ## Training Details
-
- ### Training Data
-
- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
- [More Information Needed]
-
- ### Training Procedure
-
- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
-
- #### Preprocessing [optional]
-
- [More Information Needed]
-
-
- #### Training Hyperparameters
-
- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]
-
- ## Evaluation
-
- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data
-
- <!-- This should link to a Dataset Card if possible. -->
-
- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]
-
- #### Metrics
-
- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]
-
- ### Compute Infrastructure
-
- [More Information Needed]
-
- #### Hardware
-
- [More Information Needed]
-
- #### Software
-
- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]
  ---
+ extra_gated_heading: You need to share contact information with Meta to access this model
+ extra_gated_prompt: >-
+ ### LLAMA 2 COMMUNITY LICENSE AGREEMENT
+
+ "Agreement" means the terms and conditions for use, reproduction, distribution
+ and modification of the Llama Materials set forth herein.
+
+ "Documentation" means the specifications, manuals and documentation
+ accompanying Llama 2 distributed by Meta at
+ https://ai.meta.com/resources/models-and-libraries/llama-downloads/.
+
+ "Licensee" or "you" means you, or your employer or any other person or entity
+ (if you are entering into this Agreement on such person or entity's behalf),
+ of the age required under applicable laws, rules or regulations to provide
+ legal consent and that has legal authority to bind your employer or such other
+ person or entity if you are entering in this Agreement on their behalf.
+
+ "Llama 2" means the foundational large language models and software and
+ algorithms, including machine-learning model code, trained model weights,
+ inference-enabling code, training-enabling code, fine-tuning enabling code and
+ other elements of the foregoing distributed by Meta at
+ ai.meta.com/resources/models-and-libraries/llama-downloads/.
+
+ "Llama Materials" means, collectively, Meta's proprietary Llama 2 and
+ documentation (and any portion thereof) made available under this Agreement.
+
+ "Meta" or "we" means Meta Platforms Ireland Limited (if you are located in or,
+ if you are an entity, your principal place of business is in the EEA or
+ Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA
+ or Switzerland).
+
+
+ By clicking "I Accept" below or by using or distributing any portion or
+ element of the Llama Materials, you agree to be bound by this Agreement.
+
+ 1. License Rights and Redistribution.
+
+ a. Grant of Rights. You are granted a non-exclusive, worldwide, non-
+ transferable and royalty-free limited license under Meta's intellectual
+ property or other rights owned by Meta embodied in the Llama Materials to
+ use, reproduce, distribute, copy, create derivative works of, and make
+ modifications to the Llama Materials.
+
+ b. Redistribution and Use.
+
+ i. If you distribute or make the Llama Materials, or any derivative works
+ thereof, available to a third party, you shall provide a copy of this
+ Agreement to such third party.
+
+ ii. If you receive Llama Materials, or any derivative works thereof, from a
+ Licensee as part of an integrated end user product, then Section 2 of this
+ Agreement will not apply to you.
+
+ iii. You must retain in all copies of the Llama Materials that you distribute
+ the following attribution notice within a "Notice" text file distributed as a
+ part of such copies: "Llama 2 is licensed under the LLAMA 2 Community
+ License, Copyright (c) Meta Platforms, Inc. All Rights Reserved."
+
+ iv. Your use of the Llama Materials must comply with applicable laws and
+ regulations (including trade compliance laws and regulations) and adhere to
+ the Acceptable Use Policy for the Llama Materials (available at
+ https://ai.meta.com/llama/use-policy), which is hereby incorporated by
+ reference into this Agreement.
+
+ v. You will not use the Llama Materials or any output or results of the Llama
+ Materials to improve any other large language model (excluding Llama 2 or
+ derivative works thereof).
+
+
+ 2. Additional Commercial Terms. If, on the Llama 2 version release date, the
+ monthly active users of the products or services made available by or for
+ Licensee, or Licensee's affiliates, is greater than 700 million monthly
+ active users in the preceding calendar month, you must request a license from
+ Meta, which Meta may grant to you in its sole discretion, and you are not
+ authorized to exercise any of the rights under this Agreement unless or until
+ Meta otherwise expressly grants you such rights.
+
+ 3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA
+ MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN "AS IS"
+ BASIS, WITHOUT WARRANTIES OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING,
+ WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT,
+ MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY
+ RESPONSIBLE FOR DETERMINING THE APPROPRIATENESS OF USING OR REDISTRIBUTING
+ THE LLAMA MATERIALS AND ASSUME ANY RISKS ASSOCIATED WITH YOUR USE OF THE
+ LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.
+
+ 4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE
+ UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE,
+ PRODUCTS LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST
+ PROFITS OR ANY INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR
+ PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE
+ POSSIBILITY OF ANY OF THE FOREGOING.
+
+
+ 5. Intellectual Property.
+
+ a. No trademark licenses are granted under this Agreement, and in connection
+ with the Llama Materials, neither Meta nor Licensee may use any name or mark
+ owned by or associated with the other or any of its affiliates, except as
+ required for reasonable and customary use in describing and redistributing
+ the Llama Materials.
+
+ b. Subject to Meta's ownership of Llama Materials and derivatives made by or
+ for Meta, with respect to any derivative works and modifications of the Llama
+ Materials that are made by you, as between you and Meta, you are and will be
+ the owner of such derivative works and modifications.
+
+ c. If you institute litigation or other proceedings against Meta or any
+ entity (including a cross-claim or counterclaim in a lawsuit) alleging that
+ the Llama Materials or Llama 2 outputs or results, or any portion of any of
+ the foregoing, constitutes infringement of intellectual property or other
+ rights owned or licensable by you, then any licenses granted to you under
+ this Agreement shall terminate as of the date such litigation or claim is
+ filed or instituted. You will indemnify and hold harmless Meta from and
+ against any claim by any third party arising out of or related to your use or
+ distribution of the Llama Materials.
+
+ 6. Term and Termination. The term of this Agreement will commence upon your
+ acceptance of this Agreement or access to the Llama Materials and will
+ continue in full force and effect until terminated in accordance with the
+ terms and conditions herein. Meta may terminate this Agreement if you are in
+ breach of any term or condition of this Agreement. Upon termination of this
+ Agreement, you shall delete and cease use of the Llama Materials. Sections 3,
+ 4 and 7 shall survive the termination of this Agreement.
+
+ 7. Governing Law and Jurisdiction. This Agreement will be governed and
+ construed under the laws of the State of California without regard to choice
+ of law principles, and the UN Convention on Contracts for the International
+ Sale of Goods does not apply to this Agreement. The courts of California
+ shall have exclusive jurisdiction of any dispute arising out of this
+ Agreement.
+
+ ### Llama 2 Acceptable Use Policy
+
+ Meta is committed to promoting safe and fair use of its tools and features,
+ including Llama 2. If you access or use Llama 2, you agree to this Acceptable
+ Use Policy (“Policy”). The most recent copy of this policy can be found at
+ [ai.meta.com/llama/use-policy](http://ai.meta.com/llama/use-policy).
+
+ #### Prohibited Uses
+
+ We want everyone to use Llama 2 safely and responsibly. You agree you will not
+ use, or allow others to use, Llama 2 to:
+
+ 1. Violate the law or others’ rights, including to:
+ 1. Engage in, promote, generate, contribute to, encourage, plan, incite, or further illegal or unlawful activity or content, such as:
+ 1. Violence or terrorism
+ 2. Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
+ 3. Human trafficking, exploitation, and sexual violence
+ 4. The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials.
+ 5. Sexual solicitation
+ 6. Any other criminal activity
+ 2. Engage in, promote, incite, or facilitate the harassment, abuse, threatening, or bullying of individuals or groups of individuals
+ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful or harmful conduct in the provision of employment, employment benefits, credit, housing, other economic benefits, or other essential goods and services
+ 4. Engage in the unauthorized or unlicensed practice of any profession including, but not limited to, financial, legal, medical/health, or related professional practices
+ 5. Collect, process, disclose, generate, or infer health, demographic, or other sensitive personal or private information about individuals without rights and consents required by applicable laws
+ 6. Engage in or facilitate any action or generate any content that infringes, misappropriates, or otherwise violates any third-party rights, including the outputs or results of any products or services using the Llama 2 Materials
+ 7. Create, generate, or facilitate the creation of malicious code, malware, computer viruses or do anything else that could disable, overburden, interfere with or impair the proper working, integrity, operation or appearance of a website or computer system
+ 2. Engage in, promote, incite, facilitate, or assist in the planning or
+ development of activities that present a risk of death or bodily harm to
+ individuals, including use of Llama 2 related to the following:
+ 1. Military, warfare, nuclear industries or applications, espionage, use for materials or activities that are subject to the International Traffic in Arms Regulations (ITAR) maintained by the United States Department of State
+ 2. Guns and illegal weapons (including weapon development)
+ 3. Illegal drugs and regulated/controlled substances
+ 4. Operation of critical infrastructure, transportation technologies, or heavy machinery
+ 5. Self-harm or harm to others, including suicide, cutting, and eating disorders
+ 6. Any content intended to incite or promote violence, abuse, or any infliction of bodily harm to an individual
+ 3. Intentionally deceive or mislead others, including use of Llama 2 related
+ to the following:
+ 1. Generating, promoting, or furthering fraud or the creation or promotion of disinformation
+ 2. Generating, promoting, or furthering defamatory content, including the creation of defamatory statements, images, or other content
+ 3. Generating, promoting, or further distributing spam
+ 4. Impersonating another individual without consent, authorization, or legal right
+ 5. Representing that the use of Llama 2 or outputs are human-generated
+ 6. Generating or facilitating false online engagement, including fake reviews and other means of fake online engagement
+ 4. Fail to appropriately disclose to end users any known dangers of your AI system
+
+ Please report any violation of this Policy, software “bug,” or other problems
+ that could lead to a violation of this Policy through one of the following
+ means:
+ * Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
+ * Reporting risky content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
+ * Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
+ * Reporting violations of the Acceptable Use Policy or unlicensed uses of Llama: [LlamaUseReport@meta.com](mailto:LlamaUseReport@meta.com)
+ extra_gated_fields:
+ First Name: text
+ Last Name: text
+ Date of birth: date_picker
+ Country: country
+ Affiliation: text
+ geo: ip_location
+ By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected, stored, processed, and shared in accordance with the Meta Privacy Policy: checkbox
+ extra_gated_description: >-
+ The information you provide will be collected, stored, processed and shared in
+ accordance with the [Meta Privacy
+ Policy](https://www.facebook.com/privacy/policy/).
+ extra_gated_button_content: Submit
+ language:
+ - en
+ pipeline_tag: text-generation
+ tags:
+ - facebook
+ - meta
+ - pytorch
+ - llama
+ - llama-2
+ license: llama2
  ---
+ # **Llama 2**
+ Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 70B pretrained model, converted to the Hugging Face Transformers format. Links to the other models can be found in the index at the bottom.
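The converted checkpoint works with the standard 🤗 Transformers API. The snippet below is a minimal sketch rather than official reference code: it assumes access to the gated `meta-llama/Llama-2-70b-hf` repo has been approved, that `accelerate` is installed (for `device_map="auto"`), and that enough GPU memory is available for the weights (roughly 140 GB in fp16).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-70b-hf"  # this card's model; adjust if using a fork

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # fp16 halves memory vs. fp32; quantize if still memory-bound
    device_map="auto",          # shards layers across available GPUs (requires accelerate)
)

prompt = "The theory of relativity states that"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```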
 
  ## Model Details
+ *Note: Use of this model is governed by the Meta license. In order to download the model weights and tokenizer, please visit the [website](https://ai.meta.com/resources/models-and-libraries/llama-downloads/) and accept our License before requesting access here.*
+
+ Meta developed and publicly released the Llama 2 family of large language models (LLMs), a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama-2-Chat, are optimized for dialogue use cases. Llama-2-Chat models outperform open-source chat models on most benchmarks we tested, and in our human evaluations for helpfulness and safety, they are on par with some popular closed-source models like ChatGPT and PaLM.
+
+ **Model Developers** Meta
+
+ **Variations** Llama 2 comes in a range of parameter sizes (7B, 13B, and 70B) as well as pretrained and fine-tuned variations.
+
+ **Input** Models input text only.
+
+ **Output** Models generate text only.
+
+ **Model Architecture** Llama 2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align to human preferences for helpfulness and safety.
+
+ ||Training Data|Params|Context Length|GQA|Tokens|LR|
+ |---|---|---|---|---|---|---|
+ |Llama 2|*A new mix of publicly available online data*|7B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>|
+ |Llama 2|*A new mix of publicly available online data*|13B|4k|&#10007;|2.0T|3.0 x 10<sup>-4</sup>|
+ |Llama 2|*A new mix of publicly available online data*|70B|4k|&#10004;|2.0T|1.5 x 10<sup>-4</sup>|
+
+ *Llama 2 family of models.* Token counts refer to pretraining data only. All models are trained with a global batch size of 4M tokens. The largest model, 70B, uses Grouped-Query Attention (GQA) for improved inference scalability; a brief sketch of the mechanism follows.
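For readers unfamiliar with GQA: a group of query heads shares a single key/value head, which shrinks the key/value cache at inference time. The toy sketch below is illustrative only; the head counts and shapes are invented for the example, and it is not the actual Llama implementation.

```python
import torch
import torch.nn.functional as F

def grouped_query_attention(q, k, v):
    """q: (batch, n_q_heads, seq, dim); k, v: (batch, n_kv_heads, seq, dim)."""
    group_size = q.shape[1] // k.shape[1]        # query heads per shared KV head
    k = k.repeat_interleave(group_size, dim=1)   # broadcast each KV head to its group
    v = v.repeat_interleave(group_size, dim=1)
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ v

# 8 query heads sharing 2 KV heads: the KV cache is 4x smaller than full MHA.
q = torch.randn(1, 8, 16, 64)
k = torch.randn(1, 2, 16, 64)
v = torch.randn(1, 2, 16, 64)
print(grouped_query_attention(q, k, v).shape)  # torch.Size([1, 8, 16, 64])
```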
 
+ **Model Dates** Llama 2 was trained between January 2023 and July 2023.
+
+ **Status** This is a static model trained on an offline dataset. Future versions of the tuned models will be released as we improve model safety with community feedback.
+
+ **License** A custom commercial license is available at [https://ai.meta.com/resources/models-and-libraries/llama-downloads/](https://ai.meta.com/resources/models-and-libraries/llama-downloads/).
+
+ **Research Paper** ["Llama 2: Open Foundation and Fine-Tuned Chat Models"](https://arxiv.org/abs/2307.09288)
+
+ ## Intended Use
+ **Intended Use Cases** Llama 2 is intended for commercial and research use in English. Tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.
+
+ To get the expected features and performance for the chat versions, a specific formatting needs to be followed, including the `INST` and `<<SYS>>` tags, `BOS` and `EOS` tokens, and the whitespace and line breaks in between (we recommend calling `strip()` on inputs to avoid double spaces). See our reference code on GitHub for details: [`chat_completion`](https://github.com/facebookresearch/llama/blob/main/llama/generation.py#L212). A rough sketch of the layout follows.
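As an illustration of that single-turn layout (the linked `chat_completion` reference code is authoritative, and this applies to the chat variants rather than the pretrained model hosted here):

```python
# Illustrative single-turn prompt layout for the Llama-2-Chat variants.
# The BOS/EOS tokens (<s>, </s>) are added by the tokenizer as special
# tokens, so only the [INST]/<<SYS>> markup appears in the text itself.
def build_prompt(system_prompt: str, user_message: str) -> str:
    return (
        f"[INST] <<SYS>>\n{system_prompt.strip()}\n<</SYS>>\n\n"
        f"{user_message.strip()} [/INST]"
    )

print(build_prompt(
    "You are a concise, helpful assistant.",
    "Summarize grouped-query attention in one sentence.",
))
```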
 
+ **Out-of-scope Uses** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in languages other than English. Use in any other way that is prohibited by the Acceptable Use Policy and Licensing Agreement for Llama 2.
+
+ ## Hardware and Software
+ **Training Factors** We used custom training libraries, Meta's Research Super Cluster, and production clusters for pretraining. Fine-tuning, annotation, and evaluation were also performed on third-party cloud compute.
+
+ **Carbon Footprint** Pretraining utilized a cumulative 3.3M GPU hours of computation on hardware of type A100-80GB (TDP of 350-400W). Estimated total emissions were 539 tCO2eq, 100% of which were offset by Meta’s sustainability program.
+
+ ||Time (GPU hours)|Power Consumption (W)|Carbon Emitted (tCO<sub>2</sub>eq)|
+ |---|---|---|---|
+ |Llama 2 7B|184320|400|31.22|
+ |Llama 2 13B|368640|400|62.44|
+ |Llama 2 70B|1720320|400|291.42|
+ |Total|3311616||539.00|
+
+ **CO<sub>2</sub> emissions during pretraining.** Time: total GPU time required for training each model. Power Consumption: peak power capacity per GPU device for the GPUs used, adjusted for power usage efficiency. 100% of the emissions are directly offset by Meta's sustainability program, and because we are openly releasing these models, the pretraining costs do not need to be incurred by others.
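These figures are consistent with a simple hours times power times grid-intensity estimate. The carbon-intensity factor in the sketch below is not stated in this card; it is back-solved from the reported numbers and should be read as an assumption.

```python
# Reproduce the emissions table with hours * power * carbon intensity.
# KG_CO2_PER_KWH is inferred from the reported figures (an assumption),
# close to a US-average grid intensity; it is not stated in the card.
GPU_HOURS = {"7B": 184_320, "13B": 368_640, "70B": 1_720_320}
POWER_W = 400
KG_CO2_PER_KWH = 0.4235

for size, hours in GPU_HOURS.items():
    energy_kwh = hours * POWER_W / 1000
    tco2eq = energy_kwh * KG_CO2_PER_KWH / 1000
    print(f"Llama 2 {size}: {tco2eq:.2f} tCO2eq")
    # prints ~31.22, ~62.45, ~291.42 (card reports 31.22 / 62.44 / 291.42)
```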
 
+ ## Training Data
+ **Overview** Llama 2 was pretrained on 2 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over one million new human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
+
+ **Data Freshness** The pretraining data has a cutoff of September 2022, but some tuning data is more recent, up to July 2023.
+
+ ## Evaluation Results
+
+ In this section, we report the results for the Llama 1 and Llama 2 models on standard academic benchmarks. For all the evaluations, we use our internal evaluations library.
+
+ |Model|Size|Code|Commonsense Reasoning|World Knowledge|Reading Comprehension|Math|MMLU|BBH|AGI Eval|
+ |---|---|---|---|---|---|---|---|---|---|
+ |Llama 1|7B|14.1|60.8|46.2|58.5|6.95|35.1|30.3|23.9|
+ |Llama 1|13B|18.9|66.1|52.6|62.3|10.9|46.9|37.0|33.9|
+ |Llama 1|33B|26.0|70.0|58.4|67.6|21.4|57.8|39.8|41.7|
+ |Llama 1|65B|30.7|70.7|60.5|68.6|30.8|63.4|43.5|47.6|
+ |Llama 2|7B|16.8|63.9|48.9|61.3|14.6|45.3|32.6|29.3|
+ |Llama 2|13B|24.5|66.9|55.4|65.8|28.7|54.8|39.4|39.1|
+ |Llama 2|70B|**37.5**|**71.9**|**63.6**|**69.4**|**35.2**|**68.9**|**51.2**|**54.2**|
+
+ **Overall performance on grouped academic benchmarks.** *Code:* We report the average pass@1 scores of our models on HumanEval and MBPP. *Commonsense Reasoning:* We report the average of PIQA, SIQA, HellaSwag, WinoGrande, ARC easy and challenge, OpenBookQA, and CommonsenseQA. We report 7-shot results for CommonsenseQA and 0-shot results for all other benchmarks. *World Knowledge:* We evaluate the 5-shot performance on NaturalQuestions and TriviaQA and report the average. *Reading Comprehension:* For reading comprehension, we report the 0-shot average on SQuAD, QuAC, and BoolQ. *Math:* We report the average of the GSM8K (8-shot) and MATH (4-shot) benchmarks at top 1.
+
+ |||TruthfulQA|ToxiGen|
+ |---|---|---|---|
+ |Llama 1|7B|27.42|23.00|
+ |Llama 1|13B|41.74|23.08|
+ |Llama 1|33B|44.19|22.57|
+ |Llama 1|65B|48.71|21.77|
+ |Llama 2|7B|33.29|**21.25**|
+ |Llama 2|13B|41.86|26.10|
+ |Llama 2|70B|**50.18**|24.60|
+
+ **Evaluation of pretrained LLMs on automatic safety benchmarks.** For TruthfulQA, we present the percentage of generations that are both truthful and informative (the higher the better). For ToxiGen, we present the percentage of toxic generations (the smaller the better).
+
+ |||TruthfulQA|ToxiGen|
+ |---|---|---|---|
+ |Llama-2-Chat|7B|57.04|**0.00**|
+ |Llama-2-Chat|13B|62.18|**0.00**|
+ |Llama-2-Chat|70B|**64.14**|0.01|
+
+ **Evaluation of fine-tuned LLMs on different safety datasets.** Same metric definitions as above.
+
+ ## Ethical Considerations and Limitations
+ Llama 2 is a new technology that carries risks with use. Testing conducted to date has been in English and has not covered, nor could it cover, all scenarios. For these reasons, as with all LLMs, Llama 2’s potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate, biased, or otherwise objectionable responses to user prompts. Therefore, before deploying any applications of Llama 2, developers should perform safety testing and tuning tailored to their specific applications of the model.
+
+ Please see the Responsible Use Guide available at [https://ai.meta.com/llama/responsible-use-guide/](https://ai.meta.com/llama/responsible-use-guide/).
+
+ ## Reporting Issues
+ Please report any software “bug” or other problems with the models through one of the following means:
+ - Reporting issues with the model: [github.com/facebookresearch/llama](http://github.com/facebookresearch/llama)
+ - Reporting problematic content generated by the model: [developers.facebook.com/llama_output_feedback](http://developers.facebook.com/llama_output_feedback)
+ - Reporting bugs and security concerns: [facebook.com/whitehat/info](http://facebook.com/whitehat/info)
+
+ ## Llama Model Index
+ |Model|Llama2|Llama2-hf|Llama2-chat|Llama2-chat-hf|
+ |---|---|---|---|---|
+ |7B| [Link](https://huggingface.co/meta-llama/Llama-2-7b) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)|
+ |13B| [Link](https://huggingface.co/meta-llama/Llama-2-13b) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)|
+ |70B| [Link](https://huggingface.co/meta-llama/Llama-2-70b) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-hf) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat) | [Link](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf)|