Text Generation · Transformers · PyTorch · llama · text-generation-inference · Inference Endpoints

Update `README.md`

#1
by alvarobartt (HF staff), opened
Files changed (1)
  1. README.md +9 -20
README.md CHANGED
@@ -13,13 +13,12 @@ library_name: transformers
 
 In this repo, we present a permissively licensed open-source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing a series of 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and a comparison against the original LLaMA models. The v2 model is better than the old v1 model, as it is trained on a different data mixture. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.
 
-
-
 ## Weights Release, License and Usage
 
 We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.
 
 ### Loading the Weights with Hugging Face Transformers
+
 Preview checkpoints can be loaded directly from the Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we’ve observed that** [**the auto-converted fast tokenizer sometimes gives incorrect tokenizations**](https://github.com/huggingface/transformers/issues/24233)**.** This can be done by using the `LlamaTokenizer` class directly, or by passing the `use_fast=False` option to the `AutoTokenizer` class. See the following example for usage.
 
 ```python
@@ -41,16 +40,19 @@ model = LlamaForCausalLM.from_pretrained(
 
 prompt = 'Q: What is the largest animal?\nA:'
 input_ids = tokenizer(prompt, return_tensors="pt").input_ids
+input_ids = input_ids.to(model.device)
 
-generation_output = model.generate(
-    input_ids=input_ids, max_new_tokens=32
-)
-print(tokenizer.decode(generation_output[0]))
+with torch.cuda.amp.autocast():
+    generation_output = model.generate(
+        input_ids=input_ids, max_new_tokens=32
+    )
+print(tokenizer.decode(generation_output[0]))
 ```
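For reviewers who want to try the change as a whole: the updated example assembles into the self-contained snippet below. The checkpoint path and the `torch_dtype`/`device_map` arguments are assumptions on our part, since the unchanged lines are collapsed in this diff.

```python
# Sketch of the full updated example; the checkpoint path and the
# torch_dtype/device_map settings are assumptions not shown in this diff.
# device_map='auto' requires the `accelerate` package.
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

model_path = 'openlm-research/open_llama_7b_v2'  # assumed checkpoint

# Slow (SentencePiece-based) tokenizer, per the warning above.
tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)

prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
input_ids = input_ids.to(model.device)  # move inputs to the model's device

with torch.cuda.amp.autocast():
    generation_output = model.generate(
        input_ids=input_ids, max_new_tokens=32
    )
print(tokenizer.decode(generation_output[0]))
```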
 
 For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).
 
 ### Evaluating with LM-Eval-Harness
+
 The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain correct results. This can be achieved by passing `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:
 
 ```python
@@ -62,9 +64,8 @@ tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
 ```
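As a quick sanity check of the tokenizer issue that motivates the `use_fast=False` change, one can compare the fast and slow tokenizers directly. A minimal sketch, assuming the same checkpoint as above:

```python
# Minimal check of slow vs. fast tokenization (checkpoint path assumed).
from transformers import AutoTokenizer

model_path = 'openlm-research/open_llama_7b_v2'  # assumed checkpoint

slow = AutoTokenizer.from_pretrained(model_path, use_fast=False)
fast = AutoTokenizer.from_pretrained(model_path, use_fast=True)

text = 'Q: What is the largest animal?\nA:'
if slow(text).input_ids != fast(text).input_ids:
    # The auto-converted fast tokenizer diverges on this input;
    # stick with the slow tokenizer, per the note above.
    print('fast tokenizer diverges; use the slow tokenizer')
```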
 
 ### Loading the Weights with EasyLM
-For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is no longer necessary to obtain the original LLaMA tokenizer and weights.
-
 
+For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is no longer necessary to obtain the original LLaMA tokenizer and weights.
 
 ## Dataset and Training
 
@@ -72,14 +73,12 @@ The v1 models are trained on the [RedPajama dataset](https://huggingface.co/data
72
 
73
  We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also know as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance the training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.
74
 
75
-
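As a rough illustration of this parallelism strategy, here is a generic JAX sketch of data parallelism combined with ZeRO-3-style parameter sharding. It is illustrative only, not EasyLM's actual implementation, and the device count, mesh shape, and array sizes are assumptions.

```python
# Illustrative-only JAX sketch of mixing data parallelism with parameter
# sharding (the FSDP / ZeRO stage 3 idea); NOT EasyLM's actual code.
import numpy as np
import jax
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# Assumption: 8 chips, arranged as 4-way data parallel x 2-way FSDP.
devices = np.array(jax.devices()).reshape(4, 2)
mesh = Mesh(devices, axis_names=('dp', 'fsdp'))

# The batch is split across the data-parallel axis...
batch = jax.device_put(
    np.zeros((32, 2048), dtype=np.int32),
    NamedSharding(mesh, P('dp', None)),
)
# ...while each weight matrix is split across the FSDP axis, so every chip
# holds only a slice of the parameters rather than a full replica.
params = jax.device_put(
    np.zeros((4096, 4096), dtype=np.float32),
    NamedSharding(mesh, P('fsdp', None)),
)
```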
 ## Evaluation
 
 We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B-parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).
 
 The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them on some tasks.
 
-
 | **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 7Bv2 | OpenLLaMA 3B | OpenLLaMA 7B | OpenLLaMA 13B |
 | ---------------------- | -------- | -------- | --------- | -------------- | ------------ | ------------ | ------------- |
 | anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.34 | 0.33 | 0.33 | 0.33 |
@@ -105,10 +104,8 @@ The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained
 | winogrande/acc | 0.64 | 0.68 | 0.70 | 0.66 | 0.62 | 0.67 | 0.70 |
 | Average | 0.52 | 0.55 | 0.57 | 0.56 | 0.53 | 0.55 | 0.57 |
 
-
 We removed the tasks CB and WSC from our benchmark, as our model scores suspiciously high on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.
 
-
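For reference, results like those in the table above can also be produced programmatically with the harness. A sketch assuming the `simple_evaluate` API of the lm-eval-harness version from that era; the task list and batch size are illustrative:

```python
# Sketch of running lm-eval-harness programmatically (API of the era of the
# linked commit is assumed; task list and batch size are illustrative).
from lm_eval import evaluator

results = evaluator.simple_evaluate(
    model='hf-causal',
    model_args='pretrained=openlm-research/open_llama_7b_v2',
    tasks=['anli_r1', 'arc_challenge', 'winogrande'],
    batch_size=8,
)
print(results['results'])  # per-task metrics, as in the table above
```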
 ## Contact
 
 We would love to get feedback from the community. If you have any questions, please open an issue or contact us.
 
@@ -117,15 +114,12 @@ OpenLLaMA is developed by:
 [Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
 *Equal Contribution
 
-
-
 ## Acknowledgment
 
 We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We’d like to especially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, Rafi Witten from the Google Cloud team, and James Bradbury from the Google JAX team for helping us optimize our training throughput. We’d also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng, and our user community for the discussions and feedback.
 
 The OpenLLaMA 13B v1 model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We’d like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.
 
-
 ## Reference
 
 If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
@@ -155,8 +149,3 @@ If you found OpenLLaMA useful in your research or applications, please cite usin
 year={2023}
 }
 ```
-
-
-
-
-