---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
---
[![banner](https://maddes8cht.github.io/assets/buttons/Huggingface-banner.jpg)]()

I'm constantly enhancing these model descriptions to provide you with the most relevant and comprehensive information.

# open_llama_13b - GGUF
- Model creator: [openlm-research](https://huggingface.co/openlm-research)
- Original model: [open_llama_13b](https://huggingface.co/openlm-research/open_llama_13b)

OpenLLaMA is a free reimplementation of the original LLaMA model and is licensed under the Apache 2.0 license.



# About GGUF format

`gguf` is the current file format used by the [`ggml`](https://github.com/ggerganov/ggml) library.
A growing list of software supports it and can therefore run this model.
The core project making use of the ggml library is the [llama.cpp](https://github.com/ggerganov/llama.cpp) project by Georgi Gerganov.
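As a quick orientation (not part of the original description), here is a minimal sketch of how such a GGUF file can be loaded with the `llama-cpp-python` bindings; the quantization filename below is an assumed placeholder for whichever file you download from this repository:

```python
# Minimal sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
# The filename is a placeholder; substitute the quantization file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="open_llama_13b.Q4_K_M.gguf",  # assumed local GGUF file name
    n_ctx=2048,                               # context window of the base model
)

result = llm("Q: What is the largest animal?\nA:", max_tokens=32, stop=["Q:"])
print(result["choices"][0]["text"])
```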

# Quantization variants

A range of quantized files is available to cater to your specific needs. Here's how to choose the best option for you:

# Legacy quants

Q4_0, Q4_1, Q5_0, Q5_1 and Q8 are `legacy` quantization types.
Nevertheless, they are fully supported, as there are several circumstances that cause certain models not to be compatible with the modern K-quants.
## Note:
Now there's a new option to use K-quants even for previously 'incompatible' models, although this involves some fallback solution that makes them not *real* K-quants. More details can be found in the affected model descriptions.
(This mainly refers to Falcon 7b and Starcoder models)

# K-quants

K-quants are designed with the idea that different levels of quantization in specific parts of the model can optimize performance, file size, and memory load.
So, if possible, use K-quants.
With a Q6_K, you'll likely find it challenging to discern a quality difference from the original model - ask your model the same question twice and you may encounter bigger differences than that.
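If you prefer to fetch a specific quantization programmatically rather than through the web interface, a sketch like the following should work with the `huggingface_hub` library; the repository id and filename are assumptions and must be replaced with the actual names from this repository's file list:

```python
# Sketch: download one GGUF quantization with huggingface_hub.
# repo_id and filename are placeholders; pick the actual names from this repository's "Files" tab.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="maddes8cht/openlm-research-open_llama_13b-gguf",  # assumed repository id
    filename="open_llama_13b-Q4_K_M.gguf",                     # assumed quantization file
)
print(local_path)  # cached local path, ready to pass to a GGUF-capable runtime
```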


---

# Original Model Card:
# OpenLLaMA: An Open Reproduction of LLaMA

In this repo, we present a permissively licensed open source reproduction of Meta AI's [LLaMA](https://ai.facebook.com/blog/large-language-model-llama-meta-ai/) large language model. We are releasing 3B, 7B and 13B models trained on 1T tokens. We provide PyTorch and JAX weights of pre-trained OpenLLaMA models, as well as evaluation results and comparison against the original LLaMA models. Please see the [project homepage of OpenLLaMA](https://github.com/openlm-research/open_llama) for more details.


## Weights Release, License and Usage

We release the weights in two formats: an EasyLM format to be used with our [EasyLM framework](https://github.com/young-geng/EasyLM), and a PyTorch format to be used with the [Hugging Face transformers](https://huggingface.co/docs/transformers/index) library. Both our training framework EasyLM and the checkpoint weights are licensed permissively under the Apache 2.0 license.

### Loading the Weights with Hugging Face Transformers
Preview checkpoints can be directly loaded from the Hugging Face Hub. **Please note that it is advised to avoid using the Hugging Face fast tokenizer for now, as we've observed that the auto-converted fast tokenizer sometimes gives incorrect tokenizations.** This can be achieved by using the `LlamaTokenizer` class directly, or by passing the `use_fast=False` option to the `AutoTokenizer` class. See the following example for usage.

```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM

# model_path = 'openlm-research/open_llama_3b'
# model_path = 'openlm-research/open_llama_7b'
model_path = 'openlm-research/open_llama_13b'

tokenizer = LlamaTokenizer.from_pretrained(model_path)
model = LlamaForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.float16, device_map='auto',
)

prompt = 'Q: What is the largest animal?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

generation_output = model.generate(
    input_ids=input_ids, max_new_tokens=32
)
print(tokenizer.decode(generation_output[0]))
```
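
If you prefer `AutoTokenizer`, the slow tokenizer mentioned above can be selected explicitly with `use_fast=False`; this small variant is added here for illustration and is not part of the original card:

```python
# Same effect as LlamaTokenizer above: select the slow tokenizer via AutoTokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('openlm-research/open_llama_13b', use_fast=False)
```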

For more advanced usage, please follow the [transformers LLaMA documentation](https://huggingface.co/docs/transformers/main/model_doc/llama).

### Evaluating with LM-Eval-Harness
The model can be evaluated with [lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness). However, due to the aforementioned tokenizer issue, we need to avoid using the fast tokenizer to obtain the correct results. This can be achieved by passing `use_fast=False` to [this part of lm-eval-harness](https://github.com/EleutherAI/lm-evaluation-harness/blob/4b701e228768052cfae9043dca13e82052ca5eea/lm_eval/models/huggingface.py#LL313C9-L316C10), as shown in the example below:

```python
tokenizer = self.AUTO_TOKENIZER_CLASS.from_pretrained(
    pretrained if tokenizer is None else tokenizer,
    revision=revision + ("/" + subfolder if subfolder is not None else ""),
    use_fast=False
)
```

### Loading the Weights with EasyLM

For using the weights in our EasyLM framework, please refer to the [LLaMA documentation of EasyLM](https://github.com/young-geng/EasyLM/blob/main/docs/llama.md). Note that unlike the original LLaMA model, our OpenLLaMA tokenizer and weights are trained completely from scratch, so it is no longer necessary to obtain the original LLaMA tokenizer and weights. Note that we use the BOS (beginning of sentence) token (id=1) during training, so it is best to prepend this token during few-shot evaluation for best performance.
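
As a quick sanity check (added here, not part of the original card), you can confirm with the slow tokenizer that encoded prompts start with the BOS id of 1, as recommended above:

```python
# Sketch: verify that the slow tokenizer prepends the BOS token (id=1) to encoded prompts.
from transformers import LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained('openlm-research/open_llama_13b')
ids = tokenizer('Q: What is the largest animal?\nA:').input_ids

print(tokenizer.bos_token_id)             # expected: 1
print(ids[0] == tokenizer.bos_token_id)   # expected: True (BOS prepended by default)
```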


## Dataset and Training

We train our models on the [RedPajama](https://www.together.xyz/blog/redpajama) dataset released by [Together](https://www.together.xyz/), which is a reproduction of the LLaMA training dataset containing over 1.2 trillion tokens. We follow exactly the same preprocessing steps and training hyperparameters as the original LLaMA paper, including model architecture, context length, training steps, learning rate schedule, and optimizer. The only difference between our setting and the original one is the dataset used: OpenLLaMA employs the RedPajama dataset rather than the one utilized by the original LLaMA.

We train the models on cloud TPU-v4s using [EasyLM](https://github.com/young-geng/EasyLM), a JAX based training pipeline we developed for training and fine-tuning large language models. We employ a combination of normal data parallelism and [fully sharded data parallelism (also known as ZeRO stage 3)](https://engineering.fb.com/2021/07/15/open-source/fsdp/) to balance training throughput and memory usage. Overall we reach a throughput of over 2200 tokens / second / TPU-v4 chip for our 7B model.


## Evaluation
We evaluated OpenLLaMA on a wide range of tasks using [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness). The LLaMA results are generated by running the original LLaMA model on the same evaluation metrics. We note that our results for the LLaMA model differ slightly from the original LLaMA paper, which we believe is a result of different evaluation protocols. Similar differences have been reported in [this issue of lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness/issues/443). Additionally, we present the results of GPT-J, a 6B parameter model trained on the [Pile](https://pile.eleuther.ai/) dataset by [EleutherAI](https://www.eleuther.ai/).

The original LLaMA model was trained for 1 trillion tokens and GPT-J was trained for 500 billion tokens. We present the results in the table below. OpenLLaMA exhibits comparable performance to the original LLaMA and GPT-J across a majority of tasks, and outperforms them in some tasks.

| **Task/Metric** | GPT-J 6B | LLaMA 7B | LLaMA 13B | OpenLLaMA 7B | OpenLLaMA 3B | OpenLLaMA 13B |
| ---------------------- | -------- | -------- | --------- | ------------ | ------------ | ------------- |
| anli_r1/acc | 0.32 | 0.35 | 0.35 | 0.33 | 0.33 | 0.33 |
| anli_r2/acc | 0.34 | 0.34 | 0.36 | 0.36 | 0.32 | 0.33 |
| anli_r3/acc | 0.35 | 0.37 | 0.39 | 0.38 | 0.35 | 0.40 |
| arc_challenge/acc | 0.34 | 0.39 | 0.44 | 0.37 | 0.34 | 0.41 |
| arc_challenge/acc_norm | 0.37 | 0.41 | 0.44 | 0.38 | 0.37 | 0.44 |
| arc_easy/acc | 0.67 | 0.68 | 0.75 | 0.72 | 0.69 | 0.75 |
| arc_easy/acc_norm | 0.62 | 0.52 | 0.59 | 0.68 | 0.65 | 0.70 |
| boolq/acc | 0.66 | 0.75 | 0.71 | 0.71 | 0.68 | 0.75 |
| hellaswag/acc | 0.50 | 0.56 | 0.59 | 0.53 | 0.49 | 0.56 |
| hellaswag/acc_norm | 0.66 | 0.73 | 0.76 | 0.72 | 0.67 | 0.76 |
| openbookqa/acc | 0.29 | 0.29 | 0.31 | 0.30 | 0.27 | 0.31 |
| openbookqa/acc_norm | 0.38 | 0.41 | 0.42 | 0.40 | 0.40 | 0.43 |
| piqa/acc | 0.75 | 0.78 | 0.79 | 0.76 | 0.75 | 0.77 |
| piqa/acc_norm | 0.76 | 0.78 | 0.79 | 0.77 | 0.76 | 0.79 |
| record/em | 0.88 | 0.91 | 0.92 | 0.89 | 0.88 | 0.91 |
| record/f1 | 0.89 | 0.91 | 0.92 | 0.90 | 0.89 | 0.91 |
| rte/acc | 0.54 | 0.56 | 0.69 | 0.60 | 0.58 | 0.64 |
| truthfulqa_mc/mc1 | 0.20 | 0.21 | 0.25 | 0.23 | 0.22 | 0.25 |
| truthfulqa_mc/mc2 | 0.36 | 0.34 | 0.40 | 0.35 | 0.35 | 0.38 |
| wic/acc | 0.50 | 0.50 | 0.50 | 0.51 | 0.48 | 0.47 |
| winogrande/acc | 0.64 | 0.68 | 0.70 | 0.67 | 0.62 | 0.70 |
| Average | 0.52 | 0.55 | 0.57 | 0.55 | 0.53 | 0.57 |


We removed the tasks CB and WSC from our benchmark, as our model performs suspiciously well on these two tasks. We hypothesize that there could be benchmark data contamination in the training set.


## Contact

We would love to get feedback from the community. If you have any questions, please open an issue or contact us.

OpenLLaMA is developed by:
[Xinyang Geng](https://young-geng.xyz/)* and [Hao Liu](https://www.haoliu.site/)* from Berkeley AI Research.
*Equal Contribution



## Acknowledgment

We thank the [Google TPU Research Cloud](https://sites.research.google/trc/about/) program for providing part of the computation resources. We'd like to especially thank Jonathan Caton from TPU Research Cloud for helping us organize compute resources, Rafi Witten from the Google Cloud team and James Bradbury from the Google JAX team for helping us optimize our training throughput. We'd also like to thank Charlie Snell, Gautier Izacard, Eric Wallace, Lianmin Zheng and our user community for the discussions and feedback.

The OpenLLaMA 13B model is trained in collaboration with [Stability AI](https://stability.ai/), and we thank Stability AI for providing the computation resources. We'd like to especially thank David Ha and Shivanshu Purohit for coordinating the logistics and providing engineering support.

## Reference

If you found OpenLLaMA useful in your research or applications, please cite using the following BibTeX:
```
@software{openlm2023openllama,
  author = {Geng, Xinyang and Liu, Hao},
  title = {OpenLLaMA: An Open Reproduction of LLaMA},
  month = May,
  year = 2023,
  url = {https://github.com/openlm-research/open_llama}
}
```
```
@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = April,
  year = 2023,
  url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
```
@article{touvron2023llama,
  title={Llama: Open and efficient foundation language models},
  author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and others},
  journal={arXiv preprint arXiv:2302.13971},
  year={2023}
}
```

***End of original Model File***
---


## Please consider supporting my work
**Coming Soon:** I'm in the process of launching a sponsorship/crowdfunding campaign for my work. I'm evaluating Kickstarter, Patreon, or the new GitHub Sponsors platform, and I am hoping for some support and contributions toward the continued availability of these kinds of models. Your support will enable me to provide even more valuable resources and maintain the models you rely on. Your patience and ongoing support are greatly appreciated as I work to make this page an even more valuable resource for the community.

<center>

[![GitHub](https://maddes8cht.github.io/assets/buttons/github-io-button.png)](https://maddes8cht.github.io)
[![Stack Exchange](https://stackexchange.com/users/flair/26485911.png)](https://stackexchange.com/users/26485911)
[![GitHub](https://maddes8cht.github.io/assets/buttons/github-button.png)](https://github.com/maddes8cht)
[![HuggingFace](https://maddes8cht.github.io/assets/buttons/huggingface-button.png)](https://huggingface.co/maddes8cht)
[![Twitter](https://maddes8cht.github.io/assets/buttons/twitter-button.png)](https://twitter.com/maddes1966)

</center>