Update README.md
datasets:
  - togethercomputer/RedPajama-Data-1T
---

# RedPajama-INCITE-7B-Base

RedPajama-INCITE-7B-Base was developed by Together and leaders from the open-source AI community, including Ontocord.ai, ETH DS3Lab, AAI CERC, Université de Montréal, MILA - Québec AI Institute, the Stanford Center for Research on Foundation Models (CRFM), the Stanford Hazy Research group, and LAION.
The training was done on 3,072 V100 GPUs provided as part of the INCITE 2023 project on Scalable Foundation Models for Transferrable Generalist AI, awarded to MILA, LAION, and EleutherAI in fall 2022, with support from the Oak Ridge Leadership Computing Facility (OLCF) and the INCITE program.

- Base Model: [RedPajama-INCITE-7B-Base](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Base)
- Instruction-tuned Version: [RedPajama-INCITE-7B-Instruct](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Instruct)
- Chat Version: [RedPajama-INCITE-7B-Chat](https://huggingface.co/togethercomputer/RedPajama-INCITE-7B-Chat)

## Model Details
GPU inference (float16):

```python
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Base")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Base", torch_dtype=torch.float16)
model = model.to('cuda:0')
# infer
prompt = "Alan Turing is"
```
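One caveat worth noting about the version check above: `__version__ >= MIN_TRANSFORMERS_VERSION` compares *strings* lexicographically, so it can pass for versions that are actually too old. A minimal numeric-comparison sketch (assuming plain `X.Y.Z` version strings; the helper name is illustrative, not from the model card):

```python
# The stock check compares version strings character by character:
MIN_TRANSFORMERS_VERSION = '4.25.1'
assert '4.9.0' >= MIN_TRANSFORMERS_VERSION   # passes, yet 4.9.0 is older than 4.25.1

def version_tuple(v: str) -> tuple:
    # Assumes a plain 'X.Y.Z' string; pre-release suffixes like '4.30.0rc1'
    # need a real parser (e.g. packaging.version).
    return tuple(int(part) for part in v.split('.')[:3])

assert version_tuple('4.9.0') < version_tuple(MIN_TRANSFORMERS_VERSION)
assert version_tuple('4.30.2') >= version_tuple(MIN_TRANSFORMERS_VERSION)
```

Comparing integer tuples orders versions numerically, which is what the check intends.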
GPU inference in int8:

```python
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Base")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Base", device_map='auto', torch_dtype=torch.float16, load_in_8bit=True)

# infer
prompt = "Alan Turing is"
```
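The point of `load_in_8bit=True` is that each weight occupies one byte instead of two. A back-of-envelope sketch of weight memory for a 7B-parameter model (weights only; activations, the KV cache, and loader overhead come on top, and the exact footprint depends on the quantization backend):

```python
# Weights-only memory estimate; 7B is a round figure for this model family.
params = 7_000_000_000
GIB = 1024 ** 3

fp16_gib = params * 2 / GIB   # two bytes per weight in float16
int8_gib = params * 1 / GIB   # one byte per weight in int8

print(f"fp16 ~{fp16_gib:.1f} GiB, int8 ~{int8_gib:.1f} GiB")
```

Halving the weight bytes is what lets the model fit on GPUs that cannot hold the float16 checkpoint.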
CPU inference (bfloat16):

```python
assert transformers.__version__ >= MIN_TRANSFORMERS_VERSION, f'Please upgrade transformers to version {MIN_TRANSFORMERS_VERSION} or higher.'

# init
tokenizer = AutoTokenizer.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Base")
model = AutoModelForCausalLM.from_pretrained("togethercomputer/RedPajama-INCITE-7B-Base", torch_dtype=torch.bfloat16)
# infer
prompt = "Alan Turing is"
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
```
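A quick numeric sketch of why `bfloat16` is a common choice for this path: it keeps float32's 8-bit exponent, so activations that would overflow float16's ~65k range still fit (actual bfloat16 speed on CPU depends on the hardware and the installed torch build):

```python
# Largest finite values implied by each format's exponent/mantissa widths:
fp16_max = (2 - 2**-10) * 2**15    # float16: 5 exponent bits -> 65504.0
bf16_max = (2 - 2**-7) * 2**127    # bfloat16: 8 exponent bits, float32-like range

assert fp16_max == 65504.0
assert bf16_max > 1e38             # comfortably holds values float16 cannot
```

The trade-off is precision: bfloat16 keeps only 7 mantissa bits versus float16's 10.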
#### Out-of-Scope Use

`RedPajama-INCITE-7B-Base` is a language model and may not perform well for use cases outside its intended scope. For example, it may not be suitable for use in safety-critical applications or for making decisions that have a significant impact on individuals or society. It is important to consider the limitations of the model and to use it only for its intended purpose.

#### Misuse and Malicious Use

`RedPajama-INCITE-7B-Base` is designed for language modeling. Misuse of the model, such as using it to engage in illegal or unethical activities, is strictly prohibited and goes against the principles of the project.

Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:

## Limitations

`RedPajama-INCITE-7B-Base`, like other language models, has limitations that should be taken into consideration. For example, the model may not always provide accurate or relevant answers, particularly for questions that are complex, ambiguous, or outside its training data. We therefore welcome contributions from individuals and organizations, and encourage collaboration towards creating a more robust and inclusive chatbot.
- **Optimizer:** Apex FusedAdam
- **Parallelism:** Pipeline parallel 12, tensor parallel 2
- **Gradient Accumulation:** 8 (global batch size 4M tokens)
- **Num of Tokens:** 1T
- **Learning rate:** 0.00012

## Benchmark
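Taking the training figures above at face value (decimal "4M" and "1T"; the actual run may have used binary multiples or a batch-size schedule), the implied step counts work out as:

```python
# Implied training-step arithmetic, decimal units assumed:
global_batch_tokens = 4_000_000        # "global batch size 4M tokens"
total_tokens = 1_000_000_000_000       # "1T" tokens
grad_accum = 8

optimizer_steps = total_tokens // global_batch_tokens   # 250000 optimizer steps
micro_batch_tokens = global_batch_tokens // grad_accum  # 500000 tokens per micro-step

assert optimizer_steps == 250_000
assert micro_batch_tokens == 500_000
```

That is, eight forward/backward passes of ~500K tokens each are accumulated into every 4M-token optimizer step, for roughly 250K steps over the full 1T-token corpus.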