kylel committed
Commit 8df85ad
1 Parent(s): 674356d

fixes to model card

Files changed (1): README.md (+12 -12)
README.md CHANGED
@@ -23,23 +23,23 @@ We release all code, checkpoints, logs, and details involved in training these models.
  The core models released in this batch are the following:
  | Size | Training Tokens | Layers | Hidden Size | Attention Heads | Context Length |
  |------|--------|---------|-------------|-----------------|----------------|
- | [OLMo 1B](https://huggingface.co/allenai/OLMo-1B) | 3 Trillion | 16 | 2048 | 16 | 2048 |
- | [OLMo 7B](https://huggingface.co/allenai/OLMo-7B) | 2.5 Trillion | 32 | 4096 | 32 | 2048 |
- | [OLMo 7B Twin 2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T) | 2 Trillion | 32 | 4096 | 32 | 2048 |
- | [OLMo 1.7-7B](https://huggingface.co/allenai/OLMo-1.7-7B) | 2.05 Trillion | 32 | 4096 | 32 | 4096 |
+ | [OLMo 1B July 2024](https://huggingface.co/allenai/OLMo-1B-0724-hf) | 3.05 Trillion | 16 | 2048 | 16 | 4096 |
+ | [OLMo 7B July 2024](https://huggingface.co/allenai/OLMo-7B-0724-hf) | 2.75 Trillion | 32 | 4096 | 32 | 4096 |
 
- *Note: OLMo 1.7-1B also includes QKV clipping.*
+
+ [Coming soon] We are releasing many checkpoints for these models, for every 1000 training steps.
+ The naming convention is `stepXXX-tokensYYYB`.
 
  To load a specific model revision with HuggingFace, simply add the argument `revision`:
  ```bash
- olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1.7-1B-hf", revision="step1000-tokens2B")
+ olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-0724-hf", revision="step1000-tokens4B")
  ```
 
  All revisions/branches are listed in the file `revisions.txt`.
  Or, you can access all the revisions for the models via the following code snippet:
  ```python
  from huggingface_hub import list_repo_refs
- out = list_repo_refs("allenai/OLMo-1.7-1B-hf")
+ out = list_repo_refs("allenai/OLMo-1B-0724-hf")
  branches = [b.name for b in out.branches]
  ```
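For reference, the two snippets in this hunk compose into one runnable script. The following is a sketch, not card text: it assumes `transformers` and `huggingface_hub` are installed and that the `step1000-tokens4B` branch exists on the Hub (note the card fences the `revision` snippet as `bash`, but it is Python).

```python
from huggingface_hub import list_repo_refs
from transformers import AutoModelForCausalLM

# Enumerate the published revision branches (named stepXXX-tokensYYYB).
out = list_repo_refs("allenai/OLMo-1B-0724-hf")
branches = [b.name for b in out.branches]
print(branches)

# Load the checkpoint stored on one of those branches.
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-1B-0724-hf", revision="step1000-tokens4B"
)
```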
 
@@ -62,7 +62,7 @@ branches = [b.name for b in out.branches]
  - Evaluation code: https://github.com/allenai/OLMo-Eval
  - Further fine-tuning code: https://github.com/allenai/open-instruct
  - **Paper:** [Link](https://arxiv.org/abs/2402.00838)
- <!-- - **W&B Logs:** [pretraining](https://wandb.ai/ai2-llm/OLMo-7B/groups/OLMo-1.7-7B), [annealing](https://wandb.ai/ai2-llm/OLMo-7B/groups/OLMo-1.7-7B-anneal) -->
+
 
  ## Uses
 
@@ -71,8 +71,8 @@ branches = [b.name for b in out.branches]
  Install Transformers. Then proceed as usual with HuggingFace:
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
- olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1.7-1B-hf")
- tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1.7-1B-hf")
+ olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-0724-hf")
+ tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-0724-hf")
  message = ["Language modeling is "]
  inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
  # optional verifying cuda
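The hunk above ends mid-snippet. Pieced together with the `generate`/`batch_decode` call visible in the next hunk's header, a complete inference sketch looks like the following; the sampling arguments here are illustrative assumptions, not taken from the card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-0724-hf")
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-0724-hf")

message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional: move model and inputs to the GPU
# olmo = olmo.to('cuda')
# inputs = {k: v.to('cuda') for k, v in inputs.items()}

# illustrative sampling settings; tune to taste
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```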
@@ -85,12 +85,12 @@ print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
  Alternatively, with the pipeline abstraction:
  ```python
  from transformers import pipeline
- olmo_pipe = pipeline("text-generation", model="allenai/OLMo-1.7-1B-hf")
+ olmo_pipe = pipeline("text-generation", model="allenai/OLMo-1B-0724-hf")
  print(olmo_pipe("Language modeling is "))
  >> 'Language modeling is a branch of natural language processing that aims to...'
  ```
 
- Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-1.7-1B-hf", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`).
+ Or, you can make this slightly faster by quantizing the model, e.g. `AutoModelForCausalLM.from_pretrained("allenai/OLMo-1B-0724-hf", torch_dtype=torch.float16, load_in_8bit=True)` (requires `bitsandbytes`).
  The quantized model is more sensitive to typing / cuda, so it is recommended to pass the inputs as `inputs.input_ids.to('cuda')` to avoid potential issues.
 
  ### Fine-tuning
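Below is a minimal sketch of the quantized path described in this hunk, assuming `bitsandbytes` and a CUDA device are available; newer Transformers versions express `load_in_8bit` through a `BitsAndBytesConfig` instead. Per the card's note, the input ids are moved to the GPU explicitly.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# 8-bit load; bitsandbytes places the quantized weights on the GPU
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/OLMo-1B-0724-hf", torch_dtype=torch.float16, load_in_8bit=True
)
tokenizer = AutoTokenizer.from_pretrained("allenai/OLMo-1B-0724-hf")

inputs = tokenizer(["Language modeling is "], return_tensors='pt', return_token_type_ids=False)
# quantized models are sensitive to input typing/placement, hence the explicit .to('cuda')
response = olmo.generate(input_ids=inputs.input_ids.to('cuda'), max_new_tokens=100)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
```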
 