natolambert committed on
Commit c390c47
1 Parent(s): e99d4f4

Update README.md

Files changed (1)
  1. README.md +4 -10
README.md CHANGED
@@ -35,28 +35,22 @@ The core models released in this batch are the following:
 | [OLMo 7B Twin 2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T) | 2 Trillion | 32 | 4096 | 32 | 2048 |
 | [OLMo 7B v1.7](https://huggingface.co/allenai/OLMo-7B-v1.7) | 2.TODO | 32 | 4096 | 32 | 2048 |
 
+*Note: OLMo 7B v1.7 also includes QKV clipping.*
+
 We are releasing many checkpoints for these models, for every 1000 training steps.
 The naming convention is `step1000-tokens4B`.
-In particular, we focus on four revisions of the 7B models:
-
-| Name | HF Repo | Model Revision | Tokens | Note |
-|------------|---------|----------------|-------------------|------|
-| OLMo 7B | [allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B) | `main` | 2.5T | The base OLMo 7B model |
-| OLMo 7B (not annealed) | [allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B) | step556000-tokens2460B | 2.5T | Learning rate not annealed to 0 |
-| OLMo 7B-2T | [allenai/OLMo-7B](https://huggingface.co/allenai/OLMo-7B) | step452000-tokens2000B | 2T | OLMo checkpoint at 2T tokens |
-| OLMo-7B-Twin-2T | [allenai/OLMo-7B-Twin-2T](https://huggingface.co/allenai/OLMo-7B-Twin-2T) | `main` | 2T | Twin version on different hardware |
 
 To load a specific model revision with HuggingFace, simply add the argument `revision`:
 ```python
 from transformers import AutoModelForCausalLM
 import hf_olmo  # pip install ai2-olmo
-olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B", revision="step1000-tokens4B")
+olmo = AutoModelForCausalLM.from_pretrained("allenai/OLMo-7B-v1.7", revision="step1000-tokens4B")
 ```
 
 All revisions/branches are listed in the file `revisions.txt`.
 Or, you can access all the revisions for the models via the following code snippet:
 ```python
 from huggingface_hub import list_repo_refs
-out = list_repo_refs("allenai/OLMo-7B")
+out = list_repo_refs("allenai/OLMo-7B-v1.7")
 branches = [b.name for b in out.branches]
 ```
 A few revisions were lost due to an error, but the vast majority are present.
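Since the README documents the `step1000-tokens4B` naming convention for revision branches, a small offline sketch of working with it may help: the `parse_revision` helper and the hard-coded branch list below are hypothetical illustrations (branch names taken from the diff above), not part of the repo or the `ai2-olmo` package.

```python
import re

def parse_revision(name: str):
    """Parse a branch name like 'step1000-tokens4B' into
    (training step, tokens in billions); return None if it doesn't match."""
    m = re.fullmatch(r"step(\d+)-tokens(\d+)B", name)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2))

# Hypothetical branch list; in practice this would come from
# list_repo_refs(...) or revisions.txt as shown in the README.
branches = ["main", "step1000-tokens4B",
            "step452000-tokens2000B", "step556000-tokens2460B"]

# Example: pick the checkpoint closest to a 2000B (2T) token budget.
parsed = {b: parse_revision(b) for b in branches if parse_revision(b)}
closest = min(parsed, key=lambda b: abs(parsed[b][1] - 2000))
```

Here `closest` can then be passed as the `revision` argument to `from_pretrained`, as in the snippet above.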