KempnerInstitute committed
Commit
1fa49a6
1 Parent(s): 298e4dd

Model card readability


Small edits to improve the readability of the model card.

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
````diff
@@ -1,6 +1,6 @@
 # Model description
 
-This repo contains over 500 model checkpoints ranging in size from 20M parameters up to 3.3B parameters and FLOP budgets from 2e17 to 1e21 FLOPs across 6 different pretraining datasets.
+This repository contains over 500 model checkpoints ranging in size from 20M parameters up to 3.3B parameters and FLOP budgets from 2e17 to 1e21 FLOPs across 6 different pretraining datasets.
 
 Each subdirectory name contains four different parameters to identify the model in that subdirectory:
 
@@ -11,13 +11,13 @@ Each subdirectory name contains four different parameters to identify the model
 
 For example, a model trained on `starcoder` with 1.1e08 parameters on 3.0e08 tokens for a total of 2.0e17 FLOPs would have the name: `L2L_starcoder_N1.1e08_D3.0e08_C2.0e17/`
 
-Full training details for the models can be found in the training repo or paper.
+Full training details for the models can be found in the [training repository](https://github.com/KempnerInstitute/loss-to-loss-olmo/) or paper.
 
 # How to load a model
 
-First, follow the instruction to install our fork of the [OLMo](https://github.com/allenai/OLMo) package from here: https://github.com/KempnerInstitute/loss-to-loss-olmo/tree/main
+First, follow the instructions in the [training repository](https://github.com/KempnerInstitute/loss-to-loss-olmo/) to install our fork of the [OLMo](https://github.com/allenai/OLMo) package.
 
-With this installed, you can then use the huggingface hub and transformers to load a model with the following snippet:
+With this installed, you can then use the `huggingface_hub` and `transformers` packages to load a model with the following snippet:
 ```python
 from olmo.model import HFMixinOLMo
 from huggingface_hub import snapshot_download
````
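The subdirectory naming scheme described in the model card (`L2L_<dataset>_N<params>_D<tokens>_C<flops>`) is regular enough to parse programmatically, which is handy when iterating over many checkpoints. A minimal sketch, assuming that naming convention holds for all subdirectories; the `parse_checkpoint_name` helper is hypothetical, not part of the repository:

```python
import re

# Matches names like "L2L_starcoder_N1.1e08_D3.0e08_C2.0e17/"
# (optional trailing slash). Hypothetical helper for illustration.
_NAME_RE = re.compile(
    r"L2L_(?P<dataset>.+)_N(?P<N>[\d.e+]+)_D(?P<D>[\d.e+]+)_C(?P<C>[\d.e+]+)/?$"
)

def parse_checkpoint_name(name: str) -> dict:
    """Split a checkpoint subdirectory name into its four fields:
    dataset, parameter count (N), token count (D), and FLOP budget (C)."""
    m = _NAME_RE.match(name)
    if m is None:
        raise ValueError(f"unrecognized checkpoint name: {name!r}")
    return {
        "dataset": m.group("dataset"),
        "params": float(m.group("N")),
        "tokens": float(m.group("D")),
        "flops": float(m.group("C")),
    }

info = parse_checkpoint_name("L2L_starcoder_N1.1e08_D3.0e08_C2.0e17/")
print(info)
```

This makes it straightforward to, say, filter the checkpoints by dataset or FLOP budget before downloading any of them.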