Commit c648d1c committed by srvm
1 Parent(s): 34eef63

Update with v2 model and config

.gitattributes CHANGED
@@ -34,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 nemo/*.nemo filter=lfs diff=lfs merge=lfs -text
+tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -5,7 +5,7 @@ license_link: >-
   https://developer.download.nvidia.com/licenses/nvidia-open-model-license-agreement-june-2024.pdf
 ---
 
-# Minitron 8B Base
+# Nemotron-4 Minitron 8B Base
 
 Minitron is a family of small language models (SLMs) obtained by pruning NVIDIA's [Nemotron-4 15B](https://arxiv.org/abs/2402.16819) model. We prune model embedding size, attention heads, and MLP intermediate dimension, following which, we perform continued training with distillation to arrive at the final models.
 
@@ -15,22 +15,20 @@ Minitron models are for research and development only.
 
 ## HuggingFace Quickstart
 
-The [PR](https://github.com/huggingface/transformers/pull/31699) to support our models in Hugging Face in under review and expected to be merged soon. In the meantime, this [branch](https://github.com/suiyoubi/transformers/tree/aot/nemotron-support) at [commit ID 63d9cb0](https://github.com/suiyoubi/transformers/commit/63d9cb0afd2bf5d4cb5431ba1b2c4e353752a937) can be used for Minitron models:
+The [pull request](https://github.com/huggingface/transformers/pull/32495) to support this model in Hugging Face Transformers is under review and expected to be merged soon. In the meantime, please follow the installation instructions below:
 
 ```
-git clone git@github.com:suiyoubi/transformers.git
-cd transformers
-git checkout 63d9cb0
-pip install .
+$ git clone -b aot/head_dim_rope --single-branch https://github.com/suiyoubi/transformers.git && cd transformers
+$ pip install -e .
 ```
-The following code provides an example of how to load the Minitron-8B model and use it to perform text generation.
+The following code provides an example of how to load the Nemotron-4-Minitron-8B model and use it to perform text generation.
 
 ```python
 import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
 # Load the tokenizer and model
-model_path = "nvidia/Minitron-8B-Base"
+model_path = "nvidia/Nemotron-4-Minitron-8B-Base"
 tokenizer = AutoTokenizer.from_pretrained(model_path)
 
 device='cuda'
@@ -59,13 +57,13 @@ Minitron is released under the [NVIDIA Open Model License Agreement](https://dev
 
 | Average |
 | :---- |
-| 63.8 |
+| 64.5 |
 
 *Zero-shot performance.* Evaluated using select datasets from the [LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) with additions:
 
 HellaSwag | Winogrande | GSM8K| ARC-C | XLSum |
 | :------------- | :------------- | :------------- | :------------- | :------------- |
-| 80.7 | 79.0 | 51.3 | 52.6 | 31.2
+| 81.6 | 80.3 | 54.2 | 49.2 | 31.1
 
 
 *Code generation performance*. Evaluated using [HumanEval](https://github.com/openai/human-eval):
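
For readers following the updated quickstart, here is a minimal end-to-end sketch of how the example typically continues once the patched branch is installed; the prompt, generation arguments, and decoding step are illustrative assumptions, not part of this diff:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model using the path introduced in this commit
model_path = "nvidia/Nemotron-4-Minitron-8B-Base"
tokenizer = AutoTokenizer.from_pretrained(model_path)

device = "cuda"
dtype = torch.bfloat16  # matches "torch_dtype" in config.json
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=dtype, device_map=device)

# Illustrative prompt and generation settings
prompt = "Complete the paragraph: our solar system is"
inputs = tokenizer.encode(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
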
config.json CHANGED
@@ -13,15 +13,14 @@
   "model_type": "nemotron",
   "num_attention_heads": 48,
   "num_hidden_layers": 32,
-  "kv_channels": 128,
   "num_key_value_heads": 8,
   "norm_eps": 1e-05,
   "rope_theta": 10000,
-  "rope_percent": 0.5,
-  "rope_scaling": null,
+  "partial_rotary_factor": 0.5,
   "tie_word_embeddings": false,
   "torch_dtype": "bfloat16",
-  "transformers_version": "4.32.0.dev0",
+  "transformers_version": "4.44.0",
   "use_cache": true,
-  "vocab_size": 256000
-}
+  "vocab_size": 256000,
+  "head_dim": 128
+}
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5d0c84c2ed78c94d103bbf81f4205206a8e022e0c7c1135c693e7574535fa470
+oid sha256:debb954a1720801fa4c054f0c761fe344758ef659419d45e5b9a5e7b10722a11
 size 16543512498
special_tokens_map.json CHANGED
@@ -1,23 +1,4 @@
 {
-  "bos_token": {
-    "content": "<s>",
-    "lstrip": false,
-    "normalized": false,
-    "rstrip": false,
-    "single_word": false
-  },
-  "eos_token": {
-    "content": "</s>",
-    "lstrip": false,
-    "normalized": false,
-    "rstrip": false,
-    "single_word": false
-  },
-  "unk_token": {
-    "content": "<unk>",
-    "lstrip": false,
-    "normalized": false,
-    "rstrip": false,
-    "single_word": false
-  }
+  "bos_token": "<s>",
+  "eos_token": "</s>"
 }
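
The new map stores `bos_token` and `eos_token` as plain strings, dropping the per-token flags (`lstrip`, `normalized`, `rstrip`, `single_word`) that were all `false`, and dropping `unk_token` entirely. A trivial check against a local checkout of this repository (the file path is an assumption, relative to the repo root):

```python
import json

# Read the simplified special tokens map from a local checkout
with open("special_tokens_map.json") as f:
    special_tokens = json.load(f)

print(special_tokens)  # expected: {'bos_token': '<s>', 'eos_token': '</s>'}
```
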
tokenizer.model → tokenizer.json RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6dfd8b970f437002fc445214304969fe59e64d4f48500bd0b77ba55340f2d811
-size 4545602
+oid sha256:83d0648daa0467fb02ddef7ff25460321dab2fbb20c280ae0bc1ea8052f7df90
+size 18143149
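
With `tokenizer.model` renamed to a serialized `tokenizer.json` (tracked via Git LFS per the `.gitattributes` change above), `AutoTokenizer` can load the fast, Rust-backed tokenizer directly from that file instead of converting the SentencePiece model on the fly. A quick way to confirm, as a sketch rather than a guarantee:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nvidia/Nemotron-4-Minitron-8B-Base")
print(type(tokenizer).__name__, tokenizer.is_fast)  # expect a *Fast tokenizer with is_fast == True
```
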
tokenizer_config.json CHANGED
The diff for this file is too large to render. See raw diff