Update README.md
Nemotron-4-340B-Instruct is a large language model (LLM) that can be used as part of a synthetic data generation pipeline to create training data that helps researchers and developers build their own LLMs. It is a fine-tuned version of the Nemotron-4-340B-Base model, optimized for English-based single- and multi-turn chat use cases. It supports a context length of 4,096 tokens.

Try this model on [build.nvidia.com](https://build.nvidia.com/nvidia/nemotron-4-340b-instruct) now.
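As a rough sketch of how a hosted model like this is typically queried: the build.nvidia.com catalog generally exposes models behind an OpenAI-compatible chat API. The base URL, model identifier, and `NVIDIA_API_KEY` environment variable below are assumptions drawn from that convention, not stated in this README; adjust them to the actual endpoint documentation.

```python
# Hedged sketch: calling an OpenAI-compatible hosted endpoint.
# BASE_URL and MODEL are assumptions, not taken from this README.
import os

BASE_URL = "https://integrate.api.nvidia.com/v1"   # assumed endpoint
MODEL = "nvidia/nemotron-4-340b-instruct"          # assumed model id


def build_request(user_message, max_tokens=256):
    """Assemble an OpenAI-style chat-completion request body."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": max_tokens,
    }


if __name__ == "__main__":
    body = build_request("Write a haiku about GPUs.")
    api_key = os.environ.get("NVIDIA_API_KEY")
    if api_key:
        # Requires `pip install openai`; only runs when a key is set.
        from openai import OpenAI
        client = OpenAI(base_url=BASE_URL, api_key=api_key)
        resp = client.chat.completions.create(**body)
        print(resp.choices[0].message.content)
    else:
        # No key configured: just show the request we would have sent.
        print(body)
```

Without a key set, the script simply prints the request body so you can inspect it before wiring up credentials.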
The base model was pre-trained on a corpus of 9 trillion tokens consisting of a diverse assortment of English-based texts, 50+ natural languages, and 40+ coding languages. Subsequently, the Nemotron-4-340B-Instruct model went through additional alignment steps, including:
- Supervised Fine-tuning (SFT)
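To make the SFT step above concrete, here is a minimal sketch of one detail most supervised fine-tuning pipelines share: the loss is computed only on the assistant's response tokens, while prompt tokens are masked out of the labels (commonly with `-100`, the ignore index used by cross-entropy implementations such as PyTorch's). The token IDs and the helper name `build_sft_labels` are made up for illustration and are not part of the Nemotron training recipe.

```python
# Hypothetical sketch: building SFT labels for one chat example.
# Prompt tokens are masked so the loss only covers the assistant reply.
IGNORE_INDEX = -100  # conventional "ignore" label for cross-entropy


def build_sft_labels(prompt_ids, response_ids):
    """Concatenate prompt and response; mask the prompt in the labels."""
    input_ids = list(prompt_ids) + list(response_ids)
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(response_ids)
    return input_ids, labels


prompt_ids = [101, 7592, 2088]    # tokenized user turn (made-up IDs)
response_ids = [2023, 2003, 102]  # tokenized assistant reply (made-up IDs)

input_ids, labels = build_sft_labels(prompt_ids, response_ids)
print(input_ids)  # [101, 7592, 2088, 2023, 2003, 102]
print(labels)     # [-100, -100, -100, 2023, 2003, 102]
```

The masked positions contribute nothing to the gradient, so the model is trained to produce the reply given the prompt rather than to reproduce the prompt itself.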