markpreemo committed
Commit a604554 • 1 parent: 7449cf9
Update README.md

README.md CHANGED
@@ -17,6 +17,8 @@ Gradient incorporates your data to deploy autonomous assistants that power criti

For more info see our [End-to-end development service for custom LLMs and AI systems](https://gradient.ai/development-lab)

+[Join our Discord](https://discord.com/invite/2QVy2qt2mf)
+
This model extends Llama-3 8B's context length from 8k to > 1040K, developed by Gradient, sponsored by compute from [Crusoe Energy](https://huggingface.co/crusoeai). It demonstrates that SOTA LLMs can learn to operate on long context with minimal training by appropriately adjusting RoPE theta. We trained on 830M tokens for this stage, and 1.4B tokens total for all stages, which is < 0.01% of Llama-3's original pre-training data.

**Update (5/3): We further fine-tuned our model to strengthen its assistant-like chat ability as well. The NIAH result is updated.**
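As context for the README text above: adjusting RoPE theta means raising the rotary-embedding base frequency so that the rotation phases the model learned at short context stretch over a much longer position range. A minimal sketch of that relationship, assuming the standard RoPE frequency formula theta_i = base^(-2i/d) and Llama-3 8B's published rope_theta of 500,000; the multiplier for the extended base below is purely illustrative, not the value Gradient actually used:

```python
import math

def rope_frequencies(base: float, head_dim: int = 128) -> list[float]:
    # Standard rotary-embedding frequencies: theta_i = base ** (-2i / head_dim).
    # Raising `base` slows every rotation, so the phase range covered during
    # short-context training spans far more token positions.
    return [base ** (-2.0 * i / head_dim) for i in range(head_dim // 2)]

llama3_base = 500_000.0           # Llama-3 8B's published rope_theta
extended_base = llama3_base * 16  # illustrative larger base (assumption)

# The lowest frequency sets the longest wavelength the embedding can express:
# wavelength = 2 * pi / theta_min. A larger base stretches every wavelength.
for base in (llama3_base, extended_base):
    theta_min = rope_frequencies(base)[-1]
    print(f"base={base:,.0f}  longest wavelength ~ {2 * math.pi / theta_min:,.0f} positions")
```

The point of the sketch is only that growing the base monotonically lengthens all rotary wavelengths; the 830M-token fine-tune described above is what lets the model actually use the stretched positions.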