---
license: apache-2.0
language:
- en
---

## Model Card for Fox-1-1.6B-Instruct

> [!IMPORTANT]
> This is an instruction-tuned model that requires alignment before it can be used in production. We will release
> the chat version soon.

Fox-1 is a decoder-only transformer-based small language model (SLM) with 1.6B total parameters, developed
by [TensorOpera AI](https://tensoropera.ai/). The model was pre-trained with a 3-stage data curriculum on 3 trillion
tokens of text and code data at an 8K sequence length. Fox-1 uses Grouped Query Attention (GQA) with 4 key-value heads
and 16 attention heads for faster inference.

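With GQA, several query heads share each key-value head, which shrinks the KV cache and speeds up decoding; here 16 query heads map onto 4 key-value heads, so 4 query heads share each KV head. As a minimal sketch (the repository id and the Llama-style config attribute names are assumptions, not confirmed by this card), the layout can be checked from the released config:

```python
# Sketch: inspect the GQA layout from the model config.
# The repository id and Llama-style attribute names are assumptions.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("tensoropera/Fox-1-1.6B-Instruct-v0.1")
print(cfg.num_attention_heads)   # expected: 16 (query heads)
print(cfg.num_key_value_heads)   # expected: 4 (key-value heads)
print(cfg.num_attention_heads // cfg.num_key_value_heads)  # 4 query heads per KV head
```
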
Fox-1-1.6B-Instruct-v0.1 is an instruction-tuned (SFT) version of Fox-1-1.6B with a native 8K context length. The model
was fine-tuned on 5B tokens of instruction-following and multi-turn conversation data.

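A minimal generation sketch with Hugging Face `transformers` follows; the repository id is an assumption (use the id shown on this model page), and no chat template is applied here since the aligned chat version has not been released yet.

```python
# Minimal generation sketch with Hugging Face transformers.
# NOTE: the repository id below is an assumption; use the id shown on this model page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tensoropera/Fox-1-1.6B-Instruct-v0.1"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 1.6B parameters fit on a single GPU at bf16
    device_map="auto",
)

prompt = "Explain grouped query attention in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

If the released tokenizer ships a chat template, `tokenizer.apply_chat_template` may be preferable for multi-turn prompts.
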
For the full details of this model, please read
our [release blog post](https://blog.tensoropera.ai/tensoropera-unveils-fox-foundation-model-a-pioneering-open-source-slm-leading-the-way-against-tech-giants).

## Benchmarks

We evaluated Fox-1 on ARC Challenge (25-shot), HellaSwag (10-shot), TruthfulQA (0-shot), MMLU (5-shot),
Winogrande (5-shot), and GSM8k (5-shot). We follow the Open LLM Leaderboard's evaluation setup and report the average
score across the six benchmarks. The model was evaluated on a machine with 8×H100 GPUs.

| Benchmark     | Fox-1-1.6B-Instruct-v0.1 | Fox-1-1.6B | Qwen1.5-1.8B-Chat | Gemma-2B-It | OpenELM-1.1B-Instruct |
|---------------|--------------------------|------------|-------------------|-------------|-----------------------|
| GSM8k         | 39.20%                   | 36.39%     | 18.20%            | 4.47%       | 0.91%                 |
| MMLU          | 44.99%                   | 43.05%     | 45.77%            | 37.70%      | 25.70%                |
| ARC Challenge | 43.60%                   | 41.21%     | 38.99%            | 43.34%      | 40.36%                |
| HellaSwag     | 63.39%                   | 62.82%     | 60.31%            | 62.72%      | 71.67%                |
| TruthfulQA    | 44.12%                   | 38.66%     | 40.57%            | 45.86%      | 45.96%                |
| Winogrande    | 62.67%                   | 60.62%     | 59.51%            | 61.33%      | 61.96%                |
| Average       | 49.66%                   | 47.13%     | 43.89%            | 42.57%      | 41.09%                |

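To reproduce a single score, a minimal sketch with EleutherAI's lm-evaluation-harness (assuming the v0.4+ Python API and the repository id used above) might look like the following; exact numbers can vary slightly with the harness version.

```python
# Sketch: score one benchmark with EleutherAI's lm-evaluation-harness (v0.4+).
# The repository id and batch size are assumptions; results may vary by version.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tensoropera/Fox-1-1.6B-Instruct-v0.1,dtype=bfloat16",
    tasks=["gsm8k"],   # evaluated 5-shot in the table above
    num_fewshot=5,
    batch_size="auto",
)
print(results["results"]["gsm8k"])
```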