---
datasets:
- tiiuae/falcon-refinedweb
- HuggingFaceFW/fineweb-edu
language:
- en
---

<img src="https://huggingface.co/datasets/tiiuae/documentation-images/resolve/main/falcon_mamba/thumbnail.png" alt="drawing" width="800"/>

#  Table of Contents

0. [TL;DR](#tldr)
1. [Model Details](#model-details)
2. [Usage](#usage)
3. [Training Details](#training-details)
4. [Evaluation](#evaluation)


# TL;DR

# Model Details

## Model Description

- **Developed by:** [https://www.tii.ae](https://www.tii.ae)
- **Model type:** Causal decoder-only
- **Architecture:** Mamba
- **Language(s) (NLP):** Mainly English
- **License:** TII Falcon-Mamba License 2.0

<br>

# Usage

Below are some example scripts showing how to use the model with `transformers` (make sure to have the latest version of `transformers` installed, or build it from source):

## Using the PyTorch model

### Running the model on a CPU

<details>
<summary> Click to expand </summary>

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids

outputs = model.generate(input_ids, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```

</details>

### Running the model on a GPU

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct", device_map="auto")

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```

</details>

### Running the model on a GPU using `torch.compile`

<details>
<summary> Click to expand </summary>

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct", torch_dtype=torch.bfloat16).to(0)

model = torch.compile(model)

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```

</details>


### Running the model on a GPU using different precisions

#### FP16

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct", device_map="auto", torch_dtype=torch.float16)

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```

</details>

#### 4-bit

<details>
<summary> Click to expand </summary>

```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct")
model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-mamba-7b-instruct", device_map="auto", quantization_config=BitsAndBytesConfig(load_in_4bit=True))

# We use the tokenizer's chat template to format each message - see https://huggingface.co/docs/transformers/main/en/chat_templating
messages = [
    {"role": "user", "content": "How many helicopters can a human eat in one sitting?"},
]
input_text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")

outputs = model.generate(input_ids, max_new_tokens=30)
print(tokenizer.decode(outputs[0]))
```

</details>

<br>

# Training Details

## Training Data

Falcon-Mamba was trained on ~5,500 GT (gigatokens) of data, mainly coming from [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a large-volume web-only dataset that has been filtered and deduplicated.
Similar to the other models of the [Falcon](https://huggingface.co/tiiuae/falcon-11B) suite, Falcon-Mamba was trained using a multi-stage strategy to increase the context length from 2,048 to 8,192.
Moreover, inspired by the concept of curriculum learning, we carefully selected data mixtures throughout the training stages, considering both data diversity and complexity.
Note that at inference time the context length is not relevant, as the Mamba architecture has no inherent limit on long-range dependencies.
In the last training stage, a small portion of high-quality curated data was used to further enhance performance.

Overall, the data sources included RefinedWeb-English, high-quality technical data, code, and math data extracted from public sources.
In particular, we used samples from [Fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) during the last training stage.

The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7B)/[11B](https://huggingface.co/tiiuae/falcon-11B) tokenizer.

After pre-training, the model was further fine-tuned on instruction data.

## Training Procedure
Falcon-Mamba-7B was trained on 256 H100 80GB GPUs for the majority of the training, using a 3D parallelism strategy (TP=1, PP=1, DP=256) combined with ZeRO.

### Training Hyperparameters

| **Hyperparameter** | **Value**  | **Comment**                               |
|--------------------|------------|-------------------------------------------|
| Precision          | `bfloat16` |                                           |
| Optimizer          | AdamW      |                                           |
| Max learning rate  | 6.4e-4     | Following a WSD (warmup-stable-decay) learning rate schedule |
| Weight decay       | 1e-1       |                                           |
| Batch size         | 2048       |                                           |


The model was trained with the AdamW optimizer, a WSD (warmup-stable-decay) learning rate schedule, and a batch-size ramp-up from \\(b_{\mathrm{min}}=128\\) to \\(b_{\mathrm{max}}=2048\\) during the first 50 GT of training.
In the stable phase, we used a maximal learning rate \\(\eta_{\mathrm{max}}=6.4 \times 10^{-4}\\) and decayed it to the minimal value \\(\eta_{\mathrm{min}}=\frac{\eta_{\mathrm{max}}}{256}\\) with an exponential schedule over 500 GT.
Also, we applied *BatchScaling* during the ramp-up: the learning rate \\(\eta\\) was rescaled so that the Adam noise temperature \\(T_{\mathrm{noise}}\equiv\frac{\eta}{\sqrt{b}}\\) was kept constant.
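
For illustration, here is a minimal sketch (not the actual training code; `eta_max`, `b_min`, and `b_max` are just the values from the table above) of how the *BatchScaling* rule rescales the learning rate with the batch size so that \\(\frac{\eta}{\sqrt{b}}\\) stays constant during the ramp-up:

```python
import math

# Values taken from the hyperparameters above (illustrative only)
eta_max = 6.4e-4        # maximal learning rate, used at the maximal batch size
b_min, b_max = 128, 2048

def scaled_lr(batch_size: int) -> float:
    """Rescale the learning rate so that eta / sqrt(b) stays constant (BatchScaling)."""
    noise_temperature = eta_max / math.sqrt(b_max)  # Adam noise temperature, kept constant
    return noise_temperature * math.sqrt(batch_size)

for b in (b_min, 256, 512, 1024, b_max):
    print(f"batch size {b:4d} -> learning rate {scaled_lr(b):.2e}")
```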

### Speeds, Sizes, Times

The model training took roughly two months. 

<br>

# Evaluation

## Benchmarks

We evaluated our model on all benchmarks of the new version of the leaderboard using the `lm-evaluation-harness` package, and then normalized the evaluation results with Hugging Face score normalization.
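
As a reproduction aid, the snippet below is a minimal sketch of how such an evaluation could be launched with the `lm-evaluation-harness` Python API; the task names and the exact harness version used for the leaderboard are assumptions and may need to be adapted:

```python
# pip install lm-eval
import lm_eval

# Minimal sketch: task names are illustrative and depend on the installed lm-eval version
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tiiuae/falcon-mamba-7b-instruct,dtype=bfloat16",
    tasks=["leaderboard_ifeval", "leaderboard_bbh"],
    batch_size=8,
)
print(results["results"])
```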


| `model name`              |`IFEval`| `BBH` |`MATH LvL5`| `GPQA`| `MUSR`|`MMLU-PRO`|`Average`| 
|:--------------------------|:------:|:-----:|:---------:|:-----:|:-----:|:--------:|:-------:|
| ***Pure SSM models***     |        |       |           |       |       |          |         |
| `FalconMamba-7B`          |  33.36 | 19.88 |    3.63   |8.05   |10.86  | 14.47    |**15.04**|
| `TRI-ML/mamba-7b-rw`<sup>*</sup>| 22.46  | 6.71  | 0.45      | 1.12  | 5.51  | 1.69     | 6.25    |
|***Hybrid SSM-attention models***   |       |           |       |       |          |         |
|`recurrentgemma-9b`        | 30.76  | 14.80 | 4.83      | 4.70  | 6.60  | 17.88    |  13.20  |
| `Zyphra/Zamba-7B-v1`<sup>*</sup>      | 24.06  | 21.12 | 3.32      | 3.03  | 7.74  | 16.02    | 12.55   |
|***Transformer models***   |        |       |           |       |       |          |         |
| `Falcon2-11B`             | 32.61  | 21.94 |    2.34   | 2.80  | 7.53  | 15.44    |  13.78  |
| `Meta-Llama-3-8B`         | 14.55  | 24.50 |    3.25   | 7.38  | 6.24  | 24.55    |  13.41  |
| `Meta-Llama-3.1-8B`       | 12.70  | 25.29 |    4.61   | 6.15  | 8.98  | 24.95    |  13.78  |
| `Mistral-7B-v0.1`         | 23.86  | 22.02 |    2.49   | 5.59  | 10.68 | 22.36    |  14.50  |
| `Mistral-Nemo-Base-2407 (12B)`       | 16.83  | 29.37 |    4.98   | 5.82  | 6.52  | 27.46    |  15.08  |
| `gemma-7B`                | 26.59  | 21.12 |    6.42   | 4.92  | 10.98 | 21.64    |**15.28**|


We also evaluated our model on the benchmarks of the first version of the leaderboard using `lighteval`.


| `model name`                 |`ARC`|`HellaSwag`   |`MMLU` |`Winogrande`|`TruthfulQA`|`GSM8K`|`Average`         | 
|:-----------------------------|:------:|:---------:|:-----:|:----------:|:----------:|:-----:|:----------------:|
| ***Pure SSM models***        |        |           |       |            |            |       |                  |
| `FalconMamba-7B`<sup>*</sup>          | 62.03 |   80.82   | 62.11 |   73.64    |  53.42  | 52.54 |  **64.09**       |
| `TRI-ML/mamba-7b-rw`<sup>*</sup>         | 51.25  | 80.85     | 33.41 | 71.11      | 32.08      | 4.70  | 45.52            |
|***Hybrid SSM-attention models***|     |           |       |            |            |       |                  |
| `recurrentgemma-9b`<sup>**</sup>          |52.00   |   80.40   | 60.50 |   73.60    |   38.60    | 42.60 |  57.95           |
| `Zyphra/Zamba-7B-v1`<sup>*</sup>         | 56.14  | 82.23     | 58.11 | 79.87      | 52.88      | 30.78 |  60.00           |
|***Transformer models***      |        |           |       |            |            |       |                  |
| `Falcon2-11B`                | 59.73  | 82.91     | 58.37 | 78.30      | 52.56      | 53.83 | **64.28**        |
| `Meta-Llama-3-8B`            | 60.24  | 82.23     | 66.70 | 78.45      | 42.93      | 45.19 | 62.62            |
| `Meta-Llama-3.1-8B`            | 58.53  | 82.13     | 66.43 | 74.35      | 44.29      | 47.92 | 62.28            |
| `Mistral-7B-v0.1`            | 59.98  | 83.31     | 64.16 | 78.37      | 42.15      | 37.83 | 60.97            |
| `gemma-7B`                   | 61.09  |   82.20   | 64.56 |   79.01    |   44.79    | 50.87 |  63.75           |

Most of the evaluation results were taken from the two leaderboards. For the models marked with a single *star* we evaluated the tasks internally, while for the models marked with two *stars* the results were taken from the corresponding paper or model card.

## Throughput

This model achieves throughput and performance comparable to other transformer-based models that use optimized kernels such as Flash Attention 2. Make sure to install the optimized Mamba kernels with the following command:

```bash
pip install "causal-conv1d>=1.4.0" mamba-ssm
```

Refer to our technical report for more details about performance evaluation.
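
As a rough, unofficial illustration of how one might measure generation throughput once the kernels above are installed, the following sketch times a single generation on GPU (the prompt and token counts are arbitrary):

```python
import time

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-mamba-7b-instruct")
model = AutoModelForCausalLM.from_pretrained(
    "tiiuae/falcon-mamba-7b-instruct", torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)

# Warm-up run so kernel setup does not skew the timing
model.generate(**inputs, max_new_tokens=8)

torch.cuda.synchronize()
start = time.perf_counter()
outputs = model.generate(**inputs, max_new_tokens=128)
torch.cuda.synchronize()
elapsed = time.perf_counter() - start

new_tokens = outputs.shape[1] - inputs["input_ids"].shape[1]
print(f"{new_tokens / elapsed:.1f} tokens/s")
```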


<br>

# Technical Specifications 

## Model Architecture and Objective

Falcon-Mamba-7B is a causal decoder-only model trained on a causal language modeling task (i.e., predicting the next token).

The model is based on the Mamba architecture ([Gu et al., 2023](https://arxiv.org/abs/2312.00752)).

| **Hyperparameter** | **Value** | **Comment**                            |
|--------------------|-----------|----------------------------------------|
| Layers             | 64        | Number of layers                       |
| `d_model`          | 4096      | Hidden dimension                       |
| `d_state`          | 16        | The SSM state dimension                |
| Vocabulary         | 65024     | Vocabulary size                        |
| Sequence length    | 8192      | During the last training stages        |
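
As a quick sanity check, these values can be read directly from the model configuration; this is a minimal sketch, and the attribute names assume the `transformers` FalconMamba configuration:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("tiiuae/falcon-mamba-7b-instruct")

# Attribute names assume the FalconMamba configuration in transformers
print("Layers:         ", config.num_hidden_layers)  # 64
print("d_model:        ", config.hidden_size)        # 4096
print("d_state:        ", config.state_size)         # 16
print("Vocabulary size:", config.vocab_size)         # 65024
```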

## Compute Infrastructure

### Hardware

Falcon-Mamba-7B was trained on AWS SageMaker, using on average 256 H100 80GB GPUs in 32 p5 instances. 

### Software

Falcon-Mamba-7B was trained using an internal distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO and high-performance Triton kernels.

<br>

# Citation

*Paper coming soon* 😊.