---
license: apache-2.0
language:
- fas
- prs
- per
datasets:
- allenai/nllb
- cis-lmu/Glot500
library_name: transformers
pipeline_tag: text-generation
tags:
- goldfish

---

# prs_arab_full

Goldfish is a suite of monolingual language models trained for 350 languages.
This model is the <b>Dari</b> (Arabic script) model trained on 162MB of data (all our data in the language), after accounting for an estimated byte premium of 1.66; content-matched text in Dari takes on average 1.66x as many UTF-8 bytes to encode as English.
The Goldfish models are trained primarily for comparability across languages and for low-resource languages; Goldfish performance for high-resource languages is not designed to be comparable with modern large language models (LLMs).
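To make the byte-premium scaling concrete, here is an illustrative calculation (not part of the original card) relating the raw and scaled training sizes reported under Model details below:

```python
# Illustrative only: the byte-premium scaled size is the raw size divided by the byte premium.
raw_mb = 270.74      # raw Dari training text (see "Model details" below)
scaled_mb = 162.705  # byte-premium scaled size (see "Model details" below)
print(round(raw_mb / scaled_mb, 2))  # ≈ 1.66, the estimated byte premium quoted above
```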

Note: prs_arab is an [individual language](https://iso639-3.sil.org/code_tables/639/data) code; the corresponding macrolanguage code, fas_arab (Persian), is also covered by a Goldfish model. Depending on your use case, that macrolanguage model may be the better choice.

All training and hyperparameter details are in our paper, [Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024)](https://github.com/tylerachang/goldfish/blob/main/goldfish_paper_20240815.pdf).

Training code and sample usage: https://github.com/tylerachang/goldfish

Sample usage is also available in this Google Colab: [link](https://colab.research.google.com/drive/1rHFpnQsyXJ32ONwCosWZ7frjOYjbGCXG?usp=sharing)
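For a quick start from Python, the following is a minimal sketch using the standard `transformers` API rather than the repository's own scripts. The Hub id `goldfish-models/prs_arab_full` and the prompt handling are assumptions, so check the Colab above for the canonical usage.

```python
# Minimal sketch, not the official example: assumes the model is hosted on the
# Hugging Face Hub as "goldfish-models/prs_arab_full".
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "goldfish-models/prs_arab_full"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# As noted under "Model details" below, training sequences begin with a [CLS] token.
# Whether the tokenizer adds it automatically depends on its config, so here we
# prepend it explicitly and disable automatic special tokens.
prompt = tokenizer.cls_token + "سلام"
inputs = tokenizer(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```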

## Model details:

To access all Goldfish model details programmatically, see https://github.com/tylerachang/goldfish/blob/main/model_details.json.
All models are trained with a [CLS] (same as [BOS]) token prepended, and a [SEP] (same as [EOS]) token separating sequences.
Details for this model specifically:

* Architecture: gpt2
* Parameters: 124770816
* Maximum sequence length: 512 tokens
* Training text data (raw): 270.74MB
* Training text data (byte premium scaled): 162.705MB
* Training tokens: 37549568 (x10 epochs)
* Vocabulary size: 50000
* Compute cost: 1.91613547118592e+17 FLOPs or ~18.1 NVIDIA A6000 GPU hours

Training datasets (percentages prior to deduplication):
* 99.28229%: [NLLB (CommonCrawl and ParaCrawl)](https://huggingface.co/datasets/allenai/nllb)
* 0.71771%: [Glot500](https://huggingface.co/datasets/cis-lmu/Glot500), including [NLLB_seed](https://github.com/facebookresearch/flores/blob/main/nllb_seed/README.md), [TICO](https://tico-19.github.io/)
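As a sketch of the programmatic access to model details mentioned at the top of this section, the shared details file can be fetched directly from GitHub. The exact JSON schema (e.g., whether it is keyed by model name) is an assumption here, so inspect the file if the lookup below returns nothing.

```python
# Minimal sketch: fetch the shared Goldfish model details file and look up this model.
import json
import urllib.request

url = "https://raw.githubusercontent.com/tylerachang/goldfish/main/model_details.json"
with urllib.request.urlopen(url) as f:
    details = json.load(f)

# Assumption: the file maps model names (e.g. "prs_arab_full") to detail dictionaries.
print(details.get("prs_arab_full"))
```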


## Citation

If you use this model, please cite:

```
@article{chang-etal-2024-goldfish,
  title={Goldfish: Monolingual Language Models for 350 Languages},
  author={Chang, Tyler A. and Arnett, Catherine and Tu, Zhuowen and Bergen, Benjamin K.},
  journal={Preprint},
  year={2024},
}
```