---
license: apache-2.0
language:
- mlt
datasets:
- cis-lmu/Glot500
- allenai/c4
- legacy-datasets/wikipedia
- allenai/nllb
- oscar-corpus/OSCAR-2109
library_name: transformers
pipeline_tag: text-generation
tags:
- goldfish
---

# mlt_latn_1000mb

Goldfish is a suite of monolingual language models trained for 350 languages.
This model is the <b>Maltese</b> (Latin script) model trained on 1000 MB of data, after accounting for an estimated byte premium of 1.09; content-matched text in Maltese takes on average 1.09x as many UTF-8 bytes to encode as English.
The Goldfish models are trained primarily for comparability across languages and for low-resource languages; Goldfish performance for high-resource languages is not designed to be comparable with modern large language models (LLMs).
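
As a quick sanity check, the byte premium is consistent with the ratio of the raw to byte-premium-scaled training data sizes reported under Model details below. The sketch is illustrative only; the values are copied from this card and the 1.09 figure is the rounded ratio.

```python
# Illustrative sketch of the byte-premium scaling described above.
# Values are taken from the Model details section of this card.
raw_mb = 1088.47      # raw Maltese training text
scaled_mb = 1000.005  # English-equivalent ("byte premium scaled") size
byte_premium = raw_mb / scaled_mb
print(f"byte premium ≈ {byte_premium:.2f}")  # ≈ 1.09
```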

Note: mlt_latn is an [individual language](https://iso639-3.sil.org/code_tables/639/data) code. It is not covered by any macrolanguage code included in Goldfish (for script latn).

All training and hyperparameter details are in our paper, [Goldfish: Monolingual Language Models for 350 Languages (Chang et al., 2024)](https://github.com/tylerachang/goldfish/blob/main/goldfish_paper_20240815.pdf).

Training code and sample usage: https://github.com/tylerachang/goldfish

Sample usage is also available in this Google Colab: [link](https://colab.research.google.com/drive/1rHFpnQsyXJ32ONwCosWZ7frjOYjbGCXG?usp=sharing)
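
For a quick start without opening the notebook, a minimal text-generation sketch is shown below. The Hugging Face repo id `goldfish-models/mlt_latn_1000mb` and the Maltese prompt are assumptions for illustration; see the repository and Colab above for the authors' own usage examples.

```python
# Minimal sketch: generate Maltese text with the transformers pipeline.
# The repo id below is an assumption; adjust it to the actual Hugging Face path.
from transformers import pipeline

generator = pipeline("text-generation", model="goldfish-models/mlt_latn_1000mb")
output = generator("Il-belt kapitali ta' Malta hija", max_new_tokens=30, do_sample=True)
print(output[0]["generated_text"])
```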

## Model details

To access all Goldfish model details programmatically, see https://github.com/tylerachang/goldfish/model_details.json.
All models are trained with a [CLS] (same as [BOS]) token prepended, and a [SEP] (same as [EOS]) token separating sequences.
Details for this model specifically:

* Architecture: gpt2
* Parameters: 124770816
* Maximum sequence length: 512 tokens
* Training text data (raw): 1088.47 MB
* Training text data (byte premium scaled): 1000.005 MB
* Training tokens: 283158528 (x10 epochs)
* Vocabulary size: 50000
* Compute cost: 1.445110073524224e+18 FLOPs or ~136.6 NVIDIA A6000 GPU hours
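
Given the [CLS]/[SEP] convention noted above, a minimal sketch for scoring text with this model follows. It is not the authors' evaluation code; the repo id, the example sentence, and the manual [CLS] handling are assumptions for illustration.

```python
# Minimal sketch (not the official evaluation script): score Maltese text,
# making sure the [CLS]/[BOS] token is prepended as in training.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "goldfish-models/mlt_latn_1000mb"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)
model.eval()

text = "Bonġu, kif int?"  # "Hello, how are you?"
input_ids = tokenizer(text, return_tensors="pt").input_ids
# Prepend [CLS] (same as [BOS]) if the tokenizer did not add it automatically.
if tokenizer.cls_token_id is not None and input_ids[0, 0].item() != tokenizer.cls_token_id:
    cls = torch.tensor([[tokenizer.cls_token_id]], dtype=input_ids.dtype)
    input_ids = torch.cat([cls, input_ids], dim=1)

with torch.no_grad():
    loss = model(input_ids, labels=input_ids).loss
print(f"perplexity ≈ {loss.exp().item():.2f}")
```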

Training datasets (percentages prior to deduplication):
* 94.92627%: [Glot500](https://huggingface.co/datasets/cis-lmu/Glot500), including [Wortschatz Leipzig Data](https://wortschatz.uni-leipzig.de/en/download), [MaCoCu](https://macocu.eu/), [MC4](https://huggingface.co/datasets/allenai/c4), [OSCAR](https://oscar-project.org/), [Wikipedia Hugging Face](https://huggingface.co/datasets/legacy-datasets/wikipedia)
* 4.23828%: [NLLB (CommonCrawl and ParaCrawl)](https://huggingface.co/datasets/allenai/nllb)
* 0.42133%: [OSCAR 2021/09](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109)
* 0.41412%: [Wikipedia 2023/08](https://dumps.wikimedia.org/)

## Citation

If you use this model, please cite:

```
@article{chang-etal-2024-goldfish,
  title={Goldfish: Monolingual Language Models for 350 Languages},
  author={Chang, Tyler A. and Arnett, Catherine and Tu, Zhuowen and Bergen, Benjamin K.},
  journal={Preprint},
  year={2024},
}
```