---
license: cc
language: 
- abc
- ahk
- bfn
- bjn
- bkx
- brb
- brv
- bya
- bzi
- ceb
- cgc
- cmo
- ddg
- dmg
- dnw
- dtp
- dtr
- enc
- fil
- gal
- hil
- hre
- hro
- idt
- ilo
- ind
- jra
- kak
- khb
- khm
- kqr
- krr
- ksw
- kvt
- lao
- lhu
- llg
- lsi
- lwl
- mdr
- mgm
- mhx
- mkz
- mnw
- mqj
- mry
- msb
- mya
- nod
- nst
- nxa
- nxl
- pag
- pce
- pdu
- pea
- pmf
- sea
- sgd
- shn
- sml
- snl
- tdt
- tet
- tha
- tkd
- tnt
- tom
- tpu
- vie
- war
- wms
- wnk
- xmm
- yet
- yin
- zlm
pretty_name: Bloom Lm
task_categories: 
- self-supervised-pretraining
tags: 
- self-supervised-pretraining
---


This is a Bloom Library dataset developed for the self-supervised language modeling task.
It covers 74 languages indigenous to Southeast Asia (SEA), amounting to roughly 21K data points in total.
The dataset is released under the Creative Commons (CC) license family, and each data point carries its own specific license.
Before using this dataloader, please accept the acknowledgement at https://huggingface.co/datasets/sil-ai/bloom-lm and use `huggingface-cli login` for authentication.
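
If you prefer to authenticate programmatically instead of through the CLI, a minimal sketch using the `huggingface_hub` library (assuming you already have a valid access token) is:

```
from huggingface_hub import login

# Equivalent to running `huggingface-cli login` in a terminal;
# you will be prompted to paste a Hugging Face access token.
login()
```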


## Languages

abc, ahk, bfn, bjn, bkx, brb, brv, bya, bzi, ceb, cgc, cmo, ddg, dmg, dnw, dtp, dtr, enc, fil, gal, hil, hre, hro, idt, ilo, ind, jra, kak, khb, khm, kqr, krr, ksw, kvt, lao, lhu, llg, lsi, lwl, mdr, mgm, mhx, mkz, mnw, mqj, mry, msb, mya, nod, nst, nxa, nxl, pag, pce, pdu, pea, pmf, psp_ceb, sea, sgd, shn, sml, snl, tdt, tet, tha, tkd, tnt, tom, tpu, vie, war, wms, wnk, xmm, yet, yin, zlm

## Supported Tasks

Self-Supervised Pretraining

## Dataset Usage
### Using `datasets` library
```
from datasets import load_dataset

dset = load_dataset("SEACrowd/bloom_lm", trust_remote_code=True)
```
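
To sanity-check what was loaded, here is a short sketch (assuming the default config exposes a `train` split whose examples carry `id` and `text` fields, per the SEACrowd self-supervised pretraining schema):

```
from datasets import load_dataset

dset = load_dataset("SEACrowd/bloom_lm", trust_remote_code=True)

# Inspect the available splits and preview one example.
print(dset)
example = dset["train"][0]    # assumes a "train" split
print(example["text"][:200])  # assumes a "text" field
```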
### Using `seacrowd` library
```
import seacrowd as sc

# Load the dataset using the default config
dset = sc.load_dataset("bloom_lm", schema="seacrowd")

# Check all available subsets (config names) of the dataset
print(sc.available_config_names("bloom_lm"))

# Load the dataset using a specific config
dset = sc.load_dataset_by_config_name(config_name="<config_name>")
```
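
Since each language ships as its own subset, you can filter the config list to load a single language. The sketch below assumes config names contain the ISO 639-3 code (e.g. `tha` for Thai); verify against the printed list first:

```
import seacrowd as sc

# Filter configs by language code; the substring match is an
# assumption about the naming pattern, so check the full list.
configs = sc.available_config_names("bloom_lm")
tha_configs = [name for name in configs if "tha" in name]
print(tha_configs)

if tha_configs:
    dset = sc.load_dataset_by_config_name(config_name=tha_configs[0])
```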

More details on how to load the `seacrowd` library can be found [here](https://github.com/SEACrowd/seacrowd-datahub?tab=readme-ov-file#how-to-use).


## Dataset Homepage

[https://huggingface.co/datasets/sil-ai/bloom-lm](https://huggingface.co/datasets/sil-ai/bloom-lm)

## Dataset Version

Source: 0.1.0. SEACrowd: 2024.06.20.

## Dataset License

Creative Commons license family (cc)

## Citation

If you are using the **Bloom Lm** dataloader in your work, please cite the following:
```

@inproceedings{leong-etal-2022-bloom,
    title = "Bloom Library: Multimodal Datasets in 300+ Languages for a Variety of Downstream Tasks",
    author = "Leong, Colin  and
      Nemecek, Joshua  and
      Mansdorfer, Jacob  and
      Filighera, Anna  and
      Owodunni, Abraham  and
      Whitenack, Daniel",
    editor = "Goldberg, Yoav  and
      Kozareva, Zornitsa  and
      Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.590",
    doi = "10.18653/v1/2022.emnlp-main.590",
    pages = "8608--8621",
}


@article{lovenia2024seacrowd,
    title={SEACrowd: A Multilingual Multimodal Data Hub and Benchmark Suite for Southeast Asian Languages}, 
    author={Holy Lovenia and Rahmad Mahendra and Salsabil Maulana Akbar and Lester James V. Miranda and Jennifer Santoso and Elyanah Aco and Akhdan Fadhilah and Jonibek Mansurov and Joseph Marvin Imperial and Onno P. Kampman and Joel Ruben Antony Moniz and Muhammad Ravi Shulthan Habibi and Frederikus Hudi and Railey Montalan and Ryan Ignatius and Joanito Agili Lopo and William Nixon and Börje F. Karlsson and James Jaya and Ryandito Diandaru and Yuze Gao and Patrick Amadeus and Bin Wang and Jan Christian Blaise Cruz and Chenxi Whitehouse and Ivan Halim Parmonangan and Maria Khelli and Wenyu Zhang and Lucky Susanto and Reynard Adha Ryanda and Sonny Lazuardi Hermawan and Dan John Velasco and Muhammad Dehan Al Kautsar and Willy Fitra Hendria and Yasmin Moslem and Noah Flynn and Muhammad Farid Adilazuarda and Haochen Li and Johanes Lee and R. Damanhuri and Shuo Sun and Muhammad Reza Qorib and Amirbek Djanibekov and Wei Qi Leong and Quyet V. Do and Niklas Muennighoff and Tanrada Pansuwan and Ilham Firdausi Putra and Yan Xu and Ngee Chia Tai and Ayu Purwarianti and Sebastian Ruder and William Tjhi and Peerat Limkonchotiwat and Alham Fikri Aji and Sedrick Keh and Genta Indra Winata and Ruochen Zhang and Fajri Koto and Zheng-Xin Yong and Samuel Cahyawijaya},
    year={2024},
    eprint={2406.10118},
    journal={arXiv preprint arXiv: 2406.10118}
}

```