---
language:
- eu
configs:
- config_name: booktegi
data_files:
- split: train
path: booktegi/train.jsonl.gz
- split: validation
path: booktegi/valid.jsonl.gz
- split: test
path: booktegi/test.jsonl.gz
- config_name: colossal-oscar
data_files:
- split: train
path: colossal-oscar/train.jsonl.gz
- split: validation
path: colossal-oscar/valid.jsonl.gz
- split: test
path: colossal-oscar/test.jsonl.gz
- config_name: culturax
data_files:
- split: train
path: CulturaX/train.jsonl.gz
- split: validation
path: CulturaX/valid.jsonl.gz
- split: test
path: CulturaX/test.jsonl.gz
- config_name: egunkaria
data_files:
- split: train
path: egunkaria/train.jsonl.gz
- split: validation
path: egunkaria/valid.jsonl.gz
- split: test
path: egunkaria/test.jsonl.gz
- config_name: euscrawl-v1.1
data_files:
- split: train
path: euscrawl-v1.1/train.jsonl.gz
- split: validation
path: euscrawl-v1.1/valid.jsonl.gz
- split: test
path: euscrawl-v1.1/test.jsonl.gz
- config_name: hplt-v1
data_files:
- split: train
path: hplt-v1/train.jsonl.gz
- split: validation
path: hplt-v1/valid.jsonl.gz
- split: test
path: hplt-v1/test.jsonl.gz
- config_name: wikipedia
data_files:
- split: train
path: wikipedia/train.jsonl.gz
- split: validation
path: wikipedia/valid.jsonl.gz
- split: test
path: wikipedia/test.jsonl.gz
task_categories:
- fill-mask
- text-generation
---
# Latxa Corpus v1.1
This is the training corpus of the Latxa v1.1 base language model, a Llama 2 model trained on Basque text.
- **Repository:** [https://github.com/hitz-zentroa/latxa](https://github.com/hitz-zentroa/latxa)
- **Paper:** [Latxa: An Open Language Model and Evaluation Suite for Basque](https://arxiv.org/)
- **Curated by:** HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)
- **Language(s):** eu
## Summary
Latxa's training corpus combines various existing datasets, as well as some new ones that we hereby release.
The raw document mix has been deduplicated and processed; here you'll find the final version of the corpus.
Our data sources are introduced briefly below.
For more details, consult our [paper](https://arxiv.org/).
* **EusCrawl v1.1 <sup color="red">[new]</sup>**: An updated version of [EusCrawl v1](https://www.ixa.eus/euscrawl/) [1], including new content up to November 2023.
* **Egunkaria <sup color="red">[new]</sup>**: Content from the Egunkaria daily newspaper.
* **Booktegi <sup color="red">[new]</sup>**: Content from [https://www.booktegi.eus/](https://www.booktegi.eus/) EPUB books.
* **Wikipedia**: Basque Wikipedia's [dump](https://dumps.wikimedia.org/) from November 2023.
* **CulturaX**: The Basque portion of the [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) corpus [2].
* **Colossal OSCAR**: The Basque portion of several [Colossal OSCAR](https://huggingface.co/datasets/oscar-corpus/colossal-oscar-1.0) releases.
* **HPLT v1**: The Basque portion of the [HPLT v1](https://hplt-project.org/datasets/v1) [3] corpus.
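Each source above is exposed as a separate config (see the YAML header), so subcorpora can be loaded independently with the `datasets` library. A minimal sketch follows; the Hub repository ID `HiTZ/latxa-corpus-v1.1` and the `text` field name are assumptions here, so adjust both to match the actual repository.

```python
from datasets import load_dataset

# Load one subcorpus by config name; config names match the YAML header
# above ("booktegi", "culturax", "euscrawl-v1.1", ...).
# NOTE: the Hub repository ID and the "text" field are assumptions.
ds = load_dataset("HiTZ/latxa-corpus-v1.1", "euscrawl-v1.1")

print(ds)                            # DatasetDict with train/validation/test splits
print(ds["train"][0]["text"][:200])  # first 200 characters of a document
```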
## Statistics
The size of each dataset in terms of number of documents can be found below:
| | Train | Valid | Test |
|----------------|----------:|-------:|-------:|
| CulturaX | 1,283,429 | 13,096 | 13,098 |
| EusCrawl v1.1 | 1,758,084 | 17,861 | 17,736 |
| HPLT v1 | 367,238 | 3,797 | 3,699 |
| Colossal OSCAR | 233,753 | 2,483 | 2,276 |
| Wikipedia | 400,902 | 4,063 | 4,092 |
| Egunkaria | 172,876 | 1,766 | 1,764 |
| Booktegi | 161 | 4 | 1 |
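Since the splits are stored as compressed JSONL files at the paths declared in the YAML header, these counts can also be checked directly, without the `datasets` library. A minimal sketch, assuming the files have been downloaded locally:

```python
import gzip
import json

# Stream a compressed JSONL split and count its documents; the relative
# path follows the layout declared in this card's YAML header.
count = 0
with gzip.open("booktegi/valid.jsonl.gz", "rt", encoding="utf-8") as f:
    for line in f:
        json.loads(line)  # raises if a line is not valid JSON
        count += 1

print(count)  # the table above lists 4 validation documents for Booktegi
```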
## Citation
To cite our work, please use:
```bibtex
@misc{etxaniz2024latxa,
title={{L}atxa: An Open Language Model and Evaluation Suite for {B}asque},
author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},
year={2024},
eprint={},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## References
[1] Mikel Artetxe, Itziar Aldabe, Rodrigo Agerri, Olatz Perez-de Viñaspre, and Aitor Soroa. 2022.
[Does corpus quality really matter for low-resource languages?](https://doi.org/10.18653/v1/2022.emnlp-main.499).
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 7383–7390, Abu Dhabi, United Arab Emirates.
Association for Computational Linguistics.
[2] Thuat Nguyen, Chien Van Nguyen, Viet Dac Lai, Hieu Man, Nghia Trung Ngo, Franck Dernoncourt, Ryan A. Rossi, and Thien Huu Nguyen. 2023.
[CulturaX: A cleaned, enormous, and multilingual dataset for large language models in 167 languages](https://arxiv.org/abs/2309.09400).
arXiv preprint arXiv:2309.09400.
[3] Mikko Aulamo, Nikolay Bogoychev, Shaoxiong Ji, Graeme Nail, Gema Ramírez-Sánchez, Jörg Tiedemann, Jelmer van der Linde, and Jaume Zaragoza. 2023.
[HPLT: High performance language technologies](https://aclanthology.org/2023.eamt-1.61).
In Proceedings of the 24th Annual Conference of the European Association for Machine Translation, pages 517–518, Tampere, Finland.
European Association for Machine Translation.