---
language:
  - eu
configs:
  - config_name: booktegi
    data_files:
      - split: train
        path: booktegi/train.jsonl.gz
      - split: validation
        path: booktegi/valid.jsonl.gz
      - split: test
        path: booktegi/test.jsonl.gz
  - config_name: colossal-oscar
    data_files:
      - split: train
        path: colossal-oscar/train.jsonl.gz
      - split: validation
        path: colossal-oscar/valid.jsonl.gz
      - split: test
        path: colossal-oscar/test.jsonl.gz
  - config_name: culturax
    data_files:
      - split: train
        path: CulturaX/train.jsonl.gz
      - split: validation
        path: CulturaX/valid.jsonl.gz
      - split: test
        path: CulturaX/test.jsonl.gz
  - config_name: egunkaria
    data_files:
      - split: train
        path: egunkaria/train.jsonl.gz
      - split: validation
        path: egunkaria/valid.jsonl.gz
      - split: test
        path: egunkaria/test.jsonl.gz
  - config_name: euscrawl-v1.1
    data_files:
      - split: train
        path: euscrawl-v1.1/train.jsonl.gz
      - split: validation
        path: euscrawl-v1.1/valid.jsonl.gz
      - split: test
        path: euscrawl-v1.1/test.jsonl.gz
  - config_name: hplt-v1
    data_files:
      - split: train
        path: hplt-v1/train.jsonl.gz
      - split: validation
        path: hplt-v1/valid.jsonl.gz
      - split: test
        path: hplt-v1/test.jsonl.gz
  - config_name: wikipedia
    data_files:
      - split: train
        path: wikipedia/train.jsonl.gz
      - split: validation
        path: wikipedia/valid.jsonl.gz
      - split: test
        path: wikipedia/test.jsonl.gz
task_categories:
  - fill-mask
  - text-generation
---

# Latxa Corpus v1.1

This is the training corpus of the Latxa v1.1 base language model, a Llama 2 model trained on Basque text.

## Summary

Latxa's training corpus combines various existing datasets, as well as some new ones that we release here. The raw document mix has been deduplicated and processed; this repository contains the final version of the corpus. Our data sources are introduced briefly below (a loading sketch follows the list). For more details, consult our paper.

- EusCrawl v1.1 [new]: An updated version of EusCrawl v1 [1], including new content up to November 2023.
- Egunkaria [new]: Content from the Egunkaria daily newspaper.
- Booktegi [new]: EPUB books from https://www.booktegi.eus/.
- Wikipedia: The Basque Wikipedia dump from November 2023.
- CulturaX: The Basque portion of the CulturaX corpus [2].
- Colossal OSCAR: The Basque portion of several Colossal OSCAR releases.
- HPLT v1: The Basque portion of the HPLT v1 corpus [3].
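Each source above is exposed as a separate configuration (see the `configs` section of the metadata), so subcorpora can be loaded independently with the 🤗 `datasets` library. A minimal sketch, assuming the repository id is `HiTZ/latxa-corpus-v1.1` on the Hub:

```python
from datasets import load_dataset

# Config names match the `config_name` entries in the YAML metadata;
# the repo id HiTZ/latxa-corpus-v1.1 is assumed from this page.
wiki = load_dataset("HiTZ/latxa-corpus-v1.1", "wikipedia", split="train")
print(wiki[0])

# Larger subcorpora such as culturax can be streamed instead of
# downloaded in full.
culturax = load_dataset(
    "HiTZ/latxa-corpus-v1.1", "culturax", split="train", streaming=True
)
print(next(iter(culturax)))
```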

## Statistics

The number of documents in each dataset and split is given below:

| Dataset        | Train     | Valid  | Test   |
|----------------|-----------|--------|--------|
| CulturaX       | 1,283,429 | 13,096 | 13,098 |
| EusCrawl v1.1  | 1,758,084 | 17,861 | 17,736 |
| HPLT v1        | 367,238   | 3,797  | 3,699  |
| Colossal OSCAR | 233,753   | 2,483  | 2,276  |
| Wikipedia      | 400,902   | 4,063  | 4,092  |
| Egunkaria      | 172,876   | 1,766  | 1,764  |
| Booktegi       | 161       | 4      | 1      |
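Since the splits are gzip-compressed JSON Lines files, they can also be inspected directly, e.g. with pandas. A minimal sketch, assuming the files have been downloaded locally with the directory layout shown in the configs:

```python
import pandas as pd

# Read one split straight from its compressed JSON Lines file
# (local path assumed; mirrors booktegi/valid.jsonl.gz in the configs).
df = pd.read_json("booktegi/valid.jsonl.gz", lines=True, compression="gzip")
print(len(df))  # should match the table above: 4 documents
```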

## Citation

To cite our work, please use:

```bibtex
@misc{etxaniz2024latxa,
      title={{L}atxa: An Open Language Model and Evaluation Suite for {B}asque},
      author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},
      year={2024},
      eprint={},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## References

[1] Mikel Artetxe, Itziar Aldabe, Rodrigo Agerri, Olatz Perez-de Viñaspre, and Aitor Soroa. 2022. Does corpus quality really matter for low-resource languages?. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 7383–7390, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.

[2] Thuat Nguyen, Chien Van Nguyen, Viet Dac Lai, Hieu Man, Nghia Trung Ngo, Franck Dernoncourt, Ryan A. Rossi, and Thien Huu Nguyen. 2023. CulturaX: A cleaned, enormous, and multilingual dataset for large language models in 167 languages. arXiv preprint arXiv:2309.09400.

[3] Mikko Aulamo, Nikolay Bogoychev, Shaoxiong Ji, Graeme Nail, Gema Ramírez-Sánchez, Jörg Tiedemann, Jelmer van der Linde, and Jaume Zaragoza. 2023. HPLT: High performance language technologies. In Proceedings of the 24th Annual Conference of the European Association for Machine Translation, pages 517–518, Tampere, Finland. European Association for Machine Translation.