---
language:
- eu
configs:
- config_name: euscrawl-v1.1
data_files:
- split: train
path: euscrawl-v1.1/train.jsonl.gz
- split: validation
path: euscrawl-v1.1/valid.jsonl.gz
- split: test
path: euscrawl-v1.1/test.jsonl.gz
- config_name: egunkaria
data_files:
- split: train
path: egunkaria/train.jsonl.gz
- split: validation
path: egunkaria/valid.jsonl.gz
- split: test
path: egunkaria/test.jsonl.gz
- config_name: booktegi
data_files:
- split: train
path: booktegi/train.jsonl.gz
- split: validation
path: booktegi/valid.jsonl.gz
- split: test
path: booktegi/test.jsonl.gz
- config_name: wikipedia
data_files:
- split: train
path: wikipedia/train.jsonl.gz
- split: validation
path: wikipedia/valid.jsonl.gz
- split: test
path: wikipedia/test.jsonl.gz
- config_name: culturax
data_files:
- split: train
path: CulturaX/train.jsonl.gz
- split: validation
path: CulturaX/valid.jsonl.gz
- split: test
path: CulturaX/test.jsonl.gz
- config_name: colossal-oscar
data_files:
- split: train
path: colossal-oscar/train.jsonl.gz
- split: validation
path: colossal-oscar/valid.jsonl.gz
- split: test
path: colossal-oscar/test.jsonl.gz
- config_name: hplt-v1
data_files:
- split: train
path: hplt-v1/train.jsonl.gz
- split: validation
path: hplt-v1/valid.jsonl.gz
- split: test
path: hplt-v1/test.jsonl.gz
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
- masked-language-modeling
annotations_creators:
- no-annotation
multilinguality:
- monolingual
---

# Latxa Corpus v1.1
This is the training corpus of Latxa v1.1, a family of large language models for Basque based on Llama 2.
- 💻 Repository: https://github.com/hitz-zentroa/latxa
- 📒 Blog Post: Latxa: An Open Language Model and Evaluation Suite for Basque
- 📖 Paper: Latxa: An Open Language Model and Evaluation Suite for Basque
- 📧 Point of Contact: [email protected]
## Dataset Summary
- Curated by: HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)
- Language(s): eu-ES
Latxa's training corpus combines various existing datasets, as well as some new ones that we hereby release. The raw document mix has been deduplicated and processed; here you'll find the final version of the corpus. Our data sources are introduced briefly below. For more details, consult our paper.
- EusCrawl v1.1 [new]: An updated version of EusCrawl v1 (Artetxe et al., 2022), including new content up to November 2023.
- Egunkaria [new]: Content from the Egunkaria daily newspaper.
- Booktegi [new]: EPUB books from https://www.booktegi.eus/.
- Wikipedia: The Basque Wikipedia dump from November 2023.
- CulturaX: The Basque portion of the CulturaX corpus (Nguyen et al., 2023).
- Colossal OSCAR: The Basque portion of several Colossal OSCAR releases.
- HPLT v1: The Basque portion of the HPLT v1 corpus (Aulamo et al., 2023).
For details on the license of each source corpus, refer to the reference listed alongside its entry above.
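Each config above exposes its splits as gzip-compressed JSON Lines files (one JSON object per line). The snippet below is a minimal sketch of reading such a split with the standard library; the `text` field name and the sample contents are illustrative assumptions, so inspect an actual split file for the exact schema.

```python
import gzip
import json

# Illustrative sample in the same on-disk format as the corpus splits
# (gzip-compressed JSON Lines; field names are an assumption).
sample_docs = [
    {"text": "Kaixo mundua!"},
    {"text": "Euskara hizkuntza ederra da."},
]

path = "sample_train.jsonl.gz"

# Write one JSON object per line, gzip-compressed.
with gzip.open(path, "wt", encoding="utf-8") as f:
    for doc in sample_docs:
        f.write(json.dumps(doc, ensure_ascii=False) + "\n")

# Read the split back, one document per line.
with gzip.open(path, "rt", encoding="utf-8") as f:
    docs = [json.loads(line) for line in f]

print(len(docs))  # 2
```

In practice, the Hugging Face `datasets` library can load a config directly via `load_dataset(<repo_id>, "euscrawl-v1.1")`, which handles the decompression and split selection for you (the repo ID depends on where the dataset is hosted).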
## Statistics

The number of documents in each dataset and split is given below:
| Dataset        | Train     | Valid  | Test   |
|----------------|-----------|--------|--------|
| CulturaX       | 1,283,429 | 13,096 | 13,098 |
| EusCrawl v1.1  | 1,758,084 | 17,861 | 17,736 |
| HPLT v1        | 367,238   | 3,797  | 3,699  |
| Colossal OSCAR | 233,753   | 2,483  | 2,276  |
| Wikipedia      | 400,902   | 4,063  | 4,092  |
| Egunkaria      | 172,876   | 1,766  | 1,764  |
| Booktegi       | 161       | 4      | 1      |
## Citation
To cite our work, please use:
```bibtex
@misc{etxaniz2024latxa,
  title={{L}atxa: An Open Language Model and Evaluation Suite for {B}asque},
  author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},
  year={2024},
  eprint={2403.20266},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```