MADLAD-400: A Multilingual And Document-Level Large Audited Dataset
Abstract
We introduce MADLAD-400, a manually audited, general-domain, 3T-token monolingual dataset based on CommonCrawl, spanning 419 languages. We discuss the limitations revealed by self-auditing MADLAD-400, and the role data auditing played in the dataset creation process. We then train and release a 10.7B-parameter multilingual machine translation model on 250 billion tokens covering over 450 languages using publicly available data, find that it is competitive with models that are significantly larger, and report results on different domains. In addition, we train an 8B-parameter language model and assess its few-shot translation performance. We make the baseline models available to the research community.
Community
Does anyone have information on the dataset release? Please share in this thread.
From the datasheet in the paper:
- How will the dataset be distributed (e.g., tarball on website, API, GitHub)? MADLAD-400 is made available through a GCP bucket.
- When will the dataset be distributed? June 2023
- Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? AI2 has made a version of this data available under the ODC-BY license. Users are also bound by the CommonCrawl terms of use in respect of the content contained in the dataset.
- Who is supporting/hosting/maintaining the dataset? An external organization, AI2, is hosting the dataset.
Related GitHub issue: https://github.com/google-research/google-research/issues/1741
Looks like the dataset has now been released here https://huggingface.co/datasets/allenai/MADLAD-400