---
license: cc-by-sa-4.0
language:
- ja
---

# Dataset Card for llm-japanese-dataset-vanilla in the Aya format

This dataset is a format conversion of the original v1.0.0 release and is published here under the same CC-BY-SA 4.0 license and conditions. It contains Japanese instruction-like data intended for LLM construction/tuning.

The dataset contains only a 'train' split, with ~2.46M rows of data.

Thanks to Jian Wu (@wujian123) for the help in converting and validating the dataset.

## Citation

If you use this dataset version, feel free to cite/footnote this Hugging Face dataset repo, but please also cite the original dataset publication.

**BibTeX:**

```
@preprint{Suzuki2023-llmvanilla,
  title={{From Base to Conversational: Japanese Instruction Dataset and Tuning Large Language Models}},
  author={Masahiro Suzuki and Masanori Hirano and Hiroki Sakaji},
  doi={10.48550/arXiv.2309.03412},
  archivePrefix={arXiv},
  arxivId={2309.03412},
  year={2023}
}
```

## Dataset Details

For the original llm-japanese-dataset-vanilla and more details, please see https://huggingface.co/datasets/izumi-lab/llm-japanese-dataset-vanilla.

## Format Conversion Details

Each row of the original dataset uses three columns ('instruction', 'input', and 'output'), with 'input' being optional. Analysis of the dataset showed that when 'input' content exists, it can simply be appended to 'instruction'. Another common scenario has 'instruction'/'input' acting as a question and 'output' being only a very short answer. For those cases, we prepend a general answer prefix, "この質問の答えは" (meaning "The answer to this question is"), to the short answer. The resulting converted dataset uses only the two columns specified by the Aya format: 'inputs' and 'targets'. A minimal sketch of this conversion is shown below.
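The following Python sketch illustrates the conversion described above; it is not the exact script used. The short-answer length threshold (`SHORT_ANSWER_THRESHOLD`), the newline separator when appending 'input', and the helper name `to_aya` are assumptions made for illustration only.

```python
# Minimal sketch of the described conversion (assumptions noted in comments).
from datasets import load_dataset

ANSWER_PREFIX = "この質問の答えは"  # "The answer to this question is"
SHORT_ANSWER_THRESHOLD = 10  # hypothetical cutoff; the card does not state one

def to_aya(row):
    # Append the optional 'input' to 'instruction' to form the Aya 'inputs'
    # column (newline separator is an assumption).
    inputs = row["instruction"]
    if row.get("input"):
        inputs = f"{inputs}\n{row['input']}"
    # Prepend the general answer prefix when 'output' is a very short answer.
    targets = row["output"]
    if len(targets) <= SHORT_ANSWER_THRESHOLD:
        targets = f"{ANSWER_PREFIX}{targets}"
    return {"inputs": inputs, "targets": targets}

ds = load_dataset("izumi-lab/llm-japanese-dataset-vanilla", split="train")
aya_ds = ds.map(to_aya, remove_columns=["instruction", "input", "output"])
```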