Felladrin committed
Commit 4e0d205
1 Parent(s): e7eed82

Create README.md

Files changed (1):
  1. README.md +116 -0
README.md ADDED
@@ -0,0 +1,116 @@
---
license: apache-2.0
task_categories:
- question-answering
- text-generation
annotations_creators:
- crowdsourced
- expert-generated
language:
- amh
- arb
- ary
- ars
- acq
- arz
- apc
- ben
- ceb
- dan
- deu
- ell
- eng
- eus
- fil
- fin
- fra
- gle
- guj
- hat
- hau
- hin
- hun
- ibo
- ind
- ita
- jav
- jpn
- kan
- kir
- kor
- kur
- lit
- mal
- mar
- mlg
- msa
- mya
- nep
- nld
- nso
- nya
- pan
- pes
- pol
- por
- pus
- rus
- sin
- sna
- snd
- som
- spa
- sqi
- srp
- sun
- swa
- swe
- tam
- tel
- tha
- tur
- ukr
- urd
- vie
- wol
- xho
- yor
- zho
- zul
language_creators:
- crowdsourced
- expert-generated
multilinguality:
- multilingual
size_categories:
- 100K<n<1M
---

[CohereForAI/aya_dataset](https://huggingface.co/datasets/CohereForAI/aya_dataset) in ChatML format, ready to use in [HuggingFace TRL's SFT Trainer](https://huggingface.co/docs/trl/main/en/sft_trainer).

Python code used for conversion:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Felladrin/Llama-160M-Chat-v1")

dataset = load_dataset("CohereForAI/aya_dataset", split="train")

def format(columns):
    messages = [
        {
            "role": "user",
            "content": columns["inputs"].strip(),
        },
        {
            "role": "assistant",
            "content": columns["targets"].strip(),
        },
    ]

    return { "text": tokenizer.apply_chat_template(messages, tokenize=False) }

dataset.map(format).select_columns(['text', 'language', 'language_code', 'annotation_type', 'user_id']).to_parquet("train.parquet")
```
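
As a usage reference (not part of the original conversion script), here is a minimal sketch of fine-tuning on the converted `train.parquet` with TRL's `SFTTrainer`. The hyperparameters are placeholders, and the `dataset_text_field`/`max_seq_length` keyword arguments reflect older TRL releases; newer versions move these options into `SFTConfig`.

```python
# Hedged sketch: train on the converted dataset with TRL's SFTTrainer.
# Paths and hyperparameters are placeholders, not values from the original README.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the parquet file produced by the conversion script above.
dataset = load_dataset("parquet", data_files="train.parquet", split="train")

trainer = SFTTrainer(
    model="Felladrin/Llama-160M-Chat-v1",  # the model whose chat template was applied
    train_dataset=dataset,
    dataset_text_field="text",  # column written by the conversion script
    max_seq_length=2048,
    args=TrainingArguments(
        output_dir="sft-output",
        per_device_train_batch_size=8,
        num_train_epochs=1,
    ),
)

trainer.train()
```

Since the `text` column already contains fully templated ChatML conversations, no extra formatting function or chat template needs to be applied at training time.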