# Dataset Card for Flan V2

<!-- Provide a quick summary of the dataset. -->

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Homepage:** https://ai.googleblog.com/2023/02/the-flan-collection-advancing-open.html
- **Repository:** https://github.com/google-research/FLAN/tree/main/flan/v2
- **Paper:** https://arxiv.org/abs/2301.13688
### Dataset Summary

This is a processed version of the Flan V2 dataset.

I'm not affiliated with the creators; I'm just releasing the files in an easier-to-access format after processing them.

The authors of the Flan Collection recommend experimenting with different mixing ratios of the tasks to get the best downstream results.
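As one illustration, the snippet below is a minimal sketch of re-weighting two sub-mixtures with the Hugging Face `datasets` library; the toy records, the 30/70 ratio, and the choice of sub-mixtures are placeholders, not a recommendation from the Flan authors.

```python
from datasets import Dataset, interleave_datasets

# Toy stand-ins for two sub-mixtures (e.g. cot and flan); in practice these
# would be loaded from the JSONL files in this repository.
cot = Dataset.from_list([{"input": "cot example", "target": "...", "task": "cot_task"}] * 10)
flan = Dataset.from_list([{"input": "flan example", "target": "...", "task": "flan_task"}] * 10)

# Sample from the two sub-mixtures with a 30/70 mixing ratio.
mixed = interleave_datasets([cot, flan], probabilities=[0.3, 0.7], seed=42)
print(mixed[0])
```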
A few files are still missing from this version. These are minor hits to the total size of the collection (on the order of MB compared to GB), and once they are fixed I will upload a complete version.
### Dataset Structure

#### Data Instances

The collection is made up of five sub-mixtures: Flan 2021 (flan), P3 (t0), Super-Natural Instructions (niv2), Chain-of-thought (cot), and Dialog (dialog).
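For convenience, the same abbreviations as a small Python mapping; nothing beyond the list above is assumed:

```python
# Sub-mixture abbreviations and their full names, as listed above.
SUBMIXTURES = {
    "flan": "Flan 2021",
    "t0": "P3",
    "niv2": "Super-Natural Instructions",
    "cot": "Chain-of-thought",
    "dialog": "Dialog",
}
```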
#### Data Fields

Instruction data comes in a few formats:

- Few Shot (fs)
- Zero Shot (zs)
- Options Provided in context (i.e. multiple choice, pick one) (opt)
- No Options Provided (noopt)

Each combination of the above sub-mixtures and formats is saved as a JSONL file with the following schema: `{"input": ..., "target": ..., "task": ...}`.
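The sketch below enumerates those combinations and parses one record; the record values are invented placeholders, and it assumes the shot setting (fs/zs) and the options setting (opt/noopt) vary independently.

```python
import itertools
import json

# Sub-mixtures and format axes described above.
submixtures = ["flan", "t0", "niv2", "cot", "dialog"]
shots = ["zs", "fs"]        # zero shot / few shot
options = ["opt", "noopt"]  # options provided in context or not

# Assuming the two format axes combine independently, there is one JSONL
# file per (sub-mixture, shot, options) combination: 5 * 2 * 2 = 20.
combinations = list(itertools.product(submixtures, shots, options))
print(len(combinations))

# One JSONL line follows the schema {"input": ..., "target": ..., "task": ...};
# the values here are invented placeholders.
line = '{"input": "Is the sky blue? OPTIONS: - yes - no", "target": "yes", "task": "placeholder_task"}'
record = json.loads(line)
print(record["input"], record["target"], record["task"])
```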
#### Data Splits

Everything is saved as a train split; there are no separate validation or test splits.
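A minimal loading sketch with the Hugging Face `datasets` library; the file name is a placeholder for one of the JSONL files in this repository.

```python
from datasets import load_dataset

# Placeholder file name; substitute an actual JSONL file from this repo.
ds = load_dataset("json", data_files="cot_zs_opt.jsonl", split="train")

print(ds)            # a single train split
print(ds[0].keys())  # fields: input, target, task
```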