Commit ce6e440 (parent: 2945aab) by HugoLaurencon: Update README.md

Files changed (1): README.md (+14 −0)
@@ -38,6 +38,20 @@ size_categories:
 
  English
 
+ ## Data Fields
+
+ There are four fields: `images`, `texts`, `metadata`, and `general_metadata`.
+
+ For each example, the columns `images` and `texts` hold two lists of the same length; at each index, exactly one of the two elements is not `None`.
+
+ For example, for the web document `<image_1>text<image_2>`, `images` contains `[image_1, None, image_2]` and `texts` contains `[None, text, None]`.
+
+ The images are replaced by their URLs, and users have to download them themselves, for example with the `img2dataset` library.
+
+ `metadata` is a string that can be parsed into a list with `json.loads(example["metadata"])`. This list has the same length as the `images` and `texts` lists: at each index holding an image it contains a dictionary, and at each index holding a text it contains `None`. Each dictionary holds the metadata of the corresponding image (original source document, unformatted source, alt-text if present, ...).
+
+ Finally, `general_metadata` is a string that can be parsed into a dictionary containing the URL of the document and information about its location in the Common Crawl data.
+
  ### Visualization of OBELISC documents
 
  https://huggingface.co/spaces/HuggingFaceM4/obelisc_visualization
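As a minimal sketch of the field layout described above, here is how one example row might be walked and its metadata parsed. The row contents below (URLs, alt-texts) are made-up placeholders, not real dataset values; only the structure (aligned `images`/`texts` lists, JSON-encoded `metadata` and `general_metadata`) follows the description.

```python
import json

# Hypothetical example row mimicking the documented structure:
# images and texts are aligned lists where, per index, exactly one entry is not None.
example = {
    "images": ["https://example.com/img1.jpg", None, "https://example.com/img2.jpg"],
    "texts": [None, "some text", None],
    "metadata": json.dumps([{"alt_text": "a cat"}, None, {"alt_text": "a dog"}]),
    "general_metadata": json.dumps({"url": "https://example.com/page"}),
}

# metadata parses to a list aligned with images/texts;
# general_metadata parses to a dict with document-level info.
metadata = json.loads(example["metadata"])
general_metadata = json.loads(example["general_metadata"])

# Walk the document in order: at each index, exactly one of image/text is set.
for img_url, text, meta in zip(example["images"], example["texts"], metadata):
    if img_url is not None:
        print("image:", img_url, "metadata:", meta)
    else:
        print("text:", text)

print("document url:", general_metadata["url"])
```

Since images are stored as URLs, a download step (e.g. with `img2dataset`) would replace the URL strings with actual image files before training.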