---
language:
- en
license: cc-by-4.0
size_categories:
- 100M<n<1B
pretty_name: OBELISC
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
- config_name: opt_out_docs_removed
  data_files:
  - split: train
    path: opt_out_docs_removed/train-*
dataset_info:
- config_name: default
  features:
  - name: images
    sequence: string
  - name: metadata
    dtype: string
  - name: general_metadata
    dtype: string
  - name: texts
    sequence: string
  splits:
  - name: train
    num_bytes: 715724717192
    num_examples: 141047697
  download_size: 71520629655
  dataset_size: 715724717192
- config_name: opt_out_docs_removed
  features:
  - name: images
    sequence: string
  - name: metadata
    dtype: string
  - name: general_metadata
    dtype: string
  - name: texts
    sequence: string
  splits:
  - name: train
    num_bytes: 684638314215
    num_examples: 134648855
  download_size: 266501092920
  dataset_size: 684638314215
---
# Dataset Card for OBELISC

## Dataset Description

- **Repository:** https://github.com/huggingface/OBELISC
- **Paper:** OBELISC: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents
- **Point of Contact:** [email protected]

### Dataset Summary

`OBELISC` is an open, massive, and curated collection of interleaved image-text web documents, containing 141M documents, 115B text tokens, and 353M images.

This dataset can be used to train large multimodal models, significantly improving their reasoning abilities compared to models trained solely on image-text pairs. Please refer to our paper for further details about the construction of the dataset, quantitative and qualitative analyses of `OBELISC`, and the experiments we conducted.

### Languages

English

## Data Fields

There are 4 fields: `images`, `texts`, `metadata` and `general_metadata`.

For each example, the `images` and `texts` columns hold two lists of the same length; at each index, exactly one of the two elements is not `None`.

For example, the web document `<image_1>text<image_2>` is stored as `[image_1, None, image_2]` in `images` and `[None, text, None]` in `texts`.

The images are replaced by their URLs, and users have to download them themselves, for example with the `img2dataset` library, as sketched below.
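For instance, one could collect the image URLs from a sample of documents and fetch them in bulk. The following is a minimal sketch, assuming `img2dataset`'s Python entry point `download`; the sample size, file names, and parameters are illustrative, not prescribed by the dataset:

```
from datasets import load_dataset
from img2dataset import download

# Stream a small sample of documents and write their image URLs to a file.
ds = load_dataset("HuggingFaceM4/OBELISC", split="train", streaming=True)
with open("urls.txt", "w") as f:
    for i, example in enumerate(ds):
        if i >= 1000:  # illustrative sample size
            break
        for url in example["images"]:
            if url is not None:
                f.write(url + "\n")

# Download the images listed in urls.txt into the images/ folder.
download(url_list="urls.txt", output_folder="images", thread_count=64)
```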

In `metadata`, there is a string that can be parsed into a list with `json.loads(example["metadata"])`. This list has the same length as the `images` and `texts` lists: it holds a dictionary at each index where there is an image, and `None` at each index where there is a text. Each dictionary contains the metadata of the corresponding image (original source document, unformatted source, alt-text if present, ...).

Finally, in `general_metadata`, there is a string that can be parsed into a dictionary with `json.loads(example["general_metadata"])`, containing the URL of the document and information about its location in the Common Crawl data.
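As a minimal sketch of the structure described above, streaming a single example to avoid the full download:

```
import json

from datasets import load_dataset

# Stream one example rather than downloading the full dataset.
ds = load_dataset("HuggingFaceM4/OBELISC", split="train", streaming=True)
example = next(iter(ds))

# `metadata` parses into a list aligned with `images`/`texts`;
# `general_metadata` parses into a dictionary.
metadata = json.loads(example["metadata"])
general_metadata = json.loads(example["general_metadata"])

# Walk the interleaved sequence: exactly one of (image, text) is set at each index.
for image_url, text, meta in zip(example["images"], example["texts"], metadata):
    if image_url is not None:
        print("IMAGE:", image_url, "| metadata:", meta)
    else:
        print("TEXT:", text[:80])

print("General metadata:", general_metadata)
```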

## Data Splits

There is only one split, `train`, that contains 141,047,697 examples.

## Size

`OBELISC`, with images replaced by their URLs, weighs 666.6 GB in Arrow format and 377 GB in the uploaded `parquet` format.

## Configs

The default config, loaded when no config name is specified, corresponds to the original version of the dataset:
```
from datasets import load_dataset

ds = load_dataset("HuggingFaceM4/OBELISC")
```

When building the dataset, we sent every image URL to the Spawning AI API and removed all opted-out images.
However, we noticed afterward that an image might not be opted out itself while the whole web page containing it is.
This is why we created another config that additionally filters out opted-out web pages, which can be loaded as follows:
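```
from datasets import load_dataset

ds = load_dataset("HuggingFaceM4/OBELISC", "opt_out_docs_removed")
```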

### Visualization of OBELISC documents

https://huggingface.co/spaces/HuggingFaceM4/obelisc_visualization

### Research paper

https://arxiv.org/abs/2306.16527

### GitHub repository

https://github.com/huggingface/OBELISC

## Terms of Use

By using the dataset, you agree to comply with the original licenses of the source content as well as the dataset license (CC-BY-4.0). Additionally, if you use this dataset to train a Machine Learning model, you agree to disclose your use of the dataset when releasing the model or an ML application using the model.

### Licensing Information

The dataset is released under the CC-BY-4.0 license.

### Citation Information

If you use this dataset, please cite:
```
@inproceedings{lauren{\c{c}}on2023obe,
  title={OBELISC: An Open Web-Scale Filtered Dataset of Interleaved Image-Text Documents},
  author={Hugo Lauren{\c{c}}on and Lucile Saulnier and L{\'e}o Tronchon and Stas Bekman and Amanpreet Singh and Anton Lozhkov and Thomas Wang and Siddharth Karamcheti and Alexander M Rush and Douwe Kiela and Matthieu Cord and Victor Sanh},
  year={2023}
}
```