---
license: odc-by
task_categories:
- text-generation
language:
- en
- zh
tags:
- pretrain
- multi-modal
size_categories:
- 10B<n<100B
---

# InfiMM-WebMath-40B Dataset

[ArXiv](https://arxiv.org/abs/2409.12568) | [PDF](https://arxiv.org/pdf/2409.12568)

**InfiMM-WebMath-40B** is a large-scale, open-source multimodal dataset specifically designed for mathematical reasoning tasks. It incorporates both text and images, extracted from web documents, to advance the pre-training of Multimodal Large Language Models (MLLMs). The dataset is tailored to support sophisticated reasoning tasks that involve understanding both text and visual elements like diagrams, figures, and geometric plots.

## Dataset Overview

The **InfiMM-WebMath-40B** dataset includes:

- **24 million** web documents.
- **85 million** image URLs.
- **40 billion** text tokens.

These documents were sourced from **Common Crawl** data snapshots (2019–2023), filtered to focus on high-quality mathematical and scientific content in both English and Chinese.

## Data Structure

The dataset is organized in a format that captures both text and images in their original order, ensuring accurate interleaving between the two modalities. The structure is as follows:

```json
{
  "URL": "...",           # The URL of the source document.
  "text_list": [...],     # List of extracted text segments; an entry is None where the element is an image.
  "image_list": [...],    # List of image URLs; an entry is None where the element is a text segment.
  "metadata": {           # Metadata about the extraction process (e.g., processing details, timestamps).
    "ft_lang_label",      # Language detected by fastText
    "ft_lang_prob",       # Probability of the language detected by fastText
    "math_prob",          # First-round math-content score from the high-recall fastText model
    "size",
    "snap",               # Timestamp of the Common Crawl snapshot
    "text_gpt3_token_len",
    "char_repetition_ratio",
    "word_repetition_ratio",
    "special_character_ratio",
    "punctuation_ratio",
    "nsfw_num_words",     # Number of NSFW words
    "has_unicode_error",  # Whether any Unicode error exists
    "math_prob_llama3",   # Second-round math-content score from the high-precision fastText model
  }
}
```


### Interleaved Text and Images

The **text_list** and **image_list** are designed as parallel arrays, maintaining the sequence of the document. This interleaving structure allows models to reconstruct the flow of the original document:

- **If `text_list[i]` contains text**, then `image_list[i]` is `None`, indicating that the content at this position is text.
- **If `text_list[i]` is `None`**, then `image_list[i]` contains a URL to an image at that position in the document.

This interleaving of text and images ensures that models trained on this dataset can process the content in the same way a human would, following the logical flow between text explanations and accompanying visual aids.
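The parallel-array convention above can be sketched in a few lines. A minimal example of reconstructing the document flow from `text_list` and `image_list`; the sample record below is a hypothetical illustration, not taken from the dataset:

```python
def reconstruct(record):
    """Walk the parallel lists and return (kind, value) pairs in document order."""
    elements = []
    for text, image in zip(record["text_list"], record["image_list"]):
        if text is not None:
            elements.append(("text", text))
        elif image is not None:
            elements.append(("image", image))
    return elements

# Hypothetical record illustrating the interleaved format.
sample = {
    "text_list": ["The Pythagorean theorem states:", None, "as shown above."],
    "image_list": [None, "https://example.com/figure1.png", None],
}

for kind, value in reconstruct(sample):
    print(kind, value)
```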

## Data Collection and Filtering Pipeline

The **InfiMM-WebMath-40B** dataset was created through a comprehensive multi-stage filtering and extraction process, starting with over 120 billion web pages from the Common Crawl repository. The key steps in this pipeline are outlined below:

1. **Language Filtering**: The first step involved filtering for English and Chinese content. We utilized **Trafilatura** to extract text from web pages, and **LangDetect** to efficiently identify the language, ensuring only relevant multilingual content was retained.
2. **High Recall Math Filtering**: To capture as much math-related content as possible, we employed a modified version of **Resiliparse** for HTML parsing. In conjunction with a fastText model optimized for high recall, this phase ensured that any potentially mathematical content was preserved.
3. **Deduplication**: MinHash was used for fuzzy text deduplication, combined with exact web-page URL matching across neighboring Common Crawl snapshots.
4. **Rule-Based Filtering**: This step applied specific filtering rules to remove irrelevant or low-quality content, such as documents containing NSFW material or boilerplate “lorem ipsum,” enhancing the dataset’s overall quality.
5. **High Precision Math Filtering**: A second pass was performed using a FastText model, this time tuned for high precision, to ensure only highly relevant mathematical content remained in the dataset. This refinement step further improved the dataset’s focus and relevance for mathematical reasoning tasks.
6. **Image Filtering**: Finally, rule-based filtering was applied to images, removing irrelevant or extraneous visuals (e.g., logos, banners) to ensure that the remaining images were aligned with the mathematical content.
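To make the deduplication step (3) concrete, here is a toy MinHash sketch using only the standard library. This is an illustration of the signature idea, not the authors' implementation; production pipelines use banded locality-sensitive hashing over many more permutations:

```python
import hashlib

def shingles(text, n=3):
    """Break a document into a set of n-word shingles."""
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def minhash(text, num_perm=64):
    """Signature: for each of num_perm seeded hashes, keep the minimum over all shingles."""
    return [
        min(int(hashlib.md5(f"{seed}:{s}".encode()).hexdigest(), 16)
            for s in shingles(text))
        for seed in range(num_perm)
    ]

def jaccard_estimate(a, b):
    """Fraction of matching signature slots estimates the Jaccard similarity."""
    sa, sb = minhash(a), minhash(b)
    return sum(x == y for x, y in zip(sa, sb)) / len(sa)

doc = "the quadratic formula solves any quadratic equation in one variable"
near = "the quadratic formula solves any quadratic equation in a variable"
print(jaccard_estimate(doc, doc))   # identical documents -> 1.0
print(jaccard_estimate(doc, near))  # near-duplicates score high
```

Documents whose estimated similarity exceeds a chosen threshold would be collapsed to one copy.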

## How to Use the Dataset

1. **Base Text Download**: The dataset is available for download as a set of web documents with interleaved text and image URLs.
2. **Image Download**: Images are not bundled with the release; users must fetch them themselves from the provided image URLs.
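A minimal usage sketch covering both steps. The Hub repo id below is an assumption (adjust it to the actual path), and the download helper is a best-effort illustration, since dead links are common in web-scale URL lists:

```python
import urllib.request
from pathlib import Path

def image_urls(record):
    """Collect the non-None image URLs from one interleaved record."""
    return [u for u in record["image_list"] if u is not None]

def download_images(record, out_dir="images"):
    """Fetch each image URL in a record into out_dir (best-effort sketch)."""
    Path(out_dir).mkdir(exist_ok=True)
    for i, url in enumerate(image_urls(record)):
        try:
            urllib.request.urlretrieve(url, Path(out_dir) / f"img_{i:05d}")
        except OSError:
            pass  # skip dead or unreachable links

if __name__ == "__main__":
    # Repo id is an assumption; streaming avoids downloading all shards up front.
    from datasets import load_dataset
    ds = load_dataset("Infi-MM/InfiMM-WebMath-40B", split="train", streaming=True)
    download_images(next(iter(ds)))
```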

### Note

If you want a smaller but higher-precision subset, apply higher thresholds to the `math_prob` and `math_prob_llama3` fields in `metadata`.
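A minimal sketch of such threshold-based selection. The threshold values here are illustrative defaults, not recommendations from the dataset authors:

```python
def is_high_precision(record, math_prob=0.8, math_prob_llama3=0.8):
    """Keep a record only if both math-detection scores clear their thresholds."""
    meta = record["metadata"]
    return (meta["math_prob"] >= math_prob
            and meta["math_prob_llama3"] >= math_prob_llama3)

# Hypothetical records with only the relevant metadata fields shown.
records = [
    {"metadata": {"math_prob": 0.95, "math_prob_llama3": 0.90}},
    {"metadata": {"math_prob": 0.95, "math_prob_llama3": 0.30}},
]
kept = [r for r in records if is_high_precision(r)]
print(len(kept))  # -> 1
```

Raising both thresholds trades dataset size for precision of the retained math content.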

# License

**InfiMM-WebMath-40B** is made available under an ODC-By 1.0 license; users should also abide by the CommonCrawl ToU: [https://commoncrawl.org/terms-of-use/](https://commoncrawl.org/terms-of-use/). We do not alter the license of any of the underlying data.

# Citation

```
@misc{han2024infimmwebmath40badvancingmultimodalpretraining,
      title={InfiMM-WebMath-40B: Advancing Multimodal Pre-Training for Enhanced Mathematical Reasoning}, 
      author={Xiaotian Han and Yiren Jian and Xuefeng Hu and Haogeng Liu and Yiqi Wang and Qihang Fan and Yuang Ai and Huaibo Huang and Ran He and Zhenheng Yang and Quanzeng You},
      year={2024},
      eprint={2409.12568},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2409.12568}, 
}
```