🍃 MINT-1T:<br>Scaling Open-Source Multimodal Data by 10x:<br> A Multimodal Dataset with One Trillion Tokens
</h1>
🍃 MINT-1T is an open-source **M**ultimodal **INT**erleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T was created by a team from the University of Washington in collaboration with Salesforce Research and other academic institutions, including Stanford University, the University of Texas at Austin, and the University of California, Berkeley.
You are currently viewing the HTML subset of 🍃 MINT-1T. For the PDF and ArXiv subsets, please refer to the [🍃 MINT-1T collection](https://huggingface.co/collections/mlfoundations/mint-1t-6690216ca4d0df7e518dde1c).