Dataset Card for DataCompDR-1B
This dataset contains synthetic captions, embeddings, and metadata for DataCompDR-1B. The metadata has been generated using pretrained image-text models on DataComp-1B. For details on how to use the metadata, please visit our GitHub repository.
Dataset Details
Dataset Description
DataCompDR is an image-text dataset and an enhancement to the DataComp dataset.
We reinforce the DataComp dataset using our multi-modal dataset reinforcement strategy.
In particular, we create DataCompDR-1B by reinforcing DataComp-1B (BestPool filtering) and DataCompDR-12M by reinforcing a uniform 12.8M-sample subset of it.
The generation is a one-time process whose cost is amortized over multiple architectures and extensive ablations.
We generate 5 synthetic captions per image using the coca_ViT-L-14 model in OpenCLIP, and apply strong random image augmentations (10 per image for DataCompDR-1B and 30 for DataCompDR-12M).
We compute embeddings with an ensemble of two strong ViT-L-14 teachers (pretrained weights datacomp_xl_s13b_b90k and openai in OpenCLIP) on the augmented images as well as on the real and synthetic captions.
Each embedding is a 1536-D concatenation of the two teachers' 768-D vectors.
One seen sample for DataCompDR is a triplet of one randomly augmented image, one ground-truth caption, and one randomly picked synthetic caption.
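Assembling one seen sample can be sketched as follows. This is an illustrative sketch only: it assumes the `.npz` arrays are loaded as NumPy arrays and that row 0 of `text_emb` holds the ground-truth caption embedding (the row layout is an assumption, not documented here).

```python
import numpy as np

def draw_seen_sample(image_emb, text_emb, rng):
    """Draw one seen-sample triplet: a randomly augmented image embedding,
    the ground-truth caption embedding, and one randomly picked synthetic
    caption embedding. Assumes text_emb row 0 is the ground-truth caption
    (hypothetical layout, for illustration)."""
    aug_idx = rng.integers(len(image_emb))    # one of the stored augmentations
    syn_idx = rng.integers(1, len(text_emb))  # one of the synthetic captions
    return image_emb[aug_idx], text_emb[0], text_emb[syn_idx]

def ensemble_embed(emb_a, emb_b):
    """Each stored 1536-D embedding is the concatenation of the two
    teachers' 768-D outputs, as described above."""
    return np.concatenate([emb_a, emb_b], axis=-1)
```
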
- Curated by: Original data by DataComp and metadata by Apple.
- License: We distribute our metadata under our license. The original image url-text samples and metadata were released by DataComp under the Creative Commons CC-BY-4.0 license. The individual images are under their own copyrights.
- Repository: ml-mobileclip GitHub
- Paper: MobileCLIP paper
- Demo: Coming Soon
Uses
Training with DataCompDR shows significant learning efficiency improvement compared to the standard CLIP training. For example, with a single node of 8×A100 GPUs, we achieve 61.7% zero-shot classification on ImageNet-val in approximately one day when training a ViT-B/16 based CLIP from scratch on DataCompDR-12M. Training with DataCompDR-1B sets new state-of-the-art performance on several metrics (Fig. 2) while still using a fraction of the training compute budget compared to previous works. Using DataCompDR, we demonstrate 10x-1000x learning efficiency in comparison to DataComp.
Dataset Structure
- <uid>.url.txt: Image URL (string)
- <uid>.syn.json:
- syn_text: List of synthetic captions (list[string])
- <uid>.paug.json:
- param_aug: List of augmentation parameters (list[list[Union[int,float]]])
- <uid>.npz:
- image_emb: List of image embeddings for multiple image augmentations (list[list[float]])
- text_emb: List of text embeddings for ground-truth/synthetic captions (list[list[float]])
- <uid>.json:
- uid: UID of image-text sample in DataComp (string)
- sha256: SHA256 hash of the image (string)
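A minimal loader for these per-sample files can be sketched as follows. The field names follow the structure above; `root` is a hypothetical local directory holding the extracted per-uid files.

```python
import json
import numpy as np

def load_reinforced_sample(uid, root):
    """Load all reinforcement files for one DataCompDR sample into a dict.
    `root` is assumed to be a directory of extracted per-uid files."""
    with open(f"{root}/{uid}.url.txt") as f:
        url = f.read().strip()
    with open(f"{root}/{uid}.syn.json") as f:
        syn_text = json.load(f)["syn_text"]       # synthetic captions
    with open(f"{root}/{uid}.paug.json") as f:
        param_aug = json.load(f)["param_aug"]     # augmentation parameters
    npz = np.load(f"{root}/{uid}.npz")
    with open(f"{root}/{uid}.json") as f:
        meta = json.load(f)
    return {
        "url": url,
        "syn_text": syn_text,
        "param_aug": param_aug,
        "image_emb": np.asarray(npz["image_emb"]),  # one row per augmentation
        "text_emb": np.asarray(npz["text_emb"]),    # one row per caption
        "uid": meta["uid"],
        "sha256": meta["sha256"],
    }
```
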
Citation
MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training. (CVPR 2024) Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, Oncel Tuzel.
@InProceedings{mobileclip2024,
  author    = {Vasu, Pavan Kumar Anasosalu and Pouransari, Hadi and Faghri, Fartash and Vemulapalli, Raviteja and Tuzel, Oncel},
  title     = {MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
}