---
license: cc-by-4.0
configs:
- config_name: embeddings
  data_files: data/*.parquet
- config_name: id_mapping
  data_files: id_mapping/*.parquet
task_categories:
- image-to-text
- image-to-image
tags:
- images
- CLIP
- embeddings
- FAISS
size_categories:
- 1M<n<10M
---

# Dataset Card for fondant-ai/datacomp-small-clip

<!-- Provide a quick summary of the dataset. -->

This is a dataset containing image URLs and their CLIP embeddings, based on the [datacomp_small](https://huggingface.co/datasets/mlfoundations/datacomp_small) dataset and processed with [fondant](https://github.com/ml6team/fondant).

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

Large (image) datasets are often unwieldy to use due to their sheer size. Assume, for instance, that we would like to extract all the cat images from such a dataset. We would have to look at every image to classify whether it is a cat image or not. And if we next want to extract all the dog images, we again need to look at every image.

Instead, we can look at every image once and calculate a (CLIP) embedding representing its content. By combining these embeddings into an index, we can efficiently search through the dataset with a query and find specific images without having to look at each one.

![CLIP index](https://cdn-uploads.huggingface.co/production/uploads/6454cb0e1a543cf97b1b6fd6/Mgl9UAqiwJrV4WDb8Y2-k.png)
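The idea above can be sketched with plain NumPy. This is only an illustration of the principle: FAISS provides optimized index structures for the same operation, and the random vectors below are stand-ins for real CLIP embeddings (the 512 dimensions match CLIP ViT-B/32, but any dimension works):

```python
import numpy as np

# Stand-ins for CLIP embeddings: 1,000 "images", 512 dimensions each.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 512)).astype("float32")

# Normalize so that a dot product equals cosine similarity.
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)

def search(query: np.ndarray, k: int = 5) -> np.ndarray:
    """Return the indices of the k images most similar to the query embedding."""
    query = query / np.linalg.norm(query)
    scores = embeddings @ query  # one pass over the precomputed embeddings
    return np.argsort(-scores)[:k]

# Querying with a stored embedding returns that image first.
top = search(embeddings[42])
assert top[0] == 42
```

The embeddings are computed once; every subsequent query is a cheap vector operation instead of a pass over the raw images.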

This is what LAION did for their [LAION-5b dataset](https://laion.ai/blog/laion-5b/), which made the dataset practical to use, as we did in our [ControlNet example](https://github.com/ml6team/fondant-usecase-controlnet). Unfortunately, the LAION-5b dataset and index have been (temporarily) [taken offline](https://laion.ai/notes/laion-maintanence/), and there [aren't any alternatives](https://github.com/rom1504/clip-retrieval/issues/324). This is why we built an index for the Datacomp-12M dataset. While it is a lot smaller than LAION-5b, it should already re-enable a lot of use cases, and can hopefully be the start towards building indices for more and larger datasets.

- **License:** cc-by-4.0

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Original data:** [datacomp_small](https://huggingface.co/datasets/mlfoundations/datacomp_small)
- **Repository:** [fondant-clip-index](https://github.com/ml6team/fondant-clip-index)

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

We provide an [example use case](https://github.com/ml6team/fondant-usecase-controlnet) which uses the FAISS index of this dataset to create a dataset of interior design images, used for the fine-tuning of a ControlNet model.

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

The data repository is structured as follows:
- [data/](https://huggingface.co/datasets/fondant-ai/datacomp-small-clip/viewer): the dataset containing ids, urls, and CLIP embeddings
- [faiss](https://huggingface.co/datasets/fondant-ai/datacomp-small-clip/blob/main/faiss): the FAISS index
- [id_mapping/](https://huggingface.co/datasets/fondant-ai/datacomp-small-clip/tree/main/id_mapping): the mapping of the FAISS ids to the original urls
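Conceptually, the id_mapping closes the loop after a search: the index returns integer ids, and the mapping recovers the original URLs. A toy sketch with a plain dict (the values are hypothetical; the real mapping is stored in the `id_mapping/*.parquet` files):

```python
# Hypothetical mapping from FAISS integer ids to image URLs.
id_mapping = {
    0: "https://example.com/cat.jpg",
    1: "https://example.com/dog.jpg",
}

# e.g. ids returned by an index search, ranked by similarity
faiss_ids = [1, 0]

# Resolve the ids back to the original image URLs.
urls = [id_mapping[i] for i in faiss_ids]
assert urls == ["https://example.com/dog.jpg", "https://example.com/cat.jpg"]
```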

## Terms and Conditions

Under no circumstances can Fondant be held liable by a third party for (i) the accuracy or correctness of the content, (ii) an alleged infringement of intellectual property rights, or (iii) any other alleged claim, action, injunction or suit resulting from the publication or use of the dataset.

## Dataset Card Contact

- Email: [[email protected]](mailto:[email protected])
- Discord: [https://discord.gg/HnTdWhydGp](https://discord.gg/HnTdWhydGp)