Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -62,7 +62,7 @@ The [demo notebook](./Megalith_Demo_Notebook.ipynb) shows a random sample of 100

 Based on this random sample, I would estimate the following dataset statistics:

-* 5-7% of images may have minor edits or annotatations (timestamps, color grading, borders, etc.)
+* 5-7% of images may have minor edits or annotations (timestamps, color grading, borders, etc.)
 * 1-2% of images may be copyright-constrained (watermarks or text descriptions cast doubt on the license metadata)
 * 1-2% of images may be non-wholesome (guns, suggestive poses, etc.)
 * 1-2% of images may be non-photos (paintings, screenshots, etc.)
@@ -70,7 +70,7 @@ Based on this random sample, I would estimate the following dataset statistics:
 ### Is 10 million images really enough to teach a neural network about the visual world?

 For the parts of the visual world that are well-represented in Megalith-10m, definitely!
-Projects like [CommonCanvas](https://arxiv.org/abs/2310.16825), [Mitsua Diffusion](https://huggingface.co/Mitsua/mitsua-diffusion-one), and [Matroyshka Diffusion](https://arxiv.org/abs/2310.15111)
+Projects like [CommonCanvas](https://arxiv.org/abs/2310.16825), [Mitsua Diffusion](https://huggingface.co/Mitsua/mitsua-diffusion-one), and [Matryoshka Diffusion](https://arxiv.org/abs/2310.15111)
 have shown that you can train useable generative models on similarly-sized image datasets.
 Of course, many parts of the world aren't well-represented in Megalith-10m, so you'd need additional data to learn about those.
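
The percentages above come from eyeballing the random sample of 100 images shown in the demo notebook. For anyone who wants to repeat that spot-check, here is a minimal sketch using the `datasets` library listed on the dataset page (the data ships as parquet); the repo id and column layout are assumptions, not something this PR specifies:

```python
from datasets import load_dataset

# Hypothetical repo id; replace with the actual Megalith-10m repo id on the Hub.
ds = load_dataset("example-org/megalith-10m", split="train")

# Reproducible random sample of 100 rows, mirroring the demo notebook's spot-check.
sample = ds.shuffle(seed=0).select(range(100))

print(sample.column_names)  # inspect what each row contains (e.g. image URL, license metadata)
for row in sample:
    print(row)              # review each row manually to form estimates like those above
```

Dask (also listed on the dataset page) works for out-of-core processing of the parquet shards, but a 100-row sample fits easily in memory.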