---
annotations_creators: []
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
pretty_name: laion-aesthetics-12m-umap
size_categories: []
source_datasets: []
tags:
- laion
- stable-diffusion
- text2img
task_categories: []
task_ids: []
---
# LAION-Aesthetics :: CLIP → UMAP
This dataset is a CLIP (text) → UMAP embedding of the [LAION-Aesthetics dataset](https://laion.ai/blog/laion-aesthetics/) - specifically the [`improved_aesthetics_6plus` version](https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus), which filters the full dataset down to images with scores > 6 under the "aesthetic" filtering model.
Thanks to LAION for this amazing corpus!
---
The dataset here includes coordinates for three separate UMAP fits using different values for the `n_neighbors` parameter - `10`, `30`, and `60` - which are broken out as separate columns with different suffixes (see the loading sketch after the list):
- `n_neighbors=10` → (`x_nn10`, `y_nn10`)
- `n_neighbors=30` → (`x_nn30`, `y_nn30`)
- `n_neighbors=60` → (`x_nn60`, `y_nn60`)
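For convenience, here's a minimal loading sketch. The repository id is an assumption based on this card's `pretty_name` and may differ from the actual repo path:

```python
# Minimal sketch: load this card's data and pull out one of the UMAP fits.
# The repo id below is an assumption taken from the card's `pretty_name`;
# adjust it (or point pandas at a local parquet file) as needed.
from datasets import load_dataset

ds = load_dataset('laion-aesthetics-12m-umap', split='train')  # hypothetical repo id
df = ds.to_pandas()

# Each fit lives in its own pair of columns, e.g. n_neighbors=30:
coords_nn30 = df[['x_nn30', 'y_nn30']]
print(coords_nn30.describe())
```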
### `nn10`
![nn10](https://user-images.githubusercontent.com/814168/189763846-efa9ecc9-3d57-469b-9d4e-02ddc1723265.jpg)
### `nn30`
![nn30](https://user-images.githubusercontent.com/814168/189763863-a67d4bb1-e043-48ec-8c5a-38dce960731b.jpg)
### `nn60`
(The version from [Twitter](https://twitter.com/clured/status/1565399157606580224).)
![nn60](https://user-images.githubusercontent.com/814168/189763872-5847cde5-e03b-45e1-a9be-d95966bc5ded.jpg)
## Pipeline
The script for producing this can be found here:
https://github.com/davidmcclure/loam-viz/blob/laion/laion.py
It's very simple - it just uses the `openai/clip-vit-base-patch32` model out-of-the-box to encode the text captions:
```python
# Imports / globals assumed by the excerpts below (they aren't shown in the
# original snippets; see the full script linked above):
from typing import Optional

import cuml
import numpy as np
import pandas as pd
import rmm
import torch
import typer
from boltons.iterutils import chunked_iter
from tqdm import tqdm
from transformers import CLIPTextModel, CLIPTokenizerFast

app = typer.Typer()
device = 'cuda' if torch.cuda.is_available() else 'cpu'


@app.command()
def clip(
    src: str,
    dst: str,
    text_col: str = 'TEXT',
    limit: Optional[int] = typer.Option(None),
    batch_size: int = typer.Option(512),
):
    """Embed with CLIP."""
    df = pd.read_parquet(src)

    if limit:
        df = df.head(limit)

    tokenizer = CLIPTokenizerFast.from_pretrained('openai/clip-vit-base-patch32')
    model = CLIPTextModel.from_pretrained('openai/clip-vit-base-patch32')
    model = model.to(device)

    texts = df[text_col].tolist()

    # Encode the captions in batches, keeping the pooled output per caption.
    embeds = []
    for batch in chunked_iter(tqdm(texts), batch_size):
        enc = tokenizer(
            batch,
            return_tensors='pt',
            padding=True,
            truncation=True,
        )
        enc = enc.to(device)

        with torch.no_grad():
            res = model(**enc)
            embeds.append(res.pooler_output.to('cpu'))

    embeds = torch.cat(embeds).numpy()

    np.save(dst, embeds)
    print(embeds.shape)
```
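One aside (not part of the script above): `pooler_output` from `CLIPTextModel` is the 512-d final EOS-token state of the text transformer, not the projected embedding CLIP uses for image-text similarity. If you wanted the projected features instead, a sketch along these lines should work:

```python
# Sketch (an alternative, not what the linked script does): pooled *projected*
# text features via CLIPModel.get_text_features.
import torch
from transformers import CLIPModel, CLIPTokenizerFast

tok = CLIPTokenizerFast.from_pretrained('openai/clip-vit-base-patch32')
clip_model = CLIPModel.from_pretrained('openai/clip-vit-base-patch32')

enc = tok(['a photo of a cat'], return_tensors='pt', padding=True, truncation=True)
with torch.no_grad():
    feats = clip_model.get_text_features(**enc)  # shape (1, 512)
print(feats.shape)
```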
Then `cuml.GaussianRandomProjection` does an initial squeeze to 64d, which gets the embedding matrix small enough to fit onto a single GPU for the UMAP (roughly 3 GB for ~12M rows at 64d in float32, versus ~24.6 GB at the original 512d) -
```python
@app.command()
def random_projection(src: str, dst: str, dim: int = 64):
    """Random projection on an embedding matrix."""
    rmm.reinitialize(managed_memory=True)

    embeds = np.load(src)

    rp = cuml.GaussianRandomProjection(n_components=dim)
    embeds = rp.fit_transform(embeds)

    np.save(dst, embeds)
    print(embeds.shape)
```
And then `cuml.UMAP` takes the embeddings from 64d → 2d -
```python
@app.command()
def umap(
    df_src: str,
    embeds_src: str,
    dst: str,
    n_neighbors: int = typer.Option(30),
    n_epochs: int = typer.Option(1000),
    negative_sample_rate: int = typer.Option(20),
):
    """UMAP to 2d."""
    rmm.reinitialize(managed_memory=True)

    df = pd.read_parquet(df_src)

    embeds = np.load(embeds_src)
    embeds = embeds.astype('float16')

    print(embeds.shape)
    print(embeds.dtype)

    reducer = cuml.UMAP(
        n_neighbors=n_neighbors,
        n_epochs=n_epochs,
        negative_sample_rate=negative_sample_rate,
        verbose=True,
    )

    x = reducer.fit_transform(embeds)

    df['x'] = x[:, 0]
    df['y'] = x[:, 1]

    df.to_parquet(dst)
    print(df)
```
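Putting it together, the three fits in this dataset correspond to running the `umap` step once per `n_neighbors` value and joining the results. A sketch of such a driver follows - the file names and the final merge into the `x_nn10`/`y_nn10` (etc.) columns are assumptions for illustration, not taken from the linked script:

```python
# Hypothetical driver: file names and the suffix merge are assumptions.
import pandas as pd

clip('improved_aesthetics_6plus.parquet', 'clip_512d.npy',
     text_col='TEXT', limit=None, batch_size=512)
random_projection('clip_512d.npy', 'clip_64d.npy', dim=64)

fits = {}
for nn in (10, 30, 60):
    out = f'umap_nn{nn}.parquet'
    umap('improved_aesthetics_6plus.parquet', 'clip_64d.npy', out,
         n_neighbors=nn, n_epochs=1000, negative_sample_rate=20)
    fits[nn] = pd.read_parquet(out)

# Suffix each fit's x/y and join into a single table like the one published here.
merged = fits[10].drop(columns=['x', 'y'])
for nn, fit in fits.items():
    merged[f'x_nn{nn}'] = fit['x'].values
    merged[f'y_nn{nn}'] = fit['y'].values
```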