---
task_categories:
- image-feature-extraction
---
# Google Image Malaysia Location Dedup
Original dataset: https://huggingface.co/datasets/malaysia-ai/crawl-google-image-malaysia-location

Source code: https://github.com/mesolitica/malaysian-dataset/tree/master/vlm/dedup-malaysia-location
## Dedup at 50% similarity
[dedup-0.5.jsonl](dedup-0.5.jsonl), 227,937 deduped images in total. Each line is a record, for example:
```python
{'filename': 'train-00812-of-01000.parquet',
'keyword': 'Taman Megah Jaya Ayer Tawar',
'no': 16,
'selected_indices': [2556, 2559, 2575, 2577, 2586, 2587, 2595]}
```
## Dedup at 60% similarity
[dedup-0.6.jsonl](dedup-0.6.jsonl), 487,301 deduped images in total. Each line is a record, for example:
```python
{'filename': 'train-00404-of-01000.parquet',
'keyword': 'Kampung Tok Wan Nik Padang Besar',
'no': 92,
'selected_indices': [2100, 2102, 2103, 2104]}
```
- `filename` is the parquet file in the original repository.
- `selected_indices` contains the row indices into the dataframe loaded from that parquet file.
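
To materialize the deduplicated set, select those rows from each parquet file. A minimal sketch, assuming the original parquet files have been downloaded locally (the directory name below is an assumption):

```python
import json

import pandas as pd

# Hypothetical local copy of the original repository's parquet files.
parquet_dir = 'crawl-google-image-malaysia-location'

frames = []
with open('dedup-0.5.jsonl') as f:
    for line in f:
        record = json.loads(line)
        df = pd.read_parquet(f"{parquet_dir}/{record['filename']}")
        # Keep only the rows selected by the dedup pass for this keyword.
        frames.append(df.iloc[record['selected_indices']])

deduped = pd.concat(frames, ignore_index=True)
```

Grouping records by `filename` first would avoid re-reading the same parquet file for every keyword.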
## Embedding
We convert each image into an embedding with https://huggingface.co/google/siglip-base-patch16-512, and we store the embeddings with MosaicML Streaming (MDS) for faster indexing:
```python
import numpy as np
from streaming import LocalDataset, MDSWriter
from streaming.base.format.mds.encodings import Encoding, _encodings

# Custom MDS encoding that stores an embedding as raw float32 bytes.
class Float32(Encoding):
    def encode(self, obj) -> bytes:
        return obj.tobytes()

    def decode(self, data: bytes):
        return np.frombuffer(data, np.float32)

# Register the encoding under the name used in the MDSWriter columns.
_encodings['float32'] = Float32

dataset = LocalDataset('embedding')
```
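
The snippet above only registers the encoding and opens the finished dataset. As a minimal sketch of how the embeddings could be computed and written out with `MDSWriter` (the column name, output path, and image list here are assumptions; the actual pipeline is in the linked source repository):

```python
import numpy as np
import torch
from PIL import Image
from streaming import MDSWriter
from transformers import AutoModel, AutoProcessor

processor = AutoProcessor.from_pretrained('google/siglip-base-patch16-512')
model = AutoModel.from_pretrained('google/siglip-base-patch16-512')
model.eval()

# Hypothetical image paths; the real pipeline reads images from the parquet files.
images = ['image-0.jpg', 'image-1.jpg']

# Uses the custom 'float32' encoding registered in the previous snippet.
columns = {'embedding': 'float32'}
with MDSWriter(out='embedding', columns=columns, compression=None) as writer:
    for path in images:
        inputs = processor(images=Image.open(path).convert('RGB'), return_tensors='pt')
        with torch.no_grad():
            # Pooled image embedding from the SigLIP vision tower, shape (1, hidden_size).
            emb = model.get_image_features(**inputs)
        writer.write({'embedding': emb[0].numpy().astype(np.float32)})
```

Reading back, `LocalDataset('embedding')[i]['embedding']` returns the stored vector through the `decode` method defined above.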