osanseviero committed • Commit 145b19a
Parent(s): 5724056

Specify right model card metadata

This PR specifies the right metadata for this repository.
README.md CHANGED

```diff
@@ -1,7 +1,10 @@
 ---
 license: apache-2.0
+tags:
+- llava
 datasets:
 - Ejafa/ye-pop
+pipeline_tag: image-text-to-text
 ---
 
 A ViT-B/32 CLIP model trained for 4 epochs on the [ye-pop](https://huggingface.co/datasets/Ejafa/ye-pop) dataset (491,520 images and [LLaVA 1.5](https://github.com/haotian-liu/LLaVA)-generated detailed captions). Research artifact of [clip-synthetic-captions](https://github.com/nopperl/clip-synthetic-captions). Outperforms the CLIP model trained using the original alt-texts on the [DataComp benchmark suite](https://datacomp.ai) (38 image classification and retrieval tasks).
```
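The README describes a CLIP model evaluated zero-shot on the DataComp benchmark's classification and retrieval tasks. The core mechanism behind such zero-shot evaluation is cosine similarity between image and text embeddings. A minimal NumPy sketch of that idea, using random stand-in vectors rather than the actual model's encoders (the 512-dim size matches ViT-B/32's embedding width; everything else here is illustrative):

```python
import numpy as np

# Stand-in embeddings: in practice these would come from the CLIP image
# encoder and text encoder (one text embedding per candidate class name).
rng = np.random.default_rng(0)
image_emb = rng.normal(size=(512,))      # one image embedding
class_embs = rng.normal(size=(3, 512))   # three candidate class embeddings

def normalize(x, axis=-1):
    """L2-normalize along the given axis so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Zero-shot classification: predict the class whose text embedding is most
# cosine-similar to the image embedding.
logits = normalize(class_embs) @ normalize(image_emb)
pred = int(np.argmax(logits))
```

Retrieval tasks in the suite work the same way, ranking candidate captions (or images) by the same cosine similarity.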