Update README.md
README.md CHANGED
@@ -24,8 +24,8 @@ Finetuning a Vision-and-Language Pre-training (VLP) model for a fashion-related
 
 - **Model type:** Vision Question Answering, ViLT
 - **License:** MIT
--
-- **
+<!-- - **:** [dandelin/vilt-b32-finetuned-vqa](https://huggingface.co/dandelin/vilt-b32-finetuned-vqa) -->
+- **Train/test dataset:** [yanka9/deepfashion-for-VQA](https://huggingface.co/datasets/yanka9/deepfashion-for-VQA), derived from [DeepFashion](https://github.com/yumingj/DeepFashion-MultiModal?tab=readme-ov-file)
 
 ### Model Sources
 
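
For context, a minimal sketch of how the base checkpoint mentioned in the card (`dandelin/vilt-b32-finetuned-vqa`) and the referenced fine-tuning dataset (`yanka9/deepfashion-for-VQA`) might be loaded with `transformers` and `datasets`. The split and column names (`train`, `image`, `question`) are assumptions for illustration, not taken from this diff.

```python
# Sketch only: load the base ViLT VQA checkpoint and the fashion VQA dataset
# referenced in the model card. Split/column names below are assumptions.
from transformers import ViltProcessor, ViltForQuestionAnswering
from datasets import load_dataset

# Base VQA checkpoint mentioned (commented out) in the card.
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-finetuned-vqa")
model = ViltForQuestionAnswering.from_pretrained("dandelin/vilt-b32-finetuned-vqa")

# Fashion VQA dataset derived from DeepFashion-MultiModal.
dataset = load_dataset("yanka9/deepfashion-for-VQA")

sample = dataset["train"][0]                 # assumed "train" split
inputs = processor(images=sample["image"],   # assumed column names
                   text=sample["question"],
                   return_tensors="pt")
outputs = model(**inputs)
answer_id = outputs.logits.argmax(-1).item()
print(model.config.id2label[answer_id])
```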