Update README.md
README.md CHANGED
@@ -22,12 +22,13 @@ metrics:
 - recall
 - MRR
 ---
-# Marqo
+# Marqo-FashionCLIP Model Card
 Marqo-FashionCLIP leverages Generalised Contrastive Learning ([GCL](https://www.marqo.ai/blog/generalized-contrastive-learning-for-multi-modal-retrieval-and-ranking)) which allows the model to be trained on not just text descriptions but also categories, style, colors, materials, keywords and fine-details to provide highly relevant search results on fashion products.
 The model was fine-tuned from ViT-B-16 (laion2b_s34b_b88k).
 
 **Github Page**: [Marqo-FashionCLIP](https://github.com/marqo-ai/marqo-FashionCLIP)
 
+**Blog**: [Marqo Blog](https://www.marqo.ai/blog/search-model-for-fashion)
 
 ## Usage
 The model can be seamlessly used with [OpenCLIP](https://github.com/mlfoundations/open_clip) by
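The hunk ends before the README's usage snippet, so the following is a minimal sketch of what loading the model through OpenCLIP could look like. It assumes the checkpoint is published on the Hugging Face Hub under the ID `Marqo/marqo-fashionCLIP` (not shown in this diff); the image path and text prompts are placeholders.

```python
import torch
from PIL import Image
import open_clip

# Load the checkpoint from the Hugging Face Hub (hub ID assumed, not shown in this diff).
model, _, preprocess_val = open_clip.create_model_and_transforms("hf-hub:Marqo/marqo-fashionCLIP")
tokenizer = open_clip.get_tokenizer("hf-hub:Marqo/marqo-fashionCLIP")
model.eval()

# Placeholder product image and candidate descriptions.
image = preprocess_val(Image.open("example_product.jpg")).unsqueeze(0)
text = tokenizer(["a striped linen shirt", "a leather handbag", "red running shoes"])

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # L2-normalise so the dot product is a cosine similarity.
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    # Rank the candidate descriptions for the image.
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)
```

This follows the standard OpenCLIP zero-shot pattern: encode image and text, normalise, then rank candidates by cosine similarity; see the GitHub page linked above for the model's own instructions.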