Update README.md
README.md CHANGED
@@ -66,7 +66,7 @@ This interactive demo showcases the model's capability to effectively retrieve v
 ## Evaluations
 
 To evaluate the model's performance, we used the last 10,000 video clips and their accompanying text from the Webvid dataset.
-We evaluate R1,R5,R10,MedianR and MeanR on:
+We evaluate R1, R5, R10, MedianR, and MeanR on:
 1. Zero-shot pretrained clip-vit-base-patch32 model
 2. CLIP4Clip-based weights trained on the dataset [MSR-VTT](https://paperswithcode.com/dataset/msr-vtt), consisting of 10,000 video-text pairs
 3. CLIP4Clip-based weights trained on a 150K subset of the dataset Webvid-2M

@@ -84,7 +84,7 @@ For an elaborate description of the evaluation refer to the notebook
 [GSI_VideoRetrieval-Evaluation](https://huggingface.co/Diangle/clip4clip-webvid/blob/main/Notebooks/GSI_VideoRetrieval-Evaluation.ipynb).
 
 <div id="footnote1">
-<p>[1] For overall search acceleration capabilities, to boost your search application, please refer to searchium.ai</p>
+<p>[1] For overall search acceleration capabilities, to boost your search application, please refer to <a href="https://www.searchium.ai">Searchium.ai</a></p>
 </div>
 
 
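For context on the metrics named in the diff above, here is a minimal sketch of how R@1, R@5, R@10, MedianR, and MeanR are conventionally computed from a text-video similarity matrix. It assumes each caption's matching video sits at the same row/column index; `retrieval_metrics` and `sim` are illustrative names, not functions taken from the evaluation notebook.

```python
import numpy as np

def retrieval_metrics(sim):
    """Text-to-video retrieval metrics from a similarity matrix.

    sim[i, j] is the similarity between caption i and video j;
    the ground-truth video for caption i is assumed to be video i.
    """
    # For each caption, sort videos by descending similarity.
    order = np.argsort(-sim, axis=1)
    # Position of the ground-truth video in each sorted row (0 = top hit).
    ranks = np.where(order == np.arange(sim.shape[0])[:, None])[1]
    return {
        "R@1": float(np.mean(ranks < 1) * 100),
        "R@5": float(np.mean(ranks < 5) * 100),
        "R@10": float(np.mean(ranks < 10) * 100),
        "MedianR": float(np.median(ranks) + 1),  # 1-based median rank
        "MeanR": float(np.mean(ranks) + 1),      # 1-based mean rank
    }

# Usage sketch: sim = text_embeddings @ video_embeddings.T,
# after L2-normalizing both embedding matrices.
```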