xiangan committed
Commit
0862ef0
1 Parent(s): 3c51324

Update README.md

Files changed (1)
README.md +2 -0
README.md CHANGED
@@ -4,6 +4,8 @@ datasets:
   - kakaobrain/coyo-700m
  ---
 
+ [[Paper]](https://arxiv.org/abs/2407.17331) [[GitHub]](https://github.com/deepglint/unicom)
+
  This model is trained using the COYO700M dataset. The results below are from linear probe evaluations, demonstrating the model's performance on various benchmarks.
 
  | Dataset | CLIP | MLCD |
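The README text above refers to linear probe evaluations. As a minimal sketch of that protocol (not the repository's actual evaluation code): a single linear classifier is trained on top of frozen encoder features, and its accuracy is reported. Here the frozen MLCD/CLIP backbone is replaced by synthetic class-clustered vectors, since loading the real checkpoint is out of scope.

```python
# Hedged sketch of a linear probe evaluation, assuming scikit-learn is available.
# The "features" below are synthetic stand-ins for embeddings from a frozen
# vision encoder; per-class means make the toy task linearly separable.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
num_classes, dim, per_class = 5, 64, 200

# Synthetic stand-in for frozen-backbone embeddings, clustered by class.
centers = rng.normal(size=(num_classes, dim))
X = np.concatenate([c + 0.5 * rng.normal(size=(per_class, dim)) for c in centers])
y = np.repeat(np.arange(num_classes), per_class)

# The probe itself: one linear classifier; the backbone stays untouched.
probe = LogisticRegression(max_iter=1000).fit(X, y)
acc = probe.score(X, y)
print(f"linear probe accuracy: {acc:.3f}")
```

In a real run, `X` would be embeddings of a benchmark's training images extracted once from the frozen model, and accuracy would be measured on a held-out split rather than the training features.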