Update README.md
README.md (changed)

@@ -22,6 +22,7 @@ Molmo is a family of open vision-language models developed by the Allen Institute for AI.

 Molmo 7B-D is based on [Qwen2-7B](https://huggingface.co/Qwen/Qwen2-7B) and uses [OpenAI CLIP](https://huggingface.co/openai/clip-vit-large-patch14-336) as vision backbone.
 It performs comfortably between GPT-4V and GPT-4o on both academic benchmarks and human evaluation.
+It powers the **Molmo demo at** [**molmo.allenai.org**](https://molmo.allenai.org).

 This checkpoint is a **preview** of the Molmo release. All artifacts used in creating Molmo (PixMo dataset, training code, evaluations, intermediate checkpoints) will be made available at a later date, furthering our commitment to open-source AI development and reproducibility.