anas-awadalla committed • Commit 0c34b75 • Parent: ae5c917

Update README.md
# OpenFlamingo-3B (CLIP ViT-L/14, MPT-1B-Dolly)

[Blog post](https://laion.ai/blog/open-flamingo-v2/) | [Code](https://github.com/mlfoundations/open_flamingo) | [Demo](https://huggingface.co/spaces/openflamingo/OpenFlamingo)

OpenFlamingo is an open-source implementation of DeepMind's [Flamingo](https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model) models. This 3B-parameter model uses a [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14) vision encoder and an instruction-tuned [MPT-1B](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b-dolly) language model.
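Since the card names the exact vision encoder and language model, a minimal loading sketch may help. It uses the `create_model_and_transforms` function from the open_flamingo repository linked above; installing the package and downloading the checkpoint separately are assumed, so treat this as a sketch rather than a verified recipe (no `<test>` is given because running it requires downloading the model weights):

```python
# Sketch: instantiate the OpenFlamingo-3B architecture described above.
# Assumes `pip install open-flamingo` has been run; argument values mirror
# the components named in this card (CLIP ViT-L/14 + MPT-1B-Dolly).
from open_flamingo import create_model_and_transforms

model, image_processor, tokenizer = create_model_and_transforms(
    clip_vision_encoder_path="ViT-L-14",
    clip_vision_encoder_pretrained="openai",
    lang_encoder_path="mosaicml/mpt-1b-redpajama-200b-dolly",
    tokenizer_path="mosaicml/mpt-1b-redpajama-200b-dolly",
    cross_attn_every_n_layers=1,
)
```

Note that this builds the architecture with pretrained vision/language backbones; the OpenFlamingo cross-attention weights are distributed as a checkpoint in this repository and must be loaded into `model` separately (e.g. via `torch.load` on the downloaded file).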