anas-awadalla committed
Commit • 11aa468
Parent(s): 9985e34
Update README.md

README.md CHANGED
@@ -6,7 +6,7 @@ datasets:
 
 # OpenFlamingo-4B (CLIP ViT-L/14, RedPajama-INCITE-Base-3B-v1)
 
-[Blog post]() | [Code](https://github.com/mlfoundations/open_flamingo) | [Demo]()
+[Blog post]() | [Code](https://github.com/mlfoundations/open_flamingo) | [Demo](https://huggingface.co/spaces/openflamingo/OpenFlamingo)
 
 OpenFlamingo is an open source implementation of DeepMind's [Flamingo](https://www.deepmind.com/blog/tackling-multiple-tasks-with-a-single-visual-language-model) models.
 This 4B-parameter model uses a [CLIP ViT-L/14](https://huggingface.co/openai/clip-vit-large-patch14) vision encoder and [RedPajama-3B](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-3B-v1) language model.
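For context, the README this commit edits describes the model's architecture: a CLIP ViT-L/14 vision encoder paired with the RedPajama-INCITE-Base-3B-v1 language model. A minimal sketch of instantiating that pairing with the `open_flamingo` package is below; the `cross_attn_every_n_layers` value and the checkpoint repo id are assumptions for illustration, not stated in this diff.

```python
# Sketch: load OpenFlamingo-4B with the open_flamingo library.
import torch
from huggingface_hub import hf_hub_download
from open_flamingo import create_model_and_transforms

# Instantiate the architecture named in the README: CLIP ViT-L/14 vision
# encoder + RedPajama-INCITE-Base-3B-v1 language model.
model, image_processor, tokenizer = create_model_and_transforms(
    clip_vision_encoder_path="ViT-L-14",
    clip_vision_encoder_pretrained="openai",
    lang_encoder_path="togethercomputer/RedPajama-INCITE-Base-3B-v1",
    tokenizer_path="togethercomputer/RedPajama-INCITE-Base-3B-v1",
    cross_attn_every_n_layers=2,  # assumed setting for the 4B variant
)

# Fetch and load the pretrained weights (repo id assumed for illustration).
checkpoint_path = hf_hub_download(
    "openflamingo/OpenFlamingo-4B-vitl-rpj3b", "checkpoint.pt"
)
model.load_state_dict(torch.load(checkpoint_path), strict=False)
```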