---
license: mit
---

TL;DR: We trained a Flamingo model with Llama2-Chat-7B as the language model on CC3M in under 5 hours using just 4 A100 GPUs.

The model shows promising zero-shot captioning ability; high-quality captioning data substantially speeds up alignment.

You can test it with the following code. Be sure to visit the Otter repository to get the necessary Flamingo/Otter model code.

```python
from flamingo.modeling_flamingo import FlamingoForConditionalGeneration

# device_map="auto" spreads the model across available GPUs/CPU.
flamingo_model = FlamingoForConditionalGeneration.from_pretrained(
    "luodian/Flamingo-Llama2-Chat7B-CC3M", device_map="auto"
)

prompt = "<image>an image of"  # guided captioning prompt
simple_prompt = "<image>"      # fully zero-shot prompt
```
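To illustrate the two prompt styles above: the `<image>` token marks where the model injects visual features, and any trailing text conditions the generated caption. A minimal sketch (`build_prompt` is a hypothetical helper for this README, not part of the Otter codebase):

```python
# Hypothetical helper illustrating the Flamingo prompt format used above.
# "<image>" is the placeholder where visual features are injected; any text
# after it conditions the style of the generated caption.

IMAGE_TOKEN = "<image>"

def build_prompt(caption_prefix: str = "") -> str:
    """Build a Flamingo-style prompt: image placeholder plus optional text prefix."""
    return f"{IMAGE_TOKEN}{caption_prefix}"

prompt = build_prompt("an image of")  # guided captioning, as in the snippet above
simple_prompt = build_prompt()        # fully zero-shot captioning

print(prompt)         # <image>an image of
print(simple_prompt)  # <image>
```

An empty prefix leaves the caption entirely to the model, while a prefix like `"an image of"` nudges it toward a descriptive sentence.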