VictorSanh and HugoLaurencon committed
Commit 12d27ed
Parent: 84bc60f

Update README.md (#1)


- Update README.md (34b9c94ea077c5fecdcbba6b6ab71d0c63d4fd9f)


Co-authored-by: Hugo Laurençon <[email protected]>

Files changed (1)
  1. README.md +3 -0
README.md

@@ -38,6 +38,7 @@ Idefics2 is an open multimodal model that accepts arbitrary sequences of image a
 We release under the Apache 2.0 license 2 checkpoints:
 - [idefics2-8b-base](https://huggingface.co/HuggingFaceM4/idefics2-8b-base): the base model
 - [idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b): the base model fine-tuned on a mixture of supervised and instruction datasets (text-only and multimodal datasets)
+- idefics2-8b-chatty (coming soon): `idefics2-8b` further fine-tuned on long conversations
 
 
 # Model Summary
@@ -59,6 +60,8 @@ We release under the Apache 2.0 license 2 checkpoints:
 
 For optimal results, we recommend fine-tuning `idefics2-8b` on one's specific use-case and data. In fact, the instruction-fine-tuned model (`idefics2-8b`) is significantly better at following instructions from users and thus should be preferred when using the models out-of-the-box or as a starting point for fine-tuning.
 
+`idefics2-8b` usually generates very short answers. For long generations, use `idefics2-8b-chatty`, which was further fine-tuned on long conversations.
+
 As a starting point, we provide fine-tuning codes that can be adapted for one's particular scenario:
 - With the [TRL library](https://github.com/huggingface/trl): [Script](https://gist.github.com/edbeeching/228652fc6c2b29a1641be5a5778223cb)
 - With the [Hugging Face Trainer](https://huggingface.co/docs/transformers/main/en/main_classes/trainer#api-reference%20][%20transformers.Trainer): [Tutorial notebook](https://colab.research.google.com/drive/1NtcTgRbSBKN7pYD3Vdx1j9m8pt3fhFDB?usp=sharing)
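The instruction-tuned checkpoints above are prompted through a multimodal chat format. As a minimal sketch of what a user turn looks like before it is handed to the model's processor, assuming the Transformers multimodal chat convention of `{"type": "image"}` / `{"type": "text"}` content entries (the helper `build_user_turn` is hypothetical, not part of the library or this commit):

```python
# Hedged sketch, assuming the Transformers multimodal chat-message layout:
# a turn is a dict with "role" and a list of typed "content" entries.
def build_user_turn(prompt: str, num_images: int = 1) -> dict:
    """Build one user turn: image placeholders followed by the text prompt."""
    content = [{"type": "image"} for _ in range(num_images)]
    content.append({"type": "text", "text": prompt})
    return {"role": "user", "content": content}

messages = [build_user_turn("Describe this image.")]
```

In practice such a `messages` list would be rendered to a prompt string with the checkpoint's processor (e.g. via `apply_chat_template`) together with the actual image tensors; the exact template is defined by the model repo, so treat this structure as illustrative.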