GeorgeBredis committed
Commit 7e7f221 • 1 Parent(s): 54069ef
Update README.md
README.md CHANGED
@@ -35,3 +35,49 @@ This is the model card of a 🤗 transformers model that has been pushed on the
- **Language(s) (NLP):** Russian
- **License:** Apache-2.0
- **Finetuned from model [optional]:** Idefics2

# How to Get Started

This section shows code snippets for generation with `idefics2-8b-base` and `idefics2-8b`; the snippets differ only in input formatting. Let's first define some common imports and inputs.

```python
import requests
import torch
from PIL import Image
from io import BytesIO

from transformers import AutoProcessor, AutoModelForVision2Seq
from transformers.image_utils import load_image

DEVICE = "cuda:0"

# Load the example images
image1 = load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg")
image2 = load_image("https://cdn.britannica.com/59/94459-050-DBA42467/Skyline-Chicago.jpg")
image3 = load_image("https://cdn.britannica.com/68/170868-050-8DDE8263/Golden-Gate-Bridge-San-Francisco.jpg")

processor = AutoProcessor.from_pretrained("GeorgeBredis/ruIdefics2-ruLLaVA-merged")
model = AutoModelForVision2Seq.from_pretrained(
    "GeorgeBredis/ruIdefics2-ruLLaVA-merged",
).to(DEVICE)

# Create inputs
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Что изображено на данной картинке?"},  # "What is shown in this picture?"
        ]
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
# Pass one image to match the single {"type": "image"} placeholder above
inputs = processor(text=prompt, images=[image1], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

generated_ids = model.generate(**inputs, max_new_tokens=500)
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)

print(generated_texts)
```
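
The snippet above passes a single image. A multi-image prompt works the same way: include one `{"type": "image"}` entry per image in the message content and pass the images in the same order. Below is a minimal sketch reusing `image1` and `image2` loaded above; the question text is only an illustrative assumption, not part of the original card.

```python
# Minimal multi-image sketch: two image placeholders, two images passed in order.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "image"},
            {"type": "text", "text": "Чем отличаются эти две картинки?"},  # "How do these two pictures differ?" (example prompt)
        ]
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image1, image2], return_tensors="pt")
inputs = {k: v.to(DEVICE) for k, v in inputs.items()}

generated_ids = model.generate(**inputs, max_new_tokens=500)
print(processor.batch_decode(generated_ids, skip_special_tokens=True))
```

If GPU memory is tight, the model can optionally be loaded in half precision by passing `torch_dtype=torch.float16` to `from_pretrained`; this is a common tweak rather than part of the original snippet.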