Fix typos #1
by Xenova - opened

README.md CHANGED
```diff
@@ -46,7 +46,7 @@ https://llava-vl.github.io/
 ## How to use the model
 
 First, make sure to have `transformers` installed from [branch](https://github.com/huggingface/transformers/pull/32673) or `transformers >= 4.45.0`.
 
-The model supports multi-image and multi-prompt generation. Meaning that you can pass multiple images in your prompt. Make sure also to follow the correct prompt template by
+The model supports multi-image and multi-prompt generation, meaning that you can pass multiple images in your prompt. Make sure also to follow the correct prompt template by applying the chat template:
 
 ### Using `pipeline`:
@@ -74,7 +74,7 @@ conversation = [
     ],
   },
 ]
-prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
+prompt = pipe.processor.apply_chat_template(conversation, add_generation_prompt=True)
 
 outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
 print(outputs)
```
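For context, the corrected line belongs to a `pipeline` usage example like the sketch below. Only the lightweight prompt-side setup is executed here; the checkpoint id is a placeholder assumption (not taken from this diff), so the model-loading and generation lines are left as comments. The point of the fix is that the processor is reached through the pipeline object as `pipe.processor`, since no bare `processor` variable is ever defined in the snippet.

```python
# Heavy steps shown but not run; the checkpoint id is a hypothetical placeholder.
# from transformers import pipeline
# pipe = pipeline("image-text-to-text", model="llava-hf/<checkpoint>")

# Chat-template conversation format: each user turn lists its content parts,
# with one {"type": "image"} entry per image passed to the pipeline.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "What is shown in this image?"},
        ],
    },
]

def count_image_slots(conv):
    """Count {"type": "image"} placeholders; this must match the number
    of images supplied to the pipeline call."""
    return sum(
        1
        for turn in conv
        for part in turn["content"]
        if part["type"] == "image"
    )

# The typo fix in this PR: the processor hangs off the pipeline object,
# so the prompt is built via pipe.processor, not a bare `processor`:
# prompt = pipe.processor.apply_chat_template(conversation, add_generation_prompt=True)
# outputs = pipe(image, prompt=prompt, generate_kwargs={"max_new_tokens": 200})
# print(outputs)
```

Because multi-image prompts are supported, keeping the placeholder count in sync with the images you pass is the main thing the chat template enforces for you.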