mistakes

#3
by goodasdgood - opened

It did not work in Colab on a T4.

Qwen/Qwen2-VL-2B-Instruct

Run this before running the given code: !python -m pip install git+https://github.com/huggingface/transformers

I have followed the steps, using Colab T4.
Colab is showing a CUDA out of memory error.

The Colab T4 GPU is sufficient to run the Qwen2-VL-2B-Instruct model. If you already tried with the 2B model, you may have to clear the runtime and try again.
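If you'd rather not restart the whole runtime, a minimal sketch like this (assuming the previous attempt kept the model in a variable named model) usually reclaims the GPU memory:

import gc
import torch

# Drop all references to the previously loaded model,
# then ask Python and PyTorch to release the memory
del model
gc.collect()
torch.cuda.empty_cache()

# Optional: check how much GPU memory is still allocated
print(torch.cuda.memory_allocated() / 1024**3, "GiB allocated")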

@destinykii
!pip install git+https://github.com/huggingface/transformers accelerate
!pip install qwen-vl-utils

from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Load the 2B model; device_map="auto" places it on the T4 GPU
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-2B-Instruct", torch_dtype="auto", device_map="auto"
)

# Bound the visual token budget so a large image cannot exhaust T4 memory
min_pixels = 256 * 28 * 28
max_pixels = 1280 * 28 * 28
# Note: the processor checkpoint should match the model
# (2B-Instruct, not Qwen/Qwen2-VL-7B-Instruct-GPTQ-Int4)
processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)
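For completeness, here is the generation step that goes with the loading code above, following the pattern from the model card (the image URL is just a placeholder):

# Build a chat message containing one image and a text prompt
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/demo.jpeg"},  # placeholder URL
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Render the chat template and extract the vision inputs
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate and decode only the newly produced tokens
generated_ids = model.generate(**inputs, max_new_tokens=128)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])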

On the T4 it is important to control the image size with min_pixels and max_pixels.
It will work well; see the sketch below for an even tighter budget.
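To make the memory effect concrete: in Qwen2-VL each visual token corresponds to roughly a 28x28 pixel area, so max_pixels directly caps the number of visual tokens per image. If the T4 still runs out of memory, a tighter budget is an easy lever (the values below are just an illustration, not a recommendation):

# These bounds translate to roughly 256-512 visual tokens per image,
# versus up to 1280 tokens with the default cap used above
min_pixels = 256 * 28 * 28   # floor: ~256 tokens
max_pixels = 512 * 28 * 28   # cap: ~512 tokens

processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2-VL-2B-Instruct", min_pixels=min_pixels, max_pixels=max_pixels
)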

Yeah, it's working, thanks Nikitosik1 and vasukumar.

But it isn't working with video :(
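Video needs its own message format and a much tighter pixel/frame budget, since every sampled frame consumes visual tokens. A sketch following the model card's video example (the path, fps, and per-frame max_pixels are placeholders) that may fit on a T4:

# Video message: qwen_vl_utils samples frames at the given fps and
# resizes each frame to stay under the per-frame max_pixels
messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "video",
                "video": "file:///path/to/video.mp4",  # placeholder path
                "max_pixels": 360 * 420,  # keep per-frame token count low
                "fps": 1.0,               # sample one frame per second
            },
            {"type": "text", "text": "Describe this video."},
        ],
    }
]

text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)
# Generation is then the same as for images:
# model.generate(**inputs, max_new_tokens=128)

If this still hits CUDA out of memory, lowering fps or max_pixels in the video entry is the first thing to try, because the token count scales with the number of sampled frames.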
