How to use NVLM for text-only tasks? [wrong pixel_values size: torch.Size([1, 5])]

#28 by vedantbahel

I'm getting this error: `ValueError: wrong pixel_values size: torch.Size([1, 5])`

Below is my code. I need to run the model on CPU:

```python
from transformers import AutoModel, AutoTokenizer
import torch

# Load the model with the device map set to "cpu" for CPU usage
path = "nvidia/NVLM-D-72B"
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    use_flash_attn=False,
    trust_remote_code=True,
    device_map={"": "cpu"},  # Ensure everything is on CPU
).eval()

# Set the device to CPU, since CUDA is not available
device = 'cpu'  # Force device to CPU if no GPU is available
print(device)  # Ensure it prints 'cpu'
model = model.to(device)  # Explicitly move the model to CPU

# Load the tokenizer (no need to move it to a device)
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

# Set up the generation configuration
generation_config = dict(max_new_tokens=1024, do_sample=False)

# Query for the model
query = 'What is transformer model?'

# Tokenize the query
inputs = tokenizer(query, return_tensors="pt").to(device)

# Generate a response with the model
with torch.no_grad():
    outputs = model.generate(inputs["input_ids"], max_length=1024)

# Decode and print the response
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

NVIDIA org

Hi @vedantbahel,

We provide an example in the README for text-only generation:

```python
# pure-text conversation
question = 'Hello, who are you?'
response, history = model.chat(tokenizer, None, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```
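
The `torch.Size([1, 5])` in your traceback is most likely the tokenized query itself: the model's custom `generate` appears to take `pixel_values` as its first positional argument, so `model.generate(inputs["input_ids"], ...)` routes the input IDs into the vision branch. Passing `None` as the image argument of `chat` skips the vision path entirely. Adapting your CPU setup to the `chat` API, a minimal sketch might look like this (assuming enough RAM to hold the 72B weights in bfloat16):

```python
from transformers import AutoModel, AutoTokenizer
import torch

path = "nvidia/NVLM-D-72B"

# Same CPU loading as in your snippet
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    use_flash_attn=False,
    trust_remote_code=True,
    device_map={"": "cpu"},
).eval()
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)

generation_config = dict(max_new_tokens=1024, do_sample=False)

# Pass None instead of pixel_values for a pure-text conversation
question = 'What is transformer model?'
response, history = model.chat(tokenizer, None, question, generation_config,
                               history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
```

Note that `chat` comes from the model's remote code (enabled by `trust_remote_code=True`) and handles the prompt template and tokenization internally, so the manual `tokenizer(...)` and `model.generate(...)` calls are not needed.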

Please let me know if you have further issues.

Thanks.
Best,
Boxin
