Inference Error: Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cuda:2!

#2
by LDY - opened

I hit a RuntimeError when running the official code for inference:

Expected all tensors to be on the same device, but found at least two devices, cuda:7 and cuda:2!

Test code:

import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

model_path = ".cache/huggingface/hub/models--Salesforce--blip2-flan-t5-xl/snapshots/cc2bb7bce2f7d4d1c37753c7e9c05a443a226614/"
processor = Blip2Processor.from_pretrained(model_path)
model = Blip2ForConditionalGeneration.from_pretrained(model_path, device_map="auto")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' 
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")

print("model: ",model.hf_device_map)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

System Info:

  • OS: Ubuntu 18.04.2 LTS
  • One machine with 8x Tesla P100-PCIE-16GB

How can I fix this error?
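For context, a minimal sketch of the workaround I am considering, assuming the mismatch comes from `.to("cuda")` defaulting to cuda:0 while device_map="auto" shards the model across other GPUs. `move_to_device` is a hypothetical helper, not a transformers API; the example uses CPU tensors as a stand-in for the real processor output:

```python
import torch

def move_to_device(batch, device):
    # Move every tensor in the processor's output dict to a single device,
    # leaving any non-tensor entries untouched.
    return {k: (v.to(device) if torch.is_tensor(v) else v) for k, v in batch.items()}

# Stand-in for the real processor output, using CPU tensors for illustration:
batch = {"input_ids": torch.tensor([[1, 2, 3]]), "pixel_values": torch.zeros(1, 3, 4, 4)}

# In the real script the target would be the device holding the model's first
# shard, e.g. next(model.parameters()).device, instead of "cpu".
moved = move_to_device(batch, "cpu")
```

I am not sure whether generate() then keeps intermediate tensors on the right devices, which is why I am asking here.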
