multi-gpu support?
#9
by matatonic - opened
I'm seeing errors like this when using device_map=auto:
File "modules/transformers_modules/OpenGVLab/InternVL-Chat-V1-5-Int8/0d0a0d47a718aaef231cb00c6cf9ce3fb905269b/modeling_internvl_chat.py", line 348, in generate
input_embeds[selected] = vit_embeds.reshape(-1, C)
~~~~~~~~~~~~^^^^^^^^^^
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1!
Gracefully stopping... (press Ctrl+C again to force)
Is this known? Is there a fix?
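For reference, this is roughly how the model is loaded to hit this (a minimal sketch; the checkpoint path is the one in the traceback, the remaining arguments are typical transformers usage and may differ from the actual setup):

```python
# Rough sketch of a load that triggers the error above; the checkpoint
# path comes from the traceback, the other arguments are typical
# transformers usage, not copied from the original setup.
import torch
from transformers import AutoModel, AutoTokenizer

path = "OpenGVLab/InternVL-Chat-V1-5-Int8"
tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True)
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",  # shards the vision tower and LM layers across GPUs
).eval()
```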
I have the same problem.
Same problem here.
This has been fixed; see the latest README.
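For anyone hitting this before updating: the usual workaround for this class of error is a manual device_map that keeps the vision encoder, the projector, and the LM embedding/output layers on the same GPU, spreading only the transformer blocks across devices. The sketch below is illustrative only: the module names (vision_model, mlp1, language_model.model.layers.N, language_model.output) and the 48-layer count are assumptions, so check the README for the exact recipe.

```python
# Sketch of a manual device_map that keeps the vision encoder, projector,
# and LM embedding/output on cuda:0, so the vision/text merge in generate()
# happens on one device. Module names and layer count are assumptions.
import math
import torch
from transformers import AutoModel

def build_device_map(num_layers: int, num_gpus: int) -> dict:
    device_map = {
        "vision_model": 0,
        "mlp1": 0,                                 # assumed projector name
        "language_model.model.embed_tokens": 0,
        "language_model.model.norm": 0,
        "language_model.output": 0,                # assumed LM head name
    }
    per_gpu = math.ceil(num_layers / num_gpus)
    for i in range(num_layers):
        device_map[f"language_model.model.layers.{i}"] = min(i // per_gpu, num_gpus - 1)
    return device_map

model = AutoModel.from_pretrained(
    "OpenGVLab/InternVL-Chat-V1-5-Int8",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map=build_device_map(num_layers=48, num_gpus=torch.cuda.device_count()),
).eval()
```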
czczup changed discussion status to closed