Hi, can someone please tell me the computational server requirements to run the OpenGVLab/InternVL2-8B model?

#6
by vobbilisettyjayadeep - opened

Hi, can someone help me with the computational requirements that this model requires?
Please share the system requirements, mainly the GPU requirements, for this model.
Is there any detailed documentation available for OpenGVLab/InternVL2-8B?

I didn't find anything for the 8B model, but for the 26B, 40B, and 76B models you can find the requirements as comments in the code of the Quick Start section of the Hugging Face model card.

For the 26B model:
If you have an 80G A100 GPU, you can put the entire model on a single GPU.

For the 40B model:
If you set load_in_8bit=True, you will need one 80GB GPU.
If you set load_in_8bit=False, you will need at least two 80GB GPUs.

And for the 76B model:
If you set load_in_8bit=True, you will need two 80GB GPUs.
If you set load_in_8bit=False, you will need at least three 80GB GPUs.

I am aware that this is not super helpful, but maybe it gives you an idea so you can better estimate what you need (see the loading sketch below for how load_in_8bit comes into play). Hope this helps a little bit.
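For reference, here is a minimal sketch of how the load_in_8bit flag is typically passed when loading these models with transformers. The arguments follow the usual pattern from the larger models' model cards and assume bitsandbytes is installed; treat it as illustrative rather than official, and the exact behavior may depend on your transformers/bitsandbytes versions.

```python
import torch
from transformers import AutoModel, AutoTokenizer

path = "OpenGVLab/InternVL2-40B"  # or another size, e.g. -26B / -76B

# 8-bit weights roughly halve GPU memory versus bf16;
# device_map="auto" lets accelerate spread layers across the available GPUs.
model = AutoModel.from_pretrained(
    path,
    torch_dtype=torch.bfloat16,
    load_in_8bit=True,        # set to False for full bf16 weights (needs more GPUs)
    low_cpu_mem_usage=True,
    device_map="auto",
    trust_remote_code=True,   # InternVL2 ships custom modeling code
).eval()

tokenizer = AutoTokenizer.from_pretrained(path, trust_remote_code=True, use_fast=False)
```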

Thanks for the reply, @flipm.

OpenGVLab org

I think a 24GB GPU should be sufficient, even without any quantization.
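As a rough check: 8B parameters in bfloat16 are about 16 GB of weights, which leaves headroom for the vision encoder, activations, and the KV cache on a 24 GB card. A minimal single-GPU sketch (assumed loading pattern, not taken from the 8B model card):

```python
import torch
from transformers import AutoModel

# InternVL2-8B in bf16: ~16 GB of weights, so a single 24 GB GPU should fit it.
model = AutoModel.from_pretrained(
    "OpenGVLab/InternVL2-8B",
    torch_dtype=torch.bfloat16,
    low_cpu_mem_usage=True,
    trust_remote_code=True,
).eval().cuda()
```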

czczup changed discussion status to closed
