How to pass a large input, or split it, to make use of the 100K-token context

#23
by Jamil-Brian - opened

Hello everyone,
I would like to know if someone could help me with this or suggest what to do.
What I want is to use the theoretical 100K-token context of codellama-instruct for the input prompt.
I am using 8 GPUs with 32 GB each, but when I try to send a very long input prompt, I get this error:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 69.32 GiB. GPU 0 has a total capacity of 31.74 GiB of which 24.29 GiB is free. Including non-PyTorch memory, this process has 7.44 GiB memory in use. Of the allocated memory 6.50 GiB is allocated by PyTorch, and 593.00 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
It's as if only one GPU is being used to process the input while the others are ignored.
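For context, this is roughly the kind of multi-GPU loading I mean (a minimal sketch, not my exact script; it assumes transformers with accelerate installed, and the checkpoint name is just a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; any codellama-instruct variant would apply.
model_id = "codellama/CodeLlama-13b-Instruct-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce weight memory
    device_map="auto",          # shard the layers across all visible GPUs
    # attn_implementation="flash_attention_2",  # optional (needs flash-attn and a
    #                                           # recent transformers): avoids
    #                                           # materializing the full
    #                                           # seq_len x seq_len attention
    #                                           # matrix on a single GPU
)
```

My understanding is that device_map="auto" only shards the weights; the attention computation for the prompt can still try to allocate one huge tensor on a single GPU, which might be what the 69.32 GiB allocation is.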

This issue doesn't occur when generating a long output; in that case all the GPUs work correctly. It only happens when I send a very long input prompt.
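And if splitting is the only option, this is the kind of thing I mean by splitting the entry (a rough sketch; chunk_size is an arbitrary placeholder, and I realize the chunks would no longer attend to each other):

```python
def split_entry(text: str, tokenizer, chunk_size: int = 4096) -> list[str]:
    """Split a long entry into pieces of at most chunk_size tokens."""
    ids = tokenizer(text, add_special_tokens=False).input_ids
    return [
        tokenizer.decode(ids[i:i + chunk_size])
        for i in range(0, len(ids), chunk_size)
    ]

# Each piece could then be sent as its own prompt,
# at the cost of losing cross-chunk context.
```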

I would appreciate any help or suggestions you may have.
