OOM Issue with 16GB GPU
#9
by ftopal - opened
Hi, this could be caused either by the input text being too long or by the batch_size being too large. Try processing a much smaller amount of data at a time to see whether it runs successfully.
Did you also make sure to run inference with

```python
model.eval()
with torch.no_grad():
    ...
```
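For reference, here is a minimal sketch of memory-friendly batched inference along those lines. The model name, batch size, max_length, and pooling choice are illustrative assumptions, not specific to this repo:

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "your-org/your-model"  # placeholder; use the model from this repo
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name).to("cuda")
model.eval()  # disable dropout and other training-only behavior

texts = ["example sentence"] * 100  # your data here
batch_size = 8  # start small, then increase while memory allows

embeddings = []
with torch.no_grad():  # no gradient buffers -> much lower GPU memory use
    for i in range(0, len(texts), batch_size):
        batch = texts[i : i + batch_size]
        inputs = tokenizer(
            batch,
            padding=True,
            truncation=True,
            max_length=512,  # cap sequence length to bound memory
            return_tensors="pt",
        ).to(model.device)
        out = model(**inputs)
        # Move results to CPU each batch so GPU memory is freed as you go;
        # CLS pooling here is just one example of a pooling strategy.
        embeddings.append(out.last_hidden_state[:, 0].cpu())

embeddings = torch.cat(embeddings)
```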
izhx changed discussion status to closed