Problem with LoRA fine-tuning: out of memory

#13 · opened by zokica

When doing PEFT fine-tuning, I always get an OOM error at the point where the model gets saved.

There is definitely enough memory: it's a 48 GB card, the run uses less than 20 GB, and then there is a huge spike at the time the model is saved. This does not happen with the Gemma-2 9B model.

I train the 9B model with the same data, and there is no problem at all.

#############
OutOfMemoryError: CUDA out of memory. Tried to allocate 15.26 GiB. GPU 0 has a total capacty of 47.40 GiB of which 11.35 GiB is free. Process 2397559 has 35.67 GiB memory in use. Of the allocated memory 34.28 GiB is allocated by PyTorch, and 903.30 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
###################################
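The error message itself points at PYTORCH_CUDA_ALLOC_CONF / max_split_size_mb. For reference, a minimal sketch of setting it (the 512 MiB value is just an example, not something I have verified for this model):

import os

# Must be set before torch initializes CUDA for it to take effect.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

import torch  # imported after setting the env var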

trainer = transformers.Trainer(
    model=model,
    train_dataset=tokenized_train_dataset,
    eval_dataset=tokenized_val_dataset,
    args=transformers.TrainingArguments(
        output_dir=output_dir,
        warmup_steps=2,
        per_device_train_batch_size=batch_size,
        gradient_accumulation_steps=gradient_accumulation_steps,
        num_train_epochs=num_epochs,
        # max_steps=10000,
        learning_rate=2e-4,
        bf16=True,
        # optim="paged_adamw_8bit",
        optim="paged_adamw_32bit",
        logging_steps=5,
        logging_dir="./logs",
        save_strategy="steps",
        save_steps=5,
        evaluation_strategy="steps",
        eval_steps=5,
        do_eval=True,
        # use_flash_attn=True,
    ),
    data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
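For reference, after training I only need the LoRA adapter weights, which can be saved on their own (a minimal sketch, not the exact code from my run; with a PEFT-wrapped model, save_pretrained writes just the adapter):

trainer.train()

# Saves only adapter_config.json and the adapter weights, not the full base model.
model.save_pretrained(output_dir)
tokenizer.save_pretrained(output_dir)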

It works with the changed params below, but it should really work without these changes as well:

optim="paged_adamw_8bit",
evaluation_strategy="no"
do_eval=False,

I'm not sure whether optim="paged_adamw_32bit" caused the problem, possibly together with evaluation; evaluation used the same dataset params.
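For completeness, a sketch of the TrainingArguments with only those three changes applied (everything else exactly as in the snippet above):

args = transformers.TrainingArguments(
    output_dir=output_dir,
    warmup_steps=2,
    per_device_train_batch_size=batch_size,
    gradient_accumulation_steps=gradient_accumulation_steps,
    num_train_epochs=num_epochs,
    learning_rate=2e-4,
    bf16=True,
    optim="paged_adamw_8bit",   # changed from "paged_adamw_32bit"
    logging_steps=5,
    logging_dir="./logs",
    save_strategy="steps",
    save_steps=5,
    evaluation_strategy="no",   # changed from "steps"
    do_eval=False,              # changed from True
)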

Google org

Hi @zokica, could you please share reproducible code so we can replicate the error and understand the issue better. Thank you.

Thanks, no need for that; it started working the next time I installed the software. This was probably a bug related to the software install or the hardware, I'm not sure which.

Also, Gemma is an awesome model, thanks for open-sourcing it. It's probably the best one so far, apart from the flash attention problems, but those will get solved.
