
Error in Finetuning

#7
by DrNicefellow - opened

TypeError: OLMoForCausalLM.forward() got an unexpected keyword argument 'inputs_embeds'.

The error appears when I run:

import torch
from transformers import TrainingArguments
from trl import SFTTrainer

trainer = SFTTrainer(
    model = model,
    tokenizer = tokenizer,
    train_dataset = dataset,
    dataset_text_field = "text",
    max_seq_length = max_seq_length,
    # data_collator = collator,
    dataset_num_proc = 32,
    packing = False,  # Can make training 5x faster for short sequences.
    args = TrainingArguments(
        num_train_epochs = 1,
        save_total_limit = 3,
        save_steps = 50,
        per_device_train_batch_size = 20,
        gradient_accumulation_steps = 4,
        warmup_steps = 5,
        learning_rate = 2e-4,
        fp16 = not torch.cuda.is_bf16_supported(),
        bf16 = torch.cuda.is_bf16_supported(),
        logging_steps = 1,
        optim = "adamw_8bit",
        weight_decay = 0.01,
        lr_scheduler_type = "linear",
        seed = 3407,
        output_dir = "olmo7b_outputs",
    ),
)

trainer_stats = trainer.train()
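
For reference, a quick way to confirm the mismatch (assuming model here is the OLMo model object already loaded for the trainer above) is to inspect the forward signature; on the affected hf_olmo versions, inputs_embeds is not listed:

import inspect

# Prints False on versions whose forward() does not yet accept `inputs_embeds`.
print("inputs_embeds" in inspect.signature(model.forward).parameters)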

shanearora (Ai2 org)

Fixing this appears to require changes in both the OLMo repo and this repo. The OLMo repo change was completed today (https://github.com/allenai/OLMo/pull/442), so once a new minor release of that repo is done, this repo will be fixed.
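
Until that release is available, one possible stopgap (just a sketch, and only applicable if the training stack is passing inputs_embeds=None rather than real embeddings) is to wrap the loaded model's forward so the unsupported kwarg is dropped when it is None:

import functools

_orig_forward = model.forward

@functools.wraps(_orig_forward)
def _patched_forward(*args, **kwargs):
    # Drop `inputs_embeds` only when it is explicitly passed as None,
    # i.e. when it carries no information for the forward pass.
    if "inputs_embeds" in kwargs and kwargs["inputs_embeds"] is None:
        kwargs.pop("inputs_embeds")
    return _orig_forward(*args, **kwargs)

model.forward = _patched_forward

Upgrading to the fixed releases once they are out is the cleaner solution.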

@shanearora Thank you very much for the information! Please close this issue once the new releases of both repos are ready.
