llama-3-86-lora-pretrain_v2 / eval_results.json
{
  "epoch": 2.9964796996010326,
  "eval_loss": 2.235288619995117,
  "eval_runtime": 115.517,
  "eval_samples_per_second": 8.198,
  "eval_steps_per_second": 4.103,
  "perplexity": 9.349179823068619
}
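
For reference, the reported perplexity is consistent with exponentiating the evaluation loss, which is how perplexity is commonly derived from a mean cross-entropy loss. A minimal sketch that reproduces the value from the fields above (plain Python, no training code assumed):

import math

# eval_loss as reported in eval_results.json above.
eval_loss = 2.235288619995117

# Perplexity is the exponential of the mean cross-entropy (eval) loss.
perplexity = math.exp(eval_loss)
print(perplexity)  # ~9.3492, matching the "perplexity" field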