Llama-8b-SimPO-Plus11 / train_results.json
{
"epoch": 0.998691442030882,
"total_flos": 0.0,
"train_loss": -0.08717740175609069,
"train_runtime": 14342.367,
"train_samples": 61135,
"train_samples_per_second": 4.263,
"train_steps_per_second": 0.033
}
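For reference, a minimal sketch of how a train_results.json like the one above can be read back and sanity-checked in Python. The field names follow the file itself; the file path and the derived-throughput check (samples divided by runtime) are assumptions for illustration, not part of the original file.

```python
import json

# Load the training summary (path is hypothetical; point it at the file above).
with open("train_results.json") as f:
    metrics = json.load(f)

print(f"epoch:       {metrics['epoch']:.4f}")
print(f"train_loss:  {metrics['train_loss']:.6f}")
print(f"runtime (s): {metrics['train_runtime']:.1f}")

# Sanity check: train_samples / train_runtime should roughly match the
# reported train_samples_per_second (here 61135 / 14342.367 ≈ 4.263).
derived = metrics["train_samples"] / metrics["train_runtime"]
print(f"derived samples/s: {derived:.3f} "
      f"(reported: {metrics['train_samples_per_second']:.3f})")
```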