language_model / all_results.json
First version of the language model and tokenizer.
{
"epoch": 3.0,
"train_loss": 2.338944840370547,
"train_runtime": 6731.0386,
"train_samples": 2100,
"train_samples_per_second": 0.936,
"train_steps_per_second": 0.117
}
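
The two *_per_second fields follow from the raw counts: 3 epochs over 2,100 samples in roughly 6,731 seconds gives about 0.936 samples per second, and the ratio of samples to steps per second suggests an effective batch size of about 8. A minimal sketch, assuming the file is read from the current working directory, that re-derives these figures:

import json

# Minimal sketch: re-derive the throughput figures from the raw counts.
# Assumes all_results.json is in the current working directory.
with open("all_results.json") as f:
    r = json.load(f)

total_samples = r["train_samples"] * r["epoch"]      # 2100 * 3.0 = 6300
print(total_samples / r["train_runtime"])            # ~0.936, matches train_samples_per_second
print(r["train_samples_per_second"] / r["train_steps_per_second"])  # ~8, the implied samples per step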