LoRA adapter for transduction test-time fine-tuning
This model is a fine-tuned version of barc0/engineer1-heavy-barc-llama3.1-8b-ins-fft-transduction_lr1e-5_epoch3 on the barc0/transduction_augmented_test_timearc_all_evaluation_new_seperate, barc0/transduction_rearc_dataset_400k and barc0/transduction_heavy_100k_jsonl datasets. Its results on the evaluation set are reported in the training-results table below.
Model description: More information needed

Intended uses & limitations: More information needed

Training and evaluation data: More information needed
The following results were obtained during training:
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 0.0331 | 1.0 | 1334 | 0.0359 |
| 0.028 | 2.0 | 2668 | 0.0299 |
| 0.0009 | 3.0 | 4002 | 0.0333 |
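The validation loss is lowest at epoch 2 and rises again at epoch 3 while the training loss keeps falling, a mild overfitting signature. A small sketch (plain Python, values copied from the table above) that picks the best checkpoint by validation loss:

```python
# (epoch, training_loss, validation_loss) rows copied from the table above
results = [
    (1.0, 0.0331, 0.0359),
    (2.0, 0.0280, 0.0299),
    (3.0, 0.0009, 0.0333),
]

# Select the checkpoint with the lowest validation loss, not training loss
best = min(results, key=lambda row: row[2])
print(f"best epoch: {best[0]}, validation loss: {best[2]}")
# best epoch: 2.0, validation loss: 0.0299
```

If checkpoints are saved per epoch, this suggests keeping the epoch-2 checkpoint rather than the final one.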
Base model: meta-llama/Llama-3.1-8B
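Since this repository ships a LoRA adapter rather than a full set of weights, it must be attached to the fine-tuned base model at load time. A minimal sketch with the `transformers` and `peft` libraries; `ADAPTER_REPO` is a placeholder, as this card does not state the adapter's exact Hub id:

```python
# Sketch: load the base model and attach the LoRA adapter with peft.
# BASE_REPO comes from this card; ADAPTER_REPO is a placeholder (the card
# does not state the adapter's Hub id). Assumes `transformers` and `peft`
# are installed; imports are deferred so the definitions stay lightweight.
BASE_REPO = "barc0/engineer1-heavy-barc-llama3.1-8b-ins-fft-transduction_lr1e-5_epoch3"
ADAPTER_REPO = "your-org/your-lora-adapter"  # placeholder, not from the card


def load_model(adapter_repo: str = ADAPTER_REPO):
    """Return (model, tokenizer) with the LoRA adapter applied on top of the base."""
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(BASE_REPO, torch_dtype="auto")
    model = PeftModel.from_pretrained(base, adapter_repo)
    tokenizer = AutoTokenizer.from_pretrained(BASE_REPO)
    return model, tokenizer
```

Loading the full 8B base model requires substantial memory; the adapter itself adds only the low-rank LoRA weights on top of it.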