LLM for ARC
This model is a fine-tuned version of meta-llama/Meta-Llama-3.1-8B-Instruct on the following datasets:

- barc0/transduction_heavy_100k_jsonl
- barc0/transduction_heavy_suggestfunction_100k_jsonl
- barc0/transduction_rearc_dataset_400k
- barc0/transduction_angmented_100k-gpt4-description-gpt4omini-code_generated_problems
- barc0/transduction_angmented_100k_gpt4o-mini_generated_problems

It achieves the following results on the evaluation set:

- Loss: 0.0219
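As a usage reference, here is a minimal inference sketch with the Hugging Face transformers library. The repo id below is a placeholder assumption (the exact model id is not stated above), and the prompt content is illustrative only.

```python
# Minimal inference sketch with Hugging Face transformers.
# The model id below is a placeholder assumption; substitute the actual
# repo id of this fine-tune.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/llama-3.1-8b-arc-transduction"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # 8B weights fit in ~16 GB in bf16
    device_map="auto",
)

# Llama-3.1-Instruct fine-tunes expect the chat template.
messages = [{"role": "user", "content": "<ARC task prompt goes here>"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```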
Training results:
| Training Loss | Epoch | Step  | Validation Loss |
|---------------|-------|-------|-----------------|
| 0.0378        | 1.0   | 3729  | 0.0330          |
| 0.0234        | 2.0   | 7458  | 0.0227          |
| 0.0116        | 3.0   | 11187 | 0.0219          |
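For orientation, here is a sketch of how such a fine-tune can be set up with trl's SFTTrainer. The hyperparameters shown are placeholders, not the values actually used (the original hyperparameter list is not preserved here), and the dataset is assumed to be in a chat format that SFTTrainer handles directly.

```python
# Illustrative SFT setup with trl; all hyperparameters are placeholders,
# NOT the values used to train this model.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# One of the training sets listed above; the remaining sets can be
# concatenated in the same way.
dataset = load_dataset("barc0/transduction_heavy_100k_jsonl", split="train")

config = SFTConfig(
    output_dir="llama-3.1-8b-arc-transduction",
    num_train_epochs=3,             # matches the 3 epochs in the table above
    learning_rate=1e-5,             # placeholder; actual value unknown
    per_device_train_batch_size=4,  # placeholder; actual value unknown
    bf16=True,
)

trainer = SFTTrainer(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # base model from the card
    args=config,
    train_dataset=dataset,
)
trainer.train()
```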