---
library_name: transformers
license: llama3.1
base_model: meta-llama/Llama-3.1-8B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: llama_8b_lima_40
  results: []
---

# llama_8b_lima_40

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B) on the open_webui_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9296

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6e-06
- train_batch_size: 3
- eval_batch_size: 2
- seed: 66
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 6
- total_train_batch_size: 36
- total_eval_batch_size: 4
- optimizer: 8-bit AdamW with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_ratio: 0.04
- num_epochs: 1.0

(The effective train batch size of 36 is the per-device batch size of 3 × 2 devices × 6 gradient-accumulation steps.)

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.1355        | 0.0494 | 40   | 1.0442          |
| 0.9066        | 0.0987 | 80   | 1.0029          |
| 0.9609        | 0.1481 | 120  | 0.9815          |
| 1.0128        | 0.1974 | 160  | 0.9743          |
| 0.9558        | 0.2468 | 200  | 0.9663          |
| 0.9236        | 0.2962 | 240  | 0.9598          |
| 1.0243        | 0.3455 | 280  | 0.9488          |
| 0.9767        | 0.3949 | 320  | 0.9481          |
| 0.763         | 0.4443 | 360  | 0.9467          |
| 0.944         | 0.4936 | 400  | 0.9442          |
| 0.937         | 0.5430 | 440  | 0.9415          |
| 0.8981        | 0.5923 | 480  | 0.9385          |
| 0.9358        | 0.6417 | 520  | 0.9364          |
| 0.8273        | 0.6911 | 560  | 0.9349          |
| 0.8544        | 0.7404 | 600  | 0.9339          |
| 0.8621        | 0.7898 | 640  | 0.9315          |
| 0.8641        | 0.8392 | 680  | 0.9320          |
| 0.7915        | 0.8885 | 720  | 0.9314          |
| 0.8847        | 0.9379 | 760  | 0.9302          |
| 0.9551        | 0.9872 | 800  | 0.9304          |

### Framework versions

- Transformers 4.46.1
- PyTorch 2.4.0
- Datasets 3.1.0
- Tokenizers 0.20.3
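
## How to use

The description sections above are auto-generated placeholders, so here is a minimal, illustrative inference sketch using the standard transformers generation API. The repo id `your-username/llama_8b_lima_40` is a hypothetical placeholder; point it at wherever this checkpoint is actually hosted (a Hub repo or a local directory).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- replace with the actual Hub repo or local path.
model_id = "your-username/llama_8b_lima_40"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # bf16 keeps the 8B weights to roughly 16 GB
    device_map="auto",           # shard across available GPUs if needed
)

prompt = "Explain gradient accumulation in one paragraph."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since the base model is Llama-3.1-8B, the bf16 weights alone need about 16 GB of accelerator memory; `device_map="auto"` lets accelerate place or shard the layers across whatever devices are available.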