---
license: apache-2.0
base_model: JackFram/llama-68m
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: recreate_llama_68M_vanilla
  results: []
---

# recreate_llama_68M_vanilla

This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on the `ShareGPT_V3_unfiltered_cleaned_split.json` file of the [anon8231489123/ShareGPT_Vicuna_unfiltered](https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered) dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3558
- Accuracy: 0.5820

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 24
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 3.0

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Accuracy |
|:-------------:|:------:|:-----:|:---------------:|:--------:|
| 3.406         | 0.2644 | 1000  | 3.2345          | 0.5035   |
| 2.8119        | 0.5288 | 2000  | 2.8216          | 0.5365   |
| 2.6076        | 0.7932 | 3000  | 2.6553          | 0.5501   |
| 2.4729        | 1.0576 | 4000  | 2.5761          | 0.5581   |
| 2.4323        | 1.3221 | 5000  | 2.5363          | 0.5617   |
| 2.3824        | 1.5865 | 6000  | 2.4913          | 0.5660   |
| 2.3719        | 1.8509 | 7000  | 2.4664          | 0.5686   |
| 2.3021        | 2.1153 | 8000  | 2.4404          | 0.5716   |
| 2.2848        | 2.3797 | 9000  | 2.4080          | 0.5755   |
| 2.2653        | 2.6441 | 10000 | 2.3834          | 0.5785   |
| 2.2447        | 2.9085 | 11000 | 2.3603          | 0.5811   |

### Framework versions

- Transformers 4.41.0.dev0
- PyTorch 2.1.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
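
The checkpoint is a standard causal language model, so it can be loaded with the `transformers` Auto classes. The sketch below is a minimal usage example; the model identifier `recreate_llama_68M_vanilla` is a placeholder and assumes the fine-tuned checkpoint has been pushed to the Hub or saved to a local directory under that name.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder: replace with the actual Hub repo id or local checkpoint path.
model_id = "recreate_llama_68M_vanilla"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Hello, how are you?"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding of a short continuation; adjust generation kwargs as needed.
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```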
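
The hyperparameters listed above map directly onto `transformers.TrainingArguments`. The sketch below shows that mapping; it is a reconstruction, not the actual training script (which is not included in this card), and the output directory and per-device interpretation of the batch sizes are assumptions.

```python
from transformers import TrainingArguments

# Assumed reconstruction of the configuration from the hyperparameter list.
# Batch sizes are taken as per-device values; the effective sizes may differ
# if multiple GPUs or gradient accumulation were used.
training_args = TrainingArguments(
    output_dir="recreate_llama_68M_vanilla",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=48,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.05,
    num_train_epochs=3.0,
)
```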