---
base_model: google/mt5-small
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: massive_intent
  results: []
---

[Visualize in Weights & Biases](https://wandb.ai/halakoo-mohammadreza-sad-warrior/huggingface/runs/r891swua)

# massive_intent

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5894

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.6225        | 1.0   | 352  | 4.8167          |
| 3.975         | 2.0   | 704  | 2.0456          |
| 2.2634        | 3.0   | 1056 | 1.2890          |
| 1.8345        | 4.0   | 1408 | 0.9723          |
| 1.3851        | 5.0   | 1760 | 0.7764          |
| 1.3563        | 6.0   | 2112 | 0.6790          |
| 1.2511        | 7.0   | 2464 | 0.6373          |
| 1.2352        | 8.0   | 2816 | 0.6073          |
| 1.0519        | 9.0   | 3168 | 0.5942          |
| 1.0937        | 10.0  | 3520 | 0.5894          |

### Framework versions

- Transformers 4.42.3
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
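
## How to use

The card does not document the model's expected input/output format. Assuming it was fine-tuned to generate an intent label as text from a user utterance (suggested by the name `massive_intent`), a minimal inference sketch might look like the following; the checkpoint identifier, example utterance, and generation length are placeholders, not values taken from this card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "massive_intent"  # placeholder; point this at the actual checkpoint location

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Example utterance; the real input formatting used during fine-tuning is not documented here.
inputs = tokenizer("wake me up at nine am on friday", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=16)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```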
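
## Reproducing the training configuration

The hyperparameters listed above map directly onto `Seq2SeqTrainingArguments` from the `transformers` library. The sketch below is an assumption, not the original training script: the output directory, evaluation strategy, and W&B reporting are illustrative, and dataset preparation plus the `Seq2SeqTrainer` call are omitted.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the listed hyperparameters; not the script
# that produced this checkpoint.
training_args = Seq2SeqTrainingArguments(
    output_dir="massive_intent",    # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # 8 x 4 = total train batch size of 32
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_steps=100,
    seed=42,
    fp16=True,                      # "Native AMP" mixed-precision training
    eval_strategy="epoch",          # assumed; the card reports per-epoch validation losses
    report_to="wandb",              # matches the W&B run linked above
)
# The default AdamW optimizer already uses betas=(0.9, 0.999) and epsilon=1e-08,
# so it does not need to be set explicitly.
```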