---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- recall
- precision
model-index:
- name: medium-base-News_About_Gold
  results: []
language:
- en
pipeline_tag: text-classification
---

# medium-base-News_About_Gold

This model is a fine-tuned version of [funnel-transformer/medium-base](https://huggingface.co/funnel-transformer/medium-base).

It achieves the following results on the evaluation set:
- Loss: 0.2838
- Accuracy: 0.9172
- Weighted f1: 0.9170
- Micro f1: 0.9172
- Macro f1: 0.8854
- Weighted recall: 0.9172
- Micro recall: 0.9172
- Macro recall: 0.8859
- Weighted precision: 0.9171
- Micro precision: 0.9172
- Macro precision: 0.8853

## Model description

For more information on how this model was created, see the training notebook: https://github.com/DunnBC22/NLP_Projects/blob/main/Sentiment%20Analysis/Sentiment%20Analysis%20of%20Commodity%20News%20-%20Gold%20(Transformer%20Comparison)/News%20About%20Gold%20-%20Sentiment%20Analysis%20-%20Funnel%20with%20W%26B.ipynb

This model is part of a comparison of seven transformers. The README for the comparison is here: https://github.com/DunnBC22/NLP_Projects/tree/main/Sentiment%20Analysis/Sentiment%20Analysis%20of%20Commodity%20News%20-%20Gold%20(Transformer%20Comparison)

## Intended uses & limitations

This model is intended to demonstrate my ability to solve a complex problem using technology.

## Training and evaluation data

Dataset source: https://www.kaggle.com/datasets/ankurzing/sentiment-analysis-in-commodity-market-gold

_Input Word Length:_

![Length of Input Text (in Words)](https://github.com/DunnBC22/NLP_Projects/raw/main/Sentiment%20Analysis/Sentiment%20Analysis%20of%20Commodity%20News%20-%20Gold%20(Transformer%20Comparison)/Images/Input%20Word%20Length.png)

_Class Distribution:_

![Class Distribution](https://github.com/DunnBC22/NLP_Projects/raw/main/Sentiment%20Analysis/Sentiment%20Analysis%20of%20Commodity%20News%20-%20Gold%20(Transformer%20Comparison)/Images/Class%20Distribution.png)

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch appears at the end of this card):
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Weighted F1 | Micro F1 | Macro F1 | Weighted Recall | Micro Recall | Macro Recall | Weighted Precision | Micro Precision | Macro Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-----------:|:--------:|:--------:|:---------------:|:------------:|:------------:|:------------------:|:---------------:|:---------------:|
| 0.7426        | 1.0   | 133  | 0.3820          | 0.8803   | 0.8636      | 0.8803   | 0.6690   | 0.8803          | 0.8803       | 0.6809       | 0.8862             | 0.8803          | 0.8992          |
| 0.332         | 2.0   | 266  | 0.3083          | 0.9007   | 0.8987      | 0.9007   | 0.8525   | 0.9007          | 0.9007       | 0.8402       | 0.9015             | 0.9007          | 0.8705          |
| 0.2381        | 3.0   | 399  | 0.2870          | 0.9106   | 0.9097      | 0.9106   | 0.8686   | 0.9106          | 0.9106       | 0.8539       | 0.9096             | 0.9106          | 0.8862          |
| 0.1911        | 4.0   | 532  | 0.2797          | 0.9163   | 0.9158      | 0.9163   | 0.8843   | 0.9163          | 0.9163       | 0.8819       | 0.9159             | 0.9163          | 0.8873          |
| 0.1584        | 5.0   | 665  | 0.2838          | 0.9172   | 0.9170      | 0.9172   | 0.8854   | 0.9172          | 0.9172       | 0.8859       | 0.9171             | 0.9172          | 0.8853          |

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
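
### Training configuration sketch

For reference, the hyperparameters listed above map onto a `TrainingArguments` configuration roughly like the sketch below. This is a minimal reconstruction, not the exact training code (see the linked notebook for that); the `output_dir` and the per-epoch evaluation strategy are assumptions.

```python
from transformers import TrainingArguments

# Sketch of a configuration matching the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="medium-base-News_About_Gold",  # assumed output directory
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    evaluation_strategy="epoch",  # assumption: eval once per epoch, as in the results table
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default
# optimizer setting, so no extra arguments are needed for it.
```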
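
## How to use

A minimal inference sketch using the `transformers` pipeline API. The repo id below is inferred from the model name and is an assumption; replace it with the actual path to this model.

```python
from transformers import pipeline

# Load the fine-tuned classifier; the repo id is assumed from the model name.
classifier = pipeline(
    "text-classification",
    model="DunnBC22/medium-base-News_About_Gold",
)

# Classify the sentiment of a gold-related news headline.
result = classifier("Gold prices edge higher as the dollar weakens")
print(result)  # e.g. [{'label': '...', 'score': 0.97}]
```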