AnkitAI committed
Commit 9656e51 • 1 Parent(s): 649a9fc

Update README.md

Files changed (1):
  1. README.md +28 -32
README.md CHANGED
@@ -12,44 +12,42 @@ widget:
  - text: This product is really bad!
  ---
 
- # 🌟 Fine-tuned RoBERTa for Sentiment Analysis on Reviews 🌟
+ # Fine-tuned RoBERTa for Sentiment Analysis on Reviews
 
- This is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on the [Amazon Reviews dataset](https://www.kaggle.com/datasets/bittlingmayer/amazonreviews) for sentiment analysis.
+ This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on the [Amazon Reviews dataset](https://www.kaggle.com/datasets/bittlingmayer/amazonreviews) for sentiment analysis.
 
- ## 📜 Model Details
+ ## Model Details
 
- - **🆕 Model Name:** `AnkitAI/reviews-roberta-base-sentiment-analysis`
- - **🔗 Base Model:** `cardiffnlp/twitter-roberta-base-sentiment-latest`
- - **📊 Dataset:** [Amazon Reviews](https://www.kaggle.com/datasets/bittlingmayer/amazonreviews)
- - **⚙️ Fine-tuning:** This model was fine-tuned for sentiment analysis with a classification head for binary sentiment classification (positive and negative).
+ - **Model Name:** `AnkitAI/reviews-roberta-base-sentiment-analysis`
+ - **Base Model:** `cardiffnlp/twitter-roberta-base-sentiment-latest`
+ - **Dataset:** [Amazon Reviews](https://www.kaggle.com/datasets/bittlingmayer/amazonreviews)
+ - **Fine-tuning:** This model was fine-tuned for sentiment analysis with a classification head for binary sentiment classification (positive and negative).
 
- ## 🏋️ Training
+ ## Training
 
  The model was trained using the following parameters:
 
- - **🔧 Learning Rate:** 2e-5
- - **📦 Batch Size:** 16
- - **⚖️ Weight Decay:** 0.01
- - **📅 Evaluation Strategy:** Epoch
+ - **Learning Rate:** 2e-5
+ - **Batch Size:** 16
+ - **Weight Decay:** 0.01
+ - **Evaluation Strategy:** Epoch
 
-
- ### 🏋️ Training Details
+ ### Training Details
 
- - **📉 Eval Loss:** 0.1049
- - **⏱️ Eval Runtime:** 3177.538 seconds
- - **📈 Eval Samples/Second:** 226.591
- - **🌀 Eval Steps/Second:** 7.081
- - **🏃 Train Runtime:** 110070.6349 seconds
- - **📊 Train Samples/Second:** 78.495
- - **🌀 Train Steps/Second:** 2.453
- - **📉 Train Loss:** 0.0858
- - **⏳ Eval Accuracy:** 97.19%
- - **🌀 Eval Precision:** 97.9%
- - **⏱️ Eval Recall:** 97.18%
- - **📈 Eval F1 Score:** 97.19%
+ - **Evaluation Loss:** 0.1049
+ - **Evaluation Runtime:** 3177.538 seconds
+ - **Evaluation Samples/Second:** 226.591
+ - **Evaluation Steps/Second:** 7.081
+ - **Training Runtime:** 110070.6349 seconds
+ - **Training Samples/Second:** 78.495
+ - **Training Steps/Second:** 2.453
+ - **Training Loss:** 0.0858
+ - **Evaluation Accuracy:** 97.19%
+ - **Evaluation Precision:** 97.9%
+ - **Evaluation Recall:** 97.18%
+ - **Evaluation F1 Score:** 97.19%
 
-
- ## 🚀 Usage
+ ## Usage
 
  You can use this model directly with the Hugging Face `transformers` library:
 
@@ -63,10 +61,8 @@ tokenizer = RobertaTokenizer.from_pretrained(model_name)
  # Example usage
  inputs = tokenizer("This product is great!", return_tensors="pt")
  outputs = model(**inputs) # 1 for positive, 0 for negative
-
  ```
 
- ## 📜 License
-
- This model is licensed under the [MIT License](LICENSE).
+ ## License
+
+ This model is licensed under the [MIT License](LICENSE).
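For reference, the hyperparameters in the README's Training section map one-to-one onto `transformers` `TrainingArguments`. Below is a minimal, illustrative sketch of such a setup: the learning rate, batch size, weight decay, and per-epoch evaluation are taken from the README, while the output directory, the `num_labels=2` head replacement, and the dataset wiring are assumptions rather than anything recorded in this commit.

```python
# Illustrative sketch only: the README's hyperparameters expressed as a
# Trainer configuration. Dataset loading and tokenization are assumed.
from transformers import (
    RobertaForSequenceClassification,
    RobertaTokenizer,
    Trainer,
    TrainingArguments,
)

base_model = "cardiffnlp/twitter-roberta-base-sentiment-latest"
tokenizer = RobertaTokenizer.from_pretrained(base_model)
model = RobertaForSequenceClassification.from_pretrained(
    base_model,
    num_labels=2,                  # binary head (positive/negative) per the README
    ignore_mismatched_sizes=True,  # the base checkpoint ships a 3-label head
)

training_args = TrainingArguments(
    output_dir="./results",          # assumed; not stated in the README
    learning_rate=2e-5,              # from the README
    per_device_train_batch_size=16,  # from the README
    weight_decay=0.01,               # from the README
    evaluation_strategy="epoch",     # from the README
)

# train_ds and eval_ds stand in for a tokenized Amazon Reviews split:
# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```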
 
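The usage snippet in the README stops at the raw model outputs. A minimal sketch of turning those logits into a label and a confidence score, assuming index 1 means positive and index 0 means negative as the snippet's comment states:

```python
# Map raw logits to a sentiment label. Assumes index 1 = positive and
# index 0 = negative, as stated in the README's comment.
import torch
from transformers import RobertaForSequenceClassification, RobertaTokenizer

model_name = "AnkitAI/reviews-roberta-base-sentiment-analysis"
model = RobertaForSequenceClassification.from_pretrained(model_name)
tokenizer = RobertaTokenizer.from_pretrained(model_name)

inputs = tokenizer("This product is great!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits     # shape: [batch_size, 2]

probs = logits.softmax(dim=-1)          # logits -> probabilities
label_id = probs.argmax(dim=-1).item()  # 1 = positive, 0 = negative
label = "positive" if label_id == 1 else "negative"
print(label, round(probs[0, label_id].item(), 4))
```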
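Equivalently, the high-level `pipeline` API wraps tokenization, inference, and label mapping in one call. The label strings it returns depend on the checkpoint's `id2label` config, so the sample output in the comment below is an assumption, not something verified against this model:

```python
# One-call alternative via the pipeline API. The exact label strings
# ("LABEL_0"/"LABEL_1" vs. human-readable names) depend on the model's
# id2label config, so the sample output below is illustrative.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="AnkitAI/reviews-roberta-base-sentiment-analysis",
)
print(classifier("This product is really bad!"))
# e.g. [{'label': 'LABEL_0', 'score': 0.99}]
```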