# llama2-7b-tweet-summarization

This model is a fine-tuned version of NousResearch/Llama-2-7b-hf on the dialogstudio dataset. It achieves the following results on the evaluation set:
- Loss: 2.8876
- Rouge Scores: {'rouge1': 71.60622394148149, 'rouge2': 59.01568798771897, 'rougeL': 48.52977346441143, 'rougeLsum': 71.51397796132038}
- Bleu Scores: [0.663980810175294, 0.6539436277190132, 0.6336458977384846, 0.6103966849577666]
- Gen Len: 463.0182
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
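Although usage details are not documented yet, a minimal inference sketch might look like the following. The prompt template, generation settings, and the `build_prompt` helper are assumptions for illustration, not the format used during fine-tuning; the base-model and adapter IDs come from this card.

```python
def build_prompt(dialogue: str) -> str:
    """Wrap a dialogue in a simple instruction-style summarization prompt.
    NOTE: the exact template used during fine-tuning is not documented here;
    this is a hypothetical stand-in."""
    return (
        "Summarize the following conversation.\n\n"
        f"### Input:\n{dialogue}\n\n### Summary:\n"
    )


def summarize(dialogue: str, max_new_tokens: int = 128) -> str:
    """Load the base model, attach the PEFT adapter, and generate a summary.
    Imports are deferred so build_prompt stays usable without torch installed."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "NousResearch/Llama-2-7b-hf"
    adapter_id = "DrishtiSharma/llama2-7b-tweet-summarization"

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = AutoModelForCausalLM.from_pretrained(
        base_id, torch_dtype=torch.float16, device_map="auto"
    )
    model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

    inputs = tokenizer(build_prompt(dialogue), return_tensors="pt").to(model.device)
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

Calling `summarize("User: ... Agent: ...")` would return the generated summary text, though output quality will depend on how closely the prompt matches the training format.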
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 7
- mixed_precision_training: Native AMP
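The hyperparameters above map onto a standard `transformers.TrainingArguments` configuration. This is a reconstructed sketch (the original training script is not shown in this card), with the Adam betas and epsilon left at their library defaults, which match the values listed:

```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameter list above; output_dir is illustrative.
args = TrainingArguments(
    output_dir="llama2-7b-tweet-summarization",
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # Adam with betas=(0.9, 0.999), eps=1e-8 (defaults)
    lr_scheduler_type="cosine",
    num_train_epochs=7,
    fp16=True,                    # Native AMP mixed-precision training
)
```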
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge Scores | Bleu Scores | Gen Len |
|:-------------|:------|:-----|:----------------|:-------------|:------------|:--------|
| 1.9342 | 1.0 | 220 | 1.8650 | {'rouge1': 81.3543855201909, 'rouge2': 68.33628607515259, 'rougeL': 58.82060003710821, 'rougeLsum': 81.31170089312123} | [0.7475269615217999, 0.7320503944473035, 0.7092142839754116, 0.6841937810377433] | 463.0182 |
| 1.6847 | 2.0 | 440 | 1.8651 | {'rouge1': 82.24291097248478, 'rouge2': 69.08611946264551, 'rougeL': 59.03857240450188, 'rougeLsum': 82.21842705467084} | [0.7506228265880205, 0.7370025856405312, 0.7147650770848061, 0.6900216562765312] | 463.0182 |
| 1.3489 | 3.0 | 660 | 1.9777 | {'rouge1': 74.8813002669941, 'rouge2': 62.506440459770204, 'rougeL': 52.7220886983953, 'rougeLsum': 74.84300846787131} | [0.6692327330499573, 0.6584063380227538, 0.6386775472556582, 0.6163741473600254] | 463.0182 |
| 0.9474 | 4.0 | 880 | 2.1929 | {'rouge1': 78.71628333472167, 'rouge2': 65.58881455717321, 'rougeL': 55.06014924478776, 'rougeLsum': 78.69638389807935} | [0.7102475447746652, 0.6987491736376487, 0.6776051942379059, 0.6536770956211561] | 463.0182 |
| 0.6111 | 5.0 | 1100 | 2.4439 | {'rouge1': 78.27895756478365, 'rouge2': 64.90905260843559, 'rougeL': 54.06285449810264, 'rougeLsum': 78.22155552696411} | [0.7449392180339443, 0.7329638020466664, 0.7106292329793894, 0.6852866075200218] | 463.0182 |
| 0.3982 | 6.0 | 1320 | 2.7334 | {'rouge1': 70.6225818335813, 'rouge2': 58.36070884299628, 'rougeL': 48.1310974990119, 'rougeLsum': 70.48308395430783} | [0.6525378114736703, 0.6423715222547518, 0.6226082916386746, 0.6000829162052789] | 463.0182 |
| 0.3039 | 7.0 | 1540 | 2.8876 | {'rouge1': 71.60622394148149, 'rouge2': 59.01568798771897, 'rougeL': 48.52977346441143, 'rougeLsum': 71.51397796132038} | [0.663980810175294, 0.6539436277190132, 0.6336458977384846, 0.6103966849577666] | 463.0182 |
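The ROUGE and BLEU columns above could be recomputed with the Hugging Face `evaluate` library, as sketched below. This is an assumption about the evaluation setup, not the original eval code; note that `evaluate`'s ROUGE returns fractions in [0, 1], while this card reports values on a 0–100 scale.

```python
def score(predictions: list[str], references: list[str]) -> dict:
    """Compute ROUGE and BLEU for generated summaries against references.
    Imports are deferred so this sketch stays self-contained."""
    import evaluate

    rouge = evaluate.load("rouge")
    bleu = evaluate.load("bleu")
    return {
        # Multiply rouge values by 100 to match the 0-100 scale used above.
        "rouge": rouge.compute(predictions=predictions, references=references),
        "bleu": bleu.compute(predictions=predictions, references=references),
    }
```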
### Framework versions
- PEFT 0.8.2.dev0
- Transformers 4.38.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.1