eladven committed on
Commit
81ca873
1 Parent(s): 03cbb29

Evaluation results for talhaa/flant5 model as a base model for other tasks


As part of a research effort to identify high-quality models on Hugging Face that can serve as base models for further fine-tuning, we evaluated this model by fine-tuning it on 36 datasets. The model ranks 1st among all tested models for the google/t5-v1_1-base architecture as of 10/01/2023.


To share this information with others in your model card, please add the following evaluation results to your README.md page.

For more information, please see https://ibm.github.io/model-recycling/ or contact me.

Best regards,
Elad Venezian
[email protected]
IBM Research AI

Files changed (1)
  1. README.md +14 -0
README.md CHANGED
@@ -45,3 +45,17 @@ The following hyperparameters were used during training:
  - Pytorch 1.13.0+cu116
  - Datasets 2.8.0
  - Tokenizers 0.13.2
+
+ ## Model Recycling
+
+ [Evaluation on 36 datasets](https://ibm.github.io/model-recycling/model_gain_chart?avg=9.03&mnli_lp=nan&20_newsgroup=4.19&ag_news=1.36&amazon_reviews_multi=0.23&anli=14.13&boolq=17.27&cb=23.12&cola=9.97&copa=29.50&dbpedia=6.50&esnli=5.11&financial_phrasebank=18.16&imdb=0.52&isear=1.43&mnli=11.97&mrpc=13.44&multirc=5.70&poem_sentiment=19.42&qnli=3.74&qqp=7.12&rotten_tomatoes=3.64&rte=25.34&sst2=0.09&sst_5bins=4.72&stsb=20.65&trec_coarse=4.15&trec_fine=9.53&tweet_ev_emoji=13.59&tweet_ev_emotion=4.90&tweet_ev_hate=1.07&tweet_ev_irony=7.25&tweet_ev_offensive=2.16&tweet_ev_sentiment=1.88&wic=12.97&wnli=9.44&wsc=7.45&yahoo_answers=3.38&model_name=talhaa%2Fflant5&base_name=google%2Ft5-v1_1-base) using talhaa/flant5 as a base model yields an average score of 77.86, compared with 68.82 for google/t5-v1_1-base.
+
+ The model is ranked 1st among all tested models for the google/t5-v1_1-base architecture as of 10/01/2023.
+ Results:
+
+ | 20_newsgroup | ag_news | amazon_reviews_multi | anli | boolq | cb | cola | copa | dbpedia | esnli | financial_phrasebank | imdb | isear | mnli | mrpc | multirc | poem_sentiment | qnli | qqp | rotten_tomatoes | rte | sst2 | sst_5bins | stsb | trec_coarse | trec_fine | tweet_ev_emoji | tweet_ev_emotion | tweet_ev_hate | tweet_ev_irony | tweet_ev_offensive | tweet_ev_sentiment | wic | wnli | wsc | yahoo_answers |
+ |---------------:|----------:|-----------------------:|--------:|--------:|--------:|--------:|-------:|----------:|--------:|-----------------------:|-------:|--------:|--------:|--------:|----------:|-----------------:|--------:|--------:|------------------:|--------:|--------:|------------:|--------:|--------------:|------------:|-----------------:|-------------------:|----------------:|-----------------:|---------------------:|---------------------:|--------:|-------:|--------:|----------------:|
+ | 87.0685 | 89.5333 | 67.14 | 52.1875 | 82.844 | 78.5714 | 80.1534 | 70 | 77.2667 | 90.6963 | 84.9 | 93.512 | 72.4902 | 87.4797 | 86.2745 | 61.8399 | 87.5 | 93.1173 | 90.7173 | 89.6811 | 85.9206 | 93.8073 | 56.5611 | 89.4438 | 97.4 | 91.6 | 47.054 | 80.5067 | 52.5926 | 74.8724 | 84.7674 | 71.76 | 68.8088 | 56.338 | 55.7692 | 72.6333 |
+
+ For more information, see: [Model Recycling](https://ibm.github.io/model-recycling/)
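As a sanity check, the reported 77.86 average can be reproduced directly from the per-dataset scores in the results table above. A minimal Python sketch (the score list is copied verbatim from the table):

```python
# Per-dataset scores for talhaa/flant5, in the same order as the
# Model Recycling results table above (36 datasets).
scores = [
    87.0685, 89.5333, 67.14, 52.1875, 82.844, 78.5714, 80.1534, 70,
    77.2667, 90.6963, 84.9, 93.512, 72.4902, 87.4797, 86.2745, 61.8399,
    87.5, 93.1173, 90.7173, 89.6811, 85.9206, 93.8073, 56.5611, 89.4438,
    97.4, 91.6, 47.054, 80.5067, 52.5926, 74.8724, 84.7674, 71.76,
    68.8088, 56.338, 55.7692, 72.6333,
]

# Unweighted mean over all 36 datasets, as used for the ranking.
average = sum(scores) / len(scores)
print(f"{average:.2f}")  # 77.86
```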