Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 2 new columns ({'traceback', 'error_msg'}) and 4 missing columns ({'result_metrics_npm', 'result_metrics', 'eval_version', 'result_metrics_average'}).

This happened while the json dataset builder was generating data using

hf://datasets/eduagarcia-temp/llm_pt_leaderboard_requests/22h/cabrita-lora-v0-1_eval_request_False_float16_Adapter.json (at revision ddecbe21f94cbdf8856d486ff9b24e7bd0a1bb68)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 622, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2292, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2240, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              model: string
              base_model: string
              revision: string
              private: bool
              precision: string
              params: int64
              architectures: string
              weight_type: string
              status: string
              submitted_time: timestamp[s]
              model_type: string
              source: string
              job_id: int64
              job_start_time: string
              main_language: string
              error_msg: string
              traceback: string
              to
              {'model': Value(dtype='string', id=None), 'base_model': Value(dtype='string', id=None), 'revision': Value(dtype='string', id=None), 'private': Value(dtype='bool', id=None), 'precision': Value(dtype='string', id=None), 'params': Value(dtype='float64', id=None), 'architectures': Value(dtype='string', id=None), 'weight_type': Value(dtype='string', id=None), 'main_language': Value(dtype='string', id=None), 'status': Value(dtype='string', id=None), 'submitted_time': Value(dtype='timestamp[s]', id=None), 'model_type': Value(dtype='string', id=None), 'source': Value(dtype='string', id=None), 'job_id': Value(dtype='int64', id=None), 'job_start_time': Value(dtype='string', id=None), 'eval_version': Value(dtype='string', id=None), 'result_metrics': {'enem_challenge': Value(dtype='float64', id=None), 'bluex': Value(dtype='float64', id=None), 'oab_exams': Value(dtype='float64', id=None), 'assin2_rte': Value(dtype='float64', id=None), 'assin2_sts': Value(dtype='float64', id=None), 'faquad_nli': Value(dtype='float64', id=None), 'hatebr_offensive': Value(dtype='float64', id=None), 'portuguese_hate_speech': Value(dtype='float64', id=None), 'tweetsentbr': Value(dtype='float64', id=None)}, 'result_metrics_average': Value(dtype='float64', id=None), 'result_metrics_npm': Value(dtype='float64', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1417, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1049, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1872, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 2 new columns ({'traceback', 'error_msg'}) and 4 missing columns ({'result_metrics_npm', 'result_metrics', 'eval_version', 'result_metrics_average'}).
              
              This happened while the json dataset builder was generating data using
              
              hf://datasets/eduagarcia-temp/llm_pt_leaderboard_requests/22h/cabrita-lora-v0-1_eval_request_False_float16_Adapter.json (at revision ddecbe21f94cbdf8856d486ff9b24e7bd0a1bb68)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
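For reference, the expected schema in the cast error above corresponds to the following datasets Features declaration. This is only a transcription of the traceback's target schema into code, not something taken from the repository itself:

from datasets import Features, Value

expected_features = Features({
    "model": Value("string"),
    "base_model": Value("string"),
    "revision": Value("string"),
    "private": Value("bool"),
    "precision": Value("string"),
    "params": Value("float64"),
    "architectures": Value("string"),
    "weight_type": Value("string"),
    "main_language": Value("string"),
    "status": Value("string"),
    "submitted_time": Value("timestamp[s]"),
    "model_type": Value("string"),
    "source": Value("string"),
    "job_id": Value("int64"),
    "job_start_time": Value("string"),
    "eval_version": Value("string"),
    # Per-task scores for the nine Portuguese benchmarks.
    "result_metrics": {
        "enem_challenge": Value("float64"),
        "bluex": Value("float64"),
        "oab_exams": Value("float64"),
        "assin2_rte": Value("float64"),
        "assin2_sts": Value("float64"),
        "faquad_nli": Value("float64"),
        "hatebr_offensive": Value("float64"),
        "portuguese_hate_speech": Value("float64"),
        "tweetsentbr": Value("float64"),
    },
    "result_metrics_average": Value("float64"),
    "result_metrics_npm": Value("float64"),
})

FAILED requests carry error_msg and traceback instead of the eval_version and result_metrics* fields, which is exactly the column mismatch the builder rejects.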

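Until the request files are separated into matching configurations, the data can still be read without the viewer. Below is a minimal workaround sketch (not an official loader): it downloads each request JSON with huggingface_hub and lets pandas union the mismatched columns, so FAILED rows keep error_msg/traceback while FINISHED rows keep the result_metrics fields.

import json

import pandas as pd
from huggingface_hub import HfApi, hf_hub_download

REPO_ID = "eduagarcia-temp/llm_pt_leaderboard_requests"

records = []
for path in HfApi().list_repo_files(REPO_ID, repo_type="dataset"):
    if not path.endswith(".json"):
        continue  # skip repo files that are not eval requests
    local_path = hf_hub_download(REPO_ID, path, repo_type="dataset")
    with open(local_path, encoding="utf-8") as f:
        records.append(json.load(f))

# pandas unions the keys across records; fields missing from a file
# (e.g. result_metrics on FAILED requests) simply become NaN.
df = pd.DataFrame(records)
print(df[["model", "status", "result_metrics_average"]].head())

Downloading file by file is slow on large repos; snapshot_download(REPO_ID, repo_type="dataset") would fetch the whole repository in one call and the loop could then walk the local copy instead.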

model: string
base_model: string
revision: string
private: bool
precision: string
params: float64
architectures: string
weight_type: string
main_language: string
status: string
submitted_time: timestamp[us]
model_type: string
source: string
job_id: int64
job_start_time: string
eval_version: string
result_metrics: dict
result_metrics_average: float64
result_metrics_npm: float64

Preview rows follow, one value per line in the column order above (empty base_model values are skipped; null marks fields not yet filled).
tanliboy/lambda-qwen2.5-14b-dpo-test
main
false
bfloat16
14.77
Qwen2ForCausalLM
Original
English
FINISHED
2024-09-28T11:33:19
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
1,221
2024-10-17T01-43-26.171758
1.1.0
{ "enem_challenge": 0.7991602519244226, "bluex": 0.7315716272600834, "oab_exams": 0.6104783599088838, "assin2_rte": 0.9448521957747049, "assin2_sts": 0.8243398669298373, "faquad_nli": 0.7882522522522523, "hatebr_offensive": 0.9003808155770413, "portuguese_hate_speech": 0.7474723628059027, "tweetsentbr": 0.7221843254982979 }
0.78541
0.678964
tanliboy/lambda-qwen2.5-32b-dpo-test
main
false
bfloat16
32.764
Qwen2ForCausalLM
Original
English
PENDING
2024-09-30T16:35:08
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
-1
null
null
null
null
null
01-ai/Yi-1.5-34B-32K
main
false
bfloat16
34.389
LlamaForCausalLM
Original
English
FINISHED
2024-05-20T02:43:37
🟢 : pretrained
manual
674
2024-05-20T21-45-47.139761
1.1.0
{ "enem_challenge": 0.7354793561931421, "bluex": 0.6787204450625869, "oab_exams": 0.54624145785877, "assin2_rte": 0.9121699049758872, "assin2_sts": 0.809949940837174, "faquad_nli": 0.7177866756717641, "hatebr_offensive": 0.8271604938271605, "portuguese_hate_speech": 0.6997859414986487, "tweetsentbr": 0.7309621331738047 }
0.739806
0.604782
01-ai/Yi-1.5-34B-Chat-16K
main
false
bfloat16
34.389
LlamaForCausalLM
Original
English
FINISHED
2024-05-20T02:44:14
💬 : chat (RLHF, DPO, IFT, ...)
manual
673
2024-05-20T18-32-22.664525
1.1.0
{ "enem_challenge": 0.7004898530440867, "bluex": 0.5201668984700973, "oab_exams": 0.5257403189066059, "assin2_rte": 0.9116655919331504, "assin2_sts": 0.777225956509937, "faquad_nli": 0.7909900023305279, "hatebr_offensive": 0.8889320535439632, "portuguese_hate_speech": 0.6504272099901414, "tweetsentbr": 0.7088300278488365 }
0.719385
0.584898
01-ai/Yi-1.5-34B-Chat
main
false
bfloat16
34.389
LlamaForCausalLM
Original
English
FINISHED
2024-05-15T17:39:33
💬 : chat (RLHF, DPO, IFT, ...)
manual
624
2024-05-16T15-15-24.863291
1.1.0
{ "enem_challenge": 0.6906927921623512, "bluex": 0.6648122392211405, "oab_exams": 0.5248291571753987, "assin2_rte": 0.9170744853491483, "assin2_sts": 0.7661887019644651, "faquad_nli": 0.7743940809133725, "hatebr_offensive": 0.8210886883714428, "portuguese_hate_speech": 0.7105164005570834, "tweetsentbr": 0.7096563287199421 }
0.731028
0.598601
01-ai/Yi-1.5-34B
main
false
bfloat16
34.389
LlamaForCausalLM
Original
English
FINISHED
2024-05-15T17:40:15
🟢 : pretrained
manual
627
2024-05-17T10-36-18.336343
1.1.0
{ "enem_challenge": 0.71518544436669, "bluex": 0.6662030598052852, "oab_exams": 0.5489749430523918, "assin2_rte": 0.8976911637262349, "assin2_sts": 0.8148786802023537, "faquad_nli": 0.585644163957417, "hatebr_offensive": 0.8363023241432246, "portuguese_hate_speech": 0.6962399848962205, "tweetsentbr": 0.7228749707523902 }
0.720444
0.570852
01-ai/Yi-1.5-6B-Chat
main
false
bfloat16
6.061
LlamaForCausalLM
Original
English
FINISHED
2024-05-16T14:35:19
💬 : chat (RLHF, DPO, IFT, ...)
manual
629
2024-05-17T14-53-37.626126
1.1.0
{ "enem_challenge": 0.5066480055983205, "bluex": 0.4631432545201669, "oab_exams": 0.3908883826879271, "assin2_rte": 0.8478217777818736, "assin2_sts": 0.6797897994537765, "faquad_nli": 0.6548247706694055, "hatebr_offensive": 0.7881170986195587, "portuguese_hate_speech": 0.6486990242682011, "tweetsentbr": 0.6586657928083186 }
0.626511
0.445931
01-ai/Yi-1.5-6B
main
false
bfloat16
6.061
LlamaForCausalLM
Original
English
FINISHED
2024-05-16T14:34:00
🟢 : pretrained
manual
628
2024-05-17T13-51-05.776238
1.1.0
{ "enem_challenge": 0.5395381385584325, "bluex": 0.4993045897079277, "oab_exams": 0.4154897494305239, "assin2_rte": 0.85320443811568, "assin2_sts": 0.611946662194731, "faquad_nli": 0.566892243623113, "hatebr_offensive": 0.8390372896945542, "portuguese_hate_speech": 0.6034251055649058, "tweetsentbr": 0.6835417262403757 }
0.623598
0.440799
01-ai/Yi-1.5-9B-32K
main
false
bfloat16
8.829
LlamaForCausalLM
Original
English
FINISHED
2024-05-20T02:43:42
🟢 : pretrained
manual
673
2024-05-20T20-18-06.355522
1.1.0
{ "enem_challenge": 0.6724982505248426, "bluex": 0.5702364394993046, "oab_exams": 0.5011389521640092, "assin2_rte": 0.8657419886018202, "assin2_sts": 0.7267527969011244, "faquad_nli": 0.5410839160839161, "hatebr_offensive": 0.7806530019415174, "portuguese_hate_speech": 0.6955025872509083, "tweetsentbr": 0.6843568952184516 }
0.670885
0.499193
01-ai/Yi-1.5-9B-Chat-16K
main
false
bfloat16
8.829
LlamaForCausalLM
Original
English
FINISHED
2024-05-20T02:43:57
💬 : chat (RLHF, DPO, IFT, ...)
manual
652
2024-05-20T09-41-40.204030
1.1.0
{ "enem_challenge": 0.7046885934219734, "bluex": 0.5660639777468707, "oab_exams": 0.48701594533029613, "assin2_rte": 0.889630445004916, "assin2_sts": 0.7254379491320617, "faquad_nli": 0.6373099047367834, "hatebr_offensive": 0.8668983847883869, "portuguese_hate_speech": 0.5826350789692436, "tweetsentbr": 0.6467979685370989 }
0.678498
0.514674
01-ai/Yi-1.5-9B-Chat
main
false
bfloat16
8.829
LlamaForCausalLM
Original
English
FINISHED
2024-05-15T17:39:54
💬 : chat (RLHF, DPO, IFT, ...)
manual
625
2024-05-17T02-43-33.664147
1.1.0
{ "enem_challenge": 0.6242127361791463, "bluex": 0.5479833101529903, "oab_exams": 0.4469248291571754, "assin2_rte": 0.8807093197512774, "assin2_sts": 0.7520700202607307, "faquad_nli": 0.6913654763916721, "hatebr_offensive": 0.8297877646706737, "portuguese_hate_speech": 0.667940108892922, "tweetsentbr": 0.6732618942406834 }
0.679362
0.521304
01-ai/Yi-1.5-9B
main
false
bfloat16
8.829
LlamaForCausalLM
Original
English
FINISHED
2024-05-15T17:40:08
🟢 : pretrained
manual
626
2024-05-17T09-09-38.931019
1.1.0
{ "enem_challenge": 0.6710986703988804, "bluex": 0.5771905424200278, "oab_exams": 0.4947608200455581, "assin2_rte": 0.8815204475360152, "assin2_sts": 0.7102876692830821, "faquad_nli": 0.6362495548508539, "hatebr_offensive": 0.7837384886240519, "portuguese_hate_speech": 0.6780580075662044, "tweetsentbr": 0.6934621257745327 }
0.680707
0.518635
01-ai/Yi-34B-200K
main
false
bfloat16
34.389
LlamaForCausalLM
Original
English
FINISHED
2024-02-05T23:18:19
🟢 : pretrained
script
480
2024-04-17T23-49-34.862700
1.1.0
{ "enem_challenge": 0.7172848145556333, "bluex": 0.6481223922114048, "oab_exams": 0.5517084282460136, "assin2_rte": 0.9097218456052794, "assin2_sts": 0.7390390977418284, "faquad_nli": 0.49676238738738737, "hatebr_offensive": 0.8117947554592124, "portuguese_hate_speech": 0.7007076712295253, "tweetsentbr": 0.6181054682174745 }
0.688139
0.523233
01-ai/Yi-34B-Chat
main
false
bfloat16
34.389
LlamaForCausalLM
Original
English
FINISHED
2024-02-27T00:40:17
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
272
2024-02-28T08-14-36.046639
1.1.0
{ "enem_challenge": 0.7123862841147656, "bluex": 0.6328233657858137, "oab_exams": 0.5202733485193621, "assin2_rte": 0.924014535978148, "assin2_sts": 0.7419038025688336, "faquad_nli": 0.7157210401891253, "hatebr_offensive": 0.7198401711140126, "portuguese_hate_speech": 0.7135410538975384, "tweetsentbr": 0.6880686233555414 }
0.707619
0.557789
01-ai/Yi-34B
main
false
bfloat16
34.389
LlamaForCausalLM
Original
English
FINISHED
2024-02-05T23:05:39
🟢 : pretrained
script
440
2024-04-13T15-53-49.411062
1.1.0
{ "enem_challenge": 0.7207837648705389, "bluex": 0.6648122392211405, "oab_exams": 0.5599088838268793, "assin2_rte": 0.917882167398896, "assin2_sts": 0.76681855136608, "faquad_nli": 0.7798334442926054, "hatebr_offensive": 0.8107834570679608, "portuguese_hate_speech": 0.6224786612758311, "tweetsentbr": 0.7320656959105744 }
0.730596
0.591978
01-ai/Yi-6B-200K
main
false
bfloat16
6.061
LlamaForCausalLM
Original
English
FINISHED
2024-02-05T23:18:12
🟢 : pretrained
script
469
2024-04-16T17-07-31.622853
1.1.0
{ "enem_challenge": 0.5423372988103569, "bluex": 0.4673157162726008, "oab_exams": 0.4328018223234624, "assin2_rte": 0.40523403335417163, "assin2_sts": 0.4964641013268987, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.4892942520605069, "portuguese_hate_speech": 0.6053769911504425, "tweetsentbr": 0.6290014694641435 }
0.500831
0.214476
01-ai/Yi-6B-Chat
main
false
bfloat16
6.061
LlamaForCausalLM
Original
English
FINISHED
2024-02-27T00:40:39
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
273
2024-02-28T14-35-07.615539
1.1.0
{ "enem_challenge": 0.5570328901329601, "bluex": 0.5006954102920723, "oab_exams": 0.4118451025056948, "assin2_rte": 0.7948490568935549, "assin2_sts": 0.5684271643349206, "faquad_nli": 0.637960088691796, "hatebr_offensive": 0.775686136523575, "portuguese_hate_speech": 0.5712377041472934, "tweetsentbr": 0.5864804330790114 }
0.600468
0.40261
01-ai/Yi-6B
main
false
bfloat16
6.061
LlamaForCausalLM
Original
English
FINISHED
2024-02-05T23:04:05
🟢 : pretrained
script
228
2024-02-17T03-42-08.504508
1.1.0
{ "enem_challenge": 0.5689293212036389, "bluex": 0.5132127955493742, "oab_exams": 0.4460136674259681, "assin2_rte": 0.7903932929806128, "assin2_sts": 0.5666878345297481, "faquad_nli": 0.5985418799210473, "hatebr_offensive": 0.7425595238095237, "portuguese_hate_speech": 0.6184177704320946, "tweetsentbr": 0.5081067075683067 }
0.594763
0.391626
01-ai/Yi-9B-200K
v20240318
false
bfloat16
8.829
LlamaForCausalLM
Original
English
FINISHED
2024-05-20T02:40:48
🟢 : pretrained
manual
666
2024-05-20T06-21-38.524751
1.1.0
{ "enem_challenge": 0.6529041287613716, "bluex": 0.5438108484005564, "oab_exams": 0.496127562642369, "assin2_rte": 0.8735592777805543, "assin2_sts": 0.7486645258696737, "faquad_nli": 0.7445188998494914, "hatebr_offensive": 0.817858599988343, "portuguese_hate_speech": 0.6727118239818735, "tweetsentbr": 0.7225357780938043 }
0.696966
0.547383
01-ai/Yi-9B-200k
main
false
bfloat16
8.829
LlamaForCausalLM
Original
English
FINISHED
2024-04-13T05:22:25
🟢 : pretrained
leaderboard
451
2024-04-14T12-49-52.148781
1.1.0
{ "enem_challenge": 0.6564030790762772, "bluex": 0.5354659248956884, "oab_exams": 0.5056947608200456, "assin2_rte": 0.8708321784112503, "assin2_sts": 0.7508245525986388, "faquad_nli": 0.7162112665738773, "hatebr_offensive": 0.8238294119604646, "portuguese_hate_speech": 0.6723821369343758, "tweetsentbr": 0.7162549372015228 }
0.694211
0.54216
01-ai/Yi-9B
main
false
bfloat16
8.829
LlamaForCausalLM
Original
English
FINISHED
2024-04-13T05:20:56
🟢 : pretrained
leaderboard
453
2024-04-14T11-08-02.891090
1.1.0
{ "enem_challenge": 0.6759972008397481, "bluex": 0.5493741307371349, "oab_exams": 0.4783599088838269, "assin2_rte": 0.8784695970900473, "assin2_sts": 0.752860308487488, "faquad_nli": 0.7478708154144531, "hatebr_offensive": 0.8574531631821884, "portuguese_hate_speech": 0.6448598532923182, "tweetsentbr": 0.6530471966712571 }
0.693144
0.542367
01-ai/Yi-Coder-9B-Chat
main
false
bfloat16
9
LlamaForCausalLM
Original
English
FINISHED
2024-09-21T08:22:47
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
1,132
2024-10-02T01-59-56.934513
1.1.0
{ "enem_challenge": 0.4541637508747376, "bluex": 0.42559109874826145, "oab_exams": 0.35079726651480636, "assin2_rte": 0.7994152126314071, "assin2_sts": 0.7640954232319354, "faquad_nli": 0.6541275915011303, "hatebr_offensive": 0.8464002808680579, "portuguese_hate_speech": 0.6100826569784736, "tweetsentbr": 0.6641060566616098 }
0.618753
0.431402
01-ai/Yi-Coder-9B
main
false
bfloat16
8.829
LlamaForCausalLM
Original
English
FINISHED
2024-09-21T08:22:54
🟢 : pretrained
leaderboard
1,132
2024-10-02T03-06-38.881073
1.1.0
{ "enem_challenge": 0.4730580825752274, "bluex": 0.3949930458970793, "oab_exams": 0.3703872437357631, "assin2_rte": 0.8479141293428532, "assin2_sts": 0.7388500902557162, "faquad_nli": 0.5212030552344689, "hatebr_offensive": 0.8155218925088192, "portuguese_hate_speech": 0.6617294815662288, "tweetsentbr": 0.6216077000909054 }
0.605029
0.41049
152334H/miqu-1-70b-sf
main
false
float16
68.977
LlamaForCausalLM
Original
English
FINISHED
2024-04-26T08:25:57
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
704
2024-05-23T11-20-45.843993
1.1.0
{ "enem_challenge": 0.7466759972008398, "bluex": 0.6481223922114048, "oab_exams": 0.5398633257403189, "assin2_rte": 0.9309631191550011, "assin2_sts": 0.6972270341129023, "faquad_nli": 0.7641750093536355, "hatebr_offensive": 0.8382584367896485, "portuguese_hate_speech": 0.7336708394698086, "tweetsentbr": 0.7158026226548874 }
0.734973
0.609318
22h/cabrita-lora-v0-1
huggyllama/llama-7b
main
false
float16
0
?
Adapter
Portuguese
FAILED
2024-02-05T23:03:11
🔶 : fine-tuned
script
820
2024-06-16T10-13-34.877976
null
null
null
null
22h/cabrita_7b_pt_850000
main
false
float16
7
LlamaForCausalLM
Original
Portuguese
FINISHED
2024-02-11T13:34:40
🆎 : language adapted models (FP, FT, ...)
script
305
2024-03-08T02-07-35.059732
1.1.0
{ "enem_challenge": 0.22533240027991602, "bluex": 0.23087621696801114, "oab_exams": 0.2920273348519362, "assin2_rte": 0.3333333333333333, "assin2_sts": 0.1265472264440735, "faquad_nli": 0.17721518987341772, "hatebr_offensive": 0.5597546967409981, "portuguese_hate_speech": 0.490163110698825, "tweetsentbr": 0.4575265405956153 }
0.32142
-0.032254
22h/open-cabrita3b
main
false
float16
3
LlamaForCausalLM
Original
Portuguese
FINISHED
2024-02-11T13:34:36
🆎 : language adapted models (FP, FT, ...)
script
285
2024-02-28T16-38-27.766897
1.1.0
{ "enem_challenge": 0.17984604618614417, "bluex": 0.2114047287899861, "oab_exams": 0.22687927107061504, "assin2_rte": 0.4301327637723658, "assin2_sts": 0.08919111846797594, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.5046251022011318, "portuguese_hate_speech": 0.4118866620594333, "tweetsentbr": 0.47963247012405114 }
0.330361
-0.005342
AALF/gemma-2-27b-it-SimPO-37K-100steps
main
false
bfloat16
27.227
Gemma2ForCausalLM
Original
English
FINISHED
2024-09-21T04:24:32
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
1,132
2024-10-02T01-48-29.895681
1.1.0
{ "enem_challenge": 0.781665500349895, "bluex": 0.7162726008344924, "oab_exams": 0.5662870159453303, "assin2_rte": 0.8957539543945432, "assin2_sts": 0.7116783323256065, "faquad_nli": 0.7043137254901961, "hatebr_offensive": 0.8798753128354102, "portuguese_hate_speech": 0.7243041235926246, "tweetsentbr": 0.6962639594964193 }
0.741824
0.613438
AALF/gemma-2-27b-it-SimPO-37K
main
false
bfloat16
27.227
Gemma2ForCausalLM
Original
English
FAILED
2024-08-29T19:24:31
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
1,061
2024-09-09T02-16-20.601975
null
null
null
null
AALF/gemma-2-27b-it-SimPO-37K
main
false
float16
27.227
Gemma2ForCausalLM
Original
English
FAILED
2024-09-05T20:33:39
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
1,058
2024-09-09T02-07-11.545200
null
null
null
null
AI-Sweden-Models/gpt-sw3-20b
main
false
float16
20.918
GPT2LMHeadModel
Original
English
FINISHED
2024-02-05T23:15:38
🟢 : pretrained
script
827
2024-06-17T02-29-40.078292
1.1.0
{ "enem_challenge": 0.1973407977606718, "bluex": 0.22531293463143254, "oab_exams": 0.27790432801822323, "assin2_rte": 0.5120595126338128, "assin2_sts": 0.07348005953132232, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.533175653895298, "portuguese_hate_speech": 0.2347827795333542, "tweetsentbr": 0.13985170050404128 }
0.292618
-0.064504
AI-Sweden-Models/gpt-sw3-40b
main
false
float16
39.927
GPT2LMHeadModel
Original
English
FINISHED
2024-02-05T23:15:47
🟢 : pretrained
script
253
2024-02-21T07-59-22.606213
1.1.0
{ "enem_challenge": 0.2358292512246326, "bluex": 0.2809457579972184, "oab_exams": 0.2542141230068337, "assin2_rte": 0.4096747911636189, "assin2_sts": 0.17308746611294112, "faquad_nli": 0.5125406216148655, "hatebr_offensive": 0.3920230910522173, "portuguese_hate_speech": 0.4365404510655907, "tweetsentbr": 0.491745311259787 }
0.354067
0.018354
AI-Sweden-Models/gpt-sw3-6.7b-v2
main
false
float16
7.111
GPT2LMHeadModel
Original
English
FINISHED
2024-02-05T23:15:31
🟢 : pretrained
script
462
2024-04-16T00-18-50.805343
1.1.0
{ "enem_challenge": 0.22813156053184044, "bluex": 0.23504867872044508, "oab_exams": 0.23097949886104785, "assin2_rte": 0.5833175952742944, "assin2_sts": 0.14706689693418745, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.5569200631123247, "portuguese_hate_speech": 0.5048069947120815, "tweetsentbr": 0.45897627809523983 }
0.3761
0.073856
AI-Sweden-Models/gpt-sw3-6.7b
main
false
float16
7.111
GPT2LMHeadModel
Original
English
FINISHED
2024-02-05T23:15:23
🟢 : pretrained
script
466
2024-04-15T22-34-55.424388
1.1.0
{ "enem_challenge": 0.21133659902029392, "bluex": 0.2573018080667594, "oab_exams": 0.2296127562642369, "assin2_rte": 0.6192900448928588, "assin2_sts": 0.08103924791097977, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.40737531518832293, "portuguese_hate_speech": 0.4441161100880904, "tweetsentbr": 0.433837189305867 }
0.347063
0.024837
AIDC-AI/Ovis1.5-Gemma2-9B
main
false
bfloat16
11.359
Ovis
Original
English
FAILED
2024-09-18T02:27:02
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
1,093
2024-09-22T04-13-27.607074
null
null
null
null
AIDC-AI/Ovis1.5-Llama3-8B
fb8c34a71ae86a9a12a033a395c640a2825d909e
false
bfloat16
8
Ovis
Original
English
FAILED
2024-09-18T02:41:23
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
1,096
2024-09-22T04-33-19.526440
null
null
null
null
AIJUUD/QWEN2_70B_JUUD_V1
main
false
float16
70
Qwen2ForCausalLM
Original
Chinese
FAILED
2024-06-24T00:34:13
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
874
2024-07-06T01-17-13.595586
null
null
null
null
AIM-ZJU/HawkLlama_8b
main
false
bfloat16
8.494
LlavaNextForConditionalGeneration
Original
English
FAILED
2024-09-18T02:05:36
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
1,091
2024-09-22T01-54-50.526492
null
null
null
null
AXCXEPT/EZO-Qwen2.5-32B-Instruct
main
false
bfloat16
32.764
Qwen2ForCausalLM
Original
English
PENDING
2024-10-04T05:43:37
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
-1
null
null
null
null
null
AbacusResearch/Jallabi-34B
main
false
bfloat16
34.389
LlamaForCausalLM
Original
English
PENDING
2024-09-05T13:41:51
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
-1
null
null
null
null
null
AdaptLLM/finance-LLM-13B
main
false
float16
13
LlamaForCausalLM
Original
English
FINISHED
2024-02-11T13:37:27
🔶 : fine-tuned
script
555
2024-04-24T18-00-42.073230
1.1.0
{ "enem_challenge": 0.4730580825752274, "bluex": 0.3852573018080668, "oab_exams": 0.36173120728929387, "assin2_rte": 0.8704914563684142, "assin2_sts": 0.6914158506759536, "faquad_nli": 0.6137142857142857, "hatebr_offensive": 0.8210157972117231, "portuguese_hate_speech": 0.6648065091139095, "tweetsentbr": 0.6129534464124105 }
0.610494
0.4269
AdaptLLM/finance-LLM
main
false
float16
0
LLaMAForCausalLM
Original
English
FINISHED
2024-02-11T13:37:12
🔶 : fine-tuned
script
545
2024-04-24T13-38-50.219195
1.1.0
{ "enem_challenge": 0.37578726382085376, "bluex": 0.2906815020862309, "oab_exams": 0.3011389521640091, "assin2_rte": 0.7173994459883221, "assin2_sts": 0.3141019003448064, "faquad_nli": 0.6856866537717602, "hatebr_offensive": 0.6665618718263835, "portuguese_hate_speech": 0.3323844809709906, "tweetsentbr": 0.5501887299910238 }
0.470437
0.214015
AdaptLLM/law-LLM-13B
main
false
float16
13
LlamaForCausalLM
Original
English
FINISHED
2024-02-11T13:37:17
🔶 : fine-tuned
script
551
2024-04-24T17-20-32.289644
1.1.0
{ "enem_challenge": 0.48915325402379284, "bluex": 0.3796940194714882, "oab_exams": 0.36082004555808656, "assin2_rte": 0.7762008093366958, "assin2_sts": 0.6862803522831282, "faquad_nli": 0.5589431210148192, "hatebr_offensive": 0.7648719048333295, "portuguese_hate_speech": 0.6972417545621965, "tweetsentbr": 0.5969146546466134 }
0.590013
0.387281
AdaptLLM/law-LLM
main
false
float16
0
LLaMAForCausalLM
Original
English
FINISHED
2024-02-11T13:37:01
🔶 : fine-tuned
script
550
2024-04-24T01-23-04.736612
1.1.0
{ "enem_challenge": 0.3932820153953814, "bluex": 0.3157162726008345, "oab_exams": 0.3034168564920273, "assin2_rte": 0.7690457097032879, "assin2_sts": 0.2736321836385559, "faquad_nli": 0.6837598520969155, "hatebr_offensive": 0.6310564282443625, "portuguese_hate_speech": 0.32991640141820316, "tweetsentbr": 0.4897974076561671 }
0.465514
0.208557
AdaptLLM/medicine-LLM-13B
main
false
float16
13
LlamaForCausalLM
Original
English
FINISHED
2024-02-11T13:37:22
🔶 : fine-tuned
script
553
2024-04-24T17-45-23.659613
1.1.0
{ "enem_challenge": 0.45976207137858643, "bluex": 0.37552155771905427, "oab_exams": 0.3553530751708428, "assin2_rte": 0.802953910231819, "assin2_sts": 0.6774179667769704, "faquad_nli": 0.7227569273678784, "hatebr_offensive": 0.8155967923139503, "portuguese_hate_speech": 0.6722790404040404, "tweetsentbr": 0.5992582348356217 }
0.608989
0.426546
AdaptLLM/medicine-LLM
main
false
float16
0
LLaMAForCausalLM
Original
English
FINISHED
2024-02-11T13:37:07
🔶 : fine-tuned
script
550
2024-04-24T13-09-44.649718
1.1.0
{ "enem_challenge": 0.3806857942617215, "bluex": 0.3129346314325452, "oab_exams": 0.28610478359908886, "assin2_rte": 0.7412241742464284, "assin2_sts": 0.30610797857979344, "faquad_nli": 0.6385993049986635, "hatebr_offensive": 0.4569817890542286, "portuguese_hate_speech": 0.26575729349526506, "tweetsentbr": 0.4667966458909563 }
0.428355
0.135877
AetherResearch/Cerebrum-1.0-7b
main
false
float16
7.242
MistralForCausalLM
Original
English
FINISHED
2024-03-14T11:07:59
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
332
2024-04-01T22-58-48.098123
1.1.0
{ "enem_challenge": 0.6137158852344297, "bluex": 0.5062586926286509, "oab_exams": 0.44510250569476084, "assin2_rte": 0.8562832789419443, "assin2_sts": 0.7083110279713039, "faquad_nli": 0.7709976024119299, "hatebr_offensive": 0.7925948726646638, "portuguese_hate_speech": 0.6342708554907774, "tweetsentbr": 0.6171926929726294 }
0.660525
0.494853
Alibaba-NLP/gte-Qwen2-7B-instruct
97fb655ac3882bce80a8ce4ecc9212ec24555fea
false
bfloat16
7.613
Qwen2ForCausalLM
Original
English
FAILED
2024-08-08T14:53:17
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
1,004
2024-08-11T15-51-57.600432
null
null
null
null
Alibaba-NLP/gte-Qwen2-7B-instruct
main
false
bfloat16
7.613
Qwen2ForCausalLM
Original
English
FAILED
2024-07-16T18:11:18
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
918
2024-07-16T19-11-22.839874
null
null
null
null
Alibaba-NLP/gte-Qwen2-7B-instruct
e26182b2122f4435e8b3ebecbf363990f409b45b
false
bfloat16
7.613
Qwen2ForCausalLM
Original
English
FAILED
2024-08-01T19:47:27
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
979
2024-08-08T03-00-24.963139
null
null
null
null
Azure99/blossom-v5.1-34b
main
false
bfloat16
34.389
LlamaForCausalLM
Original
English
FINISHED
2024-06-05T14:01:58
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
798
2024-06-13T23-01-25.469438
1.1.0
{ "enem_challenge": 0.7263820853743876, "bluex": 0.6717663421418637, "oab_exams": 0.5444191343963554, "assin2_rte": 0.9087560002226345, "assin2_sts": 0.8294159674038925, "faquad_nli": 0.8188429839812075, "hatebr_offensive": 0.8519995605589193, "portuguese_hate_speech": 0.7209014624205066, "tweetsentbr": 0.7213065589027687 }
0.754866
0.632723
BAAI/Aquila-7B
main
false
float16
7
AquilaModel
Original
?
FINISHED
2024-02-05T23:09:00
🟢 : pretrained
script
343
2024-04-03T05-32-42.254781
1.1.0
{ "enem_challenge": 0.3275017494751575, "bluex": 0.2795549374130737, "oab_exams": 0.3047835990888383, "assin2_rte": 0.7202499022958302, "assin2_sts": 0.04640761012170769, "faquad_nli": 0.47034320848362593, "hatebr_offensive": 0.6981236353283272, "portuguese_hate_speech": 0.4164993156903397, "tweetsentbr": 0.4656320326711388 }
0.414344
0.144131
BAAI/Aquila2-34B
main
false
bfloat16
34
LlamaForCausalLM
Original
?
FINISHED
2024-02-05T23:10:17
🟢 : pretrained
script
484
2024-04-18T14-04-47.026230
1.1.0
{ "enem_challenge": 0.5479356193142058, "bluex": 0.4381084840055633, "oab_exams": 0.40455580865603646, "assin2_rte": 0.8261661293083891, "assin2_sts": 0.643049056717646, "faquad_nli": 0.4471267110923455, "hatebr_offensive": 0.4920183585480058, "portuguese_hate_speech": 0.6606858054226475, "tweetsentbr": 0.5598737392847967 }
0.557724
0.319206
BAAI/Aquila2-7B
main
false
float16
7
AquilaModel
Original
?
FINISHED
2024-02-05T23:09:07
🟢 : pretrained
script
360
2024-04-03T05-55-31.957348
1.1.0
{ "enem_challenge": 0.20573827851644508, "bluex": 0.14464534075104313, "oab_exams": 0.3225512528473804, "assin2_rte": 0.5426094787796916, "assin2_sts": 0.3589709171853071, "faquad_nli": 0.49799737773227726, "hatebr_offensive": 0.642139037433155, "portuguese_hate_speech": 0.5212215320910973, "tweetsentbr": 0.2826286167270258 }
0.390945
0.091046
BAAI/Emu3-Chat
main
false
bfloat16
8.492
Emu3ForCausalLM
Original
English
FINISHED
2024-10-21T21:11:49
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
1,240
2024-10-23T07-24-58.039230
1.1.0
{ "enem_challenge": 0.23722883135059483, "bluex": 0.24061196105702365, "oab_exams": 0.24419134396355352, "assin2_rte": 0.3333333333333333, "assin2_sts": 0.1347279162589762, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.3333333333333333, "portuguese_hate_speech": 0.22986425339366515, "tweetsentbr": 0.37439678336438037 }
0.28526
-0.101355
BAAI/Emu3-Gen
main
false
bfloat16
8.492
Emu3ForCausalLM
Original
English
FINISHED
2024-10-21T21:11:31
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
1,239
2024-10-23T02-54-02.753983
1.1.0
{ "enem_challenge": 0.19384184744576627, "bluex": 0.19471488178025034, "oab_exams": 0.23735763097949886, "assin2_rte": 0, "assin2_sts": 0.04718167761792302, "faquad_nli": 0, "hatebr_offensive": 0.4333858584779487, "portuguese_hate_speech": 0.23104388244535695, "tweetsentbr": 0.1506866897702477 }
0.165357
-0.303077
BAAI/Emu3-Stage1
main
false
bfloat16
8.492
Emu3ForCausalLM
Original
English
FINISHED
2024-10-21T21:10:34
🟢 : pretrained
leaderboard
1,238
2024-10-23T01-39-06.029347
1.1.0
{ "enem_challenge": 0.19314205738278517, "bluex": 0.20723226703755215, "oab_exams": 0.2592255125284738, "assin2_rte": 0.5195268543314986, "assin2_sts": 0.06240586658952754, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.4946676587301587, "portuguese_hate_speech": 0.25062068613758454, "tweetsentbr": 0.5214764078192898 }
0.32755
-0.012098
BAAI/Gemma2-9B-IT-Simpo-Infinity-Preference
main
false
bfloat16
9.242
Gemma2ForCausalLM
Original
English
FAILED
2024-09-02T16:32:57
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
1,050
2024-09-09T01-43-17.349330
null
null
null
null
BAAI/Gemma2-9B-IT-Simpo-Infinity-Preference
fd6d02d300e3b9015e07c217e26c6f1b4823963a
false
bfloat16
9.242
Gemma2ForCausalLM
Original
English
FINISHED
2024-09-29T03:58:30
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
1,152
2024-10-04T05-31-00.057927
1.1.0
{ "enem_challenge": 0.7235829251224632, "bluex": 0.6175243393602226, "oab_exams": 0.5430523917995445, "assin2_rte": 0.9275070649057842, "assin2_sts": 0.7792788542377653, "faquad_nli": 0.7059360440659401, "hatebr_offensive": 0.8936026780915527, "portuguese_hate_speech": 0.6831189599233336, "tweetsentbr": 0.6664191998474515 }
0.726669
0.592002
BAAI/Infinity-Instruct-3M-0613-Mistral-7B
main
false
float16
7.242
MistralForCausalLM
Original
English
FINISHED
2024-06-22T00:35:20
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
844
2024-06-22T01-31-31.647844
1.1.0
{ "enem_challenge": 0.6466060181945417, "bluex": 0.5326842837273992, "oab_exams": 0.44510250569476084, "assin2_rte": 0.9174712657490132, "assin2_sts": 0.7632047672731808, "faquad_nli": 0.8241841468197617, "hatebr_offensive": 0.7990490978163615, "portuguese_hate_speech": 0.7141208181486736, "tweetsentbr": 0.6666509531443612 }
0.701008
0.56041
BAAI/Infinity-Instruct-3M-0625-Llama3-8B
main
false
bfloat16
8.03
LlamaForCausalLM
Original
English
FINISHED
2024-07-18T22:33:18
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
922
2024-07-19T01-31-48.292983
1.1.0
{ "enem_challenge": 0.6962911126662001, "bluex": 0.5702364394993046, "oab_exams": 0.4911161731207289, "assin2_rte": 0.9157306525139366, "assin2_sts": 0.6927579734425038, "faquad_nli": 0.6831208704581523, "hatebr_offensive": 0.8396850647140018, "portuguese_hate_speech": 0.6524280322235367, "tweetsentbr": 0.6735124292047466 }
0.690542
0.539493
BAAI/Infinity-Instruct-3M-0625-Mistral-7B
main
false
bfloat16
7.242
MistralForCausalLM
Original
English
FINISHED
2024-07-18T22:57:22
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
923
2024-07-19T01-41-09.242433
1.1.0
{ "enem_challenge": 0.6634009797060881, "bluex": 0.5507649513212796, "oab_exams": 0.44419134396355353, "assin2_rte": 0.9195035639155449, "assin2_sts": 0.7748240246646378, "faquad_nli": 0.8110585067106806, "hatebr_offensive": 0.8052263390689189, "portuguese_hate_speech": 0.7276695425104325, "tweetsentbr": 0.6851627567981504 }
0.709089
0.571585
BAAI/Infinity-Instruct-3M-0625-Qwen2-7B
main
false
bfloat16
7.616
Qwen2ForCausalLM
Original
English
FINISHED
2024-07-16T18:07:10
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
917
2024-07-16T18-14-59.914390
1.1.0
{ "enem_challenge": 0.7179846046186145, "bluex": 0.6063977746870653, "oab_exams": 0.5043280182232346, "assin2_rte": 0.9260247502412066, "assin2_sts": 0.7493063947663584, "faquad_nli": 0.7919477341597589, "hatebr_offensive": 0.7747879604913737, "portuguese_hate_speech": 0.6566388141704343, "tweetsentbr": 0.6595443181650077 }
0.709662
0.564613
BAAI/Infinity-Instruct-3M-0625-Yi-1.5-9B
main
false
bfloat16
0.003
LlamaForCausalLM
Original
English
FAILED
2024-07-18T22:23:09
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
922
2024-07-19T01-31-48.200936
null
null
null
null
BAAI/Infinity-Instruct-3M-0625-Yi-1.5-9B
a42c86c61b98ca4fdf238d688fe6ea11cf414d29
false
bfloat16
8.829
LlamaForCausalLM
Original
English
FINISHED
2024-08-01T19:56:06
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
980
2024-08-08T03-15-01.373098
1.1.0
{ "enem_challenge": 0.6920923722883136, "bluex": 0.588317107093185, "oab_exams": 0.4760820045558087, "assin2_rte": 0.8898452558964334, "assin2_sts": 0.7424901825788529, "faquad_nli": 0.7683175374941738, "hatebr_offensive": 0.8633798389731095, "portuguese_hate_speech": 0.6477449279306864, "tweetsentbr": 0.6951286339371854 }
0.707044
0.564291
BAAI/Infinity-Instruct-7M-0729-Llama3_1-8B
main
false
bfloat16
8.03
LlamaForCausalLM
Original
English
FINISHED
2024-08-05T19:30:28
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
982
2024-08-08T04-37-11.506316
1.1.0
{ "enem_challenge": 0.708187543736879, "bluex": 0.588317107093185, "oab_exams": 0.49430523917995445, "assin2_rte": 0.9353919239904989, "assin2_sts": 0.7590004583613057, "faquad_nli": 0.7479196445389977, "hatebr_offensive": 0.8223021238433512, "portuguese_hate_speech": 0.6924232912933478, "tweetsentbr": 0.6961115534449834 }
0.715995
0.577578
BAAI/Infinity-Instruct-7M-0729-mistral-7B
36651591cb13346ecbde23832013e024029700fa
false
bfloat16
7.242
MistralForCausalLM
Original
English
FAILED
2024-08-08T14:51:17
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
1,003
2024-08-11T15-39-11.190873
null
null
null
null
BAAI/Infinity-Instruct-7M-0729-mistral-7B
main
false
bfloat16
7.242
MistralForCausalLM
Original
English
FAILED
2024-08-05T19:35:35
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
983
2024-08-08T05-34-45.659619
null
null
null
null
BAAI/Infinity-Instruct-7M-Gen-Llama3_1-70B
main
false
bfloat16
70.554
LlamaForCausalLM
Original
English
PENDING
2024-08-22T16:02:17
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
-1
null
null
null
null
null
BAAI/Infinity-Instruct-7M-Gen-Llama3_1-8B
main
false
bfloat16
8.03
LlamaForCausalLM
Original
English
FINISHED
2024-08-22T16:01:20
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
1,032
2024-08-25T02-40-32.192736
1.1.0
{ "enem_challenge": 0.708187543736879, "bluex": 0.588317107093185, "oab_exams": 0.49430523917995445, "assin2_rte": 0.9353919239904989, "assin2_sts": 0.7590004583613057, "faquad_nli": 0.7479196445389977, "hatebr_offensive": 0.8223021238433512, "portuguese_hate_speech": 0.6924232912933478, "tweetsentbr": 0.6961115534449834 }
0.715995
0.577578
BAAI/Infinity-Instruct-7M-Gen-mistral-7B
4356a156ed02a12d2dcabcc3d64a1b588a9ceb05
false
bfloat16
7.242
MistralForCausalLM
Original
English
FAILED
2024-08-28T16:27:37
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
1,044
2024-09-01T03-51-16.696093
null
null
null
null
BAAI/Infinity-Instruct-7M-Gen-mistral-7B
82c83d670a8954f4250547b53a057dea1fbd460d
false
bfloat16
7.242
MistralForCausalLM
Original
English
FAILED
2024-08-25T05:20:51
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
1,038
2024-08-26T05-13-12.087329
null
null
null
null
BAAI/Infinity-Instruct-7M-Gen-mistral-7B
main
false
bfloat16
7.242
MistralForCausalLM
Original
English
FAILED
2024-08-22T16:01:54
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
1,033
2024-08-25T03-38-30.014056
null
null
null
null
BAAI/OPI-Llama-3.1-8B-Instruct
main
false
bfloat16
8.03
LlamaForCausalLM
Original
English
FINISHED
2024-09-21T15:22:21
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
1,133
2024-10-02T03-38-58.627810
1.1.0
{ "enem_challenge": 0.4282715185444367, "bluex": 0.32684283727399166, "oab_exams": 0.3630979498861048, "assin2_rte": 0.3364346429550707, "assin2_sts": 0.17801095882930634, "faquad_nli": 0.17721518987341772, "hatebr_offensive": 0.3333333333333333, "portuguese_hate_speech": 0.22986425339366515, "tweetsentbr": 0.17212346147627985 }
0.282799
-0.126392
Bruno/Caramelinho
ybelkada/falcon-7b-sharded-bf16
main
false
bfloat16
0
?
Adapter
Portuguese
FINISHED
2024-02-24T18:01:08
🆎 : language adapted models (FP, FT, ...)
leaderboard
256
2024-02-26T15-17-54.708968
1.1.0
{ "enem_challenge": 0.21483554933519944, "bluex": 0.2211404728789986, "oab_exams": 0.25148063781321184, "assin2_rte": 0.4896626375608876, "assin2_sts": 0.19384903999896694, "faquad_nli": 0.43917169974115616, "hatebr_offensive": 0.3396512838306731, "portuguese_hate_speech": 0.46566706851516976, "tweetsentbr": 0.563106045239156 }
0.353174
0.017928
Bruno/Caramelo_7B
ybelkada/falcon-7b-sharded-bf16
main
false
bfloat16
7
?
Adapter
Portuguese
FINISHED
2024-02-24T18:00:57
🆎 : language adapted models (FP, FT, ...)
leaderboard
255
2024-02-26T13-57-57.036659
1.1.0
{ "enem_challenge": 0.1980405878236529, "bluex": 0.24478442280945759, "oab_exams": 0.2528473804100228, "assin2_rte": 0.5427381481762671, "assin2_sts": 0.07473225338478715, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.33650009913117634, "portuguese_hate_speech": 0.412292817679558, "tweetsentbr": 0.35365936890599253 }
0.31725
-0.028868
CausalLM/34b-beta
main
false
bfloat16
34.389
LlamaForCausalLM
Original
English
FINISHED
2024-05-30T19:56:15
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
795
2024-06-13T11-57-13.556700
1.1.0
{ "enem_challenge": 0.7242827151854444, "bluex": 0.6606397774687065, "oab_exams": 0.5348519362186788, "assin2_rte": 0.876489392618425, "assin2_sts": 0.772190146473889, "faquad_nli": 0.751649303344456, "hatebr_offensive": 0.8089822265646441, "portuguese_hate_speech": 0.7003298984357624, "tweetsentbr": 0.6590870652076504 }
0.720945
0.577932
CofeAI/Tele-FLM
main
false
bfloat16
0
?
Original
English
FAILED
2024-06-18T15:54:20
🟢 : pretrained
leaderboard
825
2024-06-18T19-47-08.886524
null
null
null
null
CohereForAI/aya-101
main
false
float16
12.921
T5ForConditionalGeneration
Original
English
FINISHED
2024-02-17T03:43:40
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
253
2024-02-21T19-25-38.847154
1.1.0
{ "enem_challenge": 0.5703289013296011, "bluex": 0.47844228094575797, "oab_exams": 0.3895216400911162, "assin2_rte": 0.845896116707975, "assin2_sts": 0.18932506997017534, "faquad_nli": 0.3536861536119358, "hatebr_offensive": 0.8577866430260047, "portuguese_hate_speech": 0.5858880778588808, "tweetsentbr": 0.7292099162284759 }
0.555565
0.354086
CohereForAI/aya-23-35B
main
false
float16
34.981
CohereForCausalLM
Original
English
FINISHED
2024-05-23T18:08:24
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
725
2024-05-25T09-17-37.504596
1.1.0
{ "enem_challenge": 0.7039888033589923, "bluex": 0.6022253129346314, "oab_exams": 0.5466970387243736, "assin2_rte": 0.9304615643741841, "assin2_sts": 0.7846161558721925, "faquad_nli": 0.7233650163143708, "hatebr_offensive": 0.8860471199766156, "portuguese_hate_speech": 0.6667720351930878, "tweetsentbr": 0.5833463689780555 }
0.714169
0.573536
CohereForAI/aya-23-8B
main
false
float16
8.028
CohereForCausalLM
Original
English
FINISHED
2024-05-23T18:08:07
💬 : chat (RLHF, DPO, IFT, ...)
manual
726
2024-05-25T07-14-27.654611
1.1.0
{ "enem_challenge": 0.6046186144156753, "bluex": 0.48400556328233657, "oab_exams": 0.4328018223234624, "assin2_rte": 0.9189769820971867, "assin2_sts": 0.780672309349922, "faquad_nli": 0.6541835357624831, "hatebr_offensive": 0.7471163522824039, "portuguese_hate_speech": 0.6490477906224632, "tweetsentbr": 0.6908665777890396 }
0.662477
0.491916
CohereForAI/aya-expanse-32b
main
false
float16
32.296
CohereForCausalLM
Original
English
PENDING
2024-10-27T00:57:18
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
-1
null
null
null
null
null
CohereForAI/aya-expanse-8b
main
false
float16
8.028
CohereForCausalLM
Original
English
FAILED
2024-10-24T16:10:02
🆎 : language adapted (FP, FT, ...)
leaderboard
1,246
2024-10-27T05-37-25.028602
null
null
null
null
CohereForAI/c4ai-command-r-plus-4bit
main
false
4bit
55.052
CohereForCausalLM
Original
English
FINISHED
2024-04-05T14:50:15
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
464
2024-04-15T16-05-38.445928
1.1.0
{ "enem_challenge": 0.7508747375787264, "bluex": 0.6620305980528511, "oab_exams": 0.6255125284738041, "assin2_rte": 0.9301234467745643, "assin2_sts": 0.7933785386356376, "faquad_nli": 0.7718257450767017, "hatebr_offensive": 0.773798484417851, "portuguese_hate_speech": 0.7166167166167167, "tweetsentbr": 0.7540570104676597 }
0.753135
0.625007
CohereForAI/c4ai-command-r-plus
main
false
float16
103.811
CohereForCausalLM
Original
English
FAILED
2024-04-07T18:08:25
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
996
2024-08-10T16-28-15.102154
null
null
null
null
CohereForAI/c4ai-command-r-v01
main
false
float16
34.981
CohereForCausalLM
Original
English
FINISHED
2024-04-05T14:48:52
💬 : chat models (RLHF, DPO, IFT, ...)
leaderboard
472
2024-04-17T00-36-42.568466
1.1.0
{ "enem_challenge": 0.7158852344296711, "bluex": 0.6203059805285118, "oab_exams": 0.5521640091116173, "assin2_rte": 0.883132179380006, "assin2_sts": 0.7210331309303998, "faquad_nli": 0.47272296015180265, "hatebr_offensive": 0.8222299935886227, "portuguese_hate_speech": 0.7102306144559665, "tweetsentbr": 0.6479415106683347 }
0.68285
0.515582
Columbia-NLP/LION-LLaMA-3-8b-odpo-v1.0
main
false
bfloat16
8.03
LlamaForCausalLM
Original
English
FINISHED
2024-07-13T02:31:59
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
916
2024-07-15T01-32-05.828202
1.1.0
{ "enem_challenge": 0.6368089573128062, "bluex": 0.48817802503477054, "oab_exams": 0.43143507972665146, "assin2_rte": 0.9193834267092047, "assin2_sts": 0.7172104868084787, "faquad_nli": 0.7286917112711735, "hatebr_offensive": 0.8564797460211679, "portuguese_hate_speech": 0.7138199307203894, "tweetsentbr": 0.7112524584582163 }
0.689251
0.546527
CombinHorizon/YiSM-blossom5.1-34B-SLERP
main
false
bfloat16
34.389
LlamaForCausalLM
Original
English
PENDING
2024-08-27T05:35:07
🤝 : base merges and moerges
leaderboard
-1
null
null
null
null
null
ConvexAI/Luminex-34B-v0.1
main
false
float16
34.389
LlamaForCausalLM
Original
English
FINISHED
2024-06-08T22:12:40
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
814
2024-06-14T16-37-48.304200
1.1.0
{ "enem_challenge": 0.7200839748075577, "bluex": 0.6481223922114048, "oab_exams": 0.544874715261959, "assin2_rte": 0.9191070641797621, "assin2_sts": 0.8130683879495547, "faquad_nli": 0.8226956044555551, "hatebr_offensive": 0.6983754481802518, "portuguese_hate_speech": 0.7080758240759798, "tweetsentbr": 0.6743942014992422 }
0.727644
0.585166
ConvexAI/Luminex-34B-v0.2
main
false
float16
34.389
LlamaForCausalLM
Original
English
FINISHED
2024-05-30T19:58:15
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
796
2024-06-13T15-37-38.009373
1.1.0
{ "enem_challenge": 0.7228831350594822, "bluex": 0.6578581363004172, "oab_exams": 0.5412300683371298, "assin2_rte": 0.9162531250297254, "assin2_sts": 0.8034116288676774, "faquad_nli": 0.8054074649748557, "hatebr_offensive": 0.6992729676140119, "portuguese_hate_speech": 0.7138230088495575, "tweetsentbr": 0.6767857512576406 }
0.726325
0.582993
Cran-May/T.E-8.1
main
false
bfloat16
7.616
Qwen2ForCausalLM
Original
English
FINISHED
2024-09-30T05:54:37
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
1,156
2024-10-05T04-54-18.066733
1.1.0
{ "enem_challenge": 0.7585724282715185, "bluex": 0.6648122392211405, "oab_exams": 0.5480637813211845, "assin2_rte": 0.9342280640477356, "assin2_sts": 0.7928489762220716, "faquad_nli": 0.7432359307359307, "hatebr_offensive": 0.7896677858027337, "portuguese_hate_speech": 0.7120234165085138, "tweetsentbr": 0.7078357340772256 }
0.739032
0.604919
CultriX/NeuralMona_MoE-4x7B
main
false
bfloat16
24.154
MixtralForCausalLM
Original
English
FINISHED
2024-05-15T18:00:24
🤝 : base merges and moerges
leaderboard
739
2024-05-26T13-29-26.736769
1.1.0
{ "enem_challenge": 0.6312106368089573, "bluex": 0.5340751043115438, "oab_exams": 0.4214123006833713, "assin2_rte": 0.9244279910791389, "assin2_sts": 0.7720274719342004, "faquad_nli": 0.7694314032342202, "hatebr_offensive": 0.8409826856991804, "portuguese_hate_speech": 0.6819724557061289, "tweetsentbr": 0.6446949695921868 }
0.691137
0.545137
CultriX/Qwen2.5-14B-Wernicke-SFT
main
false
bfloat16
14.77
Qwen2ForCausalLM
Original
English
FINISHED
2024-11-17T01:29:23
🤝 : base merges and moerges
leaderboard
1,267
2024-11-19T04-37-24.664140
1.1.0
{ "enem_challenge": 0.8040587823652904, "bluex": 0.7204450625869263, "oab_exams": 0.6132118451025057, "assin2_rte": 0.9436210912232053, "assin2_sts": 0.8319345766276613, "faquad_nli": 0.8653846153846154, "hatebr_offensive": 0.802972634621363, "portuguese_hate_speech": 0.6963111124817714, "tweetsentbr": 0.6961554141888872 }
0.774899
0.657918
CultriX/Qwen2.5-14B-Wernicke
main
false
bfloat16
14.77
Qwen2ForCausalLM
Original
English
FINISHED
2024-11-17T01:29:03
🤝 : base merges and moerges
leaderboard
1,266
2024-11-19T02-46-31.548628
1.1.0
{ "enem_challenge": 0.8110566829951015, "bluex": 0.7329624478442281, "oab_exams": 0.621867881548975, "assin2_rte": 0.9444324272358017, "assin2_sts": 0.8370321624309986, "faquad_nli": 0.7906626549720158, "hatebr_offensive": 0.890936897154724, "portuguese_hate_speech": 0.7476462302616946, "tweetsentbr": 0.7151887167792026 }
0.787976
0.681094
DAMO-NLP-MT/polylm-1.7b
main
false
float16
1.7
GPT2LMHeadModel
Original
English
FINISHED
2024-02-11T13:34:48
🟢 : pretrained
script
478
2024-04-17T23-46-04.491918
1.1.0
{ "enem_challenge": 0.1966410076976907, "bluex": 0.26564673157162727, "oab_exams": 0.24874715261959, "assin2_rte": 0.4047692251758633, "assin2_sts": 0.05167868234986358, "faquad_nli": 0.4396551724137931, "hatebr_offensive": 0.358843537414966, "portuguese_hate_speech": 0.4530026545569895, "tweetsentbr": 0.22711575772255002 }
0.294011
-0.067176
DAMO-NLP-MT/polylm-13b
main
false
float16
13
PolyLMHeadModel
Original
English
FINISHED
2024-02-11T13:34:54
🟢 : pretrained
script
345
2024-04-03T09-53-29.935717
1.1.0
{ "enem_challenge": 0, "bluex": 0, "oab_exams": 0, "assin2_rte": 0, "assin2_sts": 0, "faquad_nli": 0, "hatebr_offensive": 0, "portuguese_hate_speech": 0, "tweetsentbr": 0 }
0
-0.568819
Dampfinchen/Llama-3.1-8B-Ultra-Instruct
main
false
bfloat16
8.03
LlamaForCausalLM
Original
English
FINISHED
2024-09-16T20:19:19
🤝 : base merges and moerges
leaderboard
1,076
2024-09-20T04-18-17.935178
1.1.0
{ "enem_challenge": 0.7032890132960112, "bluex": 0.5771905424200278, "oab_exams": 0.4988610478359909, "assin2_rte": 0.9235696737015304, "assin2_sts": 0.7286168616739841, "faquad_nli": 0.7912633013135291, "hatebr_offensive": 0.8682961944962534, "portuguese_hate_speech": 0.5975522317682872, "tweetsentbr": 0.6157633185702961 }
0.700489
0.55553
Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO
main
false
float16
13.96
MistralForCausalLM
Original
English
FINISHED
2024-07-31T19:07:39
💬 : chat (RLHF, DPO, IFT, ...)
leaderboard
977
2024-08-08T02-43-35.640819
1.1.0
{ "enem_challenge": 0.7319804058782365, "bluex": 0.6578581363004172, "oab_exams": 0.510250569476082, "assin2_rte": 0.9259642329554806, "assin2_sts": 0.714480317302389, "faquad_nli": 0.6906170752324599, "hatebr_offensive": 0.8460180802244769, "portuguese_hate_speech": 0.7355214633181597, "tweetsentbr": 0.6700678573508446 }
0.720306
0.584625
Danielbrdz/Barcenas-Llama3-8b-ORPO
main
false
float16
8.03
LlamaForCausalLM
Original
English
FINISHED
2024-05-13T16:38:54
🔶 : fine-tuned/fp on domain-specific datasets
leaderboard
636
2024-05-18T00-12-52.690138
1.1.0
{ "enem_challenge": 0.7102869139258222, "bluex": 0.5827538247566064, "oab_exams": 0.508883826879271, "assin2_rte": 0.9178150146340144, "assin2_sts": 0.7260402501200387, "faquad_nli": 0.7308849598805747, "hatebr_offensive": 0.8698828946051447, "portuguese_hate_speech": 0.5958643988009942, "tweetsentbr": 0.6661915330469502 }
0.700956
0.553218
Deci/DeciLM-6b
main
false
bfloat16
5.717
DeciLMForCausalLM
Original
English
FAILED
2024-02-05T23:06:24
🔶 : fine-tuned
script
253
2024-02-25T19-40-34.104437
null
null
null
null
End of preview.
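One spot-check the preview supports: result_metrics_average is the plain arithmetic mean of the nine task scores in result_metrics. (result_metrics_npm is presumably a normalized variant, but its normalization constants are not visible in the preview, so it is not reproduced here.) A short sketch using the first FINISHED row above:

# Scores copied verbatim from the tanliboy/lambda-qwen2.5-14b-dpo-test row.
result_metrics = {
    "enem_challenge": 0.7991602519244226,
    "bluex": 0.7315716272600834,
    "oab_exams": 0.6104783599088838,
    "assin2_rte": 0.9448521957747049,
    "assin2_sts": 0.8243398669298373,
    "faquad_nli": 0.7882522522522523,
    "hatebr_offensive": 0.9003808155770413,
    "portuguese_hate_speech": 0.7474723628059027,
    "tweetsentbr": 0.7221843254982979,
}

average = sum(result_metrics.values()) / len(result_metrics)
print(round(average, 5))  # 0.78541, matching the row's result_metrics_average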

No dataset card yet

Downloads last month: 42,939