Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error.

Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 174 new columns ({'report.prefill.latency.unit', 'report.prefill.memory.max_allocated', 'config.launcher._target_', 'report.load.energy.ram', 'config.backend.cache_implementation', 'report.per_token.throughput.unit', 'config.environment.diffusers_commit', 'config.backend.seed', 'config.environment.platform', 'config.environment.processor', 'report.prefill.throughput.value', 'report.prefill.memory.max_process_vram', 'config.backend.version', 'report.decode.throughput.unit', 'config.environment.transformers_version', 'report.per_token.efficiency', 'report.prefill.latency.p50', 'config.environment.peft_version', 'report.load.memory.max_allocated', 'config.backend.quantization_config.version', 'report.decode.memory.max_allocated', 'report.decode.memory.max_global_vram', 'config.scenario.new_tokens', 'config.backend.hub_kwargs.local_files_only', 'report.load.energy.cpu', 'report.prefill.throughput.unit', 'report.decode.energy.gpu', 'report.decode.latency.unit', 'config.environment.accelerate_version', 'report.prefill.energy.gpu', 'config.launcher.numactl', 'report.load.memory.max_ram', 'report.decode.efficiency.value', 'report.load.latency.p50', 'config.scenario.name', 'report.decode.throughput.value', 'config.backend.task', 'report.decode.energy.total', 'config.backend.hub_kwargs.force_download', 'config.environment.timm_version', 'report.decode.latency.mean', 'config.backend.model_type', 'config.environment.gpu', 'report.prefill.memory.unit', 'report.prefill.latency.values', 'report.load.efficie ... 'config.environment.optimum_commit', 'report.load.latency.total', 'config.scenario.input_shapes.sequence_length', 'config.backend.library', 'report.decode.latency.stdev', 'config.backend.autocast_enabled', 'config.environment.peft_commit', 'config.scenario.input_shapes.num_choices', 'config.backend.processor', 'report.decode.latency.p90', 'config.scenario.memory', 'report.prefill.energy.total', 'report.prefill.latency.count', 'report.per_token.latency.mean', 'report.decode.memory.max_process_vram', 'config.environment.cpu', 'report.prefill.memory.max_global_vram', 'config.backend.attn_implementation', 'report.traceback', 'report.per_token.latency.p99', 'report.per_token.latency.total', 'report.decode.latency.p95', 'config.environment.cpu_count', 'report.load.throughput', 'config.environment.transformers_commit', 'config.scenario.latency', 'report.decode.latency.total', 'config.environment.optimum_version', 'report.prefill.latency.total', 'config.environment.timm_commit', 'config.scenario.generate_kwargs.max_new_tokens', 'report.load.memory.max_global_vram', 'config.backend.inter_op_num_threads', 'report.load.energy.total', 'config.name', 'report.prefill.memory.max_ram', 'config.launcher.device_isolation', 'config.backend.deepspeed_inference', 'report.prefill.energy.ram', 'report.prefill.efficiency.value', 'config.backend.quantization_config.exllama_config.max_batch_size', 'config.backend.torch_compile_target', 'report.load.memory.max_reserved', 'report.decode.memory.max_ram'}) and 34 missing columns ({'Hub ❤️', 'Base Model', 'Weight type', 'MATH Lvl 5', 'Model sha', 'Not_Merged', 'Generation', 'GPQA', 'fullname', 'Chat Template', 'MoE', 'BBH Raw', 'Available on the hub', 'IFEval Raw', 'Model', 'GPQA Raw', '#Params (B)', 'BBH', 'MUSR', 'Hub License', "Maintainer's Highlight", 'Submission Date', 'MMLU-PRO', 'Flagged', 'MUSR Raw', 'Precision', 'Type', 'Upload To Hub Date', 'Average ⬆️', 'T', 'IFEval', 'MMLU-PRO Raw', 'Architecture', 'MATH Lvl 5 Raw'}).
This happened while the csv dataset builder was generating data using hf://datasets/optimum-benchmark/llm-perf-leaderboard/perf-df-awq-1xA10.csv (at revision c7cee487ad0aef4959407cce4f69477c1545ab4f). Please either edit the data files to have matching columns, or separate them into different configurations (see the docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Traceback:

Traceback (most recent call last):
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
    writer.write_table(table)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
    pa_table = table_cast(pa_table, self._schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
    return cast_table_to_schema(table, schema)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
config.name: string config.backend.name: string config.backend.version: string config.backend._target_: string config.backend.task: string config.backend.library: string config.backend.model: string config.backend.processor: string config.backend.device: string config.backend.device_ids: int64 config.backend.seed: int64 config.backend.inter_op_num_threads: double config.backend.intra_op_num_threads: double config.backend.model_kwargs.trust_remote_code: bool config.backend.processor_kwargs.trust_remote_code: bool config.backend.hub_kwargs.trust_remote_code: bool config.backend.no_weights: bool config.backend.device_map: double config.backend.torch_dtype: string config.backend.eval_mode: bool config.backend.to_bettertransformer: bool config.backend.low_cpu_mem_usage: double config.backend.attn_implementation: string config.backend.cache_implementation: double config.backend.autocast_enabled: bool config.backend.autocast_dtype: double config.backend.torch_compile: bool config.backend.torch_compile_target: string config.backend.quantization_scheme: string config.backend.quantization_config.bits: int64 config.backend.quantization_config.version: string config.backend.deepspeed_inference: bool config.backend.peft_type: double config.scenario.name: string config.scenario._target_: string config.scenario.iterations: int64 config.scenario.duration: int64 config.scenario.warmup_runs: int64 config.scenario.input_shapes.batch_size: int64 config.scenario.input_shapes.num_choices: int64 co ... .latency.p50: double report.decode.latency.p90: double report.decode.latency.p95: double report.decode.latency.p99: double report.decode.latency.values: string report.decode.throughput.unit: string report.decode.throughput.value: double report.decode.energy.unit: string report.decode.energy.cpu: double report.decode.energy.ram: double report.decode.energy.gpu: double report.decode.energy.total: double report.decode.efficiency.unit: string report.decode.efficiency.value: double report.per_token.memory: double report.per_token.latency.unit: string report.per_token.latency.count: double report.per_token.latency.total: double report.per_token.latency.mean: double report.per_token.latency.stdev: double report.per_token.latency.p50: double report.per_token.latency.p90: double report.per_token.latency.p95: double report.per_token.latency.p99: double report.per_token.latency.values: string report.per_token.throughput.unit: string report.per_token.throughput.value: double report.per_token.energy: double report.per_token.efficiency: double config.backend.quantization_config.exllama_config.version: double config.backend.quantization_config.exllama_config.max_input_len: double config.backend.quantization_config.exllama_config.max_batch_size: double config.backend.hub_kwargs.revision: string config.backend.hub_kwargs.force_download: bool config.backend.hub_kwargs.local_files_only: bool
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 27877
to
{'T': Value(dtype='string', id=None), 'Model': Value(dtype='string', id=None), 'Average ⬆️': Value(dtype='float64', id=None), 'IFEval': Value(dtype='float64', id=None), 'IFEval Raw': Value(dtype='float64', id=None), 'BBH': Value(dtype='float64', id=None), 'BBH Raw': Value(dtype='float64', id=None), 'MATH Lvl 5': Value(dtype='float64', id=None), 'MATH Lvl 5 Raw': Value(dtype='float64', id=None), 'GPQA': Value(dtype='float64', id=None), 'GPQA Raw': Value(dtype='float64', id=None), 'MUSR': Value(dtype='float64', id=None), 'MUSR Raw': Value(dtype='float64', id=None), 'MMLU-PRO': Value(dtype='float64', id=None), 'MMLU-PRO Raw': Value(dtype='float64', id=None), 'Type': Value(dtype='string', id=None), 'Architecture': Value(dtype='string', id=None), 'Weight type': Value(dtype='string', id=None), 'Precision': Value(dtype='string', id=None), 'Not_Merged': Value(dtype='bool', id=None), 'Hub License': Value(dtype='string', id=None), '#Params (B)': Value(dtype='int64', id=None), 'Hub ❤️': Value(dtype='int64', id=None), 'Available on the hub': Value(dtype='bool', id=None), 'Model sha': Value(dtype='string', id=None), 'Flagged': Value(dtype='bool', id=None), 'MoE': Value(dtype='bool', id=None), 'Submission Date': Value(dtype='string', id=None), 'Upload To Hub Date': Value(dtype='string', id=None), 'Chat Template': Value(dtype='bool', id=None), "Maintainer's Highlight": Value(dtype='bool', id=None), 'fullname': Value(dtype='string', id=None), 'Generation': Value(dtype='int64', id=None), 'Base Model': Value(dtype='string', id=None)}
because column names don't match

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1396, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1045, in convert_to_parquet
    builder.download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1029, in download_and_prepare
    self._download_and_prepare(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1124, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1884, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
  File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2015, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset (the same cast error, with the same 174 new and 34 missing columns, as reported in the message above)
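The fix the message points to can be expressed in the dataset card's YAML header, which lets the Hub viewer build each schema as its own configuration. A minimal sketch, assuming the benchmark CSVs all follow the perf-df-*.csv naming pattern seen above (the config names and the leaderboard filename are illustrative, not taken from the repo):

```yaml
configs:
  - config_name: perf-reports
    data_files: "perf-df-*.csv"
  - config_name: leaderboard
    data_files: "llm-perf-leaderboard.csv"
```

With files of matching schema grouped under one config_name, each configuration is converted to Parquet separately and no cross-schema cast is attempted.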
Need help to make the dataset viewer work? Make sure to review how to configure the dataset viewer, and open a discussion for direct support.
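A quick way to see how a repo's files should be partitioned into configurations is to group them by CSV header: every group of identically-shaped files can become one configuration. A sketch with in-memory stand-ins (only perf-df-awq-1xA10.csv is a real filename from the error above; the other two names and all values are hypothetical):

```python
import csv
import io

# Hypothetical stand-ins for CSV files in one dataset repo: the benchmark
# reports share one header, the leaderboard export has a different one.
files = {
    "perf-df-awq-1xA10.csv": "config.name,report.decode.throughput.value\nawq,42.5\n",
    "perf-df-gptq-1xA10.csv": "config.name,report.decode.throughput.value\ngptq,40.1\n",
    "llm-perf-leaderboard.csv": "Model,Average ⬆️\nMaziyarPanahi/calme-2.4-rys-78b,50.26\n",
}

# Group files by their header row; each group is a candidate configuration,
# which is exactly the "separate them into different configurations" fix.
groups = {}
for name, text in files.items():
    header = tuple(next(csv.reader(io.StringIO(text))))
    groups.setdefault(header, []).append(name)

for header, names in groups.items():
    print(sorted(names), "->", header)
```

Here the two perf-df files land in one group and the leaderboard file in another, mirroring the 174-column versus 34-column split in the error message.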
T | Model | Average ⬆️ | IFEval | IFEval Raw | BBH | BBH Raw | MATH Lvl 5 | MATH Lvl 5 Raw | GPQA | GPQA Raw | MUSR | MUSR Raw | MMLU-PRO | MMLU-PRO Raw | Type | Architecture | Weight type | Precision | Not_Merged | Hub License | #Params (B) | Hub ❤️ | Available on the hub | Model sha | Flagged | MoE | Submission Date | Upload To Hub Date | Chat Template | Maintainer's Highlight | fullname | Generation | Base Model |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
string | string | float64 | float64 | float64 | float64 | float64 | float64 | float64 | float64 | float64 | float64 | float64 | float64 | float64 | string | string | string | string | bool | string | int64 | int64 | bool | string | bool | bool | string | string | bool | bool | string | int64 | string |
💬 | MaziyarPanahi/calme-2.4-rys-78b | 50.26 | 80.11 | 0.8 | 62.16 | 0.73 | 37.69 | 0.38 | 20.36 | 0.4 | 34.57 | 0.58 | 66.69 | 0.7 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | mit | 77 | 19 | true | 0a35e51ffa9efa644c11816a2d56434804177acb | true | true | 2024-09-03 | 2024-08-07 | true | false | MaziyarPanahi/calme-2.4-rys-78b | 2 | dnhkng/RYS-XLarge |
🔶 | dnhkng/RYS-XLarge | 44.75 | 79.96 | 0.8 | 58.77 | 0.71 | 38.97 | 0.39 | 17.9 | 0.38 | 23.72 | 0.5 | 49.2 | 0.54 | 🔶 fine-tuned on domain-specific datasets | Qwen2ForCausalLM | Original | bfloat16 | true | mit | 77 | 61 | true | 0f84dd9dde60f383e1e2821496befb4ce9a11ef6 | true | true | 2024-08-07 | 2024-07-24 | false | false | dnhkng/RYS-XLarge | 0 | dnhkng/RYS-XLarge |
💬 | MaziyarPanahi/calme-2.1-rys-78b | 44.14 | 81.36 | 0.81 | 59.47 | 0.71 | 36.4 | 0.36 | 19.24 | 0.39 | 19 | 0.47 | 49.38 | 0.54 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | mit | 77 | 3 | true | e746f5ddc0c9b31a2382d985a4ec87fa910847c7 | true | true | 2024-08-08 | 2024-08-06 | true | false | MaziyarPanahi/calme-2.1-rys-78b | 1 | dnhkng/RYS-XLarge |
💬 | MaziyarPanahi/calme-2.2-rys-78b | 43.92 | 79.86 | 0.8 | 59.27 | 0.71 | 37.92 | 0.38 | 20.92 | 0.41 | 16.83 | 0.45 | 48.73 | 0.54 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | mit | 77 | 3 | true | 8d0dde25c9042705f65559446944a19259c3fc8e | true | true | 2024-08-08 | 2024-08-06 | true | false | MaziyarPanahi/calme-2.2-rys-78b | 1 | dnhkng/RYS-XLarge |
💬 | MaziyarPanahi/calme-2.1-qwen2-72b | 43.61 | 81.63 | 0.82 | 57.33 | 0.7 | 36.03 | 0.36 | 17.45 | 0.38 | 20.15 | 0.47 | 49.05 | 0.54 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 25 | true | 0369c39770f45f2464587918f2dbdb8449ea3a0d | true | true | 2024-06-26 | 2024-06-08 | true | false | MaziyarPanahi/calme-2.1-qwen2-72b | 2 | Qwen/Qwen2-72B |
💬 | MaziyarPanahi/calme-2.2-qwen2-72b | 43.4 | 80.08 | 0.8 | 56.8 | 0.69 | 41.16 | 0.41 | 16.55 | 0.37 | 16.52 | 0.45 | 49.27 | 0.54 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 4 | true | 529e9bd80a76d943409bc92bb246aa7ca63dd9e6 | true | true | 2024-08-06 | 2024-07-09 | true | false | MaziyarPanahi/calme-2.2-qwen2-72b | 1 | Qwen/Qwen2-72B |
💬 | dfurman/Qwen2-72B-Orpo-v0.1 | 43.32 | 78.8 | 0.79 | 57.41 | 0.7 | 35.42 | 0.35 | 17.9 | 0.38 | 20.87 | 0.48 | 49.5 | 0.55 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 3 | true | 26c7bbaa728822c60bb47b2808972140653aae4c | true | true | 2024-08-22 | 2024-07-05 | true | false | dfurman/Qwen2-72B-Orpo-v0.1 | 1 | dfurman/Qwen2-72B-Orpo-v0.1 (Merge) |
🔶 | Undi95/MG-FinalMix-72B | 43.28 | 80.14 | 0.8 | 57.5 | 0.7 | 33.61 | 0.34 | 18.01 | 0.39 | 21.22 | 0.48 | 49.19 | 0.54 | 🔶 fine-tuned on domain-specific datasets | Qwen2ForCausalLM | Original | bfloat16 | false | other | 72 | 3 | true | 6c9c2f5d052495dcd49f44bf5623d21210653c65 | true | true | 2024-07-13 | 2024-06-25 | true | false | Undi95/MG-FinalMix-72B | 1 | Undi95/MG-FinalMix-72B (Merge) |
💬 | Qwen/Qwen2-72B-Instruct | 42.49 | 79.89 | 0.8 | 57.48 | 0.7 | 35.12 | 0.35 | 16.33 | 0.37 | 17.17 | 0.46 | 48.92 | 0.54 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 659 | true | 1af63c698f59c4235668ec9c1395468cb7cd7e79 | true | true | 2024-06-26 | 2024-05-28 | false | true | Qwen/Qwen2-72B-Instruct | 1 | Qwen/Qwen2-72B |
🔶 | abacusai/Dracarys-72B-Instruct | 42.37 | 78.56 | 0.79 | 56.94 | 0.69 | 33.61 | 0.34 | 18.79 | 0.39 | 16.81 | 0.46 | 49.51 | 0.55 | 🔶 fine-tuned on domain-specific datasets | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 14 | true | 10cabc4beb57a69df51533f65e39a7ad22821370 | true | true | 2024-08-16 | 2024-08-14 | true | true | abacusai/Dracarys-72B-Instruct | 0 | abacusai/Dracarys-72B-Instruct |
🔶 | VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct | 42.24 | 86.56 | 0.87 | 57.24 | 0.7 | 29.91 | 0.3 | 12.19 | 0.34 | 19.39 | 0.47 | 48.17 | 0.53 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | llama3.1 | 70 | 11 | true | e8e74aa789243c25a3a8f7565780a402f5050bbb | true | true | 2024-08-26 | 2024-07-29 | true | false | VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct | 0 | VAGOsolutions/Llama-3.1-SauerkrautLM-70b-Instruct |
💬 | alpindale/magnum-72b-v1 | 42.17 | 76.06 | 0.76 | 57.65 | 0.7 | 35.27 | 0.35 | 18.79 | 0.39 | 15.62 | 0.45 | 49.64 | 0.55 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 158 | true | fef27e0f235ae8858b84b765db773a2a954110dd | true | true | 2024-07-25 | 2024-06-17 | true | false | alpindale/magnum-72b-v1 | 2 | Qwen/Qwen2-72B |
💬 | meta-llama/Meta-Llama-3.1-70B-Instruct | 41.74 | 86.69 | 0.87 | 55.93 | 0.69 | 28.02 | 0.28 | 14.21 | 0.36 | 17.69 | 0.46 | 47.88 | 0.53 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3.1 | 70 | 480 | true | b9461463b511ed3c0762467538ea32cf7c9669f2 | true | true | 2024-08-15 | 2024-07-16 | true | true | meta-llama/Meta-Llama-3.1-70B-Instruct | 1 | meta-llama/Meta-Llama-3.1-70B |
🔶 | dnhkng/RYS-Llama3.1-Large | 41.6 | 84.92 | 0.85 | 55.41 | 0.69 | 28.4 | 0.28 | 16.55 | 0.37 | 17.09 | 0.46 | 47.21 | 0.52 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | mit | 81 | 0 | true | 52cc979de78155b33689efa48f52a8aab184bd86 | true | true | 2024-08-22 | 2024-08-11 | true | false | dnhkng/RYS-Llama3.1-Large | 0 | dnhkng/RYS-Llama3.1-Large |
💬 | abacusai/Smaug-Qwen2-72B-Instruct | 41.08 | 78.25 | 0.78 | 56.27 | 0.69 | 35.35 | 0.35 | 14.88 | 0.36 | 15.18 | 0.44 | 46.56 | 0.52 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 6 | true | af015925946d0c60ef69f512c3b35f421cf8063d | true | true | 2024-07-29 | 2024-06-26 | true | true | abacusai/Smaug-Qwen2-72B-Instruct | 0 | abacusai/Smaug-Qwen2-72B-Instruct |
🤝 | paulml/ECE-ILAB-Q1 | 40.93 | 78.65 | 0.79 | 53.7 | 0.67 | 26.13 | 0.26 | 18.23 | 0.39 | 18.81 | 0.46 | 50.06 | 0.55 | 🤝 base merges and moerges | Qwen2ForCausalLM | Original | bfloat16 | false | other | 72 | 0 | true | 393bea0ee85e4c752acd5fd77ce07f577fc13bd9 | true | true | 2024-06-26 | 2024-06-06 | true | false | paulml/ECE-ILAB-Q1 | 0 | paulml/ECE-ILAB-Q1 |
🔶 | KSU-HW-SEC/Llama3.1-70b-SVA-FT-1000step | 40.33 | 72.38 | 0.72 | 55.49 | 0.69 | 29.61 | 0.3 | 19.46 | 0.4 | 17.83 | 0.46 | 47.24 | 0.53 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | null | 70 | 0 | false | b195fea0d8f350ff29243d4e88654b1baa5af79e | true | true | 2024-09-08 | 2024-09-08 | false | false | KSU-HW-SEC/Llama3.1-70b-SVA-FT-1000step | 0 | KSU-HW-SEC/Llama3.1-70b-SVA-FT-1000step |
💬 | upstage/solar-pro-preview-instruct | 39.61 | 84.16 | 0.84 | 54.82 | 0.68 | 20.09 | 0.2 | 16.11 | 0.37 | 15.01 | 0.44 | 47.48 | 0.53 | 💬 chat models (RLHF, DPO, IFT, ...) | SolarForCausalLM | Original | bfloat16 | true | mit | 22 | 298 | true | b4db141b5fb08b23f8bc323bc34e2cff3e9675f8 | true | true | 2024-09-11 | 2024-09-09 | true | true | upstage/solar-pro-preview-instruct | 0 | upstage/solar-pro-preview-instruct |
🔶 | pankajmathur/orca_mini_v7_72b | 39.06 | 59.3 | 0.59 | 55.06 | 0.68 | 26.44 | 0.26 | 18.01 | 0.39 | 24.21 | 0.51 | 51.35 | 0.56 | 🔶 fine-tuned on domain-specific datasets | Qwen2ForCausalLM | Original | bfloat16 | true | apache-2.0 | 72 | 11 | true | 447f11912cfa496e32e188a55214043a05760d3a | true | true | 2024-06-26 | 2024-06-26 | false | false | pankajmathur/orca_mini_v7_72b | 0 | pankajmathur/orca_mini_v7_72b |
🤝 | gbueno86/Meta-LLama-3-Cat-Smaug-LLama-70b | 38.27 | 80.72 | 0.81 | 51.51 | 0.67 | 26.81 | 0.27 | 10.29 | 0.33 | 15 | 0.44 | 45.28 | 0.51 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | false | llama3 | 70 | 1 | true | 2d73b7e1c7157df482555944d6a6b1362bc6c3c5 | true | true | 2024-06-27 | 2024-05-24 | true | false | gbueno86/Meta-LLama-3-Cat-Smaug-LLama-70b | 1 | gbueno86/Meta-LLama-3-Cat-Smaug-LLama-70b (Merge) |
💬 | MaziyarPanahi/calme-2.2-llama3-70b | 37.98 | 82.08 | 0.82 | 48.57 | 0.64 | 22.96 | 0.23 | 12.19 | 0.34 | 15.3 | 0.44 | 46.74 | 0.52 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 17 | true | 95366b974baedee4d95c1e841bc3d15e94753804 | true | true | 2024-06-26 | 2024-04-27 | true | false | MaziyarPanahi/calme-2.2-llama3-70b | 2 | meta-llama/Meta-Llama-3-70B |
🔶 | VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct | 37.82 | 80.45 | 0.8 | 52.03 | 0.67 | 21.68 | 0.22 | 10.4 | 0.33 | 13.54 | 0.43 | 48.8 | 0.54 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | other | 70 | 21 | true | 707cfd1a93875247c0223e0c7e3d86d58c432318 | true | true | 2024-06-26 | 2024-04-24 | true | false | VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct | 0 | VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct |
💬 | NousResearch/Hermes-3-Llama-3.1-70B | 37.31 | 76.61 | 0.77 | 53.77 | 0.68 | 13.75 | 0.14 | 14.88 | 0.36 | 23.43 | 0.49 | 41.41 | 0.47 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 75 | true | 093242c69a91f8d9d5b8094c380b88772f9bd7f8 | true | true | 2024-08-28 | 2024-07-29 | true | true | NousResearch/Hermes-3-Llama-3.1-70B | 1 | meta-llama/Meta-Llama-3.1-70B |
🔶 | ValiantLabs/Llama3-70B-Fireplace | 36.82 | 77.74 | 0.78 | 49.56 | 0.65 | 19.64 | 0.2 | 13.98 | 0.35 | 16.77 | 0.44 | 43.25 | 0.49 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | llama3 | 70 | 3 | true | 220079e4115733991eb19c30d5480db9696a665e | true | true | 2024-06-26 | 2024-05-09 | true | false | ValiantLabs/Llama3-70B-Fireplace | 0 | ValiantLabs/Llama3-70B-Fireplace |
💬 | tenyx/Llama3-TenyxChat-70B | 36.54 | 80.87 | 0.81 | 49.62 | 0.65 | 22.66 | 0.23 | 6.82 | 0.3 | 12.52 | 0.43 | 46.78 | 0.52 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 63 | true | a85d31e3af8fcc847cc9169f1144cf02f5351fab | true | true | 2024-08-04 | 2024-04-26 | true | false | tenyx/Llama3-TenyxChat-70B | 0 | tenyx/Llama3-TenyxChat-70B |
🤝 | gbueno86/Brinebreath-Llama-3.1-70B | 36.29 | 55.33 | 0.55 | 55.46 | 0.69 | 29.98 | 0.3 | 12.86 | 0.35 | 17.49 | 0.45 | 46.62 | 0.52 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | false | llama3.1 | 70 | 1 | true | c508ecf356167e8c498c6fa3937ba30a82208983 | true | true | 2024-08-29 | 2024-08-23 | true | false | gbueno86/Brinebreath-Llama-3.1-70B | 1 | gbueno86/Brinebreath-Llama-3.1-70B (Merge) |
💬 | meta-llama/Meta-Llama-3-70B-Instruct | 36.18 | 80.99 | 0.81 | 50.19 | 0.65 | 23.34 | 0.23 | 4.92 | 0.29 | 10.92 | 0.42 | 46.74 | 0.52 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 1,385 | true | 7129260dd854a80eb10ace5f61c20324b472b31c | true | true | 2024-06-12 | 2024-04-17 | true | true | meta-llama/Meta-Llama-3-70B-Instruct | 1 | meta-llama/Meta-Llama-3-70B |
🔶 | BAAI/Infinity-Instruct-3M-0625-Llama3-70B | 35.88 | 74.42 | 0.74 | 52.03 | 0.67 | 16.31 | 0.16 | 14.32 | 0.36 | 18.34 | 0.46 | 39.85 | 0.46 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | apache-2.0 | 70 | 3 | true | 6d8ceada57e55cff3503191adc4d6379ff321fe2 | true | true | 2024-08-30 | 2024-07-09 | true | false | BAAI/Infinity-Instruct-3M-0625-Llama3-70B | 0 | BAAI/Infinity-Instruct-3M-0625-Llama3-70B |
🔶 | KSU-HW-SEC/Llama3-70b-SVA-FT-1415 | 35.8 | 61.8 | 0.62 | 51.33 | 0.67 | 20.09 | 0.2 | 16.67 | 0.38 | 17.8 | 0.46 | 47.14 | 0.52 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | null | 70 | 0 | false | 1c09728455567898116d2d9cfb6cbbbbd4ee730c | true | true | 2024-09-08 | 2024-09-08 | false | false | KSU-HW-SEC/Llama3-70b-SVA-FT-1415 | 0 | KSU-HW-SEC/Llama3-70b-SVA-FT-1415 |
🔶 | failspy/llama-3-70B-Instruct-abliterated | 35.79 | 80.23 | 0.8 | 48.94 | 0.65 | 23.72 | 0.24 | 5.26 | 0.29 | 10.53 | 0.41 | 46.06 | 0.51 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 82 | true | 53ae9dafe8b3d163e05d75387575f8e9f43253d0 | true | true | 2024-07-03 | 2024-05-07 | true | false | failspy/llama-3-70B-Instruct-abliterated | 0 | failspy/llama-3-70B-Instruct-abliterated |
💬 | dnhkng/RYS-Llama-3-Large-Instruct | 35.78 | 80.51 | 0.81 | 49.67 | 0.65 | 21.83 | 0.22 | 5.26 | 0.29 | 11.45 | 0.42 | 45.97 | 0.51 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | mit | 73 | 1 | true | 01e3208aaf7bf6d2b09737960c701ec6628977fe | true | true | 2024-08-07 | 2024-08-06 | true | false | dnhkng/RYS-Llama-3-Large-Instruct | 0 | dnhkng/RYS-Llama-3-Large-Instruct |
🔶 | KSU-HW-SEC/Llama3-70b-SVA-FT-final | 35.78 | 61.65 | 0.62 | 51.33 | 0.67 | 20.09 | 0.2 | 16.67 | 0.38 | 17.8 | 0.46 | 47.14 | 0.52 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | null | 70 | 0 | false | 391bbd94173b34975d1aa2c7356977a630253b75 | true | true | 2024-09-08 | 2024-09-08 | false | false | KSU-HW-SEC/Llama3-70b-SVA-FT-final | 0 | KSU-HW-SEC/Llama3-70b-SVA-FT-final |
🔶 | KSU-HW-SEC/Llama3-70b-SVA-FT-500 | 35.61 | 61.05 | 0.61 | 51.89 | 0.67 | 19.34 | 0.19 | 17.45 | 0.38 | 16.99 | 0.45 | 46.97 | 0.52 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | null | 70 | 0 | false | 856a23f28aeada23d1135c86a37e05524307e8ed | true | true | 2024-09-08 | 2024-09-08 | false | false | KSU-HW-SEC/Llama3-70b-SVA-FT-500 | 0 | KSU-HW-SEC/Llama3-70b-SVA-FT-500 |
🔶 | cloudyu/Llama-3-70Bx2-MOE | 35.35 | 54.82 | 0.55 | 51.42 | 0.66 | 19.86 | 0.2 | 19.13 | 0.39 | 20.85 | 0.48 | 46.02 | 0.51 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | llama3 | 126 | 1 | true | b8bd85e8db8e4ec352b93441c92e0ae1334bf5a7 | true | false | 2024-06-27 | 2024-05-20 | false | false | cloudyu/Llama-3-70Bx2-MOE | 0 | cloudyu/Llama-3-70Bx2-MOE |
🔶 | Sao10K/L3-70B-Euryale-v2.1 | 35.35 | 73.84 | 0.74 | 48.7 | 0.65 | 20.85 | 0.21 | 10.85 | 0.33 | 12.25 | 0.42 | 45.6 | 0.51 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | cc-by-nc-4.0 | 70 | 110 | true | 36ad832b771cd783ea7ad00ed39e61f679b1a7c6 | true | true | 2024-07-01 | 2024-06-11 | true | false | Sao10K/L3-70B-Euryale-v2.1 | 0 | Sao10K/L3-70B-Euryale-v2.1 |
💬 | OpenBuddy/openbuddy-llama3.1-70b-v22.1-131k | 35.23 | 73.33 | 0.73 | 51.94 | 0.67 | 3.4 | 0.03 | 16.67 | 0.38 | 18.24 | 0.46 | 47.82 | 0.53 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | other | 70 | 0 | true | 43ed945180174d79a8f6c68509161c249c884dfa | true | true | 2024-08-24 | 2024-08-21 | true | false | OpenBuddy/openbuddy-llama3.1-70b-v22.1-131k | 0 | OpenBuddy/openbuddy-llama3.1-70b-v22.1-131k |
🔶 | migtissera/Llama-3-70B-Synthia-v3.5 | 35.2 | 60.76 | 0.61 | 49.12 | 0.65 | 18.96 | 0.19 | 18.34 | 0.39 | 23.39 | 0.49 | 40.65 | 0.47 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | llama3 | 70 | 5 | true | 8744db0bccfc18f1847633da9d29fc89b35b4190 | true | true | 2024-08-28 | 2024-05-26 | true | false | migtissera/Llama-3-70B-Synthia-v3.5 | 0 | migtissera/Llama-3-70B-Synthia-v3.5 |
🟢 | Qwen/Qwen2-72B | 35.13 | 38.24 | 0.38 | 51.86 | 0.66 | 29.15 | 0.29 | 19.24 | 0.39 | 19.73 | 0.47 | 52.56 | 0.57 | 🟢 pretrained | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 181 | true | 87993795c78576318087f70b43fbf530eb7789e7 | true | true | 2024-06-26 | 2024-05-22 | false | true | Qwen/Qwen2-72B | 0 | Qwen/Qwen2-72B |
🔶 | Sao10K/L3-70B-Euryale-v2.1 | 35.11 | 72.81 | 0.73 | 49.19 | 0.65 | 20.24 | 0.2 | 10.85 | 0.33 | 12.05 | 0.42 | 45.51 | 0.51 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | cc-by-nc-4.0 | 70 | 110 | true | 36ad832b771cd783ea7ad00ed39e61f679b1a7c6 | true | true | 2024-06-26 | 2024-06-11 | true | false | Sao10K/L3-70B-Euryale-v2.1 | 0 | Sao10K/L3-70B-Euryale-v2.1 |
💬 | microsoft/Phi-3.5-MoE-instruct | 35.1 | 69.25 | 0.69 | 48.77 | 0.64 | 20.54 | 0.21 | 14.09 | 0.36 | 17.33 | 0.46 | 40.64 | 0.47 | 💬 chat models (RLHF, DPO, IFT, ...) | Phi3ForCausalLM | Original | bfloat16 | true | mit | 42 | 467 | true | 482a9ba0eb0e1fa1671e3560e009d7cec2e5147c | true | false | 2024-08-21 | 2024-08-17 | true | true | microsoft/Phi-3.5-MoE-instruct | 0 | microsoft/Phi-3.5-MoE-instruct |
💬 | abacusai/Smaug-Llama-3-70B-Instruct-32K | 34.72 | 77.61 | 0.78 | 49.07 | 0.65 | 21.22 | 0.21 | 6.15 | 0.3 | 12.43 | 0.42 | 41.83 | 0.48 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 20 | true | 33840982dc253968f32ef3a534ee0e025eb97482 | true | true | 2024-08-06 | 2024-06-11 | true | true | abacusai/Smaug-Llama-3-70B-Instruct-32K | 0 | abacusai/Smaug-Llama-3-70B-Instruct-32K |
🔶 | BAAI/Infinity-Instruct-3M-0613-Llama3-70B | 34.47 | 68.21 | 0.68 | 51.33 | 0.66 | 14.88 | 0.15 | 14.43 | 0.36 | 16.53 | 0.45 | 41.44 | 0.47 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 70 | 5 | true | 9fc53668064bdda22975ca72c5a287f8241c95b3 | true | true | 2024-06-28 | 2024-06-27 | true | false | BAAI/Infinity-Instruct-3M-0613-Llama3-70B | 0 | BAAI/Infinity-Instruct-3M-0613-Llama3-70B |
💬 | dnhkng/RYS-Llama-3-Huge-Instruct | 34.37 | 76.86 | 0.77 | 49.07 | 0.65 | 21.22 | 0.21 | 1.45 | 0.26 | 11.93 | 0.42 | 45.66 | 0.51 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | mit | 99 | 1 | true | cfe14a5339e88a7a89f075d9d48215d45f64acaf | true | true | 2024-08-07 | 2024-08-06 | true | false | dnhkng/RYS-Llama-3-Huge-Instruct | 0 | dnhkng/RYS-Llama-3-Huge-Instruct |
💬 | mistralai/Mixtral-8x22B-Instruct-v0.1 | 33.89 | 71.84 | 0.72 | 44.11 | 0.61 | 18.73 | 0.19 | 16.44 | 0.37 | 13.49 | 0.43 | 38.7 | 0.45 | 💬 chat models (RLHF, DPO, IFT, ...) | MixtralForCausalLM | Original | bfloat16 | true | apache-2.0 | 140 | 661 | true | b0c3516041d014f640267b14feb4e9a84c8e8c71 | true | false | 2024-06-12 | 2024-04-16 | true | true | mistralai/Mixtral-8x22B-Instruct-v0.1 | 1 | mistralai/Mixtral-8x22B-v0.1 |
💬 | HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1 | 33.77 | 65.11 | 0.65 | 47.5 | 0.63 | 18.35 | 0.18 | 17.11 | 0.38 | 14.72 | 0.45 | 39.85 | 0.46 | 💬 chat models (RLHF, DPO, IFT, ...) | MixtralForCausalLM | Original | float16 | true | apache-2.0 | 140 | 260 | true | a3be084543d278e61b64cd600f28157afc79ffd3 | true | true | 2024-06-12 | 2024-04-10 | true | true | HuggingFaceH4/zephyr-orpo-141b-A35b-v0.1 | 1 | mistral-community/Mixtral-8x22B-v0.1 |
💬 | jpacifico/Chocolatine-14B-Instruct-DPO-v1.2 | 33.3 | 68.52 | 0.69 | 49.85 | 0.64 | 17.98 | 0.18 | 10.07 | 0.33 | 12.35 | 0.43 | 41.07 | 0.47 | 💬 chat models (RLHF, DPO, IFT, ...) | Phi3ForCausalLM | Original | float16 | true | mit | 13 | 6 | true | d34bbd55b48e553f28579d86f3ccae19726c6b39 | true | true | 2024-08-28 | 2024-08-12 | true | false | jpacifico/Chocolatine-14B-Instruct-DPO-v1.2 | 0 | jpacifico/Chocolatine-14B-Instruct-DPO-v1.2 |
🔶 | migtissera/Tess-v2.5.2-Qwen2-72B | 33.28 | 44.94 | 0.45 | 52.31 | 0.66 | 27.42 | 0.27 | 13.42 | 0.35 | 10.89 | 0.42 | 50.68 | 0.56 | 🔶 fine-tuned on domain-specific datasets | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 12 | true | 0435e634ad9bc8b1172395a535b78e6f25f3594f | true | true | 2024-08-10 | 2024-06-13 | true | false | migtissera/Tess-v2.5.2-Qwen2-72B | 0 | migtissera/Tess-v2.5.2-Qwen2-72B |
💬 | microsoft/Phi-3-medium-4k-instruct | 32.67 | 64.23 | 0.64 | 49.38 | 0.64 | 16.99 | 0.17 | 11.52 | 0.34 | 13.05 | 0.43 | 40.84 | 0.47 | 💬 chat models (RLHF, DPO, IFT, ...) | Phi3ForCausalLM | Original | bfloat16 | true | mit | 13 | 207 | true | d194e4e74ffad5a5e193e26af25bcfc80c7f1ffc | true | true | 2024-06-12 | 2024-05-07 | true | true | microsoft/Phi-3-medium-4k-instruct | 0 | microsoft/Phi-3-medium-4k-instruct |
💬 | 01-ai/Yi-1.5-34B-Chat | 32.63 | 60.67 | 0.61 | 44.26 | 0.61 | 23.34 | 0.23 | 15.32 | 0.36 | 13.06 | 0.43 | 39.12 | 0.45 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 34 | 231 | true | f3128b2d02d82989daae566c0a7eadc621ca3254 | true | true | 2024-06-12 | 2024-05-10 | true | true | 01-ai/Yi-1.5-34B-Chat | 0 | 01-ai/Yi-1.5-34B-Chat |
🔶 | alpindale/WizardLM-2-8x22B | 32.61 | 52.72 | 0.53 | 48.58 | 0.64 | 22.28 | 0.22 | 17.56 | 0.38 | 14.54 | 0.44 | 39.96 | 0.46 | 🔶 fine-tuned on domain-specific datasets | MixtralForCausalLM | Original | bfloat16 | true | apache-2.0 | 140 | 376 | true | 087834da175523cffd66a7e19583725e798c1b4f | true | true | 2024-06-28 | 2024-04-16 | false | false | alpindale/WizardLM-2-8x22B | 0 | alpindale/WizardLM-2-8x22B |
💬 | google/gemma-2-27b-it | 32.31 | 79.78 | 0.8 | 49.27 | 0.65 | 0.68 | 0.01 | 16.67 | 0.38 | 9.11 | 0.4 | 38.35 | 0.45 | 💬 chat models (RLHF, DPO, IFT, ...) | Gemma2ForCausalLM | Original | bfloat16 | true | gemma | 27 | 397 | true | f6c533e5eb013c7e31fc74ef042ac4f3fb5cf40b | true | true | 2024-08-07 | 2024-06-24 | true | true | google/gemma-2-27b-it | 1 | google/gemma-2-27b |
💬 | MaziyarPanahi/calme-2.4-llama3-70b | 32.18 | 50.27 | 0.5 | 48.4 | 0.64 | 22.66 | 0.23 | 11.97 | 0.34 | 13.1 | 0.43 | 46.71 | 0.52 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 13 | true | cb03e4d810b82d86e7cb01ab146bade09a5d06d1 | true | true | 2024-06-26 | 2024-04-28 | true | false | MaziyarPanahi/calme-2.4-llama3-70b | 2 | meta-llama/Meta-Llama-3-70B |
🤝 | paloalma/TW3-JRGL-v2 | 32.12 | 53.16 | 0.53 | 45.61 | 0.61 | 15.86 | 0.16 | 14.54 | 0.36 | 20.7 | 0.49 | 42.87 | 0.49 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | false | apache-2.0 | 72 | 0 | true | aca3f0ba2bfb90038a9e2cd5b486821d4c181b46 | true | true | 2024-08-29 | 2024-04-01 | false | false | paloalma/TW3-JRGL-v2 | 0 | paloalma/TW3-JRGL-v2 |
💬 | internlm/internlm2_5-20b-chat | 32.08 | 70.1 | 0.7 | 62.83 | 0.75 | 0 | 0 | 9.51 | 0.32 | 16.74 | 0.46 | 33.31 | 0.4 | 💬 chat models (RLHF, DPO, IFT, ...) | InternLM2ForCausalLM | Original | bfloat16 | true | other | 19 | 76 | true | ef17bde929761255fee76d95e2c25969ccd93b0d | true | true | 2024-08-12 | 2024-07-30 | true | true | internlm/internlm2_5-20b-chat | 0 | internlm/internlm2_5-20b-chat |
💬 | cognitivecomputations/dolphin-2.9.2-qwen2-72b | 32 | 40.38 | 0.4 | 47.7 | 0.63 | 21.37 | 0.21 | 16 | 0.37 | 17.04 | 0.45 | 49.52 | 0.55 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 72 | 51 | true | e79582577c2bf2af304221af0e8308b7e7d46ca1 | true | true | 2024-06-27 | 2024-05-27 | true | true | cognitivecomputations/dolphin-2.9.2-qwen2-72b | 1 | Qwen/Qwen2-72B |
💬 | MTSAIR/MultiVerse_70B | 31.73 | 52.49 | 0.52 | 46.14 | 0.62 | 16.16 | 0.16 | 13.87 | 0.35 | 18.82 | 0.47 | 42.89 | 0.49 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | other | 72 | 38 | true | 063430cdc4d972a0884e3e3e3d45ea4afbdf71a2 | true | true | 2024-06-29 | 2024-03-25 | false | false | MTSAIR/MultiVerse_70B | 0 | MTSAIR/MultiVerse_70B |
🤝 | paloalma/Le_Triomphant-ECE-TW3 | 31.66 | 54.02 | 0.54 | 44.96 | 0.61 | 17.45 | 0.17 | 13.2 | 0.35 | 18.5 | 0.47 | 41.81 | 0.48 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | false | apache-2.0 | 72 | 3 | true | f72399253bb3e65c0f55e50461488c098f658a49 | true | true | 2024-07-25 | 2024-04-01 | false | false | paloalma/Le_Triomphant-ECE-TW3 | 0 | paloalma/Le_Triomphant-ECE-TW3 |
🔶 | failspy/Phi-3-medium-4k-instruct-abliterated-v3 | 31.55 | 63.19 | 0.63 | 46.73 | 0.63 | 14.12 | 0.14 | 8.95 | 0.32 | 18.52 | 0.46 | 37.78 | 0.44 | 🔶 fine-tuned on domain-specific datasets | Phi3ForCausalLM | Original | bfloat16 | true | mit | 13 | 22 | true | 959b09eacf6cae85a8eb21b25e998addc89a367b | true | true | 2024-07-29 | 2024-05-22 | true | false | failspy/Phi-3-medium-4k-instruct-abliterated-v3 | 0 | failspy/Phi-3-medium-4k-instruct-abliterated-v3 |
💬 | microsoft/Phi-3-medium-128k-instruct | 31.52 | 60.4 | 0.6 | 48.46 | 0.64 | 16.16 | 0.16 | 11.52 | 0.34 | 11.35 | 0.41 | 41.24 | 0.47 | 💬 chat models (RLHF, DPO, IFT, ...) | Phi3ForCausalLM | Original | bfloat16 | true | mit | 13 | 361 | true | fa7d2aa4f5ea69b2e36b20d050cdae79c9bfbb3f | true | true | 2024-08-21 | 2024-05-07 | true | true | microsoft/Phi-3-medium-128k-instruct | 0 | microsoft/Phi-3-medium-128k-instruct |
💬 | Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO | 31.42 | 47.99 | 0.48 | 51.03 | 0.65 | 17.45 | 0.17 | 10.18 | 0.33 | 20.53 | 0.48 | 41.37 | 0.47 | 💬 chat models (RLHF, DPO, IFT, ...) | MistralForCausalLM | Original | float16 | true | mit | 13 | 3 | true | b749dbcb19901b8fd0e9f38c923a24533569f895 | true | true | 2024-08-13 | 2024-06-15 | true | false | Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO | 0 | Danielbrdz/Barcenas-14b-Phi-3-medium-ORPO |
🤝 | CombinHorizon/YiSM-blossom5.1-34B-SLERP | 31.09 | 50.33 | 0.5 | 46.4 | 0.62 | 19.79 | 0.2 | 14.09 | 0.36 | 14.37 | 0.44 | 41.56 | 0.47 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | false | apache-2.0 | 34 | 0 | true | ebd8d6507623008567a0548cd0ff9e28cbd6a656 | true | true | 2024-08-27 | 2024-08-27 | true | false | CombinHorizon/YiSM-blossom5.1-34B-SLERP | 1 | CombinHorizon/YiSM-blossom5.1-34B-SLERP (Merge) |
💬 | CohereForAI/c4ai-command-r-plus | 30.86 | 76.64 | 0.77 | 39.92 | 0.58 | 7.55 | 0.08 | 7.38 | 0.31 | 20.42 | 0.48 | 33.24 | 0.4 | 💬 chat models (RLHF, DPO, IFT, ...) | CohereForCausalLM | Original | float16 | true | cc-by-nc-4.0 | 103 | 1,652 | true | fa1bd7fb1572ceb861bbbbecfa8af83b29fa8cca | true | true | 2024-06-13 | 2024-04-03 | true | true | CohereForAI/c4ai-command-r-plus | 0 | CohereForAI/c4ai-command-r-plus |
💬 | mattshumer/ref_70_e3 | 30.74 | 62.94 | 0.63 | 49.27 | 0.65 | 0 | 0 | 11.41 | 0.34 | 13 | 0.43 | 47.81 | 0.53 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | float16 | true | llama3.1 | 70 | 46 | true | 5d2d9dbb9e0bf61879255f63f1b787296fe524cc | true | true | 2024-09-08 | 2024-09-08 | true | false | mattshumer/ref_70_e3 | 2 | meta-llama/Meta-Llama-3.1-70B |
💬 | internlm/internlm2_5-7b-chat | 30.46 | 61.4 | 0.61 | 57.67 | 0.71 | 8.31 | 0.08 | 10.63 | 0.33 | 14.35 | 0.44 | 30.42 | 0.37 | 💬 chat models (RLHF, DPO, IFT, ...) | InternLM2ForCausalLM | Original | float16 | true | other | 7 | 154 | true | bebb00121ee105b823647c3ba2b1e152652edc33 | true | true | 2024-07-03 | 2024-06-27 | true | true | internlm/internlm2_5-7b-chat | 0 | internlm/internlm2_5-7b-chat |
💬 | ValiantLabs/Llama3-70B-ShiningValiant2 | 30.45 | 61.22 | 0.61 | 46.71 | 0.63 | 7.1 | 0.07 | 10.74 | 0.33 | 13.64 | 0.43 | 43.31 | 0.49 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3 | 70 | 4 | true | bd6cce8da08ccefe9ec58cae3df4bf75c97d8950 | true | true | 2024-07-25 | 2024-04-20 | true | false | ValiantLabs/Llama3-70B-ShiningValiant2 | 0 | ValiantLabs/Llama3-70B-ShiningValiant2 |
🤝 | altomek/YiSM-34B-0rn | 30.15 | 42.84 | 0.43 | 45.38 | 0.61 | 20.62 | 0.21 | 16.22 | 0.37 | 14.76 | 0.44 | 41.06 | 0.47 | 🤝 base merges and moerges | LlamaForCausalLM | Original | float16 | false | apache-2.0 | 34 | 1 | true | 7a481c67cbdd5c846d6aaab5ef9f1eebfad812c2 | true | true | 2024-06-27 | 2024-05-26 | true | false | altomek/YiSM-34B-0rn | 1 | altomek/YiSM-34B-0rn (Merge) |
💬 | OpenBuddy/openbuddy-yi1.5-34b-v21.3-32k | 30.08 | 54.2 | 0.54 | 45.64 | 0.62 | 12.76 | 0.13 | 13.2 | 0.35 | 14.69 | 0.44 | 39.99 | 0.46 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 34 | 0 | true | 966be6ad502cdd50a9af94d5f003aec040cdb0b5 | true | false | 2024-08-30 | 2024-06-05 | true | false | OpenBuddy/openbuddy-yi1.5-34b-v21.3-32k | 0 | OpenBuddy/openbuddy-yi1.5-34b-v21.3-32k |
🤝 | paloalma/ECE-TW3-JRGL-V1 | 30.02 | 55.35 | 0.55 | 46.7 | 0.63 | 11.86 | 0.12 | 12.98 | 0.35 | 17.46 | 0.46 | 35.79 | 0.42 | 🤝 base merges and moerges | LlamaForCausalLM | Original | float16 | false | apache-2.0 | 68 | 1 | true | 2f08c7ab9db03b1b9f455c7beee6a41e99aa910e | true | true | 2024-08-04 | 2024-04-03 | false | false | paloalma/ECE-TW3-JRGL-V1 | 0 | paloalma/ECE-TW3-JRGL-V1 |
🔶 | failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5 | 29.97 | 77.47 | 0.77 | 37.87 | 0.57 | 11.86 | 0.12 | 6.26 | 0.3 | 7.97 | 0.4 | 38.36 | 0.45 | 🔶 fine-tuned on domain-specific datasets | ? | Adapter | bfloat16 | true | llama3 | 70 | 33 | true | fc951b03d92972ab52ad9392e620eba6173526b9 | true | true | 2024-08-30 | 2024-05-28 | true | false | failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5 | 0 | failspy/Meta-Llama-3-70B-Instruct-abliterated-v3.5 |
💬 | recoilme/Gemma-2-Ataraxy-Gemmasutra-9B-slerp | 29.87 | 76.49 | 0.76 | 42.25 | 0.6 | 1.74 | 0.02 | 10.74 | 0.33 | 12.39 | 0.42 | 35.63 | 0.42 | 💬 chat models (RLHF, DPO, IFT, ...) | Gemma2ForCausalLM | Original | float16 | false | apache-2.0 | 10 | 2 | true | 9048af8616bc62b6efab2bc1bc77ba53c5dfed79 | true | true | 2024-09-12 | 2024-09-11 | true | false | recoilme/Gemma-2-Ataraxy-Gemmasutra-9B-slerp | 1 | recoilme/Gemma-2-Ataraxy-Gemmasutra-9B-slerp (Merge) |
🔶 | jpacifico/Chocolatine-14B-Instruct-4k-DPO | 29.83 | 46.89 | 0.47 | 48.02 | 0.63 | 14.88 | 0.15 | 12.19 | 0.34 | 15.15 | 0.44 | 41.82 | 0.48 | 🔶 fine-tuned on domain-specific datasets | Phi3ForCausalLM | Original | float16 | true | mit | 13 | 1 | true | 30677e58010979af26b70240846fdf7ff38cbbf2 | true | true | 2024-08-08 | 2024-08-01 | false | false | jpacifico/Chocolatine-14B-Instruct-4k-DPO | 0 | jpacifico/Chocolatine-14B-Instruct-4k-DPO |
💬 | microsoft/Phi-3-small-8k-instruct | 29.64 | 64.97 | 0.65 | 46.21 | 0.62 | 2.64 | 0.03 | 8.28 | 0.31 | 16.77 | 0.46 | 38.96 | 0.45 | 💬 chat models (RLHF, DPO, IFT, ...) | Phi3SmallForCausalLM | Original | bfloat16 | true | mit | 7 | 150 | true | 1535ae26fb4faada95c6950e8bc6e867cdad6b00 | true | true | 2024-06-13 | 2024-05-07 | true | true | microsoft/Phi-3-small-8k-instruct | 0 | microsoft/Phi-3-small-8k-instruct |
💬 | Qwen/Qwen2-57B-A14B-Instruct | 29.6 | 63.38 | 0.63 | 41.79 | 0.59 | 7.7 | 0.08 | 10.85 | 0.33 | 14.18 | 0.44 | 39.73 | 0.46 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2MoeForCausalLM | Original | bfloat16 | true | apache-2.0 | 57 | 74 | true | 5ea455a449e61a92a5b194ee06be807647d3e8b5 | true | true | 2024-08-14 | 2024-06-04 | true | true | Qwen/Qwen2-57B-A14B-Instruct | 1 | Qwen/Qwen2-57B-A14B |
🟢 | Qwen/Qwen1.5-110B | 29.56 | 34.22 | 0.34 | 44.28 | 0.61 | 23.04 | 0.23 | 13.65 | 0.35 | 13.71 | 0.44 | 48.45 | 0.54 | 🟢 pretrained | Qwen2ForCausalLM | Original | bfloat16 | true | other | 111 | 90 | true | 16659038ecdcc771c1293cf47020fa7cc2750ee8 | true | true | 2024-06-13 | 2024-04-25 | false | true | Qwen/Qwen1.5-110B | 0 | Qwen/Qwen1.5-110B |
💬 | anthracite-org/magnum-v3-34b | 29.39 | 51.15 | 0.51 | 44.33 | 0.61 | 17.82 | 0.18 | 14.77 | 0.36 | 6.57 | 0.39 | 41.69 | 0.48 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | null | 34 | 26 | false | 3bcd8c3dbb93021a5ce22203c690a1a084cafb73 | true | true | 2024-09-05 | 2024-08-22 | true | false | anthracite-org/magnum-v3-34b | 0 | anthracite-org/magnum-v3-34b |
🔶 | abacusai/Smaug-72B-v0.1 | 29.35 | 51.67 | 0.52 | 43.13 | 0.6 | 16.77 | 0.17 | 9.84 | 0.32 | 14.42 | 0.45 | 40.26 | 0.46 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | other | 72 | 463 | true | a1d657156f82c24b670158406378648233487011 | true | true | 2024-06-12 | 2024-02-02 | false | true | abacusai/Smaug-72B-v0.1 | 1 | moreh/MoMo-72B-lora-1.8.7-DPO |
💬 | Qwen/Qwen1.5-110B-Chat | 29.22 | 59.39 | 0.59 | 44.98 | 0.62 | 0 | 0 | 12.19 | 0.34 | 16.29 | 0.45 | 42.5 | 0.48 | 💬 chat models (RLHF, DPO, IFT, ...) | Qwen2ForCausalLM | Original | bfloat16 | true | other | 111 | 122 | true | 85f86cec25901f2dbd870a86e06756903c9a876a | true | true | 2024-06-12 | 2024-04-25 | true | true | Qwen/Qwen1.5-110B-Chat | 0 | Qwen/Qwen1.5-110B-Chat |
🤝 | paloalma/ECE-TW3-JRGL-V5 | 29.19 | 45.53 | 0.46 | 43.46 | 0.6 | 16.54 | 0.17 | 12.19 | 0.34 | 16.89 | 0.46 | 40.53 | 0.46 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | false | apache-2.0 | 72 | 0 | true | 4061fa10de22945790cad825f7f4dec96d55b204 | true | true | 2024-08-30 | 2024-04-11 | false | false | paloalma/ECE-TW3-JRGL-V5 | 0 | paloalma/ECE-TW3-JRGL-V5 |
🤝 | DreadPoor/Heart_Stolen-8B-Model_Stock | 28.98 | 72.45 | 0.72 | 34.44 | 0.54 | 14.65 | 0.15 | 8.95 | 0.32 | 12.36 | 0.42 | 31.04 | 0.38 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | false | apache-2.0 | 8 | 2 | true | 6d77987af7115c7455ddb072c48316815b018999 | true | true | 2024-09-10 | 2024-09-09 | true | false | DreadPoor/Heart_Stolen-8B-Model_Stock | 1 | DreadPoor/Heart_Stolen-8B-Model_Stock (Merge) |
🤝 | Dampfinchen/Llama-3.1-8B-Ultra-Instruct | 28.98 | 80.81 | 0.81 | 32.49 | 0.53 | 14.95 | 0.15 | 5.59 | 0.29 | 8.61 | 0.4 | 31.4 | 0.38 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | false | llama3 | 8 | 5 | true | 46662d14130cfd34f7d90816540794f24a301f86 | true | true | 2024-08-26 | 2024-08-26 | true | false | Dampfinchen/Llama-3.1-8B-Ultra-Instruct | 1 | Dampfinchen/Llama-3.1-8B-Ultra-Instruct (Merge) |
💬 | 01-ai/Yi-1.5-34B-Chat-16K | 28.98 | 45.64 | 0.46 | 44.54 | 0.61 | 18.81 | 0.19 | 11.74 | 0.34 | 13.74 | 0.44 | 39.38 | 0.45 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 34 | 27 | true | ff74452e11f0f749ab872dc19b1dd3813c25c4d8 | true | true | 2024-07-15 | 2024-05-15 | true | true | 01-ai/Yi-1.5-34B-Chat-16K | 0 | 01-ai/Yi-1.5-34B-Chat-16K |
💬 | google/gemma-2-9b-it | 28.86 | 74.36 | 0.74 | 42.14 | 0.6 | 0.23 | 0 | 14.77 | 0.36 | 9.74 | 0.41 | 31.95 | 0.39 | 💬 chat models (RLHF, DPO, IFT, ...) | Gemma2ForCausalLM | Original | bfloat16 | true | gemma | 9 | 453 | true | 1937c70277fcc5f7fb0fc772fc5bc69378996e71 | true | true | 2024-07-11 | 2024-06-24 | true | true | google/gemma-2-9b-it | 1 | google/gemma-2-9b |
🔶 | 152334H/miqu-1-70b-sf | 28.82 | 51.82 | 0.52 | 43.81 | 0.61 | 10.8 | 0.11 | 13.42 | 0.35 | 17.21 | 0.46 | 35.87 | 0.42 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | null | 68 | 218 | false | 1dca4cce36f01f2104ee2e6b97bac6ff7bb300c1 | true | true | 2024-06-26 | 2024-01-30 | false | false | 152334H/miqu-1-70b-sf | 0 | 152334H/miqu-1-70b-sf |
🔶 | ehristoforu/Gemma2-9b-it-train6 | 28.75 | 70.25 | 0.7 | 40.99 | 0.59 | 8.38 | 0.08 | 10.51 | 0.33 | 9.65 | 0.41 | 32.69 | 0.39 | 🔶 fine-tuned on domain-specific datasets | Gemma2ForCausalLM | Original | float16 | true | apache-2.0 | 9 | 1 | true | e72bf00b427c22c48b468818cf75300a373a0c8a | true | true | 2024-07-31 | 2024-07-22 | true | false | ehristoforu/Gemma2-9b-it-train6 | 6 | unsloth/gemma-2-9b-it-bnb-4bit |
💬 | microsoft/Phi-3-small-128k-instruct | 28.59 | 63.68 | 0.64 | 45.63 | 0.62 | 0 | 0 | 8.95 | 0.32 | 14.5 | 0.44 | 38.78 | 0.45 | 💬 chat models (RLHF, DPO, IFT, ...) | Phi3SmallForCausalLM | Original | bfloat16 | true | mit | 7 | 165 | true | f80aaa30bfc64c2b8ab214b541d9050e97163bc4 | true | true | 2024-06-13 | 2024-05-07 | true | true | microsoft/Phi-3-small-128k-instruct | 0 | microsoft/Phi-3-small-128k-instruct |
🔶 | VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct | 28.56 | 80.17 | 0.8 | 31 | 0.51 | 11.18 | 0.11 | 5.37 | 0.29 | 11.52 | 0.41 | 32.12 | 0.39 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | llama3.1 | 8 | 25 | true | 23ca79966a4ab0a61f7ccc7a0454ffef553b66eb | true | true | 2024-07-29 | 2024-07-25 | true | false | VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct | 0 | VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct |
🔶 | abhishek/autotrain-llama3-70b-orpo-v2 | 28.48 | 54.06 | 0.54 | 39.88 | 0.59 | 18.73 | 0.19 | 5.82 | 0.29 | 9.95 | 0.41 | 42.42 | 0.48 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | other | 70 | 2 | true | a2c16a8a7fa48792eb8a1f0c50e13309c2021a63 | true | true | 2024-08-21 | 2024-05-04 | true | false | abhishek/autotrain-llama3-70b-orpo-v2 | 0 | abhishek/autotrain-llama3-70b-orpo-v2 |
💬 | Azure99/blossom-v5.1-34b | 28.39 | 56.97 | 0.57 | 44.15 | 0.61 | 14.43 | 0.14 | 7.94 | 0.31 | 7.3 | 0.39 | 39.53 | 0.46 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 34 | 5 | true | 2c803204f5dbf4ce37e2df98eb0205cdc53de10d | true | true | 2024-07-27 | 2024-05-19 | true | false | Azure99/blossom-v5.1-34b | 0 | Azure99/blossom-v5.1-34b |
🟢 | dnhkng/RYS-Phi-3-medium-4k-instruct | 28.38 | 43.91 | 0.44 | 46.75 | 0.62 | 11.78 | 0.12 | 13.98 | 0.35 | 11.09 | 0.43 | 42.74 | 0.48 | 🟢 pretrained | Phi3ForCausalLM | Original | bfloat16 | true | mit | 17 | 1 | true | 1009e916b1ff8c9a53bc9d8ff48bea2a15ccde26 | true | true | 2024-08-07 | 2024-08-06 | false | false | dnhkng/RYS-Phi-3-medium-4k-instruct | 0 | dnhkng/RYS-Phi-3-medium-4k-instruct |
🤝 | DeepAutoAI/ldm_soup_Llama-3.1-8B-Instruct-v0.1 | 28.28 | 78.89 | 0.79 | 31.16 | 0.51 | 10.42 | 0.1 | 5.48 | 0.29 | 11.52 | 0.41 | 32.17 | 0.39 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | true | null | 8 | 0 | false | ecd140c95985b4292c896e25a94a7629d2924ad1 | true | true | 2024-09-16 | 2024-09-15 | true | false | DeepAutoAI/ldm_soup_Llama-3.1-8B-Instruct-v0.1 | 0 | DeepAutoAI/ldm_soup_Llama-3.1-8B-Instruct-v0.1 |
🤝 | DeepAutoAI/ldm_soup_Llama-3.1-8B-Instruct-v0.0 | 28.28 | 78.89 | 0.79 | 31.16 | 0.51 | 10.42 | 0.1 | 5.48 | 0.29 | 11.52 | 0.41 | 32.17 | 0.39 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | true | null | 8 | 0 | false | 210a97b4dadbda63cc9fe459e8415d4cd3bbaf99 | true | true | 2024-09-15 | 2024-09-14 | true | false | DeepAutoAI/ldm_soup_Llama-3.1-8B-Instruct-v0.0 | 0 | DeepAutoAI/ldm_soup_Llama-3.1-8B-Instruct-v0.0 |
🤝 | bunnycore/HyperLlama-3.1-8B | 28.07 | 78.83 | 0.79 | 29.81 | 0.51 | 16.01 | 0.16 | 4.92 | 0.29 | 7.93 | 0.38 | 30.92 | 0.38 | 🤝 base merges and moerges | LlamaForCausalLM | Original | bfloat16 | false | apache-2.0 | 8 | 3 | true | 659b18ffaee2c1e8dbe8a9a56a44502325d71696 | true | true | 2024-09-05 | 2024-09-04 | true | false | bunnycore/HyperLlama-3.1-8B | 0 | bunnycore/HyperLlama-3.1-8B |
🔶 | NLPark/AnFeng_v3.1-Avocet | 28.05 | 50.96 | 0.51 | 40.31 | 0.58 | 13.9 | 0.14 | 9.96 | 0.32 | 14.98 | 0.45 | 38.2 | 0.44 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | float16 | true | cc-by-nc-nd-4.0 | 34 | 0 | true | 5170739731033323e6e66a0f68d34790042a3b2a | true | true | 2024-08-07 | 2024-08-03 | false | false | NLPark/AnFeng_v3.1-Avocet | 0 | NLPark/AnFeng_v3.1-Avocet |
🤝 | OpenBuddy/openbuddy-zero-56b-v21.2-32k | 27.99 | 50.57 | 0.51 | 44.8 | 0.61 | 12.99 | 0.13 | 9.06 | 0.32 | 12.78 | 0.43 | 37.77 | 0.44 | 🤝 base merges and moerges | LlamaForCausalLM | Original | float16 | true | other | 56 | 0 | true | c7a1a4a6e798f75d1d3219ab9ff9f2692e29f7d5 | true | true | 2024-06-26 | 2024-06-10 | true | false | OpenBuddy/openbuddy-zero-56b-v21.2-32k | 0 | OpenBuddy/openbuddy-zero-56b-v21.2-32k |
🔶 | Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 | 27.93 | 77.92 | 0.78 | 29.69 | 0.51 | 16.92 | 0.17 | 4.36 | 0.28 | 7.77 | 0.38 | 30.9 | 0.38 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | llama3.1 | 8 | 29 | true | 2340f8fbcd2452125a798686ca90b882a08fb0d9 | true | true | 2024-08-28 | 2024-08-09 | true | false | Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 | 0 | Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 |
💬 | meta-llama/Meta-Llama-3.1-8B-Instruct | 27.91 | 78.56 | 0.79 | 29.89 | 0.51 | 17.6 | 0.18 | 2.35 | 0.27 | 8.41 | 0.39 | 30.68 | 0.38 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | llama3.1 | 8 | 2,447 | true | df34336b42332c6d360959e259cd6271c6a09fd4 | true | true | 2024-08-15 | 2024-07-18 | true | true | meta-llama/Meta-Llama-3.1-8B-Instruct | 1 | meta-llama/Meta-Llama-3.1-8B |
💬 | vicgalle/Configurable-Llama-3.1-8B-Instruct | 27.77 | 83.12 | 0.83 | 29.66 | 0.5 | 15.86 | 0.16 | 3.24 | 0.27 | 5.93 | 0.38 | 28.8 | 0.36 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | float16 | true | apache-2.0 | 8 | 10 | true | 133b3ab1a5385ff9b3d17da2addfe3fc1fd6f733 | true | true | 2024-08-05 | 2024-07-24 | true | false | vicgalle/Configurable-Llama-3.1-8B-Instruct | 0 | vicgalle/Configurable-Llama-3.1-8B-Instruct |
🔶 | BAAI/Infinity-Instruct-3M-0625-Yi-1.5-9B | 27.74 | 51.86 | 0.52 | 35.38 | 0.55 | 13.97 | 0.14 | 13.87 | 0.35 | 16.72 | 0.46 | 34.65 | 0.41 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 8 | 3 | true | a42c86c61b98ca4fdf238d688fe6ea11cf414d29 | true | true | 2024-08-05 | 2024-07-09 | true | false | BAAI/Infinity-Instruct-3M-0625-Yi-1.5-9B | 0 | BAAI/Infinity-Instruct-3M-0625-Yi-1.5-9B |
🔶 | cognitivecomputations/dolphin-2.9.1-yi-1.5-34b | 27.73 | 38.53 | 0.39 | 44.17 | 0.61 | 15.18 | 0.15 | 12.42 | 0.34 | 16.97 | 0.46 | 39.1 | 0.45 | 🔶 fine-tuned on domain-specific datasets | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 34 | 34 | true | 1ec522298a6935c881df6dc29d3669833bd8672d | true | true | 2024-07-27 | 2024-05-18 | true | true | cognitivecomputations/dolphin-2.9.1-yi-1.5-34b | 1 | 01-ai/Yi-1.5-34B |
💬 | 01-ai/Yi-1.5-9B-Chat | 27.71 | 60.46 | 0.6 | 36.95 | 0.56 | 11.63 | 0.12 | 11.3 | 0.33 | 12.84 | 0.43 | 33.06 | 0.4 | 💬 chat models (RLHF, DPO, IFT, ...) | LlamaForCausalLM | Original | bfloat16 | true | apache-2.0 | 8 | 128 | true | bc87d8557c98dc1e5fdef6ec23ed31088c4d3f35 | true | true | 2024-06-12 | 2024-05-10 | true | true | 01-ai/Yi-1.5-9B-Chat | 0 | 01-ai/Yi-1.5-9B-Chat |
End of preview.
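Each preview row above is a single pipe-delimited record with no header line. A minimal sketch of pulling the model name, average, and per-benchmark scores out of one row, assuming a column order inferred from the leaderboard schema (type emoji, model, average, then value/raw pairs for IFEval, BBH, MATH Lvl 5, GPQA, MUSR, and MMLU-PRO):

```python
# One row copied verbatim (truncated to the score columns) from the preview above.
row = (
    "🟢 | Qwen/Qwen2-72B | 35.13 | 38.24 | 0.38 | 51.86 | 0.66 | 29.15 | 0.29 "
    "| 19.24 | 0.39 | 19.73 | 0.47 | 52.56 | 0.57"
)

# Split on the pipe delimiter and strip surrounding whitespace.
fields = [f.strip() for f in row.split("|")]

# Field positions are an assumption inferred from the leaderboard schema:
# field 1 = model name, field 2 = average, then alternating score / raw-score pairs.
model, average = fields[1], float(fields[2])
benchmarks = ["IFEval", "BBH", "MATH Lvl 5", "GPQA", "MUSR", "MMLU-PRO"]
scores = dict(zip(benchmarks, map(float, fields[3::2])))

print(model, average)  # Qwen/Qwen2-72B 35.13
```

The same split works on the full 34-field rows; only the index of each trailing metadata column would need to be confirmed against the dataset's actual feature list.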
Downloads last month: 2