Dataset Preview
The full dataset viewer is not available for this dataset; only a preview of the rows is shown below.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowNotImplementedError
Message:      Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1869, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 578, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 399, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1885, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 597, in finalize
                  self._build_writer(self.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 399, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'model_kwargs' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1392, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1041, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 999, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1740, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1896, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset

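The error is a schema-inference problem rather than a data problem: every row stores model_kwargs as an empty dict, so Arrow infers a struct type with no child fields, which the Parquet writer cannot serialize. Below is a minimal sketch that reproduces the failure and shows one possible workaround, serializing the empty-struct column to JSON strings before writing; the column and file names are illustrative, and this is not the dataset's actual conversion pipeline.

    import json
    import pyarrow as pa
    import pyarrow.parquet as pq

    # An all-empty-dict column infers to a struct type with no child fields.
    table = pa.table({"model_kwargs": pa.array([{}, {}], type=pa.struct([]))})

    try:
        pq.write_table(table, "repro.parquet")
    except pa.lib.ArrowNotImplementedError as err:
        print(err)  # Cannot write struct type 'model_kwargs' with no child field ...

    # One workaround: store the empty structs as JSON strings instead.
    as_strings = pa.array([json.dumps(d) for d in table.column("model_kwargs").to_pylist()])
    fixed = table.set_column(0, "model_kwargs", as_strings)
    pq.write_table(fixed, "fixed.parquet")

The error message's own suggestion, adding a dummy child field to the struct, works as well; either way the fix has to happen before the Parquet conversion step.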

Preview columns (name: type):

    config: dict
    report: dict
    name: string
    backend: dict
    scenario: dict
    launcher: dict
    environment: dict
    print_report: bool
    log_report: bool
    overall: dict
    warmup: dict
    train: dict

Each preview row below lists only its populated cells, in this column order; the remaining columns are null for that row.
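The viewer failure only affects the automatic Parquet conversion; the benchmark records themselves can still be fetched and parsed directly. A hedged sketch follows, where ORG/REPO and benchmark.json are placeholders for the actual dataset repo id and file path (neither is shown in this preview), and where each record is assumed to combine the config and report columns listed above under keys of the same names.

    import json
    from huggingface_hub import hf_hub_download

    # Placeholders: substitute the real dataset repo id and a benchmark JSON path.
    path = hf_hub_download(
        repo_id="ORG/REPO",
        filename="benchmark.json",
        repo_type="dataset",
    )

    with open(path) as f:
        record = json.load(f)

    # Assumed layout, mirroring the preview columns above.
    print(record["config"]["backend"]["model"])
    print(record["report"]["overall"]["latency"]["mean"])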
{ "name": "cpu_training_transformers_fill-mask_google-bert/bert-base-uncased", "backend": { "name": "pytorch", "version": "2.4.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model_type": "bert", "model": "google-bert/bert-base-uncased", "processor": "google-bert/bert-base-uncased", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16766.783488, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.8.0-1015-azure-x86_64-with-glibc2.39", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "08c9f59440cf4e5a5d6711ec19e8329ab2de652d", "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": null, "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2486.39488, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.6635975709999968, 0.5373347489999958, 0.5483920370000135, 0.5482384080000031, 0.5583443929999987 ], "count": 5, "total": 2.855907158000008, "mean": 0.5711814316000016, "p50": 0.5483920370000135, "p90": 0.6214962997999975, "p95": 0.6425469353999972, "p99": 0.6593874438799969, "stdev": 0.04668376407871779, "stdev_": 8.173193576679578 }, "throughput": { "unit": "samples/s", "value": 17.50757193207044 }, "energy": { "unit": "kWh", "cpu": 0.0001236084470833335, "ram": 0.000005169594697994174, "gpu": 0, "total": 0.00012877804178132768 }, "efficiency": { "unit": "samples/kWh", "value": 77652.9900724889 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 2486.39488, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.6635975709999968, 0.5373347489999958 ], "count": 2, "total": 1.2009323199999926, "mean": 0.6004661599999963, "p50": 0.6004661599999963, "p90": 0.6509712887999968, "p95": 0.6572844298999968, "p99": 0.6623349427799968, "stdev": 0.06313141100000053, "stdev_": 10.513733363425661 }, "throughput": { "unit": "samples/s", "value": 6.6614911321564305 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2486.39488, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.5483920370000135, 0.5482384080000031, 0.5583443929999987 ], "count": 3, "total": 1.6549748380000153, "mean": 0.5516582793333384, "p50": 0.5483920370000135, "p90": 0.5563539218000016, "p95": 0.5573491574000002, "p99": 0.558145345879999, "stdev": 0.004728212307700592, "stdev_": 0.8570907905913216 }, "throughput": { "unit": "samples/s", "value": 10.876298289678186 }, "energy": null, "efficiency": null } }
name: cpu_training_transformers_fill-mask_google-bert/bert-base-uncased
{ "name": "pytorch", "version": "2.4.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "library": "transformers", "model_type": "bert", "model": "google-bert/bert-base-uncased", "processor": "google-bert/bert-base-uncased", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16766.783488, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.8.0-1015-azure-x86_64-with-glibc2.39", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "08c9f59440cf4e5a5d6711ec19e8329ab2de652d", "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": null, "peft_commit": null }
print_report: true
log_report: true
{ "memory": { "unit": "MB", "max_ram": 2486.39488, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.6635975709999968, 0.5373347489999958, 0.5483920370000135, 0.5482384080000031, 0.5583443929999987 ], "count": 5, "total": 2.855907158000008, "mean": 0.5711814316000016, "p50": 0.5483920370000135, "p90": 0.6214962997999975, "p95": 0.6425469353999972, "p99": 0.6593874438799969, "stdev": 0.04668376407871779, "stdev_": 8.173193576679578 }, "throughput": { "unit": "samples/s", "value": 17.50757193207044 }, "energy": { "unit": "kWh", "cpu": 0.0001236084470833335, "ram": 0.000005169594697994174, "gpu": 0, "total": 0.00012877804178132768 }, "efficiency": { "unit": "samples/kWh", "value": 77652.9900724889 } }
{ "memory": { "unit": "MB", "max_ram": 2486.39488, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.6635975709999968, 0.5373347489999958 ], "count": 2, "total": 1.2009323199999926, "mean": 0.6004661599999963, "p50": 0.6004661599999963, "p90": 0.6509712887999968, "p95": 0.6572844298999968, "p99": 0.6623349427799968, "stdev": 0.06313141100000053, "stdev_": 10.513733363425661 }, "throughput": { "unit": "samples/s", "value": 6.6614911321564305 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 2486.39488, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.5483920370000135, 0.5482384080000031, 0.5583443929999987 ], "count": 3, "total": 1.6549748380000153, "mean": 0.5516582793333384, "p50": 0.5483920370000135, "p90": 0.5563539218000016, "p95": 0.5573491574000002, "p99": 0.558145345879999, "stdev": 0.004728212307700592, "stdev_": 0.8570907905913216 }, "throughput": { "unit": "samples/s", "value": 10.876298289678186 }, "energy": null, "efficiency": null }
{ "name": "cpu_training_transformers_fill-mask_google-bert/bert-base-uncased", "backend": { "name": "pytorch", "version": "2.3.0+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "fill-mask", "model": "google-bert/bert-base-uncased", "library": "transformers", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.346304, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1018-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": "2e77e02d1fd3ab0d2e788c3d89c12299219a25e8", "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2488.782848, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 5, "total": 2.738775005999969, "mean": 0.5477550011999938, "stdev": 0.03693447784258994, "p50": 0.5307143729999666, "p90": 0.5856752317999963, "p95": 0.6036043034000045, "p99": 0.6179475606800111, "values": [ 0.6215333750000127, 0.5307143729999666, 0.5270229210000252, 0.5276163199999928, 0.5318880169999716 ] }, "throughput": { "unit": "samples/s", "value": 18.25633719106628 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 2488.782848, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 2, "total": 1.1522477479999793, "mean": 0.5761238739999897, "stdev": 0.045409501000023056, "p50": 0.5761238739999897, "p90": 0.612451474800008, "p95": 0.6169924249000104, "p99": 0.6206251849800123, "values": [ 0.6215333750000127, 0.5307143729999666 ] }, "throughput": { "unit": "samples/s", "value": 6.942951300088064 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2488.782848, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 3, "total": 1.5865272579999896, "mean": 0.5288424193333299, "stdev": 0.0021671455040491463, "p50": 0.5276163199999928, "p90": 0.5310336775999758, "p95": 0.5314608472999737, "p99": 0.531802583059972, "values": [ 0.5270229210000252, 0.5276163199999928, 0.5318880169999716 ] }, "throughput": { "unit": "samples/s", "value": 11.345534663357242 }, "energy": null, "efficiency": null } }
{ "name": "cpu_training_transformers_image-classification_google/vit-base-patch16-224", "backend": { "name": "pytorch", "version": "2.4.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "library": "transformers", "model_type": "vit", "model": "google/vit-base-patch16-224", "processor": "google/vit-base-patch16-224", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16766.783488, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.8.0-1015-azure-x86_64-with-glibc2.39", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "08c9f59440cf4e5a5d6711ec19e8329ab2de652d", "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": null, "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2405.380096, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.6642791839999518, 1.4877955420000148, 1.518523646999995, 1.4944322430000057, 1.520130590000008 ], "count": 5, "total": 7.685161205999975, "mean": 1.537032241199995, "p50": 1.518523646999995, "p90": 1.6066197463999743, "p95": 1.635449465199963, "p99": 1.658513240239954, "stdev": 0.06489842944051911, "stdev_": 4.222320631989572 }, "throughput": { "unit": "samples/s", "value": 6.5060443964355485 }, "energy": { "unit": "kWh", "cpu": 0.00031560247720555535, "ram": 0.000013200145479373513, "gpu": 0, "total": 0.00032880262268492885 }, "efficiency": { "unit": "samples/kWh", "value": 30413.382710704165 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 2405.380096, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.6642791839999518, 1.4877955420000148 ], "count": 2, "total": 3.1520747259999666, "mean": 1.5760373629999833, "p50": 1.5760373629999833, "p90": 1.6466308197999582, "p95": 1.655455001899955, "p99": 1.6625143475799524, "stdev": 0.08824182099996847, "stdev_": 5.598967579804096 }, "throughput": { "unit": "samples/s", "value": 2.538010896128763 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2405.380096, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.518523646999995, 1.4944322430000057, 1.520130590000008 ], "count": 3, "total": 4.533086480000009, "mean": 1.5110288266666696, "p50": 1.518523646999995, "p90": 1.5198092014000053, "p95": 1.5199698957000067, "p99": 1.5200984511400077, "stdev": 0.011753879033600017, "stdev_": 0.7778725876149617 }, "throughput": { "unit": "samples/s", "value": 3.970804457275645 }, "energy": null, "efficiency": null } }
name: cpu_training_transformers_image-classification_google/vit-base-patch16-224
{ "name": "pytorch", "version": "2.4.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "library": "transformers", "model_type": "vit", "model": "google/vit-base-patch16-224", "processor": "google/vit-base-patch16-224", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16766.783488, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.8.0-1015-azure-x86_64-with-glibc2.39", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "08c9f59440cf4e5a5d6711ec19e8329ab2de652d", "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": null, "peft_commit": null }
print_report: true
log_report: true
{ "memory": { "unit": "MB", "max_ram": 2405.380096, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.6642791839999518, 1.4877955420000148, 1.518523646999995, 1.4944322430000057, 1.520130590000008 ], "count": 5, "total": 7.685161205999975, "mean": 1.537032241199995, "p50": 1.518523646999995, "p90": 1.6066197463999743, "p95": 1.635449465199963, "p99": 1.658513240239954, "stdev": 0.06489842944051911, "stdev_": 4.222320631989572 }, "throughput": { "unit": "samples/s", "value": 6.5060443964355485 }, "energy": { "unit": "kWh", "cpu": 0.00031560247720555535, "ram": 0.000013200145479373513, "gpu": 0, "total": 0.00032880262268492885 }, "efficiency": { "unit": "samples/kWh", "value": 30413.382710704165 } }
{ "memory": { "unit": "MB", "max_ram": 2405.380096, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.6642791839999518, 1.4877955420000148 ], "count": 2, "total": 3.1520747259999666, "mean": 1.5760373629999833, "p50": 1.5760373629999833, "p90": 1.6466308197999582, "p95": 1.655455001899955, "p99": 1.6625143475799524, "stdev": 0.08824182099996847, "stdev_": 5.598967579804096 }, "throughput": { "unit": "samples/s", "value": 2.538010896128763 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 2405.380096, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.518523646999995, 1.4944322430000057, 1.520130590000008 ], "count": 3, "total": 4.533086480000009, "mean": 1.5110288266666696, "p50": 1.518523646999995, "p90": 1.5198092014000053, "p95": 1.5199698957000067, "p99": 1.5200984511400077, "stdev": 0.011753879033600017, "stdev_": 0.7778725876149617 }, "throughput": { "unit": "samples/s", "value": 3.970804457275645 }, "energy": null, "efficiency": null }
{ "name": "cpu_training_transformers_image-classification_google/vit-base-patch16-224", "backend": { "name": "pytorch", "version": "2.3.0+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "image-classification", "model": "google/vit-base-patch16-224", "library": "transformers", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.346304, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1018-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": "2e77e02d1fd3ab0d2e788c3d89c12299219a25e8", "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2442.985472, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 5, "total": 7.2970974209999895, "mean": 1.459419484199998, "stdev": 0.05210139006345095, "p50": 1.4401334369999859, "p90": 1.521250764199999, "p95": 1.5379663595999886, "p99": 1.5513388359199802, "values": [ 1.5546819549999782, 1.4241662819999874, 1.4401334369999859, 1.4711039780000306, 1.4070117690000075 ] }, "throughput": { "unit": "samples/s", "value": 6.852039532336137 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 2442.985472, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 2, "total": 2.9788482369999656, "mean": 1.4894241184999828, "stdev": 0.06525783649999539, "p50": 1.4894241184999828, "p90": 1.541630387699979, "p95": 1.5481561713499787, "p99": 1.5533767982699782, "values": [ 1.5546819549999782, 1.4241662819999874 ] }, "throughput": { "unit": "samples/s", "value": 2.6856017371522416 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2442.985472, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 3, "total": 4.318249184000024, "mean": 1.4394163946666747, "stdev": 0.02617044676610726, "p50": 1.4401334369999859, "p90": 1.4649098698000216, "p95": 1.4680069239000262, "p99": 1.4704845671800297, "values": [ 1.4401334369999859, 1.4711039780000306, 1.4070117690000075 ] }, "throughput": { "unit": "samples/s", "value": 4.1683560241714614 }, "energy": null, "efficiency": null } }
{ "name": "cpu_training_transformers_multiple-choice_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.4.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16766.783488, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.8.0-1015-azure-x86_64-with-glibc2.39", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "08c9f59440cf4e5a5d6711ec19e8329ab2de652d", "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": null, "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2837.921792, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.8517281370000092, 0.737927233999983, 0.7432572670000184, 0.7577234400000066, 0.7178666339999893 ], "count": 5, "total": 3.8085027120000063, "mean": 0.7617005424000013, "p50": 0.7432572670000184, "p90": 0.8141262582000082, "p95": 0.8329271976000087, "p99": 0.8479679491200091, "stdev": 0.0467921387806671, "stdev_": 6.143114803782634 }, "throughput": { "unit": "samples/s", "value": 13.128518943273347 }, "energy": { "unit": "kWh", "cpu": 0.00016087763887222292, "ram": 0.000006728507669543458, "gpu": 0, "total": 0.00016760614654176637 }, "efficiency": { "unit": "samples/kWh", "value": 59663.683023152524 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 2837.921792, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.8517281370000092, 0.737927233999983 ], "count": 2, "total": 1.589655370999992, "mean": 0.794827685499996, "p50": 0.794827685499996, "p90": 0.8403480467000065, "p95": 0.8460380918500079, "p99": 0.8505901279700089, "stdev": 0.056900451500013105, "stdev_": 7.158841159919987 }, "throughput": { "unit": "samples/s", "value": 5.032537332269385 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2837.921792, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7432572670000184, 0.7577234400000066, 0.7178666339999893 ], "count": 3, "total": 2.2188473410000142, "mean": 0.7396157803333381, "p50": 0.7432572670000184, "p90": 0.7548302054000089, "p95": 0.7562768227000077, "p99": 0.7574341165400068, "stdev": 0.016473950446861275, "stdev_": 2.2273660033911953 }, "throughput": { "unit": "samples/s", "value": 8.11232015262824 }, "energy": null, "efficiency": null } }
name: cpu_training_transformers_multiple-choice_FacebookAI/roberta-base
{ "name": "pytorch", "version": "2.4.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16766.783488, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.8.0-1015-azure-x86_64-with-glibc2.39", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "08c9f59440cf4e5a5d6711ec19e8329ab2de652d", "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": null, "peft_commit": null }
print_report: true
log_report: true
{ "memory": { "unit": "MB", "max_ram": 2837.921792, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.8517281370000092, 0.737927233999983, 0.7432572670000184, 0.7577234400000066, 0.7178666339999893 ], "count": 5, "total": 3.8085027120000063, "mean": 0.7617005424000013, "p50": 0.7432572670000184, "p90": 0.8141262582000082, "p95": 0.8329271976000087, "p99": 0.8479679491200091, "stdev": 0.0467921387806671, "stdev_": 6.143114803782634 }, "throughput": { "unit": "samples/s", "value": 13.128518943273347 }, "energy": { "unit": "kWh", "cpu": 0.00016087763887222292, "ram": 0.000006728507669543458, "gpu": 0, "total": 0.00016760614654176637 }, "efficiency": { "unit": "samples/kWh", "value": 59663.683023152524 } }
{ "memory": { "unit": "MB", "max_ram": 2837.921792, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.8517281370000092, 0.737927233999983 ], "count": 2, "total": 1.589655370999992, "mean": 0.794827685499996, "p50": 0.794827685499996, "p90": 0.8403480467000065, "p95": 0.8460380918500079, "p99": 0.8505901279700089, "stdev": 0.056900451500013105, "stdev_": 7.158841159919987 }, "throughput": { "unit": "samples/s", "value": 5.032537332269385 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 2837.921792, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7432572670000184, 0.7577234400000066, 0.7178666339999893 ], "count": 3, "total": 2.2188473410000142, "mean": 0.7396157803333381, "p50": 0.7432572670000184, "p90": 0.7548302054000089, "p95": 0.7562768227000077, "p99": 0.7574341165400068, "stdev": 0.016473950446861275, "stdev_": 2.2273660033911953 }, "throughput": { "unit": "samples/s", "value": 8.11232015262824 }, "energy": null, "efficiency": null }
{ "name": "cpu_training_transformers_multiple-choice_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.3.0+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "multiple-choice", "model": "FacebookAI/roberta-base", "library": "transformers", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.346304, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1018-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": "2e77e02d1fd3ab0d2e788c3d89c12299219a25e8", "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2845.749248, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 5, "total": 3.581090587999995, "mean": 0.716218117599999, "stdev": 0.043372798377969854, "p50": 0.697155070000008, "p90": 0.7645524997999928, "p95": 0.7826806403999967, "p99": 0.7971831528799999, "values": [ 0.8008087810000006, 0.710168077999981, 0.6928677590000234, 0.697155070000008, 0.680090899999982 ] }, "throughput": { "unit": "samples/s", "value": 13.962227084549829 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 2845.749248, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 2, "total": 1.5109768589999817, "mean": 0.7554884294999908, "stdev": 0.04532035150000979, "p50": 0.7554884294999908, "p90": 0.7917447106999986, "p95": 0.7962767458499996, "p99": 0.7999023739700004, "values": [ 0.8008087810000006, 0.710168077999981 ] }, "throughput": { "unit": "samples/s", "value": 5.294588035778844 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2845.749248, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 3, "total": 2.0701137290000133, "mean": 0.6900379096666711, "stdev": 0.007248103654729414, "p50": 0.6928677590000234, "p90": 0.696297607800011, "p95": 0.6967263389000096, "p99": 0.6970693237800083, "values": [ 0.6928677590000234, 0.697155070000008, 0.680090899999982 ] }, "throughput": { "unit": "samples/s", "value": 8.695174447587021 }, "energy": null, "efficiency": null } }
{ "name": "cpu_training_transformers_text-classification_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.4.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16766.783488, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.8.0-1015-azure-x86_64-with-glibc2.39", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "08c9f59440cf4e5a5d6711ec19e8329ab2de652d", "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": null, "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2857.074688, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7181410320000055, 0.592719202000012, 0.5735279499999706, 0.5630075310000393, 0.5716755029999945 ], "count": 5, "total": 3.019071218000022, "mean": 0.6038142436000044, "p50": 0.5735279499999706, "p90": 0.6679723000000081, "p95": 0.6930566660000067, "p99": 0.7131241588000057, "stdev": 0.057981135750453355, "stdev_": 9.602478968492644 }, "throughput": { "unit": "samples/s", "value": 16.56138474041113 }, "energy": { "unit": "kWh", "cpu": 0.00013044473528333293, "ram": 0.000005455539198205685, "gpu": 0, "total": 0.00013590027448153862 }, "efficiency": { "unit": "samples/kWh", "value": 73583.36867346393 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 2857.074688, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7181410320000055, 0.592719202000012 ], "count": 2, "total": 1.3108602340000175, "mean": 0.6554301170000087, "p50": 0.6554301170000087, "p90": 0.7055988490000061, "p95": 0.7118699405000057, "p99": 0.7168868137000055, "stdev": 0.0627109149999967, "stdev_": 9.567902568626682 }, "throughput": { "unit": "samples/s", "value": 6.1028626794089575 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2857.074688, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.5735279499999706, 0.5630075310000393, 0.5716755029999945 ], "count": 3, "total": 1.7082109840000044, "mean": 0.5694036613333348, "p50": 0.5716755029999945, "p90": 0.5731574605999754, "p95": 0.573342705299973, "p99": 0.573490901059971, "stdev": 0.004585539037910888, "stdev_": 0.8053230685544305 }, "throughput": { "unit": "samples/s", "value": 10.537340040895062 }, "energy": null, "efficiency": null } }
name: cpu_training_transformers_text-classification_FacebookAI/roberta-base
{ "name": "pytorch", "version": "2.4.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "library": "transformers", "model_type": "roberta", "model": "FacebookAI/roberta-base", "processor": "FacebookAI/roberta-base", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16766.783488, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.8.0-1015-azure-x86_64-with-glibc2.39", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "08c9f59440cf4e5a5d6711ec19e8329ab2de652d", "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": null, "peft_commit": null }
print_report: true
log_report: true
{ "memory": { "unit": "MB", "max_ram": 2857.074688, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7181410320000055, 0.592719202000012, 0.5735279499999706, 0.5630075310000393, 0.5716755029999945 ], "count": 5, "total": 3.019071218000022, "mean": 0.6038142436000044, "p50": 0.5735279499999706, "p90": 0.6679723000000081, "p95": 0.6930566660000067, "p99": 0.7131241588000057, "stdev": 0.057981135750453355, "stdev_": 9.602478968492644 }, "throughput": { "unit": "samples/s", "value": 16.56138474041113 }, "energy": { "unit": "kWh", "cpu": 0.00013044473528333293, "ram": 0.000005455539198205685, "gpu": 0, "total": 0.00013590027448153862 }, "efficiency": { "unit": "samples/kWh", "value": 73583.36867346393 } }
{ "memory": { "unit": "MB", "max_ram": 2857.074688, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7181410320000055, 0.592719202000012 ], "count": 2, "total": 1.3108602340000175, "mean": 0.6554301170000087, "p50": 0.6554301170000087, "p90": 0.7055988490000061, "p95": 0.7118699405000057, "p99": 0.7168868137000055, "stdev": 0.0627109149999967, "stdev_": 9.567902568626682 }, "throughput": { "unit": "samples/s", "value": 6.1028626794089575 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 2857.074688, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.5735279499999706, 0.5630075310000393, 0.5716755029999945 ], "count": 3, "total": 1.7082109840000044, "mean": 0.5694036613333348, "p50": 0.5716755029999945, "p90": 0.5731574605999754, "p95": 0.573342705299973, "p99": 0.573490901059971, "stdev": 0.004585539037910888, "stdev_": 0.8053230685544305 }, "throughput": { "unit": "samples/s", "value": 10.537340040895062 }, "energy": null, "efficiency": null }
{ "name": "cpu_training_transformers_text-classification_FacebookAI/roberta-base", "backend": { "name": "pytorch", "version": "2.3.0+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-classification", "model": "FacebookAI/roberta-base", "library": "transformers", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.346304, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1018-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": "2e77e02d1fd3ab0d2e788c3d89c12299219a25e8", "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2826.752, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 5, "total": 2.882509665999976, "mean": 0.5765019331999952, "stdev": 0.04978939696949581, "p50": 0.5569985249999831, "p90": 0.6300386333999881, "p95": 0.6525100941999881, "p99": 0.670487262839988, "values": [ 0.674981554999988, 0.5569985249999831, 0.5424031590000027, 0.5626242509999884, 0.5455021760000136 ] }, "throughput": { "unit": "samples/s", "value": 17.34599560576128 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 2826.752, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 2, "total": 1.2319800799999712, "mean": 0.6159900399999856, "stdev": 0.05899151500000244, "p50": 0.6159900399999856, "p90": 0.6631832519999875, "p95": 0.6690824034999878, "p99": 0.673801724699988, "values": [ 0.674981554999988, 0.5569985249999831 ] }, "throughput": { "unit": "samples/s", "value": 6.493611487614465 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2826.752, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 3, "total": 1.6505295860000047, "mean": 0.5501765286666682, "stdev": 0.00889233078021606, "p50": 0.5455021760000136, "p90": 0.5591998359999935, "p95": 0.5609120434999909, "p99": 0.5622818094999888, "values": [ 0.5424031590000027, 0.5626242509999884, 0.5455021760000136 ] }, "throughput": { "unit": "samples/s", "value": 10.905590637500968 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
{ "name": "cpu_training_transformers_text-generation_openai-community/gpt2", "backend": { "name": "pytorch", "version": "2.4.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model_type": "gpt2", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16766.783488, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.8.0-1015-azure-x86_64-with-glibc2.39", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "08c9f59440cf4e5a5d6711ec19e8329ab2de652d", "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": null, "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2841.710592, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7321619030000193, 0.6079302050000024, 0.627617966999992, 0.6354264059999934, 0.6183757500000127 ], "count": 5, "total": 3.2215122310000197, "mean": 0.644302446200004, "p50": 0.627617966999992, "p90": 0.6934677042000089, "p95": 0.712814803600014, "p99": 0.7282924831200183, "stdev": 0.044881117612553555, "stdev_": 6.965846222881107 }, "throughput": { "unit": "samples/s", "value": 15.520661234453556 }, "energy": { "unit": "kWh", "cpu": 0.00013758844127777795, "ram": 0.000005754325319195459, "gpu": 0, "total": 0.0001433427665969734 }, "efficiency": { "unit": "samples/kWh", "value": 69762.85052538636 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 2841.710592, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7321619030000193, 0.6079302050000024 ], "count": 2, "total": 1.3400921080000217, "mean": 0.6700460540000108, "p50": 0.6700460540000108, "p90": 0.7197387332000176, "p95": 0.7259503181000184, "p99": 0.730919586020019, "stdev": 0.06211584900000844, "stdev_": 9.270385017446491 }, "throughput": { "unit": "samples/s", "value": 5.969738909916684 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2841.710592, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.627617966999992, 0.6354264059999934, 0.6183757500000127 ], "count": 3, "total": 1.881420122999998, "mean": 0.6271400409999993, "p50": 0.627617966999992, "p90": 0.6338647181999931, "p95": 0.6346455620999933, "p99": 0.6352702372199934, "stdev": 0.006969099772257625, "stdev_": 1.1112509673509483 }, "throughput": { "unit": "samples/s", "value": 9.56724113872998 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cpu_training_transformers_text-generation_openai-community/gpt2
{ "name": "pytorch", "version": "2.4.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "library": "transformers", "model_type": "gpt2", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16766.783488, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.8.0-1015-azure-x86_64-with-glibc2.39", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "08c9f59440cf4e5a5d6711ec19e8329ab2de652d", "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": null, "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 2841.710592, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7321619030000193, 0.6079302050000024, 0.627617966999992, 0.6354264059999934, 0.6183757500000127 ], "count": 5, "total": 3.2215122310000197, "mean": 0.644302446200004, "p50": 0.627617966999992, "p90": 0.6934677042000089, "p95": 0.712814803600014, "p99": 0.7282924831200183, "stdev": 0.044881117612553555, "stdev_": 6.965846222881107 }, "throughput": { "unit": "samples/s", "value": 15.520661234453556 }, "energy": { "unit": "kWh", "cpu": 0.00013758844127777795, "ram": 0.000005754325319195459, "gpu": 0, "total": 0.0001433427665969734 }, "efficiency": { "unit": "samples/kWh", "value": 69762.85052538636 } }
{ "memory": { "unit": "MB", "max_ram": 2841.710592, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.7321619030000193, 0.6079302050000024 ], "count": 2, "total": 1.3400921080000217, "mean": 0.6700460540000108, "p50": 0.6700460540000108, "p90": 0.7197387332000176, "p95": 0.7259503181000184, "p99": 0.730919586020019, "stdev": 0.06211584900000844, "stdev_": 9.270385017446491 }, "throughput": { "unit": "samples/s", "value": 5.969738909916684 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 2841.710592, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 0.627617966999992, 0.6354264059999934, 0.6183757500000127 ], "count": 3, "total": 1.881420122999998, "mean": 0.6271400409999993, "p50": 0.627617966999992, "p90": 0.6338647181999931, "p95": 0.6346455620999933, "p99": 0.6352702372199934, "stdev": 0.006969099772257625, "stdev_": 1.1112509673509483 }, "throughput": { "unit": "samples/s", "value": 9.56724113872998 }, "energy": null, "efficiency": null }
{ "name": "cpu_training_transformers_text-generation_openai-community/gpt2", "backend": { "name": "pytorch", "version": "2.3.0+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "openai-community/gpt2", "library": "transformers", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.346304, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1018-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": "2e77e02d1fd3ab0d2e788c3d89c12299219a25e8", "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 2827.354112, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 5, "total": 3.1791685380000274, "mean": 0.6358337076000055, "stdev": 0.07846941233493662, "p50": 0.596941285000014, "p90": 0.7161873142000047, "p95": 0.7544328206000045, "p99": 0.7850292257200044, "values": [ 0.7926783270000044, 0.6014507950000052, 0.596941285000014, 0.593667699000008, 0.5944304319999958 ] }, "throughput": { "unit": "samples/s", "value": 15.727382616668173 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 2827.354112, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 2, "total": 1.3941291220000096, "mean": 0.6970645610000048, "stdev": 0.0956137659999996, "p50": 0.6970645610000048, "p90": 0.7735555738000045, "p95": 0.7831169504000044, "p99": 0.7907660516800044, "values": [ 0.7926783270000044, 0.6014507950000052 ] }, "throughput": { "unit": "samples/s", "value": 5.738349392288174 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 2827.354112, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 3, "total": 1.7850394160000178, "mean": 0.5950131386666726, "stdev": 0.0013985114990352936, "p50": 0.5944304319999958, "p90": 0.5964391144000103, "p95": 0.5966901997000121, "p99": 0.5968910679400136, "values": [ 0.596941285000014, 0.593667699000008, 0.5944304319999958 ] }, "throughput": { "unit": "samples/s", "value": 10.083810944822195 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
{ "name": "cpu_training_transformers_token-classification_microsoft/deberta-v3-base", "backend": { "name": "pytorch", "version": "2.4.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "library": "transformers", "model_type": "deberta-v2", "model": "microsoft/deberta-v3-base", "processor": "microsoft/deberta-v3-base", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16766.783488, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.8.0-1015-azure-x86_64-with-glibc2.39", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "08c9f59440cf4e5a5d6711ec19e8329ab2de652d", "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": null, "peft_commit": null }, "print_report": true, "log_report": true }
{ "overall": { "memory": { "unit": "MB", "max_ram": 4268.384256, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.3445962040000268, 1.1509175200000072, 1.1637554680000335, 1.1548053729999879, 1.1563683900000115 ], "count": 5, "total": 5.970442955000067, "mean": 1.1940885910000134, "p50": 1.1563683900000115, "p90": 1.2722599096000295, "p95": 1.3084280568000282, "p99": 1.337362574560027, "stdev": 0.07536891411331247, "stdev_": 6.311836046452236 }, "throughput": { "unit": "samples/s", "value": 8.37458801245668 }, "energy": { "unit": "kWh", "cpu": 0.0002454654149888872, "ram": 0.00001026645591635923, "gpu": 0, "total": 0.00025573187090524645 }, "efficiency": { "unit": "samples/kWh", "value": 39103.456149605976 } }, "warmup": { "memory": { "unit": "MB", "max_ram": 4268.384256, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.3445962040000268, 1.1509175200000072 ], "count": 2, "total": 2.495513724000034, "mean": 1.247756862000017, "p50": 1.247756862000017, "p90": 1.3252283356000247, "p95": 1.3349122698000258, "p99": 1.3426594171600266, "stdev": 0.09683934200000976, "stdev_": 7.761074689245703 }, "throughput": { "unit": "samples/s", "value": 3.205752756661614 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 4268.384256, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.1637554680000335, 1.1548053729999879, 1.1563683900000115 ], "count": 3, "total": 3.474929231000033, "mean": 1.1583097436666776, "p50": 1.1563683900000115, "p90": 1.1622780524000291, "p95": 1.1630167602000312, "p99": 1.163607726440033, "stdev": 0.0039032200955765547, "stdev_": 0.3369755039114796 }, "throughput": { "unit": "samples/s", "value": 5.179961605957618 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
null
null
cpu_training_transformers_token-classification_microsoft/deberta-v3-base
{ "name": "pytorch", "version": "2.4.1+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "library": "transformers", "model_type": "deberta-v2", "model": "microsoft/deberta-v3-base", "processor": "microsoft/deberta-v3-base", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "model_kwargs": {}, "processor_kwargs": {}, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "evaluation_strategy": "no", "eval_strategy": "no", "save_strategy": "no", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": true }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": null, "numactl": false, "numactl_kwargs": {}, "start_method": "spawn" }
{ "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16766.783488, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.8.0-1015-azure-x86_64-with-glibc2.39", "processor": "x86_64", "python_version": "3.10.15", "optimum_benchmark_version": "0.5.0.dev0", "optimum_benchmark_commit": "08c9f59440cf4e5a5d6711ec19e8329ab2de652d", "transformers_version": "4.45.2", "transformers_commit": null, "accelerate_version": "1.0.1", "accelerate_commit": null, "diffusers_version": "0.30.3", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "1.0.9", "timm_commit": null, "peft_version": null, "peft_commit": null }
true
true
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": { "unit": "MB", "max_ram": 4268.384256, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.3445962040000268, 1.1509175200000072, 1.1637554680000335, 1.1548053729999879, 1.1563683900000115 ], "count": 5, "total": 5.970442955000067, "mean": 1.1940885910000134, "p50": 1.1563683900000115, "p90": 1.2722599096000295, "p95": 1.3084280568000282, "p99": 1.337362574560027, "stdev": 0.07536891411331247, "stdev_": 6.311836046452236 }, "throughput": { "unit": "samples/s", "value": 8.37458801245668 }, "energy": { "unit": "kWh", "cpu": 0.0002454654149888872, "ram": 0.00001026645591635923, "gpu": 0, "total": 0.00025573187090524645 }, "efficiency": { "unit": "samples/kWh", "value": 39103.456149605976 } }
{ "memory": { "unit": "MB", "max_ram": 4268.384256, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.3445962040000268, 1.1509175200000072 ], "count": 2, "total": 2.495513724000034, "mean": 1.247756862000017, "p50": 1.247756862000017, "p90": 1.3252283356000247, "p95": 1.3349122698000258, "p99": 1.3426594171600266, "stdev": 0.09683934200000976, "stdev_": 7.761074689245703 }, "throughput": { "unit": "samples/s", "value": 3.205752756661614 }, "energy": null, "efficiency": null }
{ "memory": { "unit": "MB", "max_ram": 4268.384256, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "values": [ 1.1637554680000335, 1.1548053729999879, 1.1563683900000115 ], "count": 3, "total": 3.474929231000033, "mean": 1.1583097436666776, "p50": 1.1563683900000115, "p90": 1.1622780524000291, "p95": 1.1630167602000312, "p99": 1.163607726440033, "stdev": 0.0039032200955765547, "stdev_": 0.3369755039114796 }, "throughput": { "unit": "samples/s", "value": 5.179961605957618 }, "energy": null, "efficiency": null }
{ "name": "cpu_training_transformers_token-classification_microsoft/deberta-v3-base", "backend": { "name": "pytorch", "version": "2.3.0+cpu", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "token-classification", "model": "microsoft/deberta-v3-base", "library": "transformers", "device": "cpu", "device_ids": null, "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": false }, "no_weights": true, "device_map": null, "torch_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "autocast_enabled": false, "autocast_dtype": null, "torch_compile": false, "torch_compile_target": "forward", "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }, "scenario": { "name": "training", "_target_": "optimum_benchmark.scenarios.training.scenario.TrainingScenario", "max_steps": 5, "warmup_steps": 2, "dataset_shapes": { "dataset_size": 500, "sequence_length": 16, "num_choices": 1 }, "training_arguments": { "per_device_train_batch_size": 2, "gradient_accumulation_steps": 1, "output_dir": "./trainer_output", "do_train": true, "use_cpu": false, "max_steps": 5, "do_eval": false, "do_predict": false, "report_to": "none", "skip_memory_metrics": true, "ddp_find_unused_parameters": false }, "latency": true, "memory": true, "energy": false }, "launcher": { "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "error", "start_method": "spawn" }, "environment": { "cpu": " AMD EPYC 7763 64-Core Processor", "cpu_count": 4, "cpu_ram_mb": 16757.346304, "system": "Linux", "machine": "x86_64", "platform": "Linux-6.5.0-1018-azure-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.10.14", "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": "2e77e02d1fd3ab0d2e788c3d89c12299219a25e8", "transformers_version": "4.40.2", "transformers_commit": null, "accelerate_version": "0.30.0", "accelerate_commit": null, "diffusers_version": "0.27.2", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": "0.9.16", "timm_commit": null, "peft_version": null, "peft_commit": null } }
{ "overall": { "memory": { "unit": "MB", "max_ram": 4374.970368, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 5, "total": 5.583659903000068, "mean": 1.1167319806000138, "stdev": 0.07853258776030796, "p50": 1.0731834320000075, "p90": 1.2025418994000006, "p95": 1.2374836681999908, "p99": 1.265437083239983, "values": [ 1.2724254369999812, 1.0731834320000075, 1.0977165930000297, 1.0711078369999996, 1.0692266040000504 ] }, "throughput": { "unit": "samples/s", "value": 8.954700119385008 }, "energy": null, "efficiency": null }, "warmup": { "memory": { "unit": "MB", "max_ram": 4374.970368, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 2, "total": 2.3456088689999888, "mean": 1.1728044344999944, "stdev": 0.09962100249998684, "p50": 1.1728044344999944, "p90": 1.252501236499984, "p95": 1.2624633367499825, "p99": 1.2704330169499816, "values": [ 1.2724254369999812, 1.0731834320000075 ] }, "throughput": { "unit": "samples/s", "value": 3.4106283045436583 }, "energy": null, "efficiency": null }, "train": { "memory": { "unit": "MB", "max_ram": 4374.970368, "max_global_vram": null, "max_process_vram": null, "max_reserved": null, "max_allocated": null }, "latency": { "unit": "s", "count": 3, "total": 3.2380510340000797, "mean": 1.0793503446666932, "stdev": 0.01300958794585394, "p50": 1.0711078369999996, "p90": 1.0923948418000236, "p95": 1.0950557174000266, "p99": 1.097184417880029, "values": [ 1.0977165930000297, 1.0711078369999996, 1.0692266040000504 ] }, "throughput": { "unit": "samples/s", "value": 5.558899415419021 }, "energy": null, "efficiency": null } }
null
null
null
null
null
null
null
null
null
null
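The report rows above all share the same schema (memory, latency, throughput, energy, efficiency). Below is a minimal sketch, using only the Python standard library, of how such a row could be parsed and its headline numbers read out. The keys memory.max_ram, latency.mean and throughput.value are taken from the preview rows; the row_json literal is a hypothetical stand-in assembled from those fields, not an actual row of the dataset.

import json

# Minimal sketch (not part of the dataset itself): parse one benchmark-report
# row like those shown above and print its headline numbers. The keys used
# here come from the preview rows; row_json is a hypothetical stand-in.
row_json = """
{ "memory": { "unit": "MB", "max_ram": 2841.710592 },
  "latency": { "unit": "s", "mean": 0.6271400409999993 },
  "throughput": { "unit": "samples/s", "value": 9.56724113872998 } }
"""

row = json.loads(row_json)
print(f"max RAM:      {row['memory']['max_ram']} {row['memory']['unit']}")
print(f"mean latency: {row['latency']['mean']} {row['latency']['unit']}")
print(f"throughput:   {row['throughput']['value']} {row['throughput']['unit']}")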
