Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowNotImplementedError
Message:      Cannot write struct type 'torch_compile_config' with no child field to Parquet. Consider adding a dummy child field.
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1870, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 620, in write_table
                  self._build_writer(inferred_schema=pa_table.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 441, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'torch_compile_config' with no child field to Parquet. Consider adding a dummy child field.
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1886, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 639, in finalize
                  self._build_writer(self.schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 441, in _build_writer
                  self.pa_writer = self._WRITER_CLASS(self.stream, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/pyarrow/parquet/core.py", line 1010, in __init__
                  self.writer = _parquet.ParquetWriter(
                File "pyarrow/_parquet.pyx", line 2157, in pyarrow._parquet.ParquetWriter.__cinit__
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowNotImplementedError: Cannot write struct type 'torch_compile_config' with no child field to Parquet. Consider adding a dummy child field.
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1417, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1049, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 924, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1000, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1741, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1897, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
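The root cause is visible in the message: Arrow infers `torch_compile_config: {}` as a struct type with zero child fields, which the Parquet writer cannot represent. A minimal workaround sketch, assuming you control the rows before they are uploaded (the column name is taken from this dataset; the rest is illustrative): serialize empty-dict columns to JSON strings so the inferred Arrow type is a plain string.

```python
import json

# A hypothetical row mirroring this dataset's shape: an empty dict cell is
# inferred by Arrow as a struct type with no child fields.
row = {"experiment_name": "text_generation", "torch_compile_config": {}}

# Replace empty-dict values with their JSON text so the column becomes a
# string column, which Parquet can always store.
for key, value in list(row.items()):
    if isinstance(value, dict) and not value:
        row[key] = json.dumps(value)

print(row["torch_compile_config"])  # prints {}
```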


Schema (column: type):

experiment_name: string
backend: dict
launcher: dict
benchmark: dict
environment: dict
prefill: dict
decode: dict
per_token: dict
preprocess: dict
timestamp: timestamp[us]
project_name: string
run_id: string
duration: float64
emissions: float64
emissions_rate: float64
cpu_power: float64
gpu_power: float64
ram_power: float64
cpu_energy: float64
gpu_energy: float64
ram_energy: float64
energy_consumed: float64
country_name: string
country_iso_code: string
region: string
cloud_provider: string
cloud_region: string
os: string
python_version: string
codecarbon_version: string
cpu_count: int64
cpu_model: string
gpu_count: int64
gpu_model: string
longitude: float64
latitude: float64
ram_total_size: float64
tracking_mode: string
on_cloud: string
pue: float64
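The schema mixes scalar columns with nested dict columns (backend, launcher, benchmark, environment, the energy cells). A small illustrative helper, assuming each dict cell is a plain JSON-style object, for flattening one row into dotted column names:

```python
def flatten(record, prefix=""):
    """Recursively flatten nested dict cells into dotted column names."""
    out = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, prefix=name + "."))
        else:
            out[name] = value
    return out

# Toy row shaped like the rows below (values abbreviated).
row = {"experiment_name": "x",
       "backend": {"name": "pytorch", "hub_kwargs": {"revision": "main"}}}
print(flatten(row))
```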
experiment_name: text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "NousResearch/Hermes-3-Llama-3.1-70B", "processor": "NousResearch/Hermes-3-Llama-3.1-70B", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz", "cpu_count": 96, "cpu_ram_mb": 1204529.905664, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A100-SXM4-80GB" ], "gpu_count": 1, "gpu_vram_mb": 85899345920, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
(prefill through pue: null)
(next row; experiment_name through environment: null)
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.010771365165906027, "ram": 0.000042438637758223256, "gpu": 0.03499995191660332, "total": 0.04581375572026757 }, "efficiency": { "unit": "tokens/kWh", "value": 7896383.83303204 }, "measures": [ { "unit": "kWh", "cpu": 0.011991021439712495, "ram": 0.0000472154011431394, "gpu": 0.03904179484450765, "total": 0.05108003168536328 }, { "unit": "kWh", "cpu": 0.011924504528505105, "ram": 0.00004698183313556059, "gpu": 0.0383002809179942, "total": 0.05027176727963486 }, { "unit": "kWh", "cpu": 0.011993333750839039, "ram": 0.00004725707737948754, "gpu": 0.039085344879353556, "total": 0.05112593570757207 }, { "unit": "kWh", "cpu": 0.011946801094772919, "ram": 0.00004707362635401354, "gpu": 0.03875589711580574, "total": 0.05074977183693266 }, { "unit": "kWh", "cpu": 0.01197381732976064, "ram": 0.00004717986447721291, "gpu": 0.038945671434291285, "total": 0.05096666862852914 }, { "unit": "kWh", "cpu": 0.011987848265313857, "ram": 0.0000472352374466683, "gpu": 0.03906375180652333, "total": 0.05109883530928383 }, { "unit": "kWh", "cpu": 0, "ram": 0, "gpu": 0, "total": 0 }, { "unit": "kWh", "cpu": 0.011961736531058947, "ram": 0.00004713317937246131, "gpu": 0.03891160196259591, "total": 0.05092047167302732 }, { "unit": "kWh", "cpu": 0.011979645344180362, "ram": 0.00004720366638520082, "gpu": 0.03902127205030581, "total": 0.05104812106087142 }, { "unit": "kWh", "cpu": 0.011954943374916913, "ram": 0.000047106491888488104, "gpu": 0.03887390415465575, "total": 0.05087595402146117 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.006909195393007252, "ram": 0.000027241396927033754, "gpu": 0.016518416020279855, "total": 0.023454852810214137 }, "efficiency": { "unit": "tokens/kWh", "value": 383715.9018998692 }, "measures": [ { "unit": "kWh", "cpu": 0.007650506008230138, "ram": 0.000030191938381548358, "gpu": 0.01784508344273661, "total": 0.025525781389348315 }, { "unit": "kWh", "cpu": 0.0077314414274568515, "ram": 0.00003048246007387592, "gpu": 0.018924916806568604, "total": 0.026686840694099358 }, { "unit": "kWh", "cpu": 0.007653596007886024, "ram": 0.000030171857720687098, "gpu": 0.018020290527331895, "total": 0.025704058392938656 }, { "unit": "kWh", "cpu": 0.007697896991887443, "ram": 0.000030347271183221454, "gpu": 0.018528098155798034, "total": 0.026256342418868733 }, { "unit": "kWh", "cpu": -0.01197381732976064, "ram": -0.00004717986447721291, "gpu": -0.038945671434291285, "total": -0.05096666862852914 }, { "unit": "kWh", "cpu": 0.007658830678214622, "ram": 0.000030193639633192956, "gpu": 0.018433706969190666, "total": 0.02612273128703846 }, { "unit": "kWh", "cpu": 0.019649733130168157, "ram": 0.00007744103120943546, "gpu": 0.057579361619005454, "total": 0.07730653578038305 }, { "unit": "kWh", "cpu": 0.007676097559245948, "ram": 0.000030260951420738253, "gpu": 0.018055022221773243, "total": 0.025761380732439998 }, { "unit": "kWh", "cpu": 0.007661823454623434, "ram": 0.000030205083447398784, "gpu": 0.018397531384707122, "total": 0.02608955992277784 }, { "unit": "kWh", "cpu": 0.007685846002120536, "ram": 0.000030299600677452185, "gpu": 0.01834582050997824, "total": 0.026061966112776114 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": null, "efficiency": null, "measures": null }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.00003136402598271767, "ram": 9.47190721638247e-8, "gpu": 0.000020315571802598242, "total": 0.000051774316857479744 }, "efficiency": { "unit": "samples/kWh", "value": 19314595.743536726 }, "measures": null }
(timestamp through pue: null)
experiment_name: text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "NousResearch/Hermes-3-Llama-3.1-8B", "processor": "NousResearch/Hermes-3-Llama-3.1-8B", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz", "cpu_count": 96, "cpu_ram_mb": 1204529.905664, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A100-SXM4-80GB" ], "gpu_count": 1, "gpu_vram_mb": 85899345920, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
(prefill through pue: null)
(next row; experiment_name through preprocess: null)
timestamp: 2024-10-31T01:45:36
project_name: codecarbon
run_id: 9b44e373-79e1-42d2-a8a3-7a17097cc038
duration: -1,726,061,022.483971
emissions: 0.0284
emissions_rate: 0.000048
cpu_power: 120
gpu_power: 349.599509
ram_power: 0.472934
cpu_energy: 0.019641
gpu_energy: 0.05722
ram_energy: 0.000077
energy_consumed: 0.076938
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 96
cpu_model: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
gpu_count: 1
gpu_model: 1 x NVIDIA A100-SXM4-80GB
longitude: -77.4903
latitude: 39.0469
ram_total_size: 1,121.805893
tracking_mode: process
on_cloud: N
pue: 1
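Each CodeCarbon row reports both energy_consumed (kWh) and emissions (kg CO2eq), so the implied grid carbon intensity can be checked. A sketch using the two values from the run tracked above; the intensity figure is an observation derived here, not a field in the dataset:

```python
# Values copied from the CodeCarbon row above.
emissions_kg = 0.0284    # kg CO2eq
energy_kwh = 0.076938    # kWh

# Implied grid carbon intensity for this run.
intensity = emissions_kg / energy_kwh
print(f"{intensity:.3f} kg CO2eq/kWh")  # ~0.369, plausible for US/Virginia
```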
(next row; experiment_name through preprocess: null)
timestamp: 2024-10-31T00:07:22
project_name: codecarbon
run_id: 9b44e373-79e1-42d2-a8a3-7a17097cc038
duration: -1,726,061,253.057946
emissions: 0.01878
emissions_rate: 0.000052
cpu_power: 120
gpu_power: 390.207659
ram_power: 0.472846
cpu_energy: 0.011955
gpu_energy: 0.038874
ram_energy: 0.000047
energy_consumed: 0.050876
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 96
cpu_model: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
gpu_count: 1
gpu_model: 1 x NVIDIA A100-SXM4-80GB
longitude: -77.4903
latitude: 39.0469
ram_total_size: 1,121.805893
tracking_mode: process
on_cloud: N
pue: 1
(next row; experiment_name through preprocess: null)
timestamp: 2024-10-30T23:07:28
project_name: codecarbon
run_id: 9b44e373-79e1-42d2-a8a3-7a17097cc038
duration: -1,726,061,610.765814
emissions: 0.000019
emissions_rate: 0.00002
cpu_power: 120
gpu_power: 78.098636
ram_power: 0.364706
cpu_energy: 0.000031
gpu_energy: 0.00002
ram_energy: 0
energy_consumed: 0.000052
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 96
cpu_model: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
gpu_count: 1
gpu_model: 1 x NVIDIA A100-SXM4-80GB
longitude: -77.4903
latitude: 39.0469
ram_total_size: 1,121.805893
tracking_mode: process
on_cloud: N
pue: 1
experiment_name: text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO", "processor": "NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPO", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz", "cpu_count": 96, "cpu_ram_mb": 1204529.905664, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A100-SXM4-80GB" ], "gpu_count": 1, "gpu_vram_mb": 85899345920, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
(prefill through pue: null)
experiment_name: text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "google/gemma-2-27b-it", "processor": "google/gemma-2-27b-it", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz", "cpu_count": 96, "cpu_ram_mb": 1204529.905664, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A100-SXM4-80GB" ], "gpu_count": 1, "gpu_vram_mb": 85899345920, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
(prefill through pue: null)
experiment_name: text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "google/gemma-2-9b-it", "processor": "google/gemma-2-9b-it", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz", "cpu_count": 96, "cpu_ram_mb": 1204529.905664, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A100-SXM4-80GB" ], "gpu_count": 1, "gpu_vram_mb": 85899345920, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
(prefill through pue: null)
(next row; experiment_name through environment: null)
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.010796036717298752, "ram": 0.00004221247271874213, "gpu": 0.035094443769978056, "total": 0.04593269295999555 }, "efficiency": { "unit": "tokens/kWh", "value": 7875937.087230494 }, "measures": [ { "unit": "kWh", "cpu": 0.011980346283999582, "ram": 0.0000468146921658609, "gpu": 0.03907187736859896, "total": 0.051099038344764404 }, { "unit": "kWh", "cpu": 0.011968268100575854, "ram": 0.000046796964567169203, "gpu": 0.038862176645267255, "total": 0.05087724171041028 }, { "unit": "kWh", "cpu": 0.012002059256316474, "ram": 0.00004692907708999362, "gpu": 0.03908471126774771, "total": 0.05113369960115419 }, { "unit": "kWh", "cpu": 0.011988915132762241, "ram": 0.00004688097061350007, "gpu": 0.03888492888569317, "total": 0.05092072498906891 }, { "unit": "kWh", "cpu": 0.01203697110641127, "ram": 0.0000470687174617259, "gpu": 0.039224088323692285, "total": 0.051308128147565246 }, { "unit": "kWh", "cpu": 0.012036172255718451, "ram": 0.000047065627186738225, "gpu": 0.03923518749923005, "total": 0.05131842538213527 }, { "unit": "kWh", "cpu": 0, "ram": 0, "gpu": 0, "total": 0 }, { "unit": "kWh", "cpu": 0.011944209547775489, "ram": 0.00004670645096363496, "gpu": 0.038601951159307646, "total": 0.05059286715804673 }, { "unit": "kWh", "cpu": 0.01201830078779409, "ram": 0.000046996014971729906, "gpu": 0.039131409638443415, "total": 0.05119670644120922 }, { "unit": "kWh", "cpu": 0.011985124701634056, "ram": 0.000046866212167068536, "gpu": 0.03884810691180007, "total": 0.05088009782560127 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.006907515534659228, "ram": 0.000027028633857338117, "gpu": 0.01715961075545351, "total": 0.024094154923970088 }, "efficiency": { "unit": "tokens/kWh", "value": 373534.5783406723 }, "measures": [ { "unit": "kWh", "cpu": 0.007722293895389887, "ram": 0.00003024361633084403, "gpu": 0.01979682194854604, "total": 0.027549359460266785 }, { "unit": "kWh", "cpu": 0.007694034414272747, "ram": 0.000030104407365749493, "gpu": 0.019139355033701122, "total": 0.026863493855339614 }, { "unit": "kWh", "cpu": 0.007648532584433768, "ram": 0.000029927197088757627, "gpu": 0.018361837467224973, "total": 0.026040297248747485 }, { "unit": "kWh", "cpu": 0.007686299982511739, "ram": 0.000030071536896441264, "gpu": 0.01930583988911394, "total": 0.02702221140852218 }, { "unit": "kWh", "cpu": -0.01203697110641127, "ram": -0.0000470687174617259, "gpu": -0.039224088323692285, "total": -0.051308128147565246 }, { "unit": "kWh", "cpu": 0.007651544128606726, "ram": 0.000029936437820258908, "gpu": 0.019263768466558417, "total": 0.026945249032985452 }, { "unit": "kWh", "cpu": 0.019666019202272123, "ram": 0.00007691706936913101, "gpu": 0.05781383708436749, "total": 0.07755677335600875 }, { "unit": "kWh", "cpu": 0.0077191797311727955, "ram": 0.00003020015788327701, "gpu": 0.019259535129847904, "total": 0.027008915018904067 }, { "unit": "kWh", "cpu": 0.0076452522367239095, "ram": 0.000029911338650262216, "gpu": 0.018756748894261932, "total": 0.026431912469636054 }, { "unit": "kWh", "cpu": 0.007678970277619865, "ram": 0.000030043294630385515, "gpu": 0.019122451964605602, "total": 0.026831465536855725 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": null, "efficiency": null, "measures": null }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.000031577332566181814, "ram": 9.538423654320006e-8, "gpu": 0.000021227516981525696, "total": 0.000052900233784250705 }, "efficiency": { "unit": "samples/kWh", "value": 18903508.14097379 }, "measures": null }
(timestamp through pue: null)
experiment_name: text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "meta-llama/Llama-3.1-8B-Instruct", "processor": "meta-llama/Llama-3.1-8B-Instruct", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz", "cpu_count": 96, "cpu_ram_mb": 1204529.905664, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A100-SXM4-80GB" ], "gpu_count": 1, "gpu_vram_mb": 85899345920, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
(prefill through pue: null)
(next row; experiment_name through preprocess: null)
timestamp: 2024-10-30T22:16:52
project_name: codecarbon
run_id: 44d9819e-fd7f-4e87-9ec7-b5475ec334af
duration: -1,726,061,021.783446
emissions: 0.028686
emissions_rate: 0.000049
cpu_power: 120
gpu_power: 353.767415
ram_power: 0.469344
cpu_energy: 0.019664
gpu_energy: 0.057971
ram_energy: 0.000077
energy_consumed: 0.077712
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 96
cpu_model: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
gpu_count: 1
gpu_model: 1 x NVIDIA A100-SXM4-80GB
longitude: -77.4903
latitude: 39.0469
ram_total_size: 1,121.805893
tracking_mode: process
on_cloud: N
pue: 1
(next row; experiment_name through preprocess: null)
timestamp: 2024-10-30T20:38:30
project_name: codecarbon
run_id: 44d9819e-fd7f-4e87-9ec7-b5475ec334af
duration: -1,726,061,252.153407
emissions: 0.018782
emissions_rate: 0.000052
cpu_power: 120
gpu_power: 388.96925
ram_power: 0.469252
cpu_energy: 0.011985
gpu_energy: 0.038848
ram_energy: 0.000047
energy_consumed: 0.05088
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 96
cpu_model: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
gpu_count: 1
gpu_model: 1 x NVIDIA A100-SXM4-80GB
longitude: -77.4903
latitude: 39.0469
ram_total_size: 1,121.805893
tracking_mode: process
on_cloud: N
pue: 1
null
null
null
null
null
null
null
null
null
timestamp: 2024-10-30T19:38:26
project_name: codecarbon
run_id: 44d9819e-fd7f-4e87-9ec7-b5475ec334af
duration: -1,726,061,610.76087
emissions: 0.00002
emissions_rate: 0.000021
cpu_power: 120
gpu_power: 81.038643
ram_power: 0.36463
cpu_energy: 0.000032
gpu_energy: 0.000021
ram_energy: 0
energy_consumed: 0.000053
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 96
cpu_model: Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
gpu_count: 1
gpu_model: 1 x NVIDIA A100-SXM4-80GB
longitude: -77.4903
latitude: 39.0469
ram_total_size: 1,121.805893
tracking_mode: process
on_cloud: N
pue: 1
text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "mistralai/Mistral-7B-Instruct-v0.3", "processor": "mistralai/Mistral-7B-Instruct-v0.3", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz", "cpu_count": 96, "cpu_ram_mb": 1204529.905664, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A100-SXM4-80GB" ], "gpu_count": 1, "gpu_vram_mb": 85899345920, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "mistralai/Mixtral-8x22B-Instruct-v0.1", "processor": "mistralai/Mixtral-8x22B-Instruct-v0.1", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz", "cpu_count": 96, "cpu_ram_mb": 1204529.905664, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A100-SXM4-80GB" ], "gpu_count": 1, "gpu_vram_mb": 85899345920, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "mistralai/Mixtral-8x7B-Instruct-v0.1", "processor": "mistralai/Mixtral-8x7B-Instruct-v0.1", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz", "cpu_count": 96, "cpu_ram_mb": 1204529.905664, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.223-212.873.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A100-SXM4-80GB" ], "gpu_count": 1, "gpu_vram_mb": 85899345920, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.0006784651691832389, "ram": 0.000006872767178977215, "gpu": 0.004670849625565543, "total": 0.00535618756192776 }, "efficiency": { "unit": "tokens/kWh", "value": 56280142.64151448 }, "measures": [ { "unit": "kWh", "cpu": 0.0007471547884041582, "ram": 0.000007565518406011505, "gpu": 0.005128198824777996, "total": 0.005882919131588165 }, { "unit": "kWh", "cpu": 0.0007498932815120901, "ram": 0.000007596245961500314, "gpu": 0.005162669130131636, "total": 0.005920158657605227 }, { "unit": "kWh", "cpu": 0.0007506841969611641, "ram": 0.0000076041352101112216, "gpu": 0.005159375238607655, "total": 0.005917663570778931 }, { "unit": "kWh", "cpu": 0.0007552490596986899, "ram": 0.00000765116218203134, "gpu": 0.00519234248720446, "total": 0.005955242709085182 }, { "unit": "kWh", "cpu": 0.0007571123219730605, "ram": 0.000007669590319577743, "gpu": 0.00525252309090396, "total": 0.006017305003196596 }, { "unit": "kWh", "cpu": 0.0007562589633509356, "ram": 0.000007661224940613736, "gpu": 0.005222808067132156, "total": 0.005986728255423707 }, { "unit": "kWh", "cpu": 0, "ram": 0, "gpu": 0, "total": 0 }, { "unit": "kWh", "cpu": 0.0007560068380162398, "ram": 0.000007658837285322218, "gpu": 0.0052021186061357305, "total": 0.005965784281437292 }, { "unit": "kWh", "cpu": 0.0007560777030474536, "ram": 0.000007659780890775818, "gpu": 0.005200013604452103, "total": 0.005963751088390333 }, { "unit": "kWh", "cpu": 0.000756214538868598, "ram": 0.000007661176593828245, "gpu": 0.005188447206309732, "total": 0.0059523229217721615 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.0015004224549616597, "ram": 0.000015204797701429561, "gpu": 0.00586308749602189, "total": 0.00737871474868498 }, "efficiency": { "unit": "tokens/kWh", "value": 1219724.6141821598 }, "measures": [ { "unit": "kWh", "cpu": 0.0016782613314461846, "ram": 0.000017009955692935013, "gpu": 0.006698563692180137, "total": 0.008393834979319253 }, { "unit": "kWh", "cpu": 0.0016719845362669326, "ram": 0.000016942875567914424, "gpu": 0.0065212854948022425, "total": 0.008210212906637095 }, { "unit": "kWh", "cpu": 0.0016724901459935718, "ram": 0.000016948727439907015, "gpu": 0.006527124666140693, "total": 0.00821656353957417 }, { "unit": "kWh", "cpu": 0.0016649095729317186, "ram": 0.000016871157784986704, "gpu": 0.0064939879729633, "total": 0.008175768703680013 }, { "unit": "kWh", "cpu": -0.0007571123219730605, "ram": -0.000007669590319577743, "gpu": -0.00525252309090396, "total": -0.006017305003196596 }, { "unit": "kWh", "cpu": 0.0016639060387660326, "ram": 0.000016861220430135516, "gpu": 0.00647061517648817, "total": 0.008151382435684335 }, { "unit": "kWh", "cpu": 0.002422301665296781, "ram": 0.000024544091842681722, "gpu": 0.011690915463836049, "total": 0.014137761220975514 }, { "unit": "kWh", "cpu": 0.0016633325706456384, "ram": 0.000016854834230008524, "gpu": 0.006496602697278142, "total": 0.008176790102153775 }, { "unit": "kWh", "cpu": 0.0016610744073210513, "ram": 0.000016832224697802917, "gpu": 0.0064857387996974936, "total": 0.008163645431716333 }, { "unit": "kWh", "cpu": 0.0016630766029217424, "ram": 0.00001685247964750152, "gpu": 0.006498564087736636, "total": 0.008178493170305896 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": null, "efficiency": null, "measures": null }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.000011119852669475626, "ram": 8.376988347301967e-8, "gpu": 0.00001861168155592452, "total": 0.000029815304108873166 }, "efficiency": { "unit": "samples/kWh", "value": 33539822.2452608 }, "measures": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "EleutherAI/pythia-1.4b", "processor": "EleutherAI/pythia-1.4b", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 48, "cpu_ram_mb": 200472.73984, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
timestamp: 2024-10-25T13:52:02
project_name: codecarbon
run_id: 8bd2e04f-cfc4-4707-8676-e5f40e6d32e7
duration: -1,729,480,256.414559
emissions: 0.005216
emissions_rate: 0.000025
cpu_power: 42.5
gpu_power: 205.309734
ram_power: 0.430641
cpu_energy: 0.002419
gpu_energy: 0.011687
ram_energy: 0.000025
energy_consumed: 0.014131
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 48
cpu_model: AMD EPYC 7R32
gpu_count: 1
gpu_model: 1 x NVIDIA A10G
longitude: -77.4903
latitude: 39.0469
ram_total_size: 186.704788
tracking_mode: process
on_cloud: N
pue: 1
null
null
null
null
null
null
null
null
null
timestamp: 2024-10-25T13:17:51
project_name: codecarbon
run_id: 8bd2e04f-cfc4-4707-8676-e5f40e6d32e7
duration: -1,729,480,397.287495
emissions: 0.002197
emissions_rate: 0.000034
cpu_power: 42.5
gpu_power: 291.606274
ram_power: 0.430585
cpu_energy: 0.000756
gpu_energy: 0.005188
ram_energy: 0.000008
energy_consumed: 0.005952
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 48
cpu_model: AMD EPYC 7R32
gpu_count: 1
gpu_model: 1 x NVIDIA A10G
longitude: -77.4903
latitude: 39.0469
ram_total_size: 186.704788
tracking_mode: process
on_cloud: N
pue: 1
null
null
null
null
null
null
null
null
null
timestamp: 2024-10-25T13:07:11
project_name: codecarbon
run_id: 8bd2e04f-cfc4-4707-8676-e5f40e6d32e7
duration: -1,729,480,460.397016
emissions: 0.000011
emissions_rate: 0.000012
cpu_power: 42.5
gpu_power: 71.303904
ram_power: 0.321208
cpu_energy: 0.000011
gpu_energy: 0.000019
ram_energy: 0
energy_consumed: 0.00003
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 48
cpu_model: AMD EPYC 7R32
gpu_count: 1
gpu_model: 1 x NVIDIA A10G
longitude: -77.4903
latitude: 39.0469
ram_total_size: 186.704788
tracking_mode: process
on_cloud: N
pue: 1
null
null
null
null
null
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.0006950257205501962, "ram": 0.000007229079517594969, "gpu": 0.004783902188229749, "total": 0.00548615698829754 }, "efficiency": { "unit": "tokens/kWh", "value": 54946841.77704233 }, "measures": [ { "unit": "kWh", "cpu": 0.0007641022352298832, "ram": 0.000007945458815587156, "gpu": 0.005254148925537994, "total": 0.006026196619583464 }, { "unit": "kWh", "cpu": 0.0007698822285832043, "ram": 0.000008007436582032034, "gpu": 0.005301914241527816, "total": 0.006079803906693053 }, { "unit": "kWh", "cpu": 0.0007732566303492706, "ram": 0.000008043132134785983, "gpu": 0.005305364522065981, "total": 0.006086664284550038 }, { "unit": "kWh", "cpu": 0.0007727046854467418, "ram": 0.000008037193759434171, "gpu": 0.005334005933867791, "total": 0.006114747813073966 }, { "unit": "kWh", "cpu": 0.0007735792766898762, "ram": 0.00000804650520994879, "gpu": 0.005336946213998051, "total": 0.006118571995897874 }, { "unit": "kWh", "cpu": 0.000773910086490004, "ram": 0.000008050033200335675, "gpu": 0.0053161620307038415, "total": 0.006098122150394184 }, { "unit": "kWh", "cpu": 0, "ram": 0, "gpu": 0, "total": 0 }, { "unit": "kWh", "cpu": 0.0007740681057641875, "ram": 0.000008051292602441139, "gpu": 0.005336773991638122, "total": 0.006118893390004751 }, { "unit": "kWh", "cpu": 0.0007736332350321597, "ram": 0.000008047155500604038, "gpu": 0.00532560537159199, "total": 0.006107285762124749 }, { "unit": "kWh", "cpu": 0.0007751207219166339, "ram": 0.000008062587370780708, "gpu": 0.005328100651365908, "total": 0.006111283960653323 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.001532609044249083, "ram": 0.00001594622041976216, "gpu": 0.006043184112321255, "total": 0.0075917393769901025 }, "efficiency": { "unit": "tokens/kWh", "value": 1185499.0738062232 }, "measures": [ { "unit": "kWh", "cpu": 0.0017120611431651516, "ram": 0.00001781507733712239, "gpu": 0.006835263801540092, "total": 0.008565140022042382 }, { "unit": "kWh", "cpu": 0.0017026174863408717, "ram": 0.00001771530782239789, "gpu": 0.006717194540418081, "total": 0.00843752733458136 }, { "unit": "kWh", "cpu": 0.0016996438265546904, "ram": 0.00001768379281359322, "gpu": 0.006708423977846101, "total": 0.008425751597214388 }, { "unit": "kWh", "cpu": 0.0017036087327997728, "ram": 0.00001772523439230838, "gpu": 0.00668814340606616, "total": 0.00840947737325825 }, { "unit": "kWh", "cpu": -0.0007735792766898762, "ram": -0.00000804650520994879, "gpu": -0.005336946213998051, "total": -0.006118571995897874 }, { "unit": "kWh", "cpu": 0.0016993492261053469, "ram": 0.000017680599692580472, "gpu": 0.00669986813766843, "total": 0.008416897963466338 }, { "unit": "kWh", "cpu": 0.0024794984861841347, "ram": 0.000025795559494294352, "gpu": 0.012036623518179912, "total": 0.014541917563858364 }, { "unit": "kWh", "cpu": 0.0016971929005708168, "ram": 0.000017658742769242277, "gpu": 0.0066783995093817605, "total": 0.008393251152721812 }, { "unit": "kWh", "cpu": 0.0017041481349138639, "ram": 0.00001773071350101803, "gpu": 0.0067008664718002375, "total": 0.008422745320215135 }, { "unit": "kWh", "cpu": 0.0017015497825460567, "ram": 0.00001770368158501338, "gpu": 0.006704003974309813, "total": 0.008423257438440863 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": null, "efficiency": null, "measures": null }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.000011265942924405358, "ram": 8.491845032760329e-8, "gpu": 0.00001888834844376852, "total": 0.00003023920981850148 }, "efficiency": { "unit": "samples/kWh", "value": 33069647.189926323 }, "measures": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "EleutherAI/pythia-1.4b", "processor": "EleutherAI/pythia-1.4b", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 48, "cpu_ram_mb": 200472.73984, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
timestamp: 2024-10-25T15:05:18
project_name: codecarbon
run_id: 8248080f-4663-456d-a137-5e9a7085d7e8
duration: -1,729,590,786.50507
emissions: 0.005365
emissions_rate: 0.000026
cpu_power: 42.5
gpu_power: 206.475127
ram_power: 0.44216
cpu_energy: 0.002477
gpu_energy: 0.012032
ram_energy: 0.000026
energy_consumed: 0.014535
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 48
cpu_model: AMD EPYC 7R32
gpu_count: 1
gpu_model: 1 x NVIDIA A10G
longitude: -77.4903
latitude: 39.0469
ram_total_size: 186.704788
tracking_mode: process
on_cloud: N
pue: 1
null
null
null
null
null
null
null
null
null
timestamp: 2024-10-25T14:30:22
project_name: codecarbon
run_id: 8248080f-4663-456d-a137-5e9a7085d7e8
duration: -1,729,590,930.636167
emissions: 0.002256
emissions_rate: 0.000034
cpu_power: 42.5
gpu_power: 292.152453
ram_power: 0.442095
cpu_energy: 0.000775
gpu_energy: 0.005328
ram_energy: 0.000008
energy_consumed: 0.006111
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 48
cpu_model: AMD EPYC 7R32
gpu_count: 1
gpu_model: 1 x NVIDIA A10G
longitude: -77.4903
latitude: 39.0469
ram_total_size: 186.704788
tracking_mode: process
on_cloud: N
pue: 1
null
null
null
null
null
null
null
null
null
timestamp: 2024-10-25T14:19:25
project_name: codecarbon
run_id: 8248080f-4663-456d-a137-5e9a7085d7e8
duration: -1,729,590,995.334638
emissions: 0.000011
emissions_rate: 0.000012
cpu_power: 42.5
gpu_power: 71.423005
ram_power: 0.32139
cpu_energy: 0.000011
gpu_energy: 0.000019
ram_energy: 0
energy_consumed: 0.00003
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 48
cpu_model: AMD EPYC 7R32
gpu_count: 1
gpu_model: 1 x NVIDIA A10G
longitude: -77.4903
latitude: 39.0469
ram_total_size: 186.704788
tracking_mode: process
on_cloud: N
pue: 1
null
null
null
null
null
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.0009210179793032473, "ram": 0.000009403775706541059, "gpu": 0.006402295760721532, "total": 0.007332717515731321 }, "efficiency": { "unit": "tokens/kWh", "value": 41960978.223952904 }, "measures": [ { "unit": "kWh", "cpu": 0.0010188414567649793, "ram": 0.00001040020860331343, "gpu": 0.007069801211392779, "total": 0.008099042876761072 }, { "unit": "kWh", "cpu": 0.001023928069784597, "ram": 0.00001045351056190972, "gpu": 0.007115094580959713, "total": 0.008149476161306221 }, { "unit": "kWh", "cpu": 0.0010232283780346432, "ram": 0.000010447744339112577, "gpu": 0.007115182636586503, "total": 0.008148858758960258 }, { "unit": "kWh", "cpu": 0.0010241352881440744, "ram": 0.000010456856244100832, "gpu": 0.00711364291313199, "total": 0.008148235057520166 }, { "unit": "kWh", "cpu": 0.0010237236436746672, "ram": 0.000010452806977406536, "gpu": 0.007123706532293106, "total": 0.008157882982945182 }, { "unit": "kWh", "cpu": 0.0010238973002910726, "ram": 0.000010454782526115431, "gpu": 0.00711877319501486, "total": 0.008153125277832046 }, { "unit": "kWh", "cpu": 0, "ram": 0, "gpu": 0, "total": 0 }, { "unit": "kWh", "cpu": 0.001024103350314949, "ram": 0.000010456892804618941, "gpu": 0.0071262034787364925, "total": 0.008160763721856056 }, { "unit": "kWh", "cpu": 0.0010241413993481215, "ram": 0.000010457277251147982, "gpu": 0.007125078477836055, "total": 0.008159677154435316 }, { "unit": "kWh", "cpu": 0.001024180906675368, "ram": 0.000010457677757685137, "gpu": 0.007115474581263825, "total": 0.008150113165696887 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.0017958659208215804, "ram": 0.000018341901791392123, "gpu": 0.007306251844996492, "total": 0.009120459667609458 }, "efficiency": { "unit": "tokens/kWh", "value": 986792.3688060087 }, "measures": [ { "unit": "kWh", "cpu": 0.0020010180013199305, "ram": 0.000020439056319295045, "gpu": 0.008227584915394992, "total": 0.010249041973034218 }, { "unit": "kWh", "cpu": 0.0019951513697100583, "ram": 0.000020378089452524212, "gpu": 0.0081139814911797, "total": 0.010129510950342285 }, { "unit": "kWh", "cpu": 0.0019969063428461812, "ram": 0.00002039467949555688, "gpu": 0.008126311778821282, "total": 0.010143612801163042 }, { "unit": "kWh", "cpu": 0.00199376487094794, "ram": 0.0000203630047581031, "gpu": 0.008109437043099632, "total": 0.010123564918805665 }, { "unit": "kWh", "cpu": -0.0010237236436746672, "ram": -0.000010452806977406536, "gpu": -0.007123706532293106, "total": -0.008157882982945182 }, { "unit": "kWh", "cpu": 0.001994637724863909, "ram": 0.00002037162971133379, "gpu": 0.008115028992016882, "total": 0.010130038346592124 }, { "unit": "kWh", "cpu": 0.0030181286351364765, "ram": 0.000030822295893551696, "gpu": 0.015217404673913748, "total": 0.018266355604943774 }, { "unit": "kWh", "cpu": 0.0019955356547212005, "ram": 0.000020380813163960854, "gpu": 0.008098154811851721, "total": 0.010114071279736882 }, { "unit": "kWh", "cpu": 0.0019944307125576103, "ram": 0.000020369530424909583, "gpu": 0.008090167583239882, "total": 0.010104967826222402 }, { "unit": "kWh", "cpu": 0.0019928095397871697, "ram": 0.000020352725672092604, "gpu": 0.008088153692740185, "total": 0.010101315958199408 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": null, "efficiency": null, "measures": null }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.000010438337355784218, "ram": 7.78486547950319e-8, "gpu": 0.000016238901880072376, "total": 0.000026755087890651626 }, "efficiency": { "unit": "samples/kWh", "value": 37376068.58504866 }, "measures": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "HuggingFaceTB/SmolLM-1.7B", "processor": "HuggingFaceTB/SmolLM-1.7B", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 48, "cpu_ram_mb": 200472.73984, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
timestamp: 2024-10-24T16:13:22
project_name: codecarbon
run_id: 09de8d70-881d-4fdb-8c44-21203f36a2d4
duration: -1,729,257,239.680579
emissions: 0.006737
emissions_rate: 0.000026
cpu_power: 42.5
gpu_power: 214.176313
ram_power: 0.434033
cpu_energy: 0.003017
gpu_energy: 0.015204
ram_energy: 0.000031
energy_consumed: 0.018251
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 48
cpu_model: AMD EPYC 7R32
gpu_count: 1
gpu_model: 1 x NVIDIA A10G
longitude: -77.4903
latitude: 39.0469
ram_total_size: 186.704788
tracking_mode: process
on_cloud: N
pue: 1
null
null
null
null
null
null
null
null
null
timestamp: 2024-10-24T15:30:45
project_name: codecarbon
run_id: 09de8d70-881d-4fdb-8c44-21203f36a2d4
duration: -1,729,257,408.4833
emissions: 0.003008
emissions_rate: 0.000035
cpu_power: 42.5
gpu_power: 295.275793
ram_power: 0.433973
cpu_energy: 0.001024
gpu_energy: 0.007115
ram_energy: 0.00001
energy_consumed: 0.00815
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 48
cpu_model: AMD EPYC 7R32
gpu_count: 1
gpu_model: 1 x NVIDIA A10G
longitude: -77.4903
latitude: 39.0469
ram_total_size: 186.704788
tracking_mode: process
on_cloud: N
pue: 1
null
null
null
null
null
null
null
null
null
2024-10-24T15:16:16
codecarbon
09de8d70-881d-4fdb-8c44-21203f36a2d4
-1729257494.353458
0.00001
0.000011
42.5
66.290905
0.318089
0.00001
0.000016
0
0.000027
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.00028216661832921095, "ram": 0.000002878352463094162, "gpu": 0.0011204146741087939, "total": 0.001405459644901099 }, "efficiency": { "unit": "tokens/kWh", "value": 218923397.1364946 }, "measures": [ { "unit": "kWh", "cpu": 0.00031336019538074554, "ram": 0.0000031952333195117534, "gpu": 0.00122852237170612, "total": 0.0015450778004063774 }, { "unit": "kWh", "cpu": 0.00031393739547250644, "ram": 0.000003202335114385519, "gpu": 0.0012359929332390607, "total": 0.001553132663825953 }, { "unit": "kWh", "cpu": 0.0003159219112732292, "ram": 0.000003222888771863899, "gpu": 0.0012393301581283822, "total": 0.0015584749581734753 }, { "unit": "kWh", "cpu": 0.0003150893582905055, "ram": 0.0000032143926946747804, "gpu": 0.0012481104429333811, "total": 0.0015664141939185612 }, { "unit": "kWh", "cpu": 0.0003144043295370163, "ram": 0.000003207396461775205, "gpu": 0.001258061839781277, "total": 0.0015756735657800683 }, { "unit": "kWh", "cpu": 0.0003145929484800339, "ram": 0.0000032093337072112605, "gpu": 0.0012565132274318103, "total": 0.001574315509619055 }, { "unit": "kWh", "cpu": 0, "ram": 0, "gpu": 0, "total": 0 }, { "unit": "kWh", "cpu": 0.00031270235966302526, "ram": 0.000003190109472312273, "gpu": 0.0012584437845308827, "total": 0.001574336253666219 }, { "unit": "kWh", "cpu": 0.0003087726142439349, "ram": 0.000003149867500644629, "gpu": 0.0012466090528420182, "total": 0.001558531534586599 }, { "unit": "kWh", "cpu": 0.00031288507095111233, "ram": 0.0000031919675885623027, "gpu": 0.001232562930495007, "total": 0.0015486399690346833 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.0020209151121617142, "ram": 0.000020623012510417966, "gpu": 0.004044847485875458, "total": 0.00608638561054759 }, "efficiency": { "unit": "tokens/kWh", "value": 1478710.1205686298 }, "measures": [ { "unit": "kWh", "cpu": 0.002228578082469823, "ram": 0.000022743844645549306, "gpu": 0.004435512992850832, "total": 0.006686834919966205 }, { "unit": "kWh", "cpu": 0.0022643340706578674, "ram": 0.000023107646810448203, "gpu": 0.004506180549384453, "total": 0.00679362226685277 }, { "unit": "kWh", "cpu": 0.002240564864718504, "ram": 0.000022864817856105112, "gpu": 0.004480500528844189, "total": 0.006743930211418801 }, { "unit": "kWh", "cpu": 0.002231354974089644, "ram": 0.00002277091024922105, "gpu": 0.0044298010438375, "total": 0.0066839269281763645 }, { "unit": "kWh", "cpu": -0.0003144043295370163, "ram": -0.000003207396461775205, "gpu": -0.001258061839781277, "total": -0.0015756735657800683 }, { "unit": "kWh", "cpu": 0.0022437327763739938, "ram": 0.000022895809829293018, "gpu": 0.004527934455678562, "total": 0.006794563041881842 }, { "unit": "kWh", "cpu": 0.0025504533577234215, "ram": 0.000026025299423921498, "gpu": 0.00576570933478493, "total": 0.008342187991932273 }, { "unit": "kWh", "cpu": 0.0022642733620749727, "ram": 0.000023105812715842442, "gpu": 0.004556943645553702, "total": 0.006844322820344518 }, { "unit": "kWh", "cpu": 0.0022470556892481295, "ram": 0.000022930371845914078, "gpu": 0.004533760849227519, "total": 0.006803746910321554 }, { "unit": "kWh", "cpu": 0.0022532082737978065, "ram": 0.00002299300818966014, "gpu": 0.00447019329837417, "total": 0.00674639458036163 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": null, "efficiency": null, "measures": null }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.000010718486329682895, "ram": 7.992339706452299e-8, "gpu": 0.000017620569652265772, "total": 0.00002841897937901319 }, "efficiency": { "unit": "samples/kWh", "value": 35187752.05342098 }, "measures": null }
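The four JSON rows above appear to be the per-phase energy results for one run (prefill in tokens/kWh, decode in tokens/kWh, an empty placeholder, and preprocessing in samples/kWh). Two consistency checks can be run directly on the published numbers; this is a sketch assuming the reported `energy` is the plain mean of the ten `measures` and `efficiency` is the token (or sample) count divided by that mean:

```python
# Per-iteration decode energy totals (kWh), copied from the "measures"
# list of the decode result row above (including the zeroed iteration
# and the negative correction entry, as published).
decode_measures = [
    0.006686834919966205, 0.00679362226685277, 0.006743930211418801,
    0.0066839269281763645, -0.0015756735657800683, 0.006794563041881842,
    0.008342187991932273, 0.006844322820344518, 0.006803746910321554,
    0.00674639458036163,
]

# Check 1: the reported decode "total" is the plain mean of the ten
# iterations, outliers included.
reported_total_kwh = 0.00608638561054759
mean_kwh = sum(decode_measures) / len(decode_measures)
assert abs(mean_kwh - reported_total_kwh) < 1e-9

# Check 2: efficiency (tokens/kWh) times energy recovers the token
# count. 1000 samples x 10 new tokens minus one token per sample
# (seemingly attributed to prefill) gives 9000 decode tokens.
reported_efficiency = 1478710.1205686298
assert abs(reported_efficiency * reported_total_kwh - 9000) < 1

# Same check for the preprocess row: samples/kWh x kWh ~= 1000 samples.
assert abs(35187752.05342098 * 0.00002841897937901319 - 1000) < 1
```

The same arithmetic holds for the other models' result rows in this preview, which suggests these aggregation rules are applied uniformly across the dataset.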
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "HuggingFaceTB/SmolLM-135M", "processor": "HuggingFaceTB/SmolLM-135M", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 48, "cpu_ram_mb": 200472.73984, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
2024-10-23T19:50:03
codecarbon
05f83304-edf6-4273-b97d-e5de644eb411
-1728779136.923329
0.003062
0.000014
42.5
94.45086
0.433685
0.002566
0.005703
0.000026
0.008295
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
null
null
null
null
2024-10-23T19:13:56
codecarbon
05f83304-edf6-4273-b97d-e5de644eb411
-1728779327.786548
0.000572
0.000022
42.5
167.436858
0.433622
0.000313
0.001233
0.000003
0.001549
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
null
null
null
null
2024-10-23T19:09:29
codecarbon
05f83304-edf6-4273-b97d-e5de644eb411
-1728779353.37886
0.00001
0.000012
42.5
70.040198
0.317968
0.000011
0.000018
0
0.000028
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.00038417323045180534, "ram": 0.000003917567631278797, "gpu": 0.0020836598613710013, "total": 0.002471750659454085 }, "efficiency": { "unit": "tokens/kWh", "value": 124481811.63546507 }, "measures": [ { "unit": "kWh", "cpu": 0.00042439884210412454, "ram": 0.00000432643824564241, "gpu": 0.0022828590485080014, "total": 0.0027115843288577682 }, { "unit": "kWh", "cpu": 0.00042775152907360186, "ram": 0.000004361830192752804, "gpu": 0.002320344634051952, "total": 0.0027524579933183063 }, { "unit": "kWh", "cpu": 0.0004269135638458061, "ram": 0.000004353528012110313, "gpu": 0.002306500456309979, "total": 0.002737767548167895 }, { "unit": "kWh", "cpu": 0.00042629300760428234, "ram": 0.000004347332333560652, "gpu": 0.002313714350970053, "total": 0.002744354690907896 }, { "unit": "kWh", "cpu": 0.00042705259539434323, "ram": 0.000004355086217043936, "gpu": 0.0023207949121900606, "total": 0.0027522025938014477 }, { "unit": "kWh", "cpu": 0.0004277230125062691, "ram": 0.000004361916595886913, "gpu": 0.0023226774136958506, "total": 0.002754762342798006 }, { "unit": "kWh", "cpu": 0, "ram": 0, "gpu": 0, "total": 0 }, { "unit": "kWh", "cpu": 0.00042727887452002596, "ram": 0.0000043573934032328814, "gpu": 0.002324739359790051, "total": 0.0027563756277133103 }, { "unit": "kWh", "cpu": 0.0004273450137403721, "ram": 0.000004357849664637245, "gpu": 0.0023255226937499707, "total": 0.0027572255571549804 }, { "unit": "kWh", "cpu": 0.00042697586572922833, "ram": 0.00000435430164792082, "gpu": 0.002319445744444093, "total": 0.0027507759118212405 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.0021575579513180274, "ram": 0.00002201039115281833, "gpu": 0.004663859981084995, "total": 0.0068434283235558405 }, "efficiency": { "unit": "tokens/kWh", "value": 1315130.308155782 }, "measures": [ { "unit": "kWh", "cpu": 0.0024012055666631877, "ram": 0.00002449660161589356, "gpu": 0.005293597012651863, "total": 0.007719299180930941 }, { "unit": "kWh", "cpu": 0.002399304204071693, "ram": 0.00002447631017533231, "gpu": 0.005170139413886221, "total": 0.007593919928133252 }, { "unit": "kWh", "cpu": 0.00239920346893047, "ram": 0.000024475506960495002, "gpu": 0.0051808830335919875, "total": 0.007604562009482951 }, { "unit": "kWh", "cpu": 0.002397024483553307, "ram": 0.000024453144005693137, "gpu": 0.005157916904107829, "total": 0.0075793945316668335 }, { "unit": "kWh", "cpu": -0.00042705259539434323, "ram": -0.000004355086217043936, "gpu": -0.0023207949121900606, "total": -0.0027522025938014477 }, { "unit": "kWh", "cpu": 0.002387084423529167, "ram": 0.00002435184806787585, "gpu": 0.005154224678932229, "total": 0.007565660950529268 }, { "unit": "kWh", "cpu": 0.0028258748023854002, "ram": 0.000028826662935648893, "gpu": 0.0074967232195958244, "total": 0.010351424684916871 }, { "unit": "kWh", "cpu": 0.0024026328822716113, "ram": 0.000024510452757425444, "gpu": 0.005180848033564112, "total": 0.0076079913685931436 }, { "unit": "kWh", "cpu": 0.0024013627431852193, "ram": 0.000024497710395037818, "gpu": 0.005174165528217922, "total": 0.007600025981798177 }, { "unit": "kWh", "cpu": 0.002388939533984567, "ram": 0.000024370760831825227, "gpu": 0.005150896898492019, "total": 0.007564207193308414 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": null, "efficiency": null, "measures": null }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.000010503983233461947, "ram": 7.834823109019483e-8, "gpu": 0.000016727513381886716, "total": 0.00002730984484643886 }, "efficiency": { "unit": "samples/kWh", "value": 36616831.97480331 }, "measures": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "HuggingFaceTB/SmolLM-360M", "processor": "HuggingFaceTB/SmolLM-360M", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 48, "cpu_ram_mb": 200472.73984, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
2024-10-24T15:08:31
codecarbon
acdaf5b9-c8a0-438e-95f7-e1c02c020a0d
-1729710332.809063
0.003808
0.000016
42.5
112.749334
0.433547
0.002816
0.00747
0.000029
0.010315
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
null
null
null
null
2024-10-24T14:28:39
codecarbon
acdaf5b9-c8a0-438e-95f7-e1c02c020a0d
-1729710535.16628
0.001015
0.000028
42.5
230.885946
0.433451
0.000427
0.002319
0.000004
0.002751
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
null
null
null
null
2024-10-24T14:22:36
codecarbon
acdaf5b9-c8a0-438e-95f7-e1c02c020a0d
-1729710570.444148
0.00001
0.000011
42.5
67.849187
0.318084
0.000011
0.000017
0
0.000027
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.0006078373521613038, "ram": 0.0000062823229751878135, "gpu": 0.004224683213077207, "total": 0.0048388028882136985 }, "efficiency": { "unit": "tokens/kWh", "value": 62297846.58810162 }, "measures": [ { "unit": "kWh", "cpu": 0.0006697385625783176, "ram": 0.000006920114361301345, "gpu": 0.004640193434374051, "total": 0.0053168521113136705 }, { "unit": "kWh", "cpu": 0.0006729380550277105, "ram": 0.000006955487467552911, "gpu": 0.004670399847428008, "total": 0.005350293389923272 }, { "unit": "kWh", "cpu": 0.000674865046512433, "ram": 0.0000069753509534202524, "gpu": 0.004697523758015798, "total": 0.0053793641554816495 }, { "unit": "kWh", "cpu": 0.0006765831028264276, "ram": 0.000006992626429266204, "gpu": 0.004700783482846083, "total": 0.005384359212101779 }, { "unit": "kWh", "cpu": 0.0006767935208930793, "ram": 0.000006995337489867422, "gpu": 0.0047073423769821154, "total": 0.005391131235365065 }, { "unit": "kWh", "cpu": 0.0006768564707834331, "ram": 0.00000699580457259683, "gpu": 0.004711763213852027, "total": 0.005395615489208059 }, { "unit": "kWh", "cpu": 0, "ram": 0, "gpu": 0, "total": 0 }, { "unit": "kWh", "cpu": 0.0006769714395637796, "ram": 0.0000069971713324534475, "gpu": 0.004710331268261925, "total": 0.005394299879158161 }, { "unit": "kWh", "cpu": 0.0006767927662111812, "ram": 0.000006995601382546744, "gpu": 0.004698884592437835, "total": 0.005382672960031559 }, { "unit": "kWh", "cpu": 0.0006768345572166759, "ram": 0.000006995735762872974, "gpu": 0.004709610156574229, "total": 0.0053934404495537755 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.0012634316170816028, "ram": 0.000013061109683948865, "gpu": 0.005194258099847594, "total": 0.006470750826613145 }, "efficiency": { "unit": "tokens/kWh", "value": 1390874.1413723528 }, "measures": [ { "unit": "kWh", "cpu": 0.001408863899259774, "ram": 0.000014566548243871863, "gpu": 0.005876456090049942, "total": 0.0072998865375535835 }, { "unit": "kWh", "cpu": 0.0014058435875777637, "ram": 0.000014533029521171267, "gpu": 0.00578180045876997, "total": 0.007202177075868906 }, { "unit": "kWh", "cpu": 0.001404320976875725, "ram": 0.000014517349302872156, "gpu": 0.005750399322538291, "total": 0.007169237648716898 }, { "unit": "kWh", "cpu": 0.0014010596418617145, "ram": 0.00001448412171026788, "gpu": 0.005755766271275942, "total": 0.0071713100348479095 }, { "unit": "kWh", "cpu": -0.0006767935208930793, "ram": -0.000006995337489867422, "gpu": -0.0047073423769821154, "total": -0.005391131235365065 }, { "unit": "kWh", "cpu": 0.0014042657692339977, "ram": 0.0000145170475462244, "gpu": 0.005752092657226093, "total": 0.007170875474006324 }, { "unit": "kWh", "cpu": 0.002079143602190369, "ram": 0.000021492073486871634, "gpu": 0.0104623644809958, "total": 0.012563000156673026 }, { "unit": "kWh", "cpu": 0.0014025079943917884, "ram": 0.000014498369587383913, "gpu": 0.005750782100622187, "total": 0.007167788464601359 }, { "unit": "kWh", "cpu": 0.0014023684843991946, "ram": 0.000014496885009948401, "gpu": 0.005766703502248172, "total": 0.007183568871657331 }, { "unit": "kWh", "cpu": 0.0014027357359187803, "ram": 0.000014501009920744553, "gpu": 0.005753558491731647, "total": 0.007170795237571172 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": null, "efficiency": null, "measures": null }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.00001108087887082042, "ram": 8.167052233473494e-8, "gpu": 0.000016956680231938748, "total": 0.000028119229625093902 }, "efficiency": { "unit": "samples/kWh", "value": 35562851.946256354 }, "measures": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "allenai/OLMo-1B-hf", "processor": "allenai/OLMo-1B-hf", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 48, "cpu_ram_mb": 200472.73984, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
2024-10-24T19:03:04
codecarbon
19c0e444-7415-41b0-b98a-b54de07f98d7
-1729710395.186178
0.004638
0.000026
42.5
213.838334
0.439336
0.00208
0.010463
0.000021
0.012564
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
null
null
null
null
2024-10-24T18:33:43
codecarbon
19c0e444-7415-41b0-b98a-b54de07f98d7
-1729710514.001775
0.001991
0.000035
42.5
295.75187
0.43932
0.000677
0.00471
0.000007
0.005393
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
null
null
null
null
2024-10-24T18:24:09
codecarbon
19c0e444-7415-41b0-b98a-b54de07f98d7
-1729710570.395269
0.00001
0.000011
42.5
65.197055
0.314279
0.000011
0.000017
0
0.000028
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "cardiffnlp/twitter-roberta-base-sentiment-latest", "processor": "cardiffnlp/twitter-roberta-base-sentiment-latest", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 48, "cpu_ram_mb": 200472.73984, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
2024-10-03T22:44:08
codecarbon
c9ea553f-5291-4d43-8ab2-f9ce8dc44700
-1727977510.569744
0.000254
0.000026
42.5
213.188395
0.385039
0.000114
0.000573
0.000001
0.000688
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
null
null
null
null
2024-10-03T22:42:30
codecarbon
c9ea553f-5291-4d43-8ab2-f9ce8dc44700
-1727977519.340369
0.00001
0.000011
42.5
68.969125
0.282692
0.000011
0.000017
0
0.000028
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.00006843412625166758, "ram": 6.495987432548598e-7, "gpu": 0.0003659467372016001, "total": 0.0004350304621965225 }, "efficiency": { "unit": "tokens/kWh", "value": 577773332.7705556 }, "measures": [ { "unit": "kWh", "cpu": 0.00007661582131875915, "ram": 7.270470834546824e-7, "gpu": 0.00040231893296599486, "total": 0.00047966180136820865 }, { "unit": "kWh", "cpu": 0.00007612333831944511, "ram": 7.225901751256202e-7, "gpu": 0.00040236809967200693, "total": 0.00047921402816657763 }, { "unit": "kWh", "cpu": 0.00007599541320138631, "ram": 7.213695811715175e-7, "gpu": 0.0003997589309180094, "total": 0.00047647571370056725 }, { "unit": "kWh", "cpu": 0.00007557263277569492, "ram": 7.173576931168489e-7, "gpu": 0.00040205782164599857, "total": 0.00047834781211481036 }, { "unit": "kWh", "cpu": 0.00007594381393958861, "ram": 7.209234286376104e-7, "gpu": 0.00040244032195199675, "total": 0.00047910505932022306 }, { "unit": "kWh", "cpu": 0.00007603627364582939, "ram": 7.218092832504138e-7, "gpu": 0.0004104514394720066, "total": 0.00048720952240108653 }, { "unit": "kWh", "cpu": 0, "ram": 0, "gpu": 0, "total": 0 }, { "unit": "kWh", "cpu": 0.00007607713516458278, "ram": 7.22200622388365e-7, "gpu": 0.0004148672763379935, "total": 0.0004916666121249644 }, { "unit": "kWh", "cpu": 0.00007618006651666806, "ram": 7.231542359495631e-7, "gpu": 0.00040862504912199393, "total": 0.00048552826987461164 }, { "unit": "kWh", "cpu": 0.00007579676763472146, "ram": 7.195353294539768e-7, "gpu": 0.00041657949993000043, "total": 0.0004930958028941763 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.00032348659908444275, "ram": 0.000003072502371237189, "gpu": 0.0008102718704392012, "total": 0.0011368309718948811 }, "efficiency": { "unit": "tokens/kWh", "value": 7916744.197247469 }, "measures": [ { "unit": "kWh", "cpu": 0.0003574988217444311, "ram": 0.000003395730974543728, "gpu": 0.0008999454421780156, "total": 0.001260839994896991 }, { "unit": "kWh", "cpu": 0.0003597064875145906, "ram": 0.000003416586254110718, "gpu": 0.0009043490568119844, "total": 0.0012674721305806851 }, { "unit": "kWh", "cpu": 0.0003634462718388908, "ram": 0.0000034517643716471943, "gpu": 0.0009177482341979903, "total": 0.0012846462704085282 }, { "unit": "kWh", "cpu": 0.0003580753265118062, "ram": 0.0000034011102125321956, "gpu": 0.000907056003422016, "total": 0.001268532440146354 }, { "unit": "kWh", "cpu": -0.00007594381393958861, "ram": -7.209234286376104e-7, "gpu": -0.00040244032195199675, "total": -0.00047910505932022306 }, { "unit": "kWh", "cpu": 0.0003584977115930576, "ram": 0.000003405089643388381, "gpu": 0.0008946326601499827, "total": 0.001256535461386427 }, { "unit": "kWh", "cpu": 0.00043365753502151803, "ram": 0.00000411857397070893, "gpu": 0.0013038496541900174, "total": 0.0017416257631822432 }, { "unit": "kWh", "cpu": 0.0003591390641236086, "ram": 0.000003411175593864572, "gpu": 0.0008893837670620047, "total": 0.0012519340067794786 }, { "unit": "kWh", "cpu": 0.0003595088816513917, "ram": 0.0000034147095254425512, "gpu": 0.0008980354406499941, "total": 0.0012609590318268314 }, { "unit": "kWh", "cpu": 0.0003612797047847213, "ram": 0.0000034312065947712373, "gpu": 0.0008901587676820033, "total": 0.0012548696790614941 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": null, "efficiency": null, "measures": null }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.000010335147520835462, "ram": 6.951719567818088e-8, "gpu": 0.000016803068997983672, "total": 0.000027207733714497313 }, "efficiency": { "unit": "samples/kWh", "value": 36754255.62795633 }, "measures": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "distilbert/distilgpt2", "processor": "distilbert/distilgpt2", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 48, "cpu_ram_mb": 200472.73984, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
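The energy reports in the rows above contain per-iteration `measures` lists in which some iterations report zero or even negative kWh (tracker restarts mid-run). A minimal sketch of how such a report could be parsed and averaged robustly — the literal values below are illustrative stand-ins, not copied from any specific row:

```python
import json
from statistics import mean

# An "energy" report shaped like the ones in the rows above
# (values are illustrative, not taken from a specific row).
report = json.loads("""
{ "unit": "kWh",
  "measures": [
    {"unit": "kWh", "cpu": 3.58e-4, "ram": 3.4e-6, "gpu": 8.9e-4, "total": 1.26e-3},
    {"unit": "kWh", "cpu": -7.59e-5, "ram": -7.2e-7, "gpu": -4.02e-4, "total": -4.79e-4},
    {"unit": "kWh", "cpu": 3.59e-4, "ram": 3.4e-6, "gpu": 8.9e-4, "total": 1.25e-3}
  ] }
""")

# Drop non-positive iterations (energy-tracker resets) before averaging,
# otherwise the spurious negative measure skews the mean.
valid = [m["total"] for m in report["measures"] if m["total"] > 0]
avg_kwh = mean(valid)
print(f"mean energy per iteration: {avg_kwh:.6f} kWh over {len(valid)} valid runs")
```

Filtering on `total > 0` is an assumption about how the anomalous iterations should be handled; the dataset itself stores them verbatim.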
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
2024-10-03T19:19:59
codecarbon
93ec0abc-f979-47e6-ba6c-015715ed9265
-1727977483.227785
0.000645
0.000017
42.5
127.081351
0.40367
0.000437
0.001307
0.000004
0.001748
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
null
null
null
null
2024-10-03T19:13:50
codecarbon
93ec0abc-f979-47e6-ba6c-015715ed9265
-1727977513.830222
0.000182
0.000028
42.5
233.663976
0.403637
0.000076
0.000417
0.000001
0.000493
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
null
null
null
null
2024-10-03T19:12:45
codecarbon
93ec0abc-f979-47e6-ba6c-015715ed9265
-1727977519.375375
0.00001
0.000011
42.5
69.271544
0.28685
0.00001
0.000017
0
0.000027
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.0001620655900791058, "ram": 0.0000015199258754561861, "gpu": 0.0008476811503666859, "total": 0.001011266666321248 }, "efficiency": { "unit": "tokens/kWh", "value": 297486320.88742566 }, "measures": [ { "unit": "kWh", "cpu": 0.0001807475891604554, "ram": 0.000001694572316550935, "gpu": 0.0009331888021057999, "total": 0.0011156309635828063 }, { "unit": "kWh", "cpu": 0.00018008889945134193, "ram": 0.0000016887296008880154, "gpu": 0.000938843251074406, "total": 0.0011206208801266364 }, { "unit": "kWh", "cpu": 0.0001798466089937493, "ram": 0.0000016867903117113444, "gpu": 0.0009534802072277415, "total": 0.0011350136065332024 }, { "unit": "kWh", "cpu": 0.00018016871874852866, "ram": 0.0000016898550856852566, "gpu": 0.000940103807637982, "total": 0.0011219623814721965 }, { "unit": "kWh", "cpu": 0.00017954580621252613, "ram": 0.0000016837666515050435, "gpu": 0.0009352576926500156, "total": 0.0011164872655140467 }, { "unit": "kWh", "cpu": 0.0001797642011160457, "ram": 0.0000016860627821245568, "gpu": 0.0009455621453384744, "total": 0.0011270124092366447 }, { "unit": "kWh", "cpu": 0, "ram": 0, "gpu": 0, "total": 0 }, { "unit": "kWh", "cpu": 0.00017972155109093578, "ram": 0.000001685662121903846, "gpu": 0.0009414274198080719, "total": 0.0011228346330209114 }, { "unit": "kWh", "cpu": 0.00018012731117586457, "ram": 0.0000016894807783672083, "gpu": 0.000942803809798054, "total": 0.0011246206017522867 }, { "unit": "kWh", "cpu": 0.00018064521484161054, "ram": 0.0000016943391058256554, "gpu": 0.0009461443680263137, "total": 0.0011284839219737487 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.0006941310564490555, "ram": 0.0000065195511389882145, "gpu": 0.0015446014301244483, "total": 0.0022452520377124916 }, "efficiency": { "unit": "tokens/kWh", "value": 4008458.671378997 }, "measures": [ { "unit": "kWh", "cpu": 0.0007667801528512023, "ram": 0.000007201369761167983, "gpu": 0.0017799322572780163, "total": 0.0025539137798903867 }, { "unit": "kWh", "cpu": 0.0007654345594188068, "ram": 0.000007189538057079124, "gpu": 0.0017116163692918462, "total": 0.0024842404667677316 }, { "unit": "kWh", "cpu": 0.0007738969895601663, "ram": 0.000007268686204328937, "gpu": 0.001701727750270443, "total": 0.002482893426034937 }, { "unit": "kWh", "cpu": 0.0007722275602000286, "ram": 0.000007252917411954056, "gpu": 0.001713406926279415, "total": 0.002492887403891397 }, { "unit": "kWh", "cpu": -0.00017954580621252613, "ram": -0.0000016837666515050435, "gpu": -0.0009352576926500156, "total": -0.0011164872655140467 }, { "unit": "kWh", "cpu": 0.000770763807665187, "ram": 0.000007239217667747306, "gpu": 0.0017025752509471914, "total": 0.002480578276280121 }, { "unit": "kWh", "cpu": 0.0009583884991923499, "ram": 0.00000899934381225773, "gpu": 0.002658187126547862, "total": 0.0036255749695524725 }, { "unit": "kWh", "cpu": 0.000774199179955172, "ram": 0.00000727140821879161, "gpu": 0.0017105824795762281, "total": 0.0024920530677501936 }, { "unit": "kWh", "cpu": 0.000768632685410092, "ram": 0.000007219470749306452, "gpu": 0.0017009096940601154, "total": 0.0024767618502195138 }, { "unit": "kWh", "cpu": 0.0007705329364500757, "ram": 0.0000072373261587539885, "gpu": 0.00170233413964338, "total": 0.002480104402252206 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": null, "efficiency": null, "measures": null }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.000010824495200641752, "ram": 7.158500334577163e-8, "gpu": 0.00001786251428992358, "total": 0.0000287585944939111 }, "efficiency": { "unit": "samples/kWh", "value": 34772213.927621685 }, "measures": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "facebook/opt-125m", "processor": "facebook/opt-125m", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 48, "cpu_ram_mb": 200472.73984, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
2024-10-03T14:38:46
codecarbon
5d2b938b-783b-44ef-b25d-7862cb82f3c5
-1727736204.719394
0.001332
0.000017
42.5
118.341148
0.399094
0.000951
0.002648
0.000009
0.003609
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
null
null
null
null
2024-10-03T14:25:20
codecarbon
5d2b938b-783b-44ef-b25d-7862cb82f3c5
-1727736269.993213
0.000417
0.000027
42.5
222.630982
0.398701
0.000181
0.000946
0.000002
0.001128
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
null
null
null
null
2024-10-03T14:22:47
codecarbon
5d2b938b-783b-44ef-b25d-7862cb82f3c5
-1727736284.373033
0.000011
0.000012
42.5
70.30865
0.282018
0.000011
0.000018
0
0.000029
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.0008743925115414459, "ram": 0.000009073014340048727, "gpu": 0.005965406855654898, "total": 0.006848872381536393 }, "efficiency": { "unit": "tokens/kWh", "value": 52820811.93033508 }, "measures": [ { "unit": "kWh", "cpu": 0.0009618845320857267, "ram": 0.000009974887221968817, "gpu": 0.006500705478336144, "total": 0.00747256489764384 }, { "unit": "kWh", "cpu": 0.0009701170610512136, "ram": 0.000010065652345373525, "gpu": 0.006603977227623403, "total": 0.007584159941019989 }, { "unit": "kWh", "cpu": 0.0009724562256936528, "ram": 0.000010091178502841091, "gpu": 0.0066447511491301015, "total": 0.007627298553326592 }, { "unit": "kWh", "cpu": 0.0009732569738840945, "ram": 0.000010099627709040595, "gpu": 0.006651574210144062, "total": 0.007634930811737196 }, { "unit": "kWh", "cpu": 0.0009730900524310224, "ram": 0.000010097911957738982, "gpu": 0.0066502172646139, "total": 0.007633405229002663 }, { "unit": "kWh", "cpu": 0.0009731515151170447, "ram": 0.00001009806480394192, "gpu": 0.006648533096599962, "total": 0.007631782676520947 }, { "unit": "kWh", "cpu": 0, "ram": 0, "gpu": 0, "total": 0 }, { "unit": "kWh", "cpu": 0.0009734353665255182, "ram": 0.00001010211564087702, "gpu": 0.0066574306037168185, "total": 0.007640968085883215 }, { "unit": "kWh", "cpu": 0.0009732663908571535, "ram": 0.000010100366039829037, "gpu": 0.006650310875800258, "total": 0.00763367763269724 }, { "unit": "kWh", "cpu": 0.0009732669977690327, "ram": 0.000010100339178876271, "gpu": 0.006646568650584328, "total": 0.007629935987532241 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.0013969666306013094, "ram": 0.000014500609685849926, "gpu": 0.005961756741623781, "total": 0.007373223981910941 }, "efficiency": { "unit": "tokens/kWh", "value": 1220632.9310055005 }, "measures": [ { "unit": "kWh", "cpu": 0.0015635349269392867, "ram": 0.000016235092136136885, "gpu": 0.006860477432823586, "total": 0.008440247451899013 }, { "unit": "kWh", "cpu": 0.0015532499537137087, "ram": 0.000016123250633229456, "gpu": 0.006632953917467432, "total": 0.008202327121814375 }, { "unit": "kWh", "cpu": 0.001547297970215975, "ram": 0.000016060518458437368, "gpu": 0.0065902838833356014, "total": 0.008153642372010014 }, { "unit": "kWh", "cpu": 0.0015496825589291488, "ram": 0.000016085128919805413, "gpu": 0.006604002227639683, "total": 0.00816976991548863 }, { "unit": "kWh", "cpu": -0.0009730900524310224, "ram": -0.000010097911957738982, "gpu": -0.0066502172646139, "total": -0.007633405229002663 }, { "unit": "kWh", "cpu": 0.0015526281651833812, "ram": 0.000016116171075133998, "gpu": 0.006589721660661851, "total": 0.008158465996920362 }, { "unit": "kWh", "cpu": 0.002523647511672767, "ram": 0.000026192105827141344, "gpu": 0.013227737248847404, "total": 0.015777576866347315 }, { "unit": "kWh", "cpu": 0.0015498057216015385, "ram": 0.000016085942161502088, "gpu": 0.00658310748870683, "total": 0.008148999152469888 }, { "unit": "kWh", "cpu": 0.001552847952601264, "ram": 0.00001611738568690492, "gpu": 0.006589137493525854, "total": 0.008158102831814028 }, { "unit": "kWh", "cpu": 0.0015500615975870425, "ram": 0.000016088413917946765, "gpu": 0.006590363327843463, "total": 0.008156513339348456 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": null, "efficiency": null, "measures": null }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.000010887769342015317, "ram": 8.887375878840892e-8, "gpu": 0.00001873084831771621, "total": 0.000029707491418519937 }, "efficiency": { "unit": "samples/kWh", "value": 33661543.006508805 }, "measures": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "meta-llama/Llama-3.2-1B", "processor": "meta-llama/Llama-3.2-1B", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 48, "cpu_ram_mb": 200472.73984, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
2024-10-04T16:09:52
codecarbon
0ab80b0a-01c3-4ed4-b4c7-a7e8a2872036
-1727470814.537385
0.005827
0.000027
42.5
222.951648
0.441103
0.002523
0.013237
0.000026
0.015786
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
null
null
null
null
2024-10-04T15:34:14
codecarbon
0ab80b0a-01c3-4ed4-b4c7-a7e8a2872036
-1727470945.83583
0.002816
0.000034
42.5
290.247145
0.441072
0.000973
0.006647
0.00001
0.00763
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
null
null
null
null
2024-10-04T15:20:30
codecarbon
0ab80b0a-01c3-4ed4-b4c7-a7e8a2872036
-1727471027.350448
0.000011
0.000012
42.5
73.298032
0.348279
0.000011
0.000019
0
0.00003
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.0014863347658632645, "ram": 0.000014875274686459053, "gpu": 0.010253619786222634, "total": 0.011754829826772357 }, "efficiency": { "unit": "tokens/kWh", "value": 25509684.48025046 }, "measures": [ { "unit": "kWh", "cpu": 0.0016490302515946194, "ram": 0.00001649813690981336, "gpu": 0.011347064355422098, "total": 0.013012592743926531 }, { "unit": "kWh", "cpu": 0.0016518713439402442, "ram": 0.00001653242370414942, "gpu": 0.011369387151058064, "total": 0.013037790918702458 }, { "unit": "kWh", "cpu": 0.0016516454622131681, "ram": 0.000016530357467400806, "gpu": 0.011394721615769932, "total": 0.013062897435450499 }, { "unit": "kWh", "cpu": 0.00165204664148227, "ram": 0.000016534376967469733, "gpu": 0.011365798537076088, "total": 0.013034379555525825 }, { "unit": "kWh", "cpu": 0.0016513170368874855, "ram": 0.000016527227705957916, "gpu": 0.011413016630406059, "total": 0.013080860894999505 }, { "unit": "kWh", "cpu": 0.0016521028726305103, "ram": 0.00001653502896064305, "gpu": 0.011397110784348019, "total": 0.013065748685939169 }, { "unit": "kWh", "cpu": 0, "ram": 0, "gpu": 0, "total": 0 }, { "unit": "kWh", "cpu": 0.0016516367285107358, "ram": 0.000016529800637687052, "gpu": 0.011420394136307799, "total": 0.013088560665456217 }, { "unit": "kWh", "cpu": 0.0016516665281883537, "ram": 0.000016530735150022162, "gpu": 0.01143668053822644, "total": 0.013104877801564815 }, { "unit": "kWh", "cpu": 0.0016520307931852576, "ram": 0.00001653465936144705, "gpu": 0.011392024113611843, "total": 0.013060589566158562 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.0026635658130413184, "ram": 0.000026670408309406583, "gpu": 0.011081837782129744, "total": 0.013772074003480479 }, "efficiency": { "unit": "tokens/kWh", "value": 653496.3432323643 }, "measures": [ { "unit": "kWh", "cpu": 0.0029633043283714654, "ram": 0.00002967629519591133, "gpu": 0.012345875432247855, "total": 0.015338856055815243 }, { "unit": "kWh", "cpu": 0.0029587058901968727, "ram": 0.00002962458578746078, "gpu": 0.012347088766551995, "total": 0.015335419242536324 }, { "unit": "kWh", "cpu": 0.0029588049258104645, "ram": 0.000029625859438707337, "gpu": 0.012311082071080115, "total": 0.015299512856329273 }, { "unit": "kWh", "cpu": 0.0029588140898376076, "ram": 0.000029626152500472093, "gpu": 0.01233931320477577, "total": 0.015327753447113852 }, { "unit": "kWh", "cpu": -0.0016513170368874855, "ram": -0.000016527227705957916, "gpu": -0.011413016630406059, "total": -0.013080860894999505 }, { "unit": "kWh", "cpu": 0.002958841279897229, "ram": 0.000029626498067417, "gpu": 0.012316890964616078, "total": 0.015305358742580724 }, { "unit": "kWh", "cpu": 0.004611106528960553, "ram": 0.00004616325204812629, "gpu": 0.02370919452289577, "total": 0.028366464303904493 }, { "unit": "kWh", "cpu": 0.0029584549165880963, "ram": 0.000029623287912011178, "gpu": 0.012292687889698506, "total": 0.01528076609419865 }, { "unit": "kWh", "cpu": 0.0029598287450958473, "ram": 0.000029636404647639844, "gpu": 0.012252946191237513, "total": 0.01524241134098099 }, { "unit": "kWh", "cpu": 0.0029591144625425326, "ram": 0.000029628975202277867, "gpu": 0.012316315408599898, "total": 0.015305058846344713 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": null, "efficiency": null, "measures": null }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.000010573537611950695, "ram": 7.912553488566287e-8, "gpu": 0.000015342234496174, "total": 0.00002599489764301036 }, "efficiency": { "unit": "samples/kWh", "value": 38469087.80842555 }, "measures": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "microsoft/phi-2", "processor": "microsoft/phi-2", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 48, "cpu_ram_mb": 200472.73984, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
2024-10-25T00:12:04
codecarbon
305897cc-973f-4a88-8b63-11335d9287a7
-1729542830.694556
0.010471
0.000027
42.5
218.516294
0.425484
0.004611
0.023708
0.000046
0.028366
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
null
null
null
null
2024-10-24T23:06:58
codecarbon
305897cc-973f-4a88-8b63-11335d9287a7
-1729543081.344916
0.004821
0.000034
42.5
293.075062
0.425378
0.001652
0.011392
0.000017
0.013061
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
null
null
null
null
2024-10-24T22:43:37
codecarbon
305897cc-973f-4a88-8b63-11335d9287a7
-1729543220.38621
0.00001
0.000011
42.5
61.826472
0.319144
0.000011
0.000015
0
0.000026
United States
USA
virginia
Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
3.9.20
2.5.1
48
AMD EPYC 7R32
1
1 x NVIDIA A10G
-77.4903
39.0469
186.704788
process
N
1
null
null
null
null
null
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.001485621188480941, "ram": 0.000014692518964651205, "gpu": 0.010244187250898752, "total": 0.011744500958344343 }, "efficiency": { "unit": "tokens/kWh", "value": 25532119.335130304 }, "measures": [ { "unit": "kWh", "cpu": 0.001649758945001789, "ram": 0.00001631103205265745, "gpu": 0.011374700766419554, "total": 0.013040770743474001 }, { "unit": "kWh", "cpu": 0.0016512926023273596, "ram": 0.000016330160959897894, "gpu": 0.011390980779443893, "total": 0.01305860354273115 }, { "unit": "kWh", "cpu": 0.001651151037707516, "ram": 0.000016330344539283105, "gpu": 0.01139585522778841, "total": 0.013063336610035205 }, { "unit": "kWh", "cpu": 0.0016510045901565818, "ram": 0.00001632869071612149, "gpu": 0.011372358820101791, "total": 0.013039692100974491 }, { "unit": "kWh", "cpu": 0.001650732115502978, "ram": 0.000016326180546432628, "gpu": 0.011389834945194188, "total": 0.013056893241243608 }, { "unit": "kWh", "cpu": 0.0016509325532350176, "ram": 0.000016327997361584713, "gpu": 0.011356127418228112, "total": 0.013023387968824712 }, { "unit": "kWh", "cpu": 0, "ram": 0, "gpu": 0, "total": 0 }, { "unit": "kWh", "cpu": 0.0016505938081308059, "ram": 0.000016325110782887438, "gpu": 0.011376044378605954, "total": 0.013042963297519652 }, { "unit": "kWh", "cpu": 0.0016498582738880933, "ram": 0.00001631766753717827, "gpu": 0.011403070789115954, "total": 0.013069246730541231 }, { "unit": "kWh", "cpu": 0.00165088795885927, "ram": 0.00001632800515046905, "gpu": 0.01138289938408965, "total": 0.013050115348099384 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.002664958041045172, "ram": 0.000026369096206358968, "gpu": 0.011070685912097256, "total": 0.013762013049348782 }, "efficiency": { "unit": "tokens/kWh", "value": 653974.0928690574 }, "measures": [ { "unit": "kWh", "cpu": 0.0029624372289799384, "ram": 0.00002931683063248545, "gpu": 0.012333280977728123, "total": 0.015325035037340545 }, { "unit": "kWh", "cpu": 0.002960903503123215, "ram": 0.000029297860616750537, "gpu": 0.012290644554730612, "total": 0.015280845918470582 }, { "unit": "kWh", "cpu": 0.00296094779222064, "ram": 0.000029297177945581367, "gpu": 0.012273321485315236, "total": 0.015263566455481457 }, { "unit": "kWh", "cpu": 0.002961316267515459, "ram": 0.000029301027637924565, "gpu": 0.012301544285672428, "total": 0.01529216158082581 }, { "unit": "kWh", "cpu": -0.001650732115502978, "ram": -0.000016326180546432628, "gpu": -0.011389834945194188, "total": -0.013056893241243608 }, { "unit": "kWh", "cpu": 0.0029614788662352433, "ram": 0.00002930271236997148, "gpu": 0.012307717623944114, "total": 0.01529849920254936 }, { "unit": "kWh", "cpu": 0.004611411084479877, "ram": 0.00004562063147449039, "gpu": 0.023677480608635726, "total": 0.02833451232459011 }, { "unit": "kWh", "cpu": 0.002960594404684323, "ram": 0.000029293472587959996, "gpu": 0.012295711225450479, "total": 0.015285599102722744 }, { "unit": "kWh", "cpu": 0.002960881133496917, "ram": 0.000029296356497970303, "gpu": 0.012293377056915578, "total": 0.015283554546910424 }, { "unit": "kWh", "cpu": 0.0029603422452190824, "ram": 0.000029291072846888218, "gpu": 0.01232361624777445, "total": 0.015313249565840431 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": null, "efficiency": null, "measures": null }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 3.829965067173665e-7, "ram": 2.4213983180210174e-9, "gpu": 0, "total": 3.8541790503538754e-7 }, "efficiency": { "unit": "samples/kWh", "value": 2594586258.0208464 }, "measures": null }
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "microsoft/phi-2", "processor": "microsoft/phi-2", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 48, "cpu_ram_mb": 200472.73984, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
codecarbon run (fields per the codecarbon CSV schema):
timestamp: 2024-10-25T01:40:45
project_name: codecarbon
run_id: a1e5425c-b4cf-4050-aa70-fcfa28aeb165
duration (s): -1729542830.683298
emissions (kg CO2eq): 0.01047
emissions_rate (kg CO2eq/s): 0.000027
cpu_power (W): 42.5
gpu_power (W): 218.495494
ram_power (W): 0.420457
cpu_energy (kWh): 0.004611
gpu_energy (kWh): 0.023707
ram_energy (kWh): 0.000046
energy_consumed (kWh): 0.028363
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 48
cpu_model: AMD EPYC 7R32
gpu_count: 1
gpu_model: 1 x NVIDIA A10G
longitude: -77.4903
latitude: 39.0469
ram_total_size (GB): 186.704788
tracking_mode: process
on_cloud: N
pue: 1
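Assuming the run rows above follow codecarbon's standard CSV column order, the energy columns should be self-consistent: `energy_consumed` is the sum of the cpu, gpu, and ram energies, and `emissions / energy_consumed` recovers the grid carbon intensity used for the reported region. A quick check with the values from the run above:

```python
# Values copied from the codecarbon row above (kWh and kg CO2eq, assuming
# the standard codecarbon CSV schema).
cpu_energy, gpu_energy, ram_energy = 0.004611, 0.023707, 0.000046
energy_consumed = 0.028363  # kWh, as reported
emissions = 0.01047         # kg CO2eq, as reported

total = cpu_energy + gpu_energy + ram_energy
assert abs(total - energy_consumed) < 1e-5  # agrees up to rounding

# Implied grid carbon intensity for the reported region (virginia):
intensity = emissions / energy_consumed  # kg CO2eq per kWh
print(f"{intensity:.3f} kg CO2eq/kWh")
```

The implied intensity comes out around 0.369 kg CO2eq/kWh, consistent across the runs in this preview, which suggests a fixed regional emission factor was applied.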
codecarbon run (fields per the codecarbon CSV schema):
timestamp: 2024-10-25T00:35:39
project_name: codecarbon
run_id: a1e5425c-b4cf-4050-aa70-fcfa28aeb165
duration (s): -1729543081.441758
emissions (kg CO2eq): 0.004817
emissions_rate (kg CO2eq/s): 0.000034
cpu_power (W): 42.5
gpu_power (W): 293.043105
ram_power (W): 0.420353
cpu_energy (kWh): 0.001651
gpu_energy (kWh): 0.011383
ram_energy (kWh): 0.000016
energy_consumed (kWh): 0.01305
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 48
cpu_model: AMD EPYC 7R32
gpu_count: 1
gpu_model: 1 x NVIDIA A10G
longitude: -77.4903
latitude: 39.0469
ram_total_size (GB): 186.704788
tracking_mode: process
on_cloud: N
pue: 1
codecarbon run (fields per the codecarbon CSV schema):
timestamp: 2024-10-25T00:12:18
project_name: codecarbon
run_id: a1e5425c-b4cf-4050-aa70-fcfa28aeb165
duration (s): -1729543221.2494
emissions (kg CO2eq): 0
emissions_rate (kg CO2eq/s): 0.000005
cpu_power (W): 42.5
gpu_power (W): 0
ram_power (W): 0.296465
cpu_energy (kWh): 0
gpu_energy (kWh): 0
ram_energy (kWh): 0
energy_consumed (kWh): 0
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 48
cpu_model: AMD EPYC 7R32
gpu_count: 1
gpu_model: 1 x NVIDIA A10G
longitude: -77.4903
latitude: 39.0469
ram_total_size (GB): 186.704788
tracking_mode: process
on_cloud: N
pue: 1
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.00042952262899040045, "ram": 0.000004223514289139063, "gpu": 0.0027924934839930414, "total": 0.0032262396272725808 }, "efficiency": { "unit": "tokens/kWh", "value": 77907728.2032169 }, "measures": [ { "unit": "kWh", "cpu": 0.0004767674529698627, "ram": 0.000004686756691400073, "gpu": 0.0030707632899420467, "total": 0.003552217499603309 }, { "unit": "kWh", "cpu": 0.0004761168440573885, "ram": 0.00000468156140578368, "gpu": 0.003081669132000142, "total": 0.003562467537463315 }, { "unit": "kWh", "cpu": 0.0004762781842692575, "ram": 0.000004683629835662878, "gpu": 0.0031047785949318563, "total": 0.003585740409036777 }, { "unit": "kWh", "cpu": 0.00047698079691387333, "ram": 0.000004690545441760647, "gpu": 0.0031101838770339896, "total": 0.0035918552193896242 }, { "unit": "kWh", "cpu": 0.0004774678291462704, "ram": 0.000004694936535922736, "gpu": 0.003104798594947944, "total": 0.0035869613606301365 }, { "unit": "kWh", "cpu": 0.00047772778730270334, "ram": 0.000004697792063112075, "gpu": 0.003111502489200113, "total": 0.003593928068565927 }, { "unit": "kWh", "cpu": 0, "ram": 0, "gpu": 0, "total": 0 }, { "unit": "kWh", "cpu": 0.00047796141129025863, "ram": 0.000004699928314731336, "gpu": 0.0031138961022261213, "total": 0.00359655744183111 }, { "unit": "kWh", "cpu": 0.0004777220907558029, "ram": 0.0000046975279509393065, "gpu": 0.0031058152624279245, "total": 0.0035882348811346644 }, { "unit": "kWh", "cpu": 0.00047820389319858797, "ram": 0.000004702464652077899, "gpu": 0.0031215274972202778, "total": 0.0036044338550709418 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.0016164460666061049, "ram": 0.00001590753237599822, "gpu": 0.005067730581958951, "total": 0.006700084180941058 }, "efficiency": { "unit": "tokens/kWh", "value": 1343266.704857417 }, "measures": [ { "unit": "kWh", "cpu": 0.001785880177408763, "ram": 0.000017576060515605916, "gpu": 0.005726895414845945, "total": 0.007530351652770312 }, { "unit": "kWh", "cpu": 0.001794188638455832, "ram": 0.000017656503714432373, "gpu": 0.005638713955411712, "total": 0.007450559097581981 }, { "unit": "kWh", "cpu": 0.0017975180591892734, "ram": 0.00001768867778453828, "gpu": 0.0056178475498300795, "total": 0.007433054286803891 }, { "unit": "kWh", "cpu": 0.0017974488683394562, "ram": 0.00001768841365471345, "gpu": 0.005605810873534178, "total": 0.007420948155528349 }, { "unit": "kWh", "cpu": -0.0004774678291462704, "ram": -0.000004694936535922736, "gpu": -0.003104798594947944, "total": -0.0035869613606301365 }, { "unit": "kWh", "cpu": 0.0017941301622098576, "ram": 0.00001765602997296569, "gpu": 0.005613215879458178, "total": 0.007425002071640999 }, { "unit": "kWh", "cpu": 0.0022829135469983955, "ram": 0.00002246275783147848, "gpu": 0.008738117823821945, "total": 0.011043494128651826 }, { "unit": "kWh", "cpu": 0.0018028321023171477, "ram": 0.000017741966243245834, "gpu": 0.005622537275803641, "total": 0.007443111344364043 }, { "unit": "kWh", "cpu": 0.0017882973749673214, "ram": 0.000017598984262985416, "gpu": 0.0056052753175501735, "total": 0.007411171676780511 }, { "unit": "kWh", "cpu": 0.0017987195653212731, "ram": 0.00001770086631593952, "gpu": 0.005613690324281606, "total": 0.007430110755918824 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": null, "efficiency": null, "measures": null }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.000010432217698123875, "ram": 7.020663859453463e-8, "gpu": 0.000018202236783837478, "total": 0.00002870466112055589 }, "efficiency": { "unit": "samples/kWh", "value": 34837547.66517286 }, "measures": null }
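In each report above, the top-level `energy` entries appear to be the arithmetic mean of the ten per-iteration `measures`, and each `total` the sum of the cpu, ram, and gpu components. A sketch of a consistency check for report dicts of this shape (the `tol` parameter is a hypothetical knob; real measures need a looser tolerance than the toy example below):

```python
def check_report(report: dict, tol: float = 1e-9) -> None:
    """Verify the two invariants the reports above appear to satisfy:
    total == cpu + ram + gpu, and each aggregate == mean over measures."""
    energy = report["energy"]
    parts = energy["cpu"] + energy["ram"] + energy["gpu"]
    assert abs(energy["total"] - parts) < tol
    measures = report.get("measures") or []
    if measures:
        for key in ("cpu", "ram", "gpu", "total"):
            mean = sum(m[key] for m in measures) / len(measures)
            assert abs(mean - energy[key]) < tol

# Toy usage with a two-iteration report of the same shape:
report = {
    "energy": {"cpu": 1.5, "ram": 0.5, "gpu": 2.0, "total": 4.0},
    "measures": [
        {"cpu": 1.0, "ram": 0.0, "gpu": 2.0, "total": 3.0},
        {"cpu": 2.0, "ram": 1.0, "gpu": 2.0, "total": 5.0},
    ],
}
check_report(report)  # no assertion fires
```

Note that the second report above contains one all-zero iteration and one negative iteration among its `measures`; both still enter the mean, which is worth keeping in mind when comparing the aggregate energies across models.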
task: text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "openai-community/gpt2-large", "processor": "openai-community/gpt2-large", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 48, "cpu_ram_mb": 200472.73984, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
codecarbon run (fields per the codecarbon CSV schema):
timestamp: 2024-10-25T15:44:24
project_name: codecarbon
run_id: e4243e0d-df87-44ee-aaf5-8b066e4079be
duration (s): -1729590803.420823
emissions (kg CO2eq): 0.004073
emissions_rate (kg CO2eq/s): 0.000021
cpu_power (W): 42.5
gpu_power (W): 163.052949
ram_power (W): 0.418186
cpu_energy (kWh): 0.002277
gpu_energy (kWh): 0.008735
ram_energy (kWh): 0.000022
energy_consumed (kWh): 0.011035
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 48
cpu_model: AMD EPYC 7R32
gpu_count: 1
gpu_model: 1 x NVIDIA A10G
longitude: -77.4903
latitude: 39.0469
ram_total_size (GB): 186.704788
tracking_mode: process
on_cloud: N
pue: 1
codecarbon run (fields per the codecarbon CSV schema):
timestamp: 2024-10-25T15:12:19
project_name: codecarbon
run_id: e4243e0d-df87-44ee-aaf5-8b066e4079be
duration (s): -1729590955.782403
emissions (kg CO2eq): 0.001331
emissions_rate (kg CO2eq/s): 0.000033
cpu_power (W): 42.5
gpu_power (W): 277.450426
ram_power (W): 0.417975
cpu_energy (kWh): 0.000478
gpu_energy (kWh): 0.003122
ram_energy (kWh): 0.000005
energy_consumed (kWh): 0.003604
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 48
cpu_model: AMD EPYC 7R32
gpu_count: 1
gpu_model: 1 x NVIDIA A10G
longitude: -77.4903
latitude: 39.0469
ram_total_size (GB): 186.704788
tracking_mode: process
on_cloud: N
pue: 1
codecarbon run (fields per the codecarbon CSV schema):
timestamp: 2024-10-25T15:05:34
project_name: codecarbon
run_id: e4243e0d-df87-44ee-aaf5-8b066e4079be
duration (s): -1729590995.405234
emissions (kg CO2eq): 0.000011
emissions_rate (kg CO2eq/s): 0.000012
cpu_power (W): 42.5
gpu_power (W): 74.342119
ram_power (W): 0.286996
cpu_energy (kWh): 0.00001
gpu_energy (kWh): 0.000018
ram_energy (kWh): 0
energy_consumed (kWh): 0.000029
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 48
cpu_model: AMD EPYC 7R32
gpu_count: 1
gpu_model: 1 x NVIDIA A10G
longitude: -77.4903
latitude: 39.0469
ram_total_size (GB): 186.704788
tracking_mode: process
on_cloud: N
pue: 1
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.00010862582424872218, "ram": 0.0000010473600411980985, "gpu": 0.0005686025937706951, "total": 0.0006782757780606155 }, "efficiency": { "unit": "tokens/kWh", "value": 370570508.5307317 }, "measures": [ { "unit": "kWh", "cpu": 0.00012129653973990776, "ram": 0.0000011692015629671131, "gpu": 0.0006198074402909981, "total": 0.000742273181593873 }, { "unit": "kWh", "cpu": 0.00012052802822428124, "ram": 0.0000011620962494660247, "gpu": 0.0006196766068518045, "total": 0.0007413667313255518 }, { "unit": "kWh", "cpu": 0.00012054917758181949, "ram": 0.0000011623102217448143, "gpu": 0.000627620779873439, "total": 0.0007493322676770033 }, { "unit": "kWh", "cpu": 0.00012112848713695715, "ram": 0.000001167999862259742, "gpu": 0.0006283013359746903, "total": 0.0007505978229739069 }, { "unit": "kWh", "cpu": 0.00012062290482305494, "ram": 0.0000011631245148831908, "gpu": 0.0006336207846739939, "total": 0.0007554068140119318 }, { "unit": "kWh", "cpu": 0.00012057837473928678, "ram": 0.00000116270090665448, "gpu": 0.0006421232914766151, "total": 0.0007638643671225567 }, { "unit": "kWh", "cpu": 0, "ram": 0, "gpu": 0, "total": 0 }, { "unit": "kWh", "cpu": 0.00012037484621849016, "ram": 0.0000011605793702521158, "gpu": 0.0006367591205176382, "total": 0.0007582945461063811 }, { "unit": "kWh", "cpu": 0.00012038277642305837, "ram": 0.000001160829471870166, "gpu": 0.0006381185660497835, "total": 0.0007596621719447123 }, { "unit": "kWh", "cpu": 0.00012079710760036608, "ram": 0.000001164758251883339, "gpu": 0.0006399980119979887, "total": 0.0007619598778502382 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.0005922381165179873, "ram": 0.000005713493398350993, "gpu": 0.001357642836113282, "total": 0.0019555944460296197 }, "efficiency": { "unit": "tokens/kWh", "value": 4602181.202893274 }, "measures": [ { "unit": "kWh", "cpu": 0.0006625658388062021, "ram": 0.000006391879774432742, "gpu": 0.001531233169428603, "total": 0.002200190888009238 }, { "unit": "kWh", "cpu": 0.0006620358880190504, "ram": 0.000006386855273198218, "gpu": 0.0015386967865120837, "total": 0.002207119529804332 }, { "unit": "kWh", "cpu": 0.0006593189439701547, "ram": 0.000006360662992191214, "gpu": 0.0015147431562390068, "total": 0.0021804227632013515 }, { "unit": "kWh", "cpu": 0.0006562932787400821, "ram": 0.000006331402237152975, "gpu": 0.0015051423152234733, "total": 0.00216776699620071 }, { "unit": "kWh", "cpu": -0.00012062290482305494, "ram": -0.0000011631245148831908, "gpu": -0.0006336207846739939, "total": -0.0007554068140119318 }, { "unit": "kWh", "cpu": 0.0006550399847062287, "ram": 0.000006319301092105456, "gpu": 0.0014898109140686344, "total": 0.002151170199866965 }, { "unit": "kWh", "cpu": 0.000776585881390646, "ram": 0.000007491333902871301, "gpu": 0.002135279486000563, "total": 0.00291935670129408 }, { "unit": "kWh", "cpu": 0.0006576145317967329, "ram": 0.000006344362028391807, "gpu": 0.0015009384229731637, "total": 0.002164897316798289 }, { "unit": "kWh", "cpu": 0.0006543893637049931, "ram": 0.000006313083185095556, "gpu": 0.0014967336973858991, "total": 0.002157436144275983 }, { "unit": "kWh", "cpu": 0.0006591603588688387, "ram": 0.0000063591780129538475, "gpu": 0.0014974711979753863, "total": 0.002162990734857177 } ] }
{ "memory": null, "latency": null, "throughput": null, "energy": null, "efficiency": null, "measures": null }
{ "memory": null, "latency": null, "throughput": null, "energy": { "unit": "kWh", "cpu": 0.000010461925458002953, "ram": 6.982946294395582e-8, "gpu": 0.00001796279214882901, "total": 0.00002849454706977592 }, "efficiency": { "unit": "samples/kWh", "value": 35094433.94735328 }, "measures": null }
task: text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "openai-community/gpt2", "processor": "openai-community/gpt2", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " AMD EPYC 7R32", "cpu_count": 48, "cpu_ram_mb": 200472.73984, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A10G" ], "gpu_count": 1, "gpu_vram_mb": 24146608128, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }
codecarbon run (fields per the codecarbon CSV schema):
timestamp: 2024-10-03T19:04:17
project_name: codecarbon
run_id: a786915c-f253-4a5e-be38-23af937e8b0d
duration (s): -1727580592.207487
emissions (kg CO2eq): 0.00108
emissions_rate (kg CO2eq/s): 0.000016
cpu_power (W): 42.5
gpu_power (W): 116.475072
ram_power (W): 0.409999
cpu_energy (kWh): 0.00078
gpu_energy (kWh): 0.002137
ram_energy (kWh): 0.000008
energy_consumed (kWh): 0.002925
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 48
cpu_model: AMD EPYC 7R32
gpu_count: 1
gpu_model: 1 x NVIDIA A10G
longitude: -77.4903
latitude: 39.0469
ram_total_size (GB): 186.704788
tracking_mode: process
on_cloud: N
pue: 1
codecarbon run (fields per the codecarbon CSV schema):
timestamp: 2024-10-03T18:53:18
project_name: codecarbon
run_id: a786915c-f253-4a5e-be38-23af937e8b0d
duration (s): -1727580648.042218
emissions (kg CO2eq): 0.000281
emissions_rate (kg CO2eq/s): 0.000027
cpu_power (W): 42.5
gpu_power (W): 225.233591
ram_power (W): 0.409939
cpu_energy (kWh): 0.000121
gpu_energy (kWh): 0.00064
ram_energy (kWh): 0.000001
energy_consumed (kWh): 0.000762
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 48
cpu_model: AMD EPYC 7R32
gpu_count: 1
gpu_model: 1 x NVIDIA A10G
longitude: -77.4903
latitude: 39.0469
ram_total_size (GB): 186.704788
tracking_mode: process
on_cloud: N
pue: 1
codecarbon run (fields per the codecarbon CSV schema):
timestamp: 2024-10-03T18:51:35
project_name: codecarbon
run_id: a786915c-f253-4a5e-be38-23af937e8b0d
duration (s): -1727580657.388506
emissions (kg CO2eq): 0.000011
emissions_rate (kg CO2eq/s): 0.000012
cpu_power (W): 42.5
gpu_power (W): 73.154842
ram_power (W): 0.284643
cpu_energy (kWh): 0.00001
gpu_energy (kWh): 0.000018
ram_energy (kWh): 0
energy_consumed (kWh): 0.000028
country_name: United States
country_iso_code: USA
region: virginia
os: Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35
python_version: 3.9.20
codecarbon_version: 2.5.1
cpu_count: 48
cpu_model: AMD EPYC 7R32
gpu_count: 1
gpu_model: 1 x NVIDIA A10G
longitude: -77.4903
latitude: 39.0469
ram_total_size (GB): 186.704788
tracking_mode: process
on_cloud: N
pue: 1
task: text_generation
{ "name": "pytorch", "version": "2.4.0", "_target_": "optimum_benchmark.backends.pytorch.backend.PyTorchBackend", "task": "text-generation", "model": "NousResearch/Hermes-3-Llama-3.1-70B", "processor": "NousResearch/Hermes-3-Llama-3.1-70B", "library": "transformers", "device": "cuda", "device_ids": "0", "seed": 42, "inter_op_num_threads": null, "intra_op_num_threads": null, "hub_kwargs": { "revision": "main", "force_download": false, "local_files_only": false, "trust_remote_code": true }, "no_weights": true, "device_map": null, "torch_dtype": null, "amp_autocast": false, "amp_dtype": null, "eval_mode": true, "to_bettertransformer": false, "low_cpu_mem_usage": null, "attn_implementation": null, "cache_implementation": null, "torch_compile": false, "torch_compile_config": {}, "quantization_scheme": null, "quantization_config": {}, "deepspeed_inference": false, "deepspeed_inference_config": {}, "peft_type": null, "peft_config": {} }
{ "name": "process", "_target_": "optimum_benchmark.launchers.process.launcher.ProcessLauncher", "device_isolation": false, "device_isolation_action": "warn", "start_method": "spawn" }
{ "name": "energy_star", "_target_": "optimum_benchmark.benchmarks.energy_star.benchmark.EnergyStarBenchmark", "dataset_name": "EnergyStarAI/text_generation", "dataset_config": "", "dataset_split": "train", "num_samples": 1000, "input_shapes": { "batch_size": 1 }, "text_column_name": "text", "truncation": true, "max_length": -1, "dataset_prefix1": "", "dataset_prefix2": "", "t5_task": "", "image_column_name": "image", "resize": false, "question_column_name": "question", "context_column_name": "context", "sentence1_column_name": "sentence1", "sentence2_column_name": "sentence2", "audio_column_name": "audio", "iterations": 10, "warmup_runs": 10, "energy": true, "forward_kwargs": {}, "generate_kwargs": { "max_new_tokens": 10, "min_new_tokens": 10 }, "call_kwargs": {} }
{ "cpu": " Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz", "cpu_count": 96, "cpu_ram_mb": 1204530.110464, "system": "Linux", "machine": "x86_64", "platform": "Linux-5.10.192-183.736.amzn2.x86_64-x86_64-with-glibc2.35", "processor": "x86_64", "python_version": "3.9.20", "gpu": [ "NVIDIA A100-SXM4-40GB" ], "gpu_count": 1, "gpu_vram_mb": 42949672960, "optimum_benchmark_version": "0.2.0", "optimum_benchmark_commit": null, "transformers_version": "4.44.0", "transformers_commit": null, "accelerate_version": "0.33.0", "accelerate_commit": null, "diffusers_version": "0.30.0", "diffusers_commit": null, "optimum_version": null, "optimum_commit": null, "timm_version": null, "timm_commit": null, "peft_version": null, "peft_commit": null }