---
dataset_info:
  features:
  - name: task
    dtype: string
  - name: org
    dtype: string
  - name: model
    dtype: string
  - name: hardware
    dtype: string
  - name: date
    dtype: string
  - name: prefill
    struct:
    - name: efficiency
      struct:
      - name: unit
        dtype: string
      - name: value
        dtype: float64
    - name: energy
      struct:
      - name: cpu
        dtype: float64
      - name: gpu
        dtype: float64
      - name: ram
        dtype: float64
      - name: total
        dtype: float64
      - name: unit
        dtype: string
  - name: decode
    struct:
    - name: efficiency
      struct:
      - name: unit
        dtype: string
      - name: value
        dtype: float64
    - name: energy
      struct:
      - name: cpu
        dtype: float64
      - name: gpu
        dtype: float64
      - name: ram
        dtype: float64
      - name: total
        dtype: float64
      - name: unit
        dtype: string
  - name: preprocess
    struct:
    - name: efficiency
      struct:
      - name: unit
        dtype: string
      - name: value
        dtype: float64
    - name: energy
      struct:
      - name: cpu
        dtype: float64
      - name: gpu
        dtype: float64
      - name: ram
        dtype: float64
      - name: total
        dtype: float64
      - name: unit
        dtype: string
  splits:
  - name: benchmark_results
    num_bytes: 1886
    num_examples: 7
  - name: train
    num_bytes: 2446
    num_examples: 9
  download_size: 75548
  dataset_size: 4332
configs:
- config_name: default
  data_files:
  - split: benchmark_results
    path: data/benchmark_results-*
  - split: train
    path: data/train-*
---
# Analysis of energy usage for HUGS models
Based on the `energy_star` branch of `optimum-benchmark`, using `codecarbon` to estimate energy consumption.
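For reference, here is a minimal sketch of how `codecarbon` meters a block of work using its standard `EmissionsTracker` API; `run_stage` is a hypothetical stand-in, and exactly how the `energy_star` branch wires the tracker into each benchmark stage is an assumption here:

```python
import time

from codecarbon import EmissionsTracker


def run_stage() -> None:
    """Stand-in workload; in the real benchmark this would be a model stage."""
    time.sleep(2)


# codecarbon samples CPU, GPU, and RAM power while the tracker is running.
tracker = EmissionsTracker(measure_power_secs=1)
tracker.start()
run_stage()
tracker.stop()

# Per-component energy in kWh, analogous to the cpu/gpu/ram/total
# fields in the energy structs above.
data = tracker.final_emissions_data
print(data.cpu_energy, data.gpu_energy, data.ram_energy, data.energy_consumed)
```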
## Fields
- `task`: Task the model was benchmarked on.
- `org`: Organization hosting the model.
- `model`: The specific model. Model names on the Hugging Face Hub are usually constructed as `{org}/{model}`.
- `hardware`: The hardware the benchmark was run on.
- `date`: The date the benchmark was run.
- `prefill`: The estimated energy and efficiency for prefilling.
- `decode`: The estimated energy and efficiency for decoding.
- `preprocess`: The estimated energy and efficiency for preprocessing.

Each of `prefill`, `decode`, and `preprocess` is a struct holding per-component `energy` readings (`cpu`, `gpu`, `ram`, `total`, plus a `unit`) and an `efficiency` value with its unit; see the loading example below.
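A minimal sketch of loading the data and reading the nested structs (the repo id below is a placeholder for this dataset's actual id on the Hub):

```python
from datasets import load_dataset

# Placeholder repo id; substitute this dataset's actual id.
ds = load_dataset("org/dataset_name", split="benchmark_results")

for row in ds:
    # Struct features come back as plain nested dicts.
    decode_energy = row["decode"]["energy"]
    print(
        row["model"],
        decode_energy["total"],
        decode_energy["unit"],
        row["decode"]["efficiency"]["value"],
        row["decode"]["efficiency"]["unit"],
    )
```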
## Code to Reproduce
While developing, I'm hopping between https://huggingface.co/spaces/AIEnergyScore/benchmark-hugs-models and https://huggingface.co/spaces/meg/CalculateCarbon.
From there, `python code/make_pretty_dataset.py` (included in this repository) takes the raw results and uploads them to this dataset.
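The upload step presumably boils down to something like the following sketch; the raw-results layout and repo id are assumptions, and the actual `code/make_pretty_dataset.py` is the source of truth:

```python
import json
from pathlib import Path

from datasets import Dataset

# Assumed layout: one JSON record per benchmark run, with keys matching
# the schema in this card's metadata.
rows = [json.loads(p.read_text()) for p in Path("raw_results").glob("*.json")]

# Placeholder repo id; substitute this dataset's actual id on the Hub.
Dataset.from_list(rows).push_to_hub("org/dataset_name", split="train")
```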