---
dataset_info:
  features:
  - name: task
    dtype: string
  - name: org
    dtype: string
  - name: model
    dtype: string
  - name: hardware
    dtype: string
  - name: date
    dtype: string
  - name: prefill
    struct:
    - name: efficency
      struct:
      - name: unit
        dtype: string
      - name: value
        dtype: float64
    - name: energy
      struct:
      - name: cpu
        dtype: float64
      - name: gpu
        dtype: float64
      - name: ram
        dtype: float64
      - name: total
        dtype: float64
      - name: unit
        dtype: string
  - name: decode
    struct:
    - name: efficiency
      struct:
      - name: unit
        dtype: string
      - name: value
        dtype: float64
    - name: energy
      struct:
      - name: cpu
        dtype: float64
      - name: gpu
        dtype: float64
      - name: ram
        dtype: float64
      - name: total
        dtype: float64
      - name: unit
        dtype: string
  - name: preprocess
    struct:
    - name: efficiency
      struct:
      - name: unit
        dtype: string
      - name: value
        dtype: float64
    - name: energy
      struct:
      - name: cpu
        dtype: float64
      - name: gpu
        dtype: float64
      - name: ram
        dtype: float64
      - name: total
        dtype: float64
      - name: unit
        dtype: string
  splits:
  - name: benchmark_results
    num_bytes: 1886
    num_examples: 7
  - name: train
    num_bytes: 2446
    num_examples: 9
  download_size: 75548
  dataset_size: 4332
configs:
- config_name: default
  data_files:
  - split: benchmark_results
    path: data/train-*
  - split: train
    path: data/train-*
---
# Analysis of energy usage for HUGS models
Based on the [energy_star branch](https://github.com/huggingface/optimum-benchmark/tree/energy_star_dev) of [optimum-benchmark](https://github.com/huggingface/optimum-benchmark), and using [codecarbon](https://pypi.org/project/codecarbon/2.1.4/).
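For context on where the energy numbers come from: codecarbon wraps the benchmarked workload in a tracker. Below is a minimal sketch of that pattern, not the exact optimum-benchmark integration; the workload function is a placeholder.

```python
from codecarbon import EmissionsTracker

def run_workload():
    # Placeholder standing in for the benchmarked model phase
    # (preprocess, prefill, or decode).
    return sum(i * i for i in range(10_000_000))

tracker = EmissionsTracker()
tracker.start()
run_workload()
emissions_kg = tracker.stop()  # estimated kg CO2eq for the tracked span
print(f"Estimated emissions: {emissions_kg} kg CO2eq")
```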
# Fields
- **task**: Task the model was benchmarked on.
- **org**: Organization hosting the model.
- **model**: The specific model. Model names on the Hugging Face Hub are usually constructed as {org}/{model}.
- **hardware**: The hardware the benchmark was run on.
- **date**: The date that the benchmark was run.
- **prefill**: The estimated energy and efficiency for prefilling.
- **decode**: The estimated energy and efficiency for decoding.
- **preprocess**: The estimated energy and efficiency for preprocessing.
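To make the nested schema concrete, here is a minimal sketch of loading the dataset with the `datasets` library and reading one row. The repository id is a placeholder for this dataset's actual path.

```python
from datasets import load_dataset

# "org/dataset-name" is a placeholder; substitute this dataset's actual repo id.
ds = load_dataset("org/dataset-name", split="train")

row = ds[0]
print(row["task"], row["org"], row["model"], row["hardware"], row["date"])

# prefill, decode, and preprocess are structs with an energy breakdown
# (cpu, gpu, ram, total, unit) and an efficiency measurement (value, unit).
prefill = row["prefill"]
print(prefill["energy"]["total"], prefill["energy"]["unit"])
# Note: the prefill efficiency field is spelled "efficency" in the schema.
print(prefill["efficency"]["value"], prefill["efficency"]["unit"])
```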
# Code to Reproduce
While this is still under development, I'm alternating between https://huggingface.co/spaces/AIEnergyScore/benchmark-hugs-models and https://huggingface.co/spaces/meg/CalculateCarbon to run the benchmarks.
From there, `python code/make_pretty_dataset.py` (included in this repository) takes the raw results and uploads them to this dataset.
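The upload step has roughly this shape. This is a hedged sketch rather than the actual contents of `make_pretty_dataset.py`; the record values, units, and repo id are placeholders.

```python
from datasets import Dataset

# Placeholder record illustrating the schema; the real script builds these
# from the raw optimum-benchmark results.
records = [
    {
        "task": "text_generation",
        "org": "example-org",
        "model": "example-model",
        "hardware": "example-gpu",
        "date": "2024-01-01",
        "prefill": {
            "efficency": {"unit": "tokens/kWh", "value": 0.0},
            "energy": {"cpu": 0.0, "gpu": 0.0, "ram": 0.0, "total": 0.0, "unit": "kWh"},
        },
        "decode": {
            "efficiency": {"unit": "tokens/kWh", "value": 0.0},
            "energy": {"cpu": 0.0, "gpu": 0.0, "ram": 0.0, "total": 0.0, "unit": "kWh"},
        },
        "preprocess": {
            "efficiency": {"unit": "samples/kWh", "value": 0.0},
            "energy": {"cpu": 0.0, "gpu": 0.0, "ram": 0.0, "total": 0.0, "unit": "kWh"},
        },
    }
]

# "org/dataset-name" is a placeholder for this dataset's repo id.
Dataset.from_list(records).push_to_hub("org/dataset-name")
```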