
Unofficial dequantized weights of grok-1 in HF Transformers format.

Note: If you haven't downloaded the weights yet, please use the fp32 revision instead, which uses float32 precision for the RMSNorm and Router layers for better consistency.

The (fp32) weights were converted using the script here, run inside the grok-1 repo. Since downloading the dequantized weights takes twice as long, it's recommended to download the original weights and convert them yourself.
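
For reference, a minimal loading sketch with Transformers. The `revision="fp32"` name and the tokenizer source are assumptions based on the note above, not something this card specifies; adjust them to whatever the repo actually provides.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fp32-revision weights. trust_remote_code is required because the
# repo ships custom modeling code; device_map="auto" shards the very large
# model across available devices.
model = AutoModelForCausalLM.from_pretrained(
    "keyfan/grok-1-hf",
    revision="fp32",          # assumed revision name, see the note above
    trust_remote_code=True,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

# Tokenizer source is an assumption -- substitute whichever grok-1 tokenizer you use.
tokenizer = AutoTokenizer.from_pretrained("keyfan/grok-1-hf", trust_remote_code=True)

inputs = tokenizer("The answer to life, the universe and everything is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```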

## Benchmarks

(I ran the evaluation with load_in_8bit using lm-evaluation-harness due to limited hardware, so the results will be slightly worse than at full precision.)

- MMLU (5-shot): 0.7166
- BBH (3-shot): 0.5204
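
A rough sketch of the 8-bit loading used for these numbers. The evaluation itself was run through lm-evaluation-harness; the snippet below only approximates how the model was loaded, not the exact eval command.

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 8-bit weight quantization via bitsandbytes, roughly matching the
# load_in_8bit setting mentioned above.
model = AutoModelForCausalLM.from_pretrained(
    "keyfan/grok-1-hf",
    trust_remote_code=True,
    device_map="auto",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
)
```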