---
base_model: allenai/OLMo-7B-0424-hf
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- olmo
quantized_by: robolamp
---
Quantized GGUF versions of [allenai/OLMo-7B-0424-hf](https://huggingface.co/allenai/OLMo-7B-0424-hf).
NB: Q8_K is not supported by stock llama.cpp; use Q8_0 instead.
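As a usage sketch, a quantized GGUF file from this repo can be run locally with llama.cpp's `llama-cli`. The filename below is hypothetical; check the "Files and versions" tab for the actual quant names available here.

```shell
# Run a quantized OLMo GGUF with llama.cpp.
# The .gguf filename is a placeholder; substitute one of the files
# from this repo (per the note above, prefer Q8_0 over Q8_K).
./llama-cli \
    -m OLMo-7B-0424-Q8_0.gguf \
    -p "Language modeling is" \
    -n 128   # generate up to 128 tokens
```

Lower-bit quants (e.g. Q4 variants) trade some output quality for a smaller file and lower RAM use; Q8_0 stays closest to the original weights.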
Bits-per-weight vs. file-size plot:
TODO: readme