---
license: apache-2.0
pipeline_tag: text-generation
library_name: gguf
base_model: fblgit/UNA-SimpleSmaug-34b-v1beta
---
**NOTE**: You will need a recent build of llama.cpp to run these quants (i.e., at least commit `494c870`).

GGUF importance matrix (imatrix) quants for https://huggingface.co/fblgit/UNA-SimpleSmaug-34b-v1beta

* The importance matrix was trained for ~50K tokens (105 batches of 512 tokens) using a [general purpose imatrix calibration dataset](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384).
* The [imatrix is also applied to the K-quants](https://github.com/ggerganov/llama.cpp/pull/4930).

| Layers | Context | [Template](https://huggingface.co/fblgit/UNA-SimpleSmaug-34b-v1beta/blob/main/tokenizer_config.json#L31) |
| --- | --- | --- |
| 60 | 32768 | \<\|startoftext\|\>[INST] \<\<SYS\>\><br>{instructions}<br>\<\</SYS\>\><br><br>{prompt} [/INST] |
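
The template column above can be filled in programmatically before sending text to the model. A minimal sketch, assuming the Llama-2-style `<<SYS>>` wrapping shown in the table; `build_prompt` is an illustrative helper, not part of llama.cpp or any library:

```python
# Sketch: fill the chat template from the table above.
# The <<SYS>> system-prompt wrapping follows the template column;
# build_prompt() is a hypothetical helper for illustration only.
TEMPLATE = (
    "<|startoftext|>[INST] <<SYS>>\n"
    "{instructions}\n"
    "<</SYS>>\n\n"
    "{prompt} [/INST]"
)

def build_prompt(instructions: str, prompt: str) -> str:
    """Insert system instructions and the user prompt into the template."""
    return TEMPLATE.format(instructions=instructions, prompt=prompt)

print(build_prompt("You are a helpful assistant.", "Summarize this text."))
```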