---
base_model:
- Test157t/Pasta-Lake-7b
- Test157t/Prima-LelantaclesV4-7b-16k
library_name: transformers
tags:
- mistral
- quantized
- text-generation-inference
pipeline_tag: text-generation
inference: false
---
GGUF quantizations for ChaoticNeutrals/Prima-LelantaclesV5-7b.
If you want any specific quantization to be added, feel free to ask.
All credits belong to the respective creators.
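For reference, below is a minimal sketch of loading one of these GGUF quants with llama-cpp-python. The repository id and the quant filename are placeholders, not the actual file names in this repo; check the "Files and versions" tab for the real ones.

```python
# Minimal sketch: download a GGUF quant from the Hub and run it locally.
# The repo id and filename below are placeholders.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

model_path = hf_hub_download(
    repo_id="YOUR_NAMESPACE/Prima-LelantaclesV5-7b-GGUF",   # placeholder repo id
    filename="Prima-LelantaclesV5-7b-Q4_K_M.gguf",          # placeholder quant file
)

llm = Llama(model_path=model_path, n_ctx=8192)  # context size chosen for illustration
out = llm("Write a short greeting.", max_tokens=64)
print(out["choices"][0]["text"])
```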
Base ⇢ GGUF(F16) ⇢ GGUF(Quants)

Quantized with llama.cpp release b2222. For the --imatrix option, the included
reference file imatrix-Q8_0.dat was used.
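A sketch of what this Base ⇢ GGUF(F16) ⇢ GGUF(Quants) pipeline typically looks like with the llama.cpp b2222 tools is shown below. The paths, the local checkout location, and the chosen output quant type (Q4_K_M) are illustrative assumptions, not a record of the exact commands used here.

```python
# Sketch of the Base -> GGUF(F16) -> GGUF(Quants) pipeline with llama.cpp b2222.
# Run from the llama.cpp directory; paths and the quant type are assumptions.
import subprocess

HF_MODEL_DIR = "./Prima-LelantaclesV5-7b"                 # local checkout of the base model
F16_GGUF = "Prima-LelantaclesV5-7b-F16.gguf"
QUANT_GGUF = "Prima-LelantaclesV5-7b-Q4_K_M.gguf"

# 1. Convert the HF checkpoint to a full-precision (F16) GGUF.
subprocess.run(
    ["python", "convert.py", HF_MODEL_DIR, "--outtype", "f16", "--outfile", F16_GGUF],
    check=True,
)

# 2. Quantize the F16 GGUF, guided by the reference importance matrix.
subprocess.run(
    ["./quantize", "--imatrix", "imatrix-Q8_0.dat", F16_GGUF, QUANT_GGUF, "Q4_K_M"],
    check=True,
)
```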
Original model information:
SillyTavern (ST) presets are available in the original repository: https://huggingface.co/ChaoticNeutrals/Prima-LelantaclesV5-7b/tree/main/ST%20presets
This model was merged using the DARE TIES merge method, with Test157t/Prima-LelantaclesV4-7b-16k as the base.

The following models were included in the merge:
- Test157t/Pasta-Lake-7b
- Test157t/Prima-LelantaclesV4-7b-16k
Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: Test157t/Prima-LelantaclesV4-7b-16k
parameters:
  normalize: true
models:
  - model: Test157t/Pasta-Lake-7b
    parameters:
      weight: 1
  - model: Test157t/Prima-LelantaclesV4-7b-16k
    parameters:
      weight: 1
dtype: float16
```
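If you want to reproduce the merge itself, a minimal sketch using the mergekit command-line entry point is shown below; the config file name and the output directory are arbitrary choices, and the source models are pulled from the Hub on first run.

```python
# Sketch: reproduce the DARE TIES merge with mergekit (pip install mergekit).
# The config is the YAML shown above; file and output names are illustrative.
import subprocess
from pathlib import Path

config = """\
merge_method: dare_ties
base_model: Test157t/Prima-LelantaclesV4-7b-16k
parameters:
  normalize: true
models:
  - model: Test157t/Pasta-Lake-7b
    parameters:
      weight: 1
  - model: Test157t/Prima-LelantaclesV4-7b-16k
    parameters:
      weight: 1
dtype: float16
"""

Path("merge-config.yml").write_text(config)
subprocess.run(["mergekit-yaml", "merge-config.yml", "./Prima-LelantaclesV5-7b"], check=True)
```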