---
base_model: nbeerbower/Flammen-Bophades-7B
inference: false
library_name: transformers
license: apache-2.0
merged_models:
- nbeerbower/slerp-bophades-truthy-math-mistral-7B
- nbeerbower/flammen15-gutenberg-DPO-v1-7B
pipeline_tag: text-generation
quantized_by: Suparious
tags:
- 4-bit
- AWQ
- text-generation
- autotrain_compatible
- endpoints_compatible
- mergekit
- merge
---
# nbeerbower/Flammen-Bophades-7B AWQ

- Model creator: [nbeerbower](https://huggingface.co/nbeerbower/)
- Original model: [Flammen-Bophades-7B](https://huggingface.co/nbeerbower/Flammen-Bophades-7B)
## Model Summary

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

The following models were included in the merge:
* [nbeerbower/slerp-bophades-truthy-math-mistral-7B](https://huggingface.co/nbeerbower/slerp-bophades-truthy-math-mistral-7B)
* [nbeerbower/flammen15-gutenberg-DPO-v1-7B](https://huggingface.co/nbeerbower/flammen15-gutenberg-DPO-v1-7B)
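For reference, a mergekit merge of two models like the ones above is driven by a small YAML config. The sketch below is purely illustrative: the actual merge method, base model choice, interpolation factor, and dtype used for Flammen-Bophades-7B are not stated in this card, so every parameter here is an assumption.

```yaml
# Hypothetical mergekit config -- method and parameters are assumptions,
# not the settings actually used to produce Flammen-Bophades-7B.
models:
  - model: nbeerbower/slerp-bophades-truthy-math-mistral-7B
  - model: nbeerbower/flammen15-gutenberg-DPO-v1-7B
merge_method: slerp          # assumed; mergekit also supports linear, ties, dare_ties, etc.
base_model: nbeerbower/slerp-bophades-truthy-math-mistral-7B  # assumed
parameters:
  t: 0.5                     # assumed interpolation factor between the two models
dtype: bfloat16              # assumed
```

With mergekit installed, such a config would be run with `mergekit-yaml config.yml ./output-model`.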