
Magnolia-v1-12B-GGUF

This repo contains GGUF quants of a merge of pre-trained language models created using mergekit; 4-bit, 5-bit, 6-bit, and 8-bit quantizations are provided. The base is a merge of two models trained for variety in text generation. Instruct was added at low weight in order to increase the steerability of the model; as a consequence, safety behavior is also reinforced.

Tested at temperature 0.7 and minP 0.01, with ChatML prompting.
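These settings map directly onto sampler parameters in llama-cpp-python. A minimal sketch follows; the GGUF file name, context size, and prompts are placeholders rather than part of this repo's documentation:

```python
# Minimal inference sketch, assuming `pip install llama-cpp-python`.
# The quant file name is illustrative; use whichever GGUF from this repo you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Magnolia-v1-12B.Q6_K.gguf",  # placeholder file name
    n_ctx=8192,                              # placeholder context size
    chat_format="chatml",                    # ChatML prompting, per the note above
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a creative writing assistant."},
        {"role": "user", "content": "Write the opening of a short mystery story."},
    ],
    temperature=0.7,  # tested temperature
    min_p=0.01,       # tested minP
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```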

Mistral Nemo models tend to have repetition issues in general. For this model at least, these can be mitigated somewhat with additional system prompting, e.g. the directives below (a sketch of passing them as a system message follows the list):

No passage shall exceed 10 lines of text, with turns limited to a maximum of 5 lines per speaker to ensure snappy and engaging dialog and action.
Ensure that all punctuation rules are adhered to without the introduction of spurious intervening spaces.
Avoid redundant phrasing and maintain forward narrative progression by utilizing varied sentence structure, alternative word choices, and active voice.
Employ descriptive details judiciously, ensuring they serve a purpose in advancing the story or revealing character or touching upon setting.
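A minimal sketch of wiring those directives into the system turn of the conversation, reusing the `llm` object from the earlier snippet (the user message is a placeholder):

```python
# Illustrative only: the directives above, joined into a single system prompt.
REPETITION_SYSPROMPT = "\n".join([
    "No passage shall exceed 10 lines of text, with turns limited to a maximum of "
    "5 lines per speaker to ensure snappy and engaging dialog and action.",
    "Ensure that all punctuation rules are adhered to without the introduction of "
    "spurious intervening spaces.",
    "Avoid redundant phrasing and maintain forward narrative progression by utilizing "
    "varied sentence structure, alternative word choices, and active voice.",
    "Employ descriptive details judiciously, ensuring they serve a purpose in advancing "
    "the story or revealing character or touching upon setting.",
])

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": REPETITION_SYSPROMPT},
        {"role": "user", "content": "Continue the scene in the tavern."},  # placeholder
    ],
    temperature=0.7,
    min_p=0.01,
)
```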

Merge Details

Merge Method

This model was merged using the SLERP merge method.
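SLERP (spherical linear interpolation) blends two models by interpolating each pair of weight tensors along the arc between them rather than along a straight line, which preserves the geometry of the weights better than plain averaging. Below is a minimal NumPy sketch of the core operation on flattened tensors; mergekit's actual implementation adds edge-case handling and applies this per tensor across the whole model:

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two flattened weight tensors."""
    u0 = v0 / (np.linalg.norm(v0) + eps)
    u1 = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(u0, u1), -1.0, 1.0)
    omega = np.arccos(dot)   # angle between the two weight directions
    if omega < eps:          # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * v0 + t * v1
    sin_omega = np.sin(omega)
    return (np.sin((1.0 - t) * omega) / sin_omega) * v0 \
         + (np.sin(t * omega) / sin_omega) * v1

# With t=0.1, as in the configuration below, the result stays close to one endpoint.
a = np.random.randn(4096)
b = np.random.randn(4096)
merged = slerp(0.1, a, b)
```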

Models Merged

The following models were included in the merge:

- grimjim/mistralai-Mistral-Nemo-Instruct-2407
- grimjim/magnum-consolidatum-v1-12b

Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: grimjim/mistralai-Mistral-Nemo-Instruct-2407
  - model: grimjim/magnum-consolidatum-v1-12b
merge_method: slerp
base_model: grimjim/mistralai-Mistral-Nemo-Instruct-2407
parameters:
  t:
    - value: 0.1
dtype: bfloat16
```
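To reproduce the merge, this configuration can be fed to mergekit from Python. The sketch below follows the API shown in mergekit's README, but names and signatures may vary between versions, and the output directory is a placeholder; the merged safetensors model would still need separate conversion to GGUF (e.g. via llama.cpp's conversion script).

```python
# Illustrative sketch, assuming `pip install mergekit` and the YAML above saved
# as merge-config.yaml. API per mergekit's README; may differ across versions.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("merge-config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./Magnolia-v1-12B",  # placeholder output directory
    options=MergeOptions(copy_tokenizer=True),
)
```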