Quantization made by Richard Erkhov.
DolphinStar-12.5B - GGUF
- Model creator: https://huggingface.co/Noodlz/
- Original model: https://huggingface.co/Noodlz/DolphinStar-12.5B/
| Name | Quant method | Size |
| --- | --- | --- |
| DolphinStar-12.5B.Q2_K.gguf | Q2_K | 4.33GB |
| DolphinStar-12.5B.IQ3_XS.gguf | IQ3_XS | 4.81GB |
| DolphinStar-12.5B.IQ3_S.gguf | IQ3_S | 5.07GB |
| DolphinStar-12.5B.Q3_K_S.gguf | Q3_K_S | 5.04GB |
| DolphinStar-12.5B.IQ3_M.gguf | IQ3_M | 5.24GB |
| DolphinStar-12.5B.Q3_K.gguf | Q3_K | 5.62GB |
| DolphinStar-12.5B.Q3_K_M.gguf | Q3_K_M | 5.62GB |
| DolphinStar-12.5B.Q3_K_L.gguf | Q3_K_L | 6.11GB |
| DolphinStar-12.5B.IQ4_XS.gguf | IQ4_XS | 6.3GB |
| DolphinStar-12.5B.Q4_0.gguf | Q4_0 | 6.57GB |
| DolphinStar-12.5B.IQ4_NL.gguf | IQ4_NL | 6.64GB |
| DolphinStar-12.5B.Q4_K_S.gguf | Q4_K_S | 6.62GB |
| DolphinStar-12.5B.Q4_K.gguf | Q4_K | 6.99GB |
| DolphinStar-12.5B.Q4_K_M.gguf | Q4_K_M | 6.99GB |
| DolphinStar-12.5B.Q4_1.gguf | Q4_1 | 7.29GB |
| DolphinStar-12.5B.Q5_0.gguf | Q5_0 | 8.01GB |
| DolphinStar-12.5B.Q5_K_S.gguf | Q5_K_S | 8.01GB |
| DolphinStar-12.5B.Q5_K.gguf | Q5_K | 8.22GB |
| DolphinStar-12.5B.Q5_K_M.gguf | Q5_K_M | 8.22GB |
| DolphinStar-12.5B.Q5_1.gguf | Q5_1 | 8.73GB |
| DolphinStar-12.5B.Q6_K.gguf | Q6_K | 9.53GB |
| DolphinStar-12.5B.Q8_0.gguf | Q8_0 | 12.35GB |
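To fetch one of these files programmatically, here is a minimal download sketch using huggingface_hub; the repo id below is an assumption based on the uploader's usual naming scheme, so verify it on the Hub before relying on it.

```python
# Minimal download sketch using huggingface_hub.
# ASSUMPTION: the repo id follows the uploader's usual naming scheme;
# check the actual id on huggingface.co before use.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="RichardErkhov/Noodlz_-_DolphinStar-12.5B-gguf",  # assumed repo id
    filename="DolphinStar-12.5B.Q4_K_M.gguf",  # any quant name from the table above
)
print(model_path)  # local path to the cached GGUF file
```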
Original model description:
license: apache-2.0
Custom model "Dolphin2Star1", merged by Noodlz: a 12.5B linear merge using the uncensored Mistral 7B v0.2 as the base, combined with fine-tunes of StarlingLM 7B Beta (which is originally based on Mistral 7B v0.1).
have fun =)
[EDIT] - Preset-wise, the model seems to prefer the "ChatML" format.
[EDIT 2] - Usage notes: the model is somewhat picky about batch size and prompt preset/template (possibly because it merges ChatML and OpenChat models).
My current recommended settings & findings:
- Using LM Studio - use the default preset, GPU acceleration set to max, prompt eval batch size 1024, and context length 32768. This yields decent, coherent results for me. ChatML works too but occasionally spits out odd text after a couple of turns. (A sketch with these settings follows this list.)
- Using Oobabooga (Windows PC) - runs well using load-in-4bit along with use_flash_attention_2. Default presets and everything work just fine.
- Using Oobabooga (Mac) - [investigating]
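For reference, a minimal llama-cpp-python sketch that mirrors the settings above (32768 context, 1024-token prompt batches, full GPU offload) with a ChatML-style prompt; the model path and sampling values are placeholders.

```python
# Inference sketch mirroring the recommended settings above.
# ASSUMPTIONS: llama-cpp-python is installed and the GGUF file is local.
from llama_cpp import Llama

llm = Llama(
    model_path="DolphinStar-12.5B.Q4_K_M.gguf",  # placeholder path
    n_ctx=32768,      # context length recommended above
    n_batch=1024,     # prompt eval batch size recommended above
    n_gpu_layers=-1,  # offload all layers ("GPU acceleration to max")
)

# ChatML-style prompt, matching the instruction template below.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nHello!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
out = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(out["choices"][0]["text"])
```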
Instruction Template:

```jinja
{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{{ '<s>' }}{% for message in messages %}{{'<|im_start|>' + message['role'] + '
' + message['content'] + '<|im_end|>' + '
'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
' }}{% endif %}
```
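To inspect what this template actually produces, a small rendering sketch with jinja2 (the messages are illustrative):

```python
# Render the instruction template above with jinja2 to inspect the prompt.
from jinja2 import Template

template_str = (
    "{% if not add_generation_prompt is defined %}"
    "{% set add_generation_prompt = false %}{% endif %}"
    "{{ '<s>' }}{% for message in messages %}"
    "{{ '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n' }}"
    "{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
print(Template(template_str).render(messages=messages, add_generation_prompt=True))
# <s><|im_start|>system
# You are a helpful assistant.<|im_end|>
# <|im_start|>user
# Hello!<|im_end|>
# <|im_start|>assistant
```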
Chat Template:

```jinja
{%- for message in messages %}
    {%- if message['role'] == 'system' -%}
        {%- if message['content'] -%}
            {{- message['content'] + '\n\n' -}}
        {%- endif -%}
        {%- if user_bio -%}
            {{- user_bio + '\n\n' -}}
        {%- endif -%}
    {%- else -%}
        {%- if message['role'] == 'user' -%}
            {{- name1 + ': ' + message['content'] + '\n' -}}
        {%- else -%}
            {{- name2 + ': ' + message['content'] + '\n' -}}
        {%- endif -%}
    {%- endif -%}
{%- endfor -%}
```
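(For context: `name1`, `name2`, and `user_bio` are variables that text-generation-webui injects in chat mode, so this second template targets Oobabooga's chat tab rather than raw API calls.)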
```yaml
license: apache-2.0
base_model:
  - cognitivecomputations/dolphin-2.8-mistral-7b-v02
  - NexusFlow/Starling-LM-7B-beta
library_name: transformers
tags:
  - mergekit
  - merge
```

output_folder
This is a merge of pre-trained language models created using mergekit.
Merge Details
Merge Method
This model was merged using the linear merge method.
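As a rough illustration of what the linear method does per tensor, here is a simplified sketch (not mergekit's actual implementation, which also handles slicing, tokenizers, and dtype conversion):

```python
# Simplified sketch of a linear merge on a single tensor (illustration only,
# not mergekit's actual code).
import torch

def linear_merge(tensors, weights):
    """Weight-normalized average of matching tensors from several models."""
    total = sum(weights)
    return sum((w / total) * t for t, w in zip(tensors, weights))

dolphin_w = torch.randn(8, 8)   # stand-in for a dolphin layer weight
starling_w = torch.randn(8, 8)  # stand-in for a Starling layer weight

# With weights 1.0 and 0 (as in the config below), only the first model survives.
merged = linear_merge([dolphin_w, starling_w], [1.0, 0.0])
assert torch.equal(merged, dolphin_w)
```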
Models Merged
The following models were included in the merge:
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
- NexusFlow/Starling-LM-7B-beta
Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear
parameters:
  weight: 1.0
slices:
- sources:
  - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
    layer_range: [0, 1]
  - model: NexusFlow/Starling-LM-7B-beta
    layer_range: [0, 1]
    parameters:
      weight: 0
- sources:
  - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
    layer_range: [1, 8]
- sources:
  - model: NexusFlow/Starling-LM-7B-beta
    layer_range: [4, 12]
- sources:
  - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
    layer_range: [8, 16]
- sources:
  - model: NexusFlow/Starling-LM-7B-beta
    layer_range: [12, 20]
- sources:
  - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
    layer_range: [16, 24]
- sources:
  - model: NexusFlow/Starling-LM-7B-beta
    layer_range: [20, 28]
- sources:
  - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
    layer_range: [24, 31]
- sources:
  - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
    layer_range: [31, 32]
  - model: NexusFlow/Starling-LM-7B-beta
    layer_range: [31, 32]
    parameters:
      weight: 0
dtype: float16
tokenizer_source: model:cognitivecomputations/dolphin-2.8-mistral-7b-v02
```
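Note how most slices take a block of layers from only one parent, so the config stacks layers rather than averaging them throughout. A quick back-of-the-envelope size check (assuming each parent is a standard 32-layer Mistral 7B):

```python
# Back-of-the-envelope size check for the slice layout above.
# ASSUMPTION: each parent is a standard 32-layer Mistral 7B.
slice_layers = [1, 7, 8, 8, 8, 8, 8, 7, 1]  # layers contributed per slice, in order
total_layers = sum(slice_layers)            # 56 layers vs. 32 in one parent
approx_params_b = 7 * total_layers / 32     # ~12.25B; roughly the "12.5B" in the name
print(total_layers, approx_params_b)
```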