
Quantization made by Richard Erkhov.

Github

Discord

Request more models

DolphinStar-12.5B - GGUF

Original model description:

license: apache-2.0


Custom model "Dolphin2Star1", merged by Noodlz. A 12.5B linear merge that uses the uncensored Dolphin 2.8 (Mistral 7B v0.2) as the base, combined with the Starling-LM 7B Beta fine-tune, which is itself based on Mistral 7B v0.1.

have fun =)

[EDIT] - Preset-wise, the model seems to prefer the "ChatML" format. [EDIT 2] - Usage notes: the model is somewhat picky about batch size and prompt preset/template (possibly because it is a merge of ChatML and OpenChat models).

My current recommended settings & findings

  • Using LM Studio - use the default preset, GPU acceleration maxed out, prompt eval batch size 1024, context length 32768. This yields decent, coherent results. ChatML works too, but occasionally spits out odd text after a couple of turns. (A llama-cpp-python sketch with equivalent settings follows this list.)
  • Using Oobabooga (Windows PC) - runs well using load-in-4bit along with use_flash_attention_2. The default presets all work just fine.
  • Using Oobabooga (Mac) - [investigating]
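
For programmatic use, the same recommendations (ChatML prompt, 32768 context, GPU offload, 1024-token prompt batches) translate roughly to llama-cpp-python as sketched below. This is only a sketch: the GGUF file name is a placeholder, so substitute whichever quantization from this repo you actually downloaded.

# Minimal sketch: load a GGUF quant of DolphinStar-12.5B with llama-cpp-python
# and chat using the ChatML format the model seems to prefer.
from llama_cpp import Llama

llm = Llama(
    model_path="DolphinStar-12.5B.Q4_K_M.gguf",  # placeholder file name
    n_ctx=32768,        # context length recommended above
    n_batch=1024,       # prompt eval (batch) size recommended above
    n_gpu_layers=-1,    # offload all layers to the GPU when possible
    chat_format="chatml",
)

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a limerick about dolphins and stars."},
    ],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])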

Instructions Template:

{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{{ '<s>' }}{% for message in messages %}{{'<|im_start|>' + message['role'] + '
' + message['content'] + '<|im_end|>' + '
'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant
' }}{% endif %}
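
If you build prompts in Python, the same ChatML formatting can be produced with the transformers chat-template machinery instead of hand-assembling strings. A minimal sketch, assuming the Dolphin base model's tokenizer (whose repo ships a ChatML chat template equivalent to the one above) is used for formatting:

from transformers import AutoTokenizer

# Assumption: the base model's tokenizer_config carries a ChatML chat template.
tok = AutoTokenizer.from_pretrained("cognitivecomputations/dolphin-2.8-mistral-7b-v02")

messages = [
    {"role": "system", "content": "You are DolphinStar, a helpful assistant."},
    {"role": "user", "content": "Explain what a linear merge is in one sentence."},
]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # <|im_start|>system ... <|im_start|>assistant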

Chat Template:

{%- for message in messages %}
    {%- if message['role'] == 'system' -%}
        {%- if message['content'] -%}
            {{- message['content'] + '\n\n' -}}
        {%- endif -%}
        {%- if user_bio -%}
            {{- user_bio + '\n\n' -}}
        {%- endif -%}
    {%- else -%}
        {%- if message['role'] == 'user' -%}
            {{- name1 + ': ' + message['content'] + '\n'-}}
        {%- else -%}
            {{- name2 + ': ' + message['content'] + '\n' -}}
        {%- endif -%}
    {%- endif -%}
{%- endfor -%}
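
This second template is the one text-generation-webui uses in chat mode, where name1 and name2 are the user and bot display names and user_bio is the optional persona text. To see exactly what it produces, it can be rendered directly with Jinja2; a small sketch with placeholder names:

from jinja2 import Template

# The chat template above, embedded (with \n escapes) so the demo is standalone.
chat_template = Template(
    "{%- for message in messages %}"
    "{%- if message['role'] == 'system' -%}"
    "{%- if message['content'] -%}{{- message['content'] + '\\n\\n' -}}{%- endif -%}"
    "{%- if user_bio -%}{{- user_bio + '\\n\\n' -}}{%- endif -%}"
    "{%- else -%}"
    "{%- if message['role'] == 'user' -%}"
    "{{- name1 + ': ' + message['content'] + '\\n' -}}"
    "{%- else -%}"
    "{{- name2 + ': ' + message['content'] + '\\n' -}}"
    "{%- endif -%}"
    "{%- endif -%}"
    "{%- endfor -%}"
)

# name1/name2/user_bio are placeholders; text-generation-webui fills them from its UI.
print(chat_template.render(
    messages=[
        {"role": "system", "content": "Roleplay as a friendly assistant."},
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "Hi there!"},
    ],
    name1="User",
    name2="DolphinStar",
    user_bio="",
))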

license: apache-2.0
base_model:
  - cognitivecomputations/dolphin-2.8-mistral-7b-v02
  - NexusFlow/Starling-LM-7B-beta
library_name: transformers
tags:
  - mergekit
  - merge

output_folder

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the linear merge method.
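
In a linear merge, each output parameter tensor is a weighted average of the corresponding tensors from the source models. A conceptual sketch (not mergekit's actual code) of what that means, and why the weight-0 entries in the configuration below effectively pass slices through from a single model:

import torch

def linear_merge(tensors, weights, normalize=True):
    """Weighted average of same-shaped parameter tensors (conceptual illustration)."""
    total = sum(weights)
    if normalize and total != 0:
        weights = [w / total for w in weights]
    merged = torch.zeros_like(tensors[0])
    for t, w in zip(tensors, weights):
        merged = merged + w * t
    return merged

# With the weights used below (Dolphin 1.0, Starling 0.0), a shared slice is
# effectively copied verbatim from the Dolphin model.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)
print(torch.allclose(linear_merge([a, b], [1.0, 0.0]), a))  # True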

Models Merged

The following models were included in the merge:

  • cognitivecomputations/dolphin-2.8-mistral-7b-v02
  • NexusFlow/Starling-LM-7B-beta

Configuration

The following YAML configuration was used to produce this model:

merge_method: linear
parameters:
  weight: 1.0
slices:
  - sources:
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [0,1]
      - model: NexusFlow/Starling-LM-7B-beta
        layer_range: [0,1]
        parameters: 
          weight: 0
  - sources:
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [1,8]        
  - sources:
      - model: NexusFlow/Starling-LM-7B-beta
        layer_range: [4,12]
  - sources:
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [8,16]        
  - sources:
      - model: NexusFlow/Starling-LM-7B-beta
        layer_range: [12,20]  
  - sources:
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [16,24]        
  - sources:
      - model: NexusFlow/Starling-LM-7B-beta
        layer_range: [20,28]
  - sources:
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [24,31]        
  - sources:
      - model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
        layer_range: [31,32]
      - model: NexusFlow/Starling-LM-7B-beta
        layer_range: [31,32]
        parameters: 
          weight: 0          
dtype: float16
tokenizer_source: model:cognitivecomputations/dolphin-2.8-mistral-7b-v02
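
To reproduce the merge itself (rather than download the prebuilt GGUF files), the YAML above can be fed to mergekit's command-line entry point. A minimal sketch, assuming mergekit is installed and the configuration is saved as config.yml:

import subprocess

# Runs mergekit's mergekit-yaml entry point; "output_folder" matches the name used above.
subprocess.run(["mergekit-yaml", "config.yml", "output_folder"], check=True)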
GGUF quantizations available: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
Model size: 12.5B params
Architecture: llama
