---
license: llama2
language:
- en
pipeline_tag: conversational
tags:
- Xwin
- Euryale 1.3
- Platypus2
- WinterGoddess
- frankenmerge
- dare
- ties
- 90b
---

# BigWeave v9 90B

<img src="https://cdn-uploads.huggingface.co/production/uploads/65a6db055c58475cf9e6def1/4CbbAN-X7ZWj702JrcCGH.png" width=600>

The BigWeave models aim to identify merge settings equaling or surpassing the performance of Goliath-120b. The version number merely tracks various attempts and is not a quality indicator. Only results demonstrating good performance are retained and shared.

This version is a DARE-TIES merge of two passthrough merges: Xwin-LM-70b-v0.1 + Euryale-1.3-70b ([BigWeave v6](https://huggingface.co/llmixer/BigWeave-v6-90b)) and Platypus2-70b-instruct + WinterGoddess-1.4x-70b (BigWeave v8). Both merges show strong performance individually, and the combined model achieves even lower perplexity than either one on its own.

The 90b size allows 4-bit quants to fit into 48GB of VRAM.
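
As a rough sketch, a 4-bit load via `transformers` and `bitsandbytes` could look like the following (the repo id `llmixer/BigWeave-v9-90b` is assumed here and may differ; pre-quantized GPTQ/EXL2 files are an alternative):

```
# Minimal sketch: load the model with on-the-fly 4-bit quantization so it fits in ~48GB of VRAM.
# The repo id below is an assumption; adjust it to the actual repository.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "llmixer/BigWeave-v9-90b"  # assumed repo id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across the available GPU(s)
)

prompt = "USER: Hello, who are you? ASSISTANT:"  # Vicuna-style, see Prompting Format below
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```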

# Prompting Format
Vicuna and Alpaca.
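
The card only names the formats; as a reference sketch, the commonly used Vicuna and Alpaca templates look roughly like this (the exact system strings below are the usual community defaults, not values specified by this model):

```
# Reference sketch of the two prompt formats named above.
# The system strings are the common Vicuna/Alpaca defaults, not values defined by this card.

VICUNA_TEMPLATE = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: {prompt} ASSISTANT:"
)

ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{prompt}\n\n### Response:\n"
)

def build_prompt(prompt: str, style: str = "vicuna") -> str:
    """Wrap a raw user prompt in the chosen template."""
    template = VICUNA_TEMPLATE if style == "vicuna" else ALPACA_TEMPLATE
    return template.format(prompt=prompt)

print(build_prompt("Summarize the BigWeave merge strategy in one sentence."))
```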

# Merge process
The models used in the merge are [Xwin-LM-70b-v0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1), [Euryale-1.3-70b](https://huggingface.co/Sao10K/Euryale-1.3-L2-70B), [Platypus2-70b-instruct](https://huggingface.co/garage-bAInd/Platypus2-70B-instruct) and [WinterGoddess-1.4x-70b](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2).

Merge configuration:
```
slices:
  - sources:
    - model: Xwin-LM/Xwin-LM-70B-V0.1
      layer_range: [0,12]
  - sources:
    - model: Sao10K/Euryale-1.3-L2-70B
      layer_range: [9,14]
  - sources:
    - model: Xwin-LM/Xwin-LM-70B-V0.1
      layer_range: [12,62]
  - sources:
    - model: Sao10K/Euryale-1.3-L2-70B
      layer_range: [54,71]
  - sources:
    - model: Xwin-LM/Xwin-LM-70B-V0.1
      layer_range: [62,80]
merge_method: passthrough
dtype: float16
---
slices:
  - sources:
    - model: garage-bAInd/Platypus2-70B-instruct
      layer_range: [0,12]
  - sources:
    - model: Sao10K/WinterGoddess-1.4x-70B-L2
      layer_range: [9,14]
  - sources:
    - model: garage-bAInd/Platypus2-70B-instruct
      layer_range: [12,62]
  - sources:
    - model: Sao10K/WinterGoddess-1.4x-70B-L2
      layer_range: [54,71]
  - sources:
    - model: garage-bAInd/Platypus2-70B-instruct
      layer_range: [62,80]
merge_method: passthrough
dtype: float16
---
models:
  - model: llmixer/BigWeave-v8-90b
    parameters:
      weight: 0.5
      density: 0.5
merge_method: dare_ties
base_model: llmixer/BigWeave-v6-90b
dtype: float16
```

# Acknowledgements
[@Xwin-LM](https://huggingface.co/Xwin-LM) For creating Xwin

[@Sao10K](https://huggingface.co/Sao10K) For creating Euryale and WinterGoddess

[@garage-bAInd](https://huggingface.co/garage-bAInd) For creating Platypus2

[@alpindale](https://huggingface.co/alpindale) For creating the original Goliath

[@chargoddard](https://huggingface.co/chargoddard) For developing [mergekit](https://github.com/cg123/mergekit).