---
datasets:
- ehartford/dolphin
- jondurbin/airoboros-2.2.1
- ehartford/samantha-data
- ehartford/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
language:
- en
license: llama2
---

MegaDolphin 2.2 120b 🐬

https://erichartford.com/dolphin

Join Our Discord! https://discord.gg/vT3sktQ3zb

<img src="https://cdn-uploads.huggingface.co/production/uploads/63111b2d88942700629f5771/sV83aS8D7ElvoeKD3KzVO.png" width="600" />

MegaDolphin-2.2-120b is a transformation of Dolphin-2.2-70b: the 70b model's layers are interleaved with themselves in a passthrough self-merge, yielding a roughly 120b-parameter model (see Merge Details below).

## Merge Details

It took about five minutes to make this with Charles Goddard's awesome MergeKit.

Prompt format:

This model uses the ChatML prompt format.

```
<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```

Example:

```
<|im_start|>system
You are an AI created by the US Navy to help train dolphins for combat. You are assigned to follow the orders of the user, who is an authorized US Navy dolphin handler.<|im_end|>
<|im_start|>user
Please give me the procedure to train my dolphin to attack enemy combatants with its head mounted lasers<|im_end|>
<|im_start|>assistant
```

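If you want to build this prompt programmatically, a few lines of Python are enough. The sketch below is only an illustration: the `chatml_prompt` helper is not part of any library, and if the tokenizer for this model ships a chat template, `tokenizer.apply_chat_template` should give you an equivalent string.

```python
# Minimal sketch: assemble a ChatML prompt by hand, matching the template above.
# The helper below is illustrative, not part of the model or any library.

def chatml_prompt(system: str, user: str) -> str:
    """Return a ChatML-formatted prompt with the assistant turn left open."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = chatml_prompt(
    "You are Dolphin, a helpful AI assistant.",
    "Summarize the merge method used to build MegaDolphin in one sentence.",
)
print(prompt)
```
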
### Merge Method

This model was merged using the passthrough merge method, which copies the selected layer slices into the merged model unchanged (no weight averaging) and stacks them in the order listed.

### Models Merged |
|
|
|
The following models were included in the merge: |
|
* [cognitivecomputations/dolphin-2.2-70b](https://huggingface.co/cognitivecomputations/dolphin-2.2-70b) |
|
|
|
### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 20]
    model: cognitivecomputations/dolphin-2.2-70b
- sources:
  - layer_range: [10, 30]
    model: cognitivecomputations/dolphin-2.2-70b
- sources:
  - layer_range: [20, 40]
    model: cognitivecomputations/dolphin-2.2-70b
- sources:
  - layer_range: [30, 50]
    model: cognitivecomputations/dolphin-2.2-70b
- sources:
  - layer_range: [40, 60]
    model: cognitivecomputations/dolphin-2.2-70b
- sources:
  - layer_range: [50, 70]
    model: cognitivecomputations/dolphin-2.2-70b
- sources:
  - layer_range: [60, 80]
    model: cognitivecomputations/dolphin-2.2-70b
```

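To make the effect of this configuration concrete: each `sources` entry copies a 20-layer slice of the 80-layer 70b base, consecutive slices overlap by 10 layers, and the passthrough merge stacks all of them, giving 7 × 20 = 140 decoder layers. The sketch below just redoes that arithmetic in plain Python; it assumes an 80-layer base and that parameter count scales roughly linearly with depth, and it is an illustration rather than part of the merge itself. The config above can be handed to MergeKit to reproduce the merge (see the MergeKit README for the exact invocation).

```python
# Sketch: derive the merged model's depth from the slice list in the config above.
# Assumptions: the base dolphin-2.2-70b has 80 decoder layers (Llama-2-70B
# architecture) and parameter count scales roughly linearly with layer count.

slices = [(0, 20), (10, 30), (20, 40), (30, 50), (40, 60), (50, 70), (60, 80)]

base_layers = 80
base_params_b = 70.0  # approximate, in billions

merged_layers = sum(end - start for start, end in slices)
approx_params_b = base_params_b * merged_layers / base_layers

print(f"merged decoder layers: {merged_layers}")                 # 140
print(f"approximate parameter count: ~{approx_params_b:.0f}B")   # ~122B, roughly the advertised 120b
```
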
## Gratitude

- Thanks to Charles Goddard for [MergeKit](https://github.com/cg123/mergekit).
- Thank you to Microsoft for authoring the Orca paper and inspiring this work.
- Thank you to all the other people in the Open Source AI community who have taught me and helped me along the way.

## Example Output

[ko-fi](https://www.ko-fi.com/erichartford)