---
base_model:
  - NousResearch/Hermes-2-Theta-Llama-3-8B
  - camillop/Meta-Llama-3-8B-ORPO-ITA-llama-adapters
  - gradientai/Llama-3-8B-Instruct-Gradient-1048k
  - migtissera/Llama-3-8B-Synthia-v3.5
  - unstoppable123/LLaMA3-8B_chinese_lora_sft_v0.2
  - openchat/openchat-3.6-8b-20240522
  - hfl/llama-3-chinese-8b-instruct-v2-lora
  - Sao10K/L3-8B-Stheno-v3.1
  - Jiar/Llama-3-8B-Chinese
library_name: transformers
tags:
  - mergekit
  - merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
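The merged model loads like any other Llama-3 checkpoint. A minimal sketch with 🤗 Transformers; the `repo_id` below is a placeholder, substitute this repository's actual id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "wwe180/merged-model"  # placeholder: use this repo's actual id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)
```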

## Merge Details

### Merge Method

This model was merged using the passthrough merge method, with [gradientai/Llama-3-8B-Instruct-Gradient-1048k](https://huggingface.co/gradientai/Llama-3-8B-Instruct-Gradient-1048k) as the base. Passthrough concatenates the selected layer ranges from each source model without averaging weights, so the result is a "frankenmerge" that is deeper than its 32-layer Llama-3-8B sources, as the sketch below illustrates.
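As a rough illustration, the snippet below tallies the layer ranges from the configuration further down (model names abbreviated for readability; mergekit's `layer_range: [start, end]` is half-open, selecting `end - start` layers):

```python
# Sketch: how the passthrough slices stack up, derived from the config below.
slices = [
    ("Stheno + Llama-3-8B-Chinese LoRA",      0, 22),
    ("Hermes-2-Theta + ORPO-ITA adapters",   10, 22),
    ("Synthia + chinese_lora_sft",            0, 22),
    ("openchat + chinese-instruct-v2 LoRA",  10, 32),
]

total = sum(end - start for _, start, end in slices)
print(total)  # 78 layers in the merged model, vs. 32 in a stock Llama-3-8B
```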

### Models Merged

The following models (each with a LoRA adapter applied via mergekit's `model+adapter` syntax) were included in the merge:

* [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1) + [Jiar/Llama-3-8B-Chinese](https://huggingface.co/Jiar/Llama-3-8B-Chinese)
* [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) + [camillop/Meta-Llama-3-8B-ORPO-ITA-llama-adapters](https://huggingface.co/camillop/Meta-Llama-3-8B-ORPO-ITA-llama-adapters)
* [migtissera/Llama-3-8B-Synthia-v3.5](https://huggingface.co/migtissera/Llama-3-8B-Synthia-v3.5) + [unstoppable123/LLaMA3-8B_chinese_lora_sft_v0.2](https://huggingface.co/unstoppable123/LLaMA3-8B_chinese_lora_sft_v0.2)
* [openchat/openchat-3.6-8b-20240522](https://huggingface.co/openchat/openchat-3.6-8b-20240522) + [hfl/llama-3-chinese-8b-instruct-v2-lora](https://huggingface.co/hfl/llama-3-chinese-8b-instruct-v2-lora)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: "Sao10K/L3-8B-Stheno-v3.1+Jiar/Llama-3-8B-Chinese"
        layer_range: [0, 22]
  - sources:
      - model: "NousResearch/Hermes-2-Theta-Llama-3-8B+camillop/Meta-Llama-3-8B-ORPO-ITA-llama-adapters"
        layer_range: [10, 22]
  - sources:
      - model: "migtissera/Llama-3-8B-Synthia-v3.5+unstoppable123/LLaMA3-8B_chinese_lora_sft_v0.2"
        layer_range: [0, 22]
  - sources:
      - model: "openchat/openchat-3.6-8b-20240522+hfl/llama-3-chinese-8b-instruct-v2-lora"
        layer_range: [10, 32]
merge_method: passthrough
base_model: "gradientai/Llama-3-8B-Instruct-Gradient-1048k"
dtype: bfloat16
```
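To reproduce the merge, one option is mergekit's Python entry point. A minimal sketch assuming the YAML above is saved as `config.yml` (the CLI equivalent is `mergekit-yaml config.yml ./merged`):

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above (assumed saved as config.yml).
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge; LoRA adapters referenced with the `model+adapter`
# syntax are applied to their base weights before the layers are sliced.
run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=True,
    ),
)
```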