This is a merge of pre-trained language models created using mergekit.
This model was merged using the passthrough merge method while zeroing `o_proj` and `down_proj`, which led to a decrease in perplexity (good) compared to similar 15B merges. This was a recommendation from Charles Goddard - thank you for sharing the merging method, and thanks to Toasty Pigeon for bringing it to my attention!
A finetuned version of this model is available at elinas/Llama-3-15B-Instruct-zeroed-ft, which seems to improve performance.
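For a quick test of either model, a minimal loading sketch with 🤗 Transformers is shown below. It assumes a recent `transformers` release with Llama 3 support and enough GPU memory for a ~15B model in bfloat16; the prompt and generation settings are illustrative only.

```python
# Minimal sketch: load the finetuned 15B merge and run one instruct-style prompt.
# Assumes a recent transformers release with Llama 3 support.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "elinas/Llama-3-15B-Instruct-zeroed-ft"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Llama 3 Instruct uses a chat template, so build the prompt through the tokenizer.
messages = [{"role": "user", "content": "Briefly explain what a passthrough merge is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```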
The following models were included in the merge:

* meta-llama/Meta-Llama-3-8B-Instruct
The following YAML configuration was used to produce this model:
```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 24]
    model: meta-llama/Meta-Llama-3-8B-Instruct
- sources:
  - layer_range: [8, 24]
    model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [8, 24]
    model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [24, 32]
    model: meta-llama/Meta-Llama-3-8B-Instruct
```
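If you want to reproduce the merge from this configuration, a rough sketch using mergekit's Python entry points is below. The `MergeConfiguration`, `MergeOptions`, and `run_merge` names follow mergekit's published examples but may vary between versions, so treat them as assumptions and check the mergekit README; `config.yaml` and the output directory are placeholder paths. The equivalent CLI is `mergekit-yaml`.

```python
# Rough sketch of running the merge above with mergekit's Python API.
# Assumption: mergekit exposes MergeConfiguration, MergeOptions, and run_merge
# as in its published examples; verify against your installed version.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above.
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Llama-3-15B-Instruct-zeroed",  # placeholder output directory
    options=MergeOptions(
        cuda=False,            # set True to merge on GPU
        copy_tokenizer=True,   # copy the tokenizer from the source model
        lazy_unpickle=True,    # reduce peak RAM while loading shards
    ),
)
```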
Base model: meta-llama/Meta-Llama-3-8B-Instruct