# llama-3-experiment-v1-9B
This is an experimental merge that replicates additional layers of the base model without post-merge healing (no further training after the merge). The duplication introduces some damage to the model, but it appears tolerable as is: benchmark performance does not differ significantly from the original 8B Instruct model. The resulting impact on narrative text completion may also be of interest.
Light testing was performed with instruct prompting and the following sampler settings (a usage sketch follows the model links below):
- temp=1 and minP=0.02
- temp=1 and smoothing factor=0.33
- Full weights: grimjim/llama-3-experiment-v1-9B
- GGUF quants: grimjim/llama-3-experiment-v1-9B-GGUF
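Below is a minimal usage sketch for the full-weights repository, assuming a standard transformers + torch setup and the Llama 3 Instruct chat template bundled with the tokenizer. Note that `min_p` sampling requires a reasonably recent transformers release, and the smoothing-factor sampler comes from other frontends (e.g. text-generation-webui, koboldcpp) with no direct transformers equivalent, so only the first preset is shown.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/llama-3-experiment-v1-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Instruct prompting via the tokenizer's built-in chat template.
messages = [{"role": "user", "content": "Continue this scene: rain on a night train."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,
    min_p=0.02,  # first tested preset; requires recent transformers
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```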
This is a merge of the pre-trained language model meta-llama/Meta-Llama-3-8B-Instruct, created using mergekit.
Built with Meta Llama 3.
## Merge Details

### Merge Method
This model was merged using the passthrough merge method.
### Models Merged
The following models were included in the merge:
- meta-llama/Meta-Llama-3-8B-Instruct
### Configuration
The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [0, 12]
  - sources:
      - model: meta-llama/Meta-Llama-3-8B-Instruct
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
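The two slices overlap, so layers 8-11 of the base model appear twice in the merged stack. Here is a small plain-Python sketch, for illustration only, of how the passthrough slices map onto the resulting layer stack:

```python
# How the passthrough slices stack up (illustration only).
slice_a = list(range(0, 12))   # layers 0-11 from the base model
slice_b = list(range(8, 32))   # layers 8-31; layers 8-11 repeat
merged_layers = slice_a + slice_b

print(len(merged_layers))      # 36 layers, up from the base model's 32
print(merged_layers[8:16])     # [8, 9, 10, 11, 8, 9, 10, 11]
```

The merge should be reproducible by passing this configuration to mergekit's `mergekit-yaml` command.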
## Evaluation results

Results as reported on the Open LLM Leaderboard:

| Benchmark | Shots | Split | Metric | Value |
|---|---|---|---|---|
| AI2 Reasoning Challenge | 25-shot | test | normalized accuracy | 66.41 |
| HellaSwag | 10-shot | validation | normalized accuracy | 78.56 |
| MMLU | 5-shot | test | accuracy | 66.71 |
| TruthfulQA | 0-shot | validation | mc2 | 50.70 |
| Winogrande | 5-shot | validation | accuracy | 75.93 |
| GSM8k | 5-shot | test | accuracy | 65.88 |