---
base_model:
- NeverSleep/Nethena-20B
- Undi95/PsyMedRP-v1-20B
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the SLERP merge method, with [Undi95/PsyMedRP-v1-20B](https://huggingface.co/Undi95/PsyMedRP-v1-20B) as the base.

### Models Merged

The following models were included in the merge:
* [NeverSleep/Nethena-20B](https://huggingface.co/NeverSleep/Nethena-20B)
* [Undi95/PsyMedRP-v1-20B](https://huggingface.co/Undi95/PsyMedRP-v1-20B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: Undi95/PsyMedRP-v1-20B
        layer_range: [0, 62] # PsyMedRP-v1-20B has 62 layers
      - model: NeverSleep/Nethena-20B
        layer_range: [0, 62] # Nethena-20B has 62 layers
merge_method: slerp
base_model: Undi95/PsyMedRP-v1-20B # favor PsyMedRP's reasoning as the base
parameters:
  t:
    - filter: self_attn
      value: [0.3, 0.6, 0.9, 0.6, 0.3] # smooth gradient of attention blending
    - filter: mlp # second gradient presumably targets the MLP blocks (the original config repeated the self_attn key)
      value: [0.3, 0.6, 0.9, 0.6, 0.3] # consistent level of creativity and abstract reasoning
    - value: 0.639 # interpolation factor for all remaining tensors
dtype: bfloat16
```
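
In a SLERP merge, `t` is the interpolation factor between the two models: `t = 0` keeps the base model's weights (PsyMedRP) and `t = 1` takes the other model's (Nethena), with the five-element lists above spread as a gradient across the layer stack. Rather than averaging weights along a straight line, SLERP interpolates each pair of tensors along the arc between them, which better preserves weight geometry when the two models diverge. A minimal sketch of the idea in Python (illustrative only; mergekit's actual implementation handles normalization and per-tensor edge cases differently):

```python
import numpy as np

def slerp(t: float, w1: np.ndarray, w2: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation: t=0 returns w1, t=1 returns w2."""
    # Angle between the two weight tensors, treated as flat vectors
    v1 = w1.ravel() / (np.linalg.norm(w1) + eps)
    v2 = w2.ravel() / (np.linalg.norm(w2) + eps)
    theta = np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))
    if theta < 1e-4:
        # Nearly parallel tensors: fall back to plain linear interpolation
        return (1 - t) * w1 + t * w2
    # Interpolate along the arc between the two tensors
    return (np.sin((1 - t) * theta) * w1 + np.sin(t * theta) * w2) / np.sin(theta)
```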
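
A merge like this can be reproduced by saving the configuration to a file and running mergekit's CLI, for example `mergekit-yaml config.yml ./merged-model` (exact flags depend on the installed mergekit version).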
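
Once merged, the output directory loads like any other causal language model with transformers (the path below is a placeholder for wherever the merge was written):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "./merged-model" is a placeholder path, not a published repository id
tokenizer = AutoTokenizer.from_pretrained("./merged-model")
model = AutoModelForCausalLM.from_pretrained("./merged-model", torch_dtype=torch.bfloat16)
```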