---
base_model:
- MaziyarPanahi/calme-2.1-phi3.5-4b
- SicariusSicariiStuff/Phi-3.5-mini-instruct_Uncensored
- bunnycore/Phi-3.5-RP-Lora
- nbeerbower/phi3.5-gutenberg-4B
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [SicariusSicariiStuff/Phi-3.5-mini-instruct_Uncensored](https://huggingface.co/SicariusSicariiStuff/Phi-3.5-mini-instruct_Uncensored) as the base. Model Stock merges several fine-tuned variants of a shared base model by interpolating their weight average back toward the base, using the geometry of the fine-tuned weights to choose the interpolation ratio.

### Models Merged

The following models were included in the merge:
* [MaziyarPanahi/calme-2.1-phi3.5-4b](https://huggingface.co/MaziyarPanahi/calme-2.1-phi3.5-4b)
* [SicariusSicariiStuff/Phi-3.5-mini-instruct_Uncensored](https://huggingface.co/SicariusSicariiStuff/Phi-3.5-mini-instruct_Uncensored) + [bunnycore/Phi-3.5-RP-Lora](https://huggingface.co/bunnycore/Phi-3.5-RP-Lora)
* [nbeerbower/phi3.5-gutenberg-4B](https://huggingface.co/nbeerbower/phi3.5-gutenberg-4B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: nbeerbower/phi3.5-gutenberg-4B
  - model: SicariusSicariiStuff/Phi-3.5-mini-instruct_Uncensored+bunnycore/Phi-3.5-RP-Lora
  - model: MaziyarPanahi/calme-2.1-phi3.5-4b
merge_method: model_stock
base_model: SicariusSicariiStuff/Phi-3.5-mini-instruct_Uncensored
dtype: bfloat16
```
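
To reproduce the merge, the configuration above can be saved as `config.yaml` and passed to mergekit's CLI (for example, `mergekit-yaml config.yaml ./output-model-directory`). The `model+lora` syntax in the second entry instructs mergekit to apply the [bunnycore/Phi-3.5-RP-Lora](https://huggingface.co/bunnycore/Phi-3.5-RP-Lora) adapter to the uncensored base before the stock merge.

## Usage

Below is a minimal inference sketch using Transformers. The repository id is a placeholder, since this card does not name the final repo, and the sampling settings are illustrative rather than tuned:

```python
# Minimal inference sketch for the merged model. "your-username/merged-phi3.5"
# is a placeholder repo id; substitute the repository this card describes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/merged-phi3.5"  # placeholder, not a real repo

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

# Phi-3.5 checkpoints ship a chat template, so apply_chat_template
# handles the prompt formatting.
messages = [{"role": "user", "content": "Write a short scene set in a lighthouse."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)

# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```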