afrideva/LMCocktail-phi-2-v1-GGUF
Quantized GGUF model files for LMCocktail-phi-2-v1 from Yhyu13
| Name | Quant method | Size |
|---|---|---|
| lmcocktail-phi-2-v1.fp16.gguf | fp16 | 5.56 GB |
| lmcocktail-phi-2-v1.q2_k.gguf | q2_k | 1.17 GB |
| lmcocktail-phi-2-v1.q3_k_m.gguf | q3_k_m | 1.48 GB |
| lmcocktail-phi-2-v1.q4_k_m.gguf | q4_k_m | 1.79 GB |
| lmcocktail-phi-2-v1.q5_k_m.gguf | q5_k_m | 2.07 GB |
| lmcocktail-phi-2-v1.q6_k.gguf | q6_k | 2.29 GB |
| lmcocktail-phi-2-v1.q8_0.gguf | q8_0 | 2.96 GB |
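One way to try a quant locally is through llama-cpp-python. The sketch below assumes the package is installed, that the q4_k_m file from the table above has been downloaded to the working directory, and that the base phi-2 "Instruct:/Output:" prompt style carries over to this merge (an assumption; the SFT sources may expect a chat-style template instead).

```python
# Minimal sketch: run a downloaded GGUF quant with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the q4_k_m file from the
# table above sits in the current directory.
from llama_cpp import Llama

llm = Llama(
    model_path="./lmcocktail-phi-2-v1.q4_k_m.gguf",
    n_ctx=2048,    # phi-2 uses a 2048-token context window
    n_threads=8,   # adjust to your CPU
)

output = llm(
    "Instruct: Explain model merging in one paragraph.\nOutput:",
    max_tokens=256,
    stop=["Instruct:"],
)
print(output["choices"][0]["text"])
```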
Original Model Card:
LM-cocktail phi-2 v1
This is a 50%-50% merge of the phi-2 alpaca-gpt4 and phi-2 ultrachat200k models linked below; a weight-averaging sketch follows the links.
https://huggingface.co/Yhyu13/phi-2-sft-alpaca_gpt4_en-ep1
https://huggingface.co/venkycs/phi-2-ultrachat200k
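For illustration only, an equal 50%-50% merge of two same-architecture checkpoints amounts to averaging their parameters. The sketch below does this with plain transformers over the two source models linked above; it is not the LM-Cocktail script used for this release, and the output directory name is hypothetical.

```python
# Illustrative sketch of a 50/50 parameter average, not the official
# LM-Cocktail merging script. Needs enough RAM to hold both full models.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_a_id = "Yhyu13/phi-2-sft-alpaca_gpt4_en-ep1"
model_b_id = "venkycs/phi-2-ultrachat200k"

model_a = AutoModelForCausalLM.from_pretrained(model_a_id, trust_remote_code=True)
model_b = AutoModelForCausalLM.from_pretrained(model_b_id, trust_remote_code=True)

sd_a = model_a.state_dict()
sd_b = model_b.state_dict()

# Average every floating-point tensor with equal (0.5 / 0.5) weights;
# non-float buffers are kept from model A unchanged.
merged = {
    name: 0.5 * sd_a[name] + 0.5 * sd_b[name]
    if sd_a[name].is_floating_point()
    else sd_a[name]
    for name in sd_a
}
model_a.load_state_dict(merged)

out_dir = "./lmcocktail-phi-2-v1-merged"  # hypothetical output directory
model_a.save_pretrained(out_dir)
AutoTokenizer.from_pretrained(model_a_id, trust_remote_code=True).save_pretrained(out_dir)
```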
Code
LM-Cocktail is a novel technique for merging multiple models: https://arxiv.org/abs/2311.13534
The code is provided by this repo: https://github.com/FlagOpen/FlagEmbedding.git
Merging scripts are available under the ./scripts folder; an example invocation is sketched below.
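If the FlagEmbedding scripts are used directly, the LM_Cocktail package exposes a model-mixing entry point. The call below is only a sketch; the function and argument names (mix_models, model_names_or_paths, model_type, weights, output_path) are assumptions based on that package's README and should be verified against the repo before use.

```python
# Sketch of invoking the LM_Cocktail package (pip install -U LM_Cocktail).
# Argument names are assumptions; check https://github.com/FlagOpen/FlagEmbedding
# for the authoritative interface.
from LM_Cocktail import mix_models

mix_models(
    model_names_or_paths=[
        "Yhyu13/phi-2-sft-alpaca_gpt4_en-ep1",
        "venkycs/phi-2-ultrachat200k",
    ],
    model_type="decoder",                 # causal-LM merge
    weights=[0.5, 0.5],                   # the 50%-50% mix described above
    output_path="./LMCocktail-phi-2-v1",  # hypothetical output directory
)
```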