Llama.cpp imatrix quantization of internlm/internlm2-math-plus-7b
- Original Model: internlm/internlm2-math-plus-7b
- Original dtype: BF16 (bfloat16)
- Quantized by: llama.cpp b3008
- IMatrix dataset: here
- Status: ✅ Available
- Link: here
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| --- | --- | --- | --- | --- | --- |
| internlm2-math-plus-7b.Q8_0.gguf | Q8_0 | 8.22GB | ✅ Available | ⚪ Static | 📦 No |
| internlm2-math-plus-7b.Q6_K.gguf | Q6_K | 6.35GB | ✅ Available | ⚪ Static | 📦 No |
| internlm2-math-plus-7b.Q4_K.gguf | Q4_K | 4.71GB | ✅ Available | 🟢 IMatrix | 📦 No |
| internlm2-math-plus-7b.Q3_K.gguf | Q3_K | 3.83GB | ✅ Available | 🟢 IMatrix | 📦 No |
| internlm2-math-plus-7b.Q2_K.gguf | Q2_K | 3.01GB | ✅ Available | 🟢 IMatrix | 📦 No |
| Filename | Quant type | File Size | Status | Uses IMatrix | Is Split |
| --- | --- | --- | --- | --- | --- |
| internlm2-math-plus-7b.FP16.gguf | F16 | 15.48GB | ✅ Available | ⚪ Static | 📦 No |
| internlm2-math-plus-7b.BF16.gguf | BF16 | 15.48GB | ✅ Available | ⚪ Static | 📦 No |
| internlm2-math-plus-7b.Q5_K.gguf | Q5_K | 5.51GB | ✅ Available | ⚪ Static | 📦 No |
| internlm2-math-plus-7b.Q5_K_S.gguf | Q5_K_S | 5.37GB | ✅ Available | ⚪ Static | 📦 No |
| internlm2-math-plus-7b.Q4_K_S.gguf | Q4_K_S | 4.48GB | ✅ Available | 🟢 IMatrix | 📦 No |
| internlm2-math-plus-7b.Q3_K_L.gguf | Q3_K_L | 4.13GB | ✅ Available | 🟢 IMatrix | 📦 No |
| internlm2-math-plus-7b.Q3_K_S.gguf | Q3_K_S | 3.48GB | ✅ Available | 🟢 IMatrix | 📦 No |
| internlm2-math-plus-7b.Q2_K_S.gguf | Q2_K_S | 2.82GB | ✅ Available | 🟢 IMatrix | 📦 No |
| internlm2-math-plus-7b.IQ4_NL.gguf | IQ4_NL | 4.47GB | ✅ Available | 🟢 IMatrix | 📦 No |
| internlm2-math-plus-7b.IQ4_XS.gguf | IQ4_XS | 4.24GB | ✅ Available | 🟢 IMatrix | 📦 No |
| internlm2-math-plus-7b.IQ3_M.gguf | IQ3_M | 3.60GB | ✅ Available | 🟢 IMatrix | 📦 No |
| internlm2-math-plus-7b.IQ3_S.gguf | IQ3_S | 3.49GB | ✅ Available | 🟢 IMatrix | 📦 No |
| internlm2-math-plus-7b.IQ3_XS.gguf | IQ3_XS | 3.33GB | ✅ Available | 🟢 IMatrix | 📦 No |
| internlm2-math-plus-7b.IQ3_XXS.gguf | IQ3_XXS | 3.11GB | ✅ Available | 🟢 IMatrix | 📦 No |
| internlm2-math-plus-7b.IQ2_M.gguf | IQ2_M | 2.78GB | ✅ Available | 🟢 IMatrix | 📦 No |
| internlm2-math-plus-7b.IQ2_S.gguf | IQ2_S | 2.59GB | ✅ Available | 🟢 IMatrix | 📦 No |
| internlm2-math-plus-7b.IQ2_XS.gguf | IQ2_XS | 2.45GB | ✅ Available | 🟢 IMatrix | 📦 No |
| internlm2-math-plus-7b.IQ2_XXS.gguf | IQ2_XXS | 2.24GB | ✅ Available | 🟢 IMatrix | 📦 No |
| internlm2-math-plus-7b.IQ1_M.gguf | IQ1_M | 2.01GB | ✅ Available | 🟢 IMatrix | 📦 No |
| internlm2-math-plus-7b.IQ1_S.gguf | IQ1_S | 1.87GB | ✅ Available | 🟢 IMatrix | 📦 No |
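As a rough way to pick a quant for a given RAM/VRAM budget, a sketch like the following can help. The sizes are transcribed from the tables above; the `overhead_gb` allowance for KV cache and runtime buffers is an assumption, not a measured value, and the helper itself is illustrative rather than part of this card.

```python
# Sizes (GB) transcribed from a subset of the tables above.
QUANT_SIZES_GB = {
    "Q8_0": 8.22, "Q6_K": 6.35, "Q5_K": 5.51, "Q5_K_S": 5.37,
    "Q4_K": 4.71, "Q4_K_S": 4.48, "IQ4_NL": 4.47, "IQ4_XS": 4.24,
    "Q3_K_L": 4.13, "Q3_K": 3.83, "IQ3_M": 3.60, "Q3_K_S": 3.48,
    "Q2_K": 3.01, "Q2_K_S": 2.82, "IQ2_M": 2.78, "IQ1_M": 2.01,
}

def best_fit(budget_gb, overhead_gb=1.5):
    """Return the largest quant whose file size plus a rough overhead
    allowance fits in budget_gb, or None if nothing fits."""
    fitting = [(size, name) for name, size in QUANT_SIZES_GB.items()
               if size + overhead_gb <= budget_gb]
    return max(fitting)[1] if fitting else None
```

For example, `best_fit(8)` selects Q6_K, since Q8_0 plus the assumed overhead would exceed 8 GB.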
If you do not have `huggingface-cli` installed:

```shell
pip install -U "huggingface_hub[cli]"
```
Download the specific file you want:

```shell
huggingface-cli download legraphista/internlm2-math-plus-7b-IMat-GGUF --include "internlm2-math-plus-7b.Q8_0.gguf" --local-dir ./
```
If the model file is big, it has been split into multiple files. To download them all to a local folder, run:

```shell
huggingface-cli download legraphista/internlm2-math-plus-7b-IMat-GGUF --include "internlm2-math-plus-7b.Q8_0/*" --local-dir internlm2-math-plus-7b.Q8_0
# see FAQ for merging GGUFs
```
Simple chat template:

```
<s><|im_start|>user
Can you provide ways to eat combinations of bananas and dragonfruits?<|im_end|>
<|im_start|>assistant
Sure! Here are some ways to eat bananas and dragonfruits together:
1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey.
2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey.<|im_end|>
<|im_start|>user
What about solving a 2x + 3 = 7 equation?<|im_end|>
```
Chat template with system prompt:

```
<s><|im_start|>system
You are a helpful AI.<|im_end|>
<|im_start|>user
Can you provide ways to eat combinations of bananas and dragonfruits?<|im_end|>
<|im_start|>assistant
Sure! Here are some ways to eat bananas and dragonfruits together:
1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey.
2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey.<|im_end|>
<|im_start|>user
What about solving a 2x + 3 = 7 equation?<|im_end|>
```
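The templates above follow the ChatML-style `<|im_start|>`/`<|im_end|>` turn format. A minimal Python sketch that assembles such a prompt string (the function name and the trailing assistant header are illustrative assumptions; llama.cpp can also apply the model's built-in chat template for you):

```python
def build_prompt(messages, system=None):
    """Assemble a prompt in the <|im_start|>/<|im_end|> format shown above.
    `messages` is a list of (role, text) pairs."""
    out = "<s>"
    if system is not None:
        out += f"<|im_start|>system\n{system}<|im_end|>\n"
    for role, text in messages:
        out += f"<|im_start|>{role}\n{text}<|im_end|>\n"
    # Open the assistant turn so generation continues from here.
    out += "<|im_start|>assistant\n"
    return out

prompt = build_prompt(
    [("user", "What about solving a 2x + 3 = 7 equation?")],
    system="You are a helpful AI.",
)
```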
```shell
llama.cpp/main -m internlm2-math-plus-7b.Q8_0.gguf --color -i -p "prompt here (according to the chat template)"
```
According to this investigation, lower quantizations appear to be the only ones that benefit from the imatrix input (as per HellaSwag results).
How do I merge a split GGUF?

1. Make sure you have `gguf-split` available. To get it, navigate to https://github.com/ggerganov/llama.cpp/releases, download the appropriate zip for your system from the latest release, and extract it; `gguf-split` is inside.
2. Locate your GGUF chunks folder (ex: internlm2-math-plus-7b.Q8_0).
3. Run `gguf-split --merge internlm2-math-plus-7b.Q8_0/internlm2-math-plus-7b.Q8_0-00001-of-XXXXX.gguf internlm2-math-plus-7b.Q8_0.gguf`
4. Make sure to point `gguf-split` to the first chunk of the split.

Got a suggestion? Ping me @legraphista!
Base model: internlm/internlm2-math-plus-7b