---
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- Sao10K/Fimbulvetr-10.7B-v1
- saishf/Kuro-Lotus-10.7B
model-index:
- name: Fimbulvetr-Kuro-Lotus-10.7B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 69.54
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fimbulvetr-Kuro-Lotus-10.7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 87.87
      name: normalized accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fimbulvetr-Kuro-Lotus-10.7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 66.99
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fimbulvetr-Kuro-Lotus-10.7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 60.95
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fimbulvetr-Kuro-Lotus-10.7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 84.14
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fimbulvetr-Kuro-Lotus-10.7B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 66.87
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=saishf/Fimbulvetr-Kuro-Lotus-10.7B
      name: Open LLM Leaderboard
quantized_by: bartowski
pipeline_tag: text-generation
---
## Llamacpp Quantizations of Fimbulvetr-Kuro-Lotus-10.7B

Using [llama.cpp](https://github.com/ggerganov/llama.cpp/) release b2440 for quantization.

Original model: https://huggingface.co/saishf/Fimbulvetr-Kuro-Lotus-10.7B
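
If you only want a single file from the table below rather than the whole repository, the `huggingface_hub` client can fetch it directly. This is a minimal sketch, not part of the original instructions; the repo id `bartowski/Fimbulvetr-Kuro-Lotus-10.7B-GGUF` and the chosen quant are assumptions, so substitute the repository this card belongs to and the file you actually want:

```python
# Minimal sketch: fetch one GGUF file instead of cloning the whole branch.
# Assumptions: the repo_id and filename below are illustrative placeholders.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="bartowski/Fimbulvetr-Kuro-Lotus-10.7B-GGUF",  # assumed repo id
    filename="Fimbulvetr-Kuro-Lotus-10.7B-Q4_K_M.gguf",    # any quant from the table
    local_dir=".",                                         # where to place the file
)
print(model_path)  # local path to the downloaded .gguf
```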
Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| Fimbulvetr-Kuro-Lotus-10.7B-Q8_0.gguf | Q8_0 | 11.40GB | Extremely high quality, generally unneeded but max available quant. |
| Fimbulvetr-Kuro-Lotus-10.7B-Q6_K.gguf | Q6_K | 8.80GB | Very high quality, near perfect, recommended. |
| Fimbulvetr-Kuro-Lotus-10.7B-Q5_K_M.gguf | Q5_K_M | 7.59GB | High quality, very usable. |
| Fimbulvetr-Kuro-Lotus-10.7B-Q5_K_S.gguf | Q5_K_S | 7.39GB | High quality, very usable. |
| Fimbulvetr-Kuro-Lotus-10.7B-Q5_0.gguf | Q5_0 | 7.39GB | High quality, older format, generally not recommended. |
| Fimbulvetr-Kuro-Lotus-10.7B-Q4_K_M.gguf | Q4_K_M | 6.46GB | Good quality, similar to 4.25 bpw. |
| Fimbulvetr-Kuro-Lotus-10.7B-Q4_K_S.gguf | Q4_K_S | 6.11GB | Slightly lower quality with small space savings. |
| Fimbulvetr-Kuro-Lotus-10.7B-Q4_0.gguf | Q4_0 | 6.07GB | Decent quality, older format, generally not recommended. |
| Fimbulvetr-Kuro-Lotus-10.7B-Q3_K_L.gguf | Q3_K_L | 5.65GB | Lower quality but usable, good for low RAM availability. |
| Fimbulvetr-Kuro-Lotus-10.7B-Q3_K_M.gguf | Q3_K_M | 5.19GB | Even lower quality. |
| Fimbulvetr-Kuro-Lotus-10.7B-Q3_K_S.gguf | Q3_K_S | 4.66GB | Low quality, not recommended. |
| Fimbulvetr-Kuro-Lotus-10.7B-Q2_K.gguf | Q2_K | 4.00GB | Extremely low quality, not recommended. |
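
Once a quant is on disk, it can be loaded by any llama.cpp-compatible runtime. As a hedged example (not part of the original card), here is a sketch using the `llama-cpp-python` bindings; the file path, context size, GPU offload setting, and prompt are placeholders, and the proper prompt template should be taken from the original model's card:

```python
# Minimal sketch using llama-cpp-python (an assumption; any llama.cpp-based
# runtime works). Paths and parameters are placeholders, not recommendations.
from llama_cpp import Llama

llm = Llama(
    model_path="Fimbulvetr-Kuro-Lotus-10.7B-Q4_K_M.gguf",  # file downloaded above
    n_ctx=4096,        # context window; pick what fits your RAM/VRAM
    n_gpu_layers=-1,   # offload all layers if a GPU-enabled build is installed
)

out = llm(
    "Write a two-sentence introduction for a fantasy tavern keeper.",  # plain prompt; check the original model card for its instruct template
    max_tokens=128,
    temperature=0.8,
)
print(out["choices"][0]["text"])
```
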
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski