Triangle104 committed
Commit 6a7bbe9
1 Parent(s): b12ad75

Upload README.md with huggingface_hub

---
base_model: nbeerbower/mistral-nemo-cc-12B
datasets:
- flammenai/casual-conversation-DPO
library_name: transformers
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
model-index:
- name: mistral-nemo-cc-12B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 14.35
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/mistral-nemo-cc-12B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 34.45
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/mistral-nemo-cc-12B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 1.81
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/mistral-nemo-cc-12B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 8.72
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/mistral-nemo-cc-12B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 14.26
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/mistral-nemo-cc-12B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 28.87
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=nbeerbower/mistral-nemo-cc-12B
      name: Open LLM Leaderboard
---

# Triangle104/mistral-nemo-cc-12B-Q4_K_M-GGUF
This model was converted to GGUF format from [`nbeerbower/mistral-nemo-cc-12B`](https://huggingface.co/nbeerbower/mistral-nemo-cc-12B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/nbeerbower/mistral-nemo-cc-12B) for more details on the model.
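
The `--hf-file` values used below follow GGUF-my-repo's naming convention: the model name is lowercased and the lowercased quantization type is appended. A small sketch of that convention (the helper name is hypothetical, not part of any library):

```python
def gguf_filename(model_name: str, quant: str) -> str:
    # GGUF-my-repo names output files as
    # <lowercased model name>-<lowercased quant type>.gguf
    return f"{model_name.lower()}-{quant.lower()}.gguf"

print(gguf_filename("mistral-nemo-cc-12B", "Q4_K_M"))
# mistral-nemo-cc-12b-q4_k_m.gguf
```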

## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/mistral-nemo-cc-12B-Q4_K_M-GGUF --hf-file mistral-nemo-cc-12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/mistral-nemo-cc-12B-Q4_K_M-GGUF --hf-file mistral-nemo-cc-12b-q4_k_m.gguf -c 2048
```
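
Once `llama-server` is running, it serves an OpenAI-compatible HTTP API (listening on port 8080 by default). A minimal client sketch using only the Python standard library; the host, port, and prompt here are assumptions for illustration:

```python
import json
import urllib.request

def build_chat_request(prompt: str,
                       host: str = "http://localhost:8080") -> urllib.request.Request:
    # llama-server exposes an OpenAI-compatible chat completions endpoint
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return urllib.request.Request(
        f"{host}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("The meaning to life and the universe is")
# With the server running, urllib.request.urlopen(req) returns the JSON completion
```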

Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/mistral-nemo-cc-12B-Q4_K_M-GGUF --hf-file mistral-nemo-cc-12b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo Triangle104/mistral-nemo-cc-12B-Q4_K_M-GGUF --hf-file mistral-nemo-cc-12b-q4_k_m.gguf -c 2048
```