cobrakenji committed
Commit 9083e15
Parent(s): c92f68b
Upload README.md with huggingface_hub

README.md ADDED
---
license: apache-2.0
library_name: transformers
tags:
- code
- granite
- llama-cpp
- gguf-my-repo
datasets:
- codeparrot/github-code-clean
- bigcode/starcoderdata
- open-web-math/open-web-math
- math-ai/StackMathQA
metrics:
- code_eval
pipeline_tag: text-generation
inference: true
model-index:
- name: granite-20b-code-base
  results:
  - task:
      type: text-generation
    dataset:
      name: MBPP
      type: mbpp
    metrics:
    - type: pass@1
      value: 43.8
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: MBPP+
      type: evalplus/mbppplus
    metrics:
    - type: pass@1
      value: 51.6
      name: pass@1
  - task:
      type: text-generation
    dataset:
      name: HumanEvalSynthesis(Python)
      type: bigcode/humanevalpack
    metrics:
    - type: pass@1
      value: 48.2
      name: pass@1
    - type: pass@1
      value: 50.0
      name: pass@1
    - type: pass@1
      value: 59.1
      name: pass@1
    - type: pass@1
      value: 32.3
      name: pass@1
    - type: pass@1
      value: 40.9
      name: pass@1
    - type: pass@1
      value: 35.4
      name: pass@1
    - type: pass@1
      value: 17.1
      name: pass@1
    - type: pass@1
      value: 18.3
      name: pass@1
    - type: pass@1
      value: 23.2
      name: pass@1
    - type: pass@1
      value: 10.4
      name: pass@1
    - type: pass@1
      value: 25.6
      name: pass@1
    - type: pass@1
      value: 18.3
      name: pass@1
    - type: pass@1
      value: 23.2
      name: pass@1
    - type: pass@1
      value: 23.8
      name: pass@1
    - type: pass@1
      value: 14.6
      name: pass@1
    - type: pass@1
      value: 26.2
      name: pass@1
    - type: pass@1
      value: 15.2
      name: pass@1
    - type: pass@1
      value: 3.0
      name: pass@1
---

# cobrakenji/granite-20b-code-base-Q3_K_M-GGUF

This model was converted to GGUF format from [`ibm-granite/granite-20b-code-base`](https://huggingface.co/ibm-granite/granite-20b-code-base) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/ibm-granite/granite-20b-code-base) for more details on the model.
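If you would rather fetch the GGUF file yourself instead of letting llama.cpp pull it via `--hf-repo` (as in the commands below), a download along these lines should work, assuming the `huggingface_hub` CLI is installed; the local directory is just an example:

```bash
# Download only the Q3_K_M quantization from this repo into the current directory.
pip install -U "huggingface_hub[cli]"
huggingface-cli download cobrakenji/granite-20b-code-base-Q3_K_M-GGUF \
  granite-20b-code-base.Q3_K_M.gguf --local-dir .
```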
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo cobrakenji/granite-20b-code-base-Q3_K_M-GGUF --model granite-20b-code-base.Q3_K_M.gguf -p "The meaning of life and the universe is"
```

Server:

```bash
llama-server --hf-repo cobrakenji/granite-20b-code-base-Q3_K_M-GGUF --model granite-20b-code-base.Q3_K_M.gguf -c 2048
```
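Once llama-server is up (it listens on port 8080 by default), you can exercise it over HTTP. The request below is only a sketch against the server's `/completion` endpoint; the prompt and `n_predict` values are placeholders:

```bash
# Minimal example request to a locally running llama-server instance.
# Adjust host/port if you changed the defaults; prompt and n_predict are illustrative.
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "def fibonacci(n):", "n_predict": 64}'
```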

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m granite-20b-code-base.Q3_K_M.gguf -n 128
```
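Newer llama.cpp revisions have moved to a CMake build and renamed the `main` example binary to `llama-cli`, so the one-liner above may no longer apply verbatim. Under that assumption, a roughly equivalent build-and-run sequence looks like this:

```bash
# Roughly equivalent flow on CMake-based llama.cpp builds; binaries end up in build/bin/.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
./build/bin/llama-cli -m granite-20b-code-base.Q3_K_M.gguf -n 128
```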