mav23 committed on
Commit
2998cbf
1 Parent(s): d42586b

Upload folder using huggingface_hub

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+granite-8b-code-instruct-128k.Q4_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,177 @@
---
pipeline_tag: text-generation
inference: false
license: apache-2.0
datasets:
- bigcode/commitpackft
- TIGER-Lab/MathInstruct
- meta-math/MetaMathQA
- glaiveai/glaive-code-assistant-v3
- glaiveai/glaive-function-calling-v2
- bugdaryan/sql-create-context-instruction
- garage-bAInd/Open-Platypus
- nvidia/HelpSteer
- bigcode/self-oss-instruct-sc2-exec-filter-50k
metrics:
- code_eval
library_name: transformers
tags:
- code
- granite
model-index:
- name: granite-8B-Code-instruct-128k
  results:
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalSynthesis (Python)
    metrics:
    - name: pass@1
      type: pass@1
      value: 62.2
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalSynthesis (Average)
    metrics:
    - name: pass@1
      type: pass@1
      value: 51.4
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalExplain (Average)
    metrics:
    - name: pass@1
      type: pass@1
      value: 38.9
      verified: false
  - task:
      type: text-generation
    dataset:
      type: bigcode/humanevalpack
      name: HumanEvalFix (Average)
    metrics:
    - name: pass@1
      type: pass@1
      value: 38.3
      verified: false
  - task:
      type: text-generation
    dataset:
      type: repoqa
      name: RepoQA (Python@16K)
    metrics:
    - name: pass@1 (thresh=0.5)
      type: pass@1 (thresh=0.5)
      value: 73.0
      verified: false
  - task:
      type: text-generation
    dataset:
      type: repoqa
      name: RepoQA (C++@16K)
    metrics:
    - name: pass@1 (thresh=0.5)
      type: pass@1 (thresh=0.5)
      value: 37.0
      verified: false
  - task:
      type: text-generation
    dataset:
      type: repoqa
      name: RepoQA (Java@16K)
    metrics:
    - name: pass@1 (thresh=0.5)
      type: pass@1 (thresh=0.5)
      value: 73.0
      verified: false
  - task:
      type: text-generation
    dataset:
      type: repoqa
      name: RepoQA (TypeScript@16K)
    metrics:
    - name: pass@1 (thresh=0.5)
      type: pass@1 (thresh=0.5)
      value: 62.0
      verified: false
  - task:
      type: text-generation
    dataset:
      type: repoqa
      name: RepoQA (Rust@16K)
    metrics:
    - name: pass@1 (thresh=0.5)
      type: pass@1 (thresh=0.5)
      value: 63.0
      verified: false
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62cd5057674cdb524450093d/1hzxoPwqkBJXshKVVe6_9.png)

# Granite-8B-Code-Instruct-128K

## Model Summary
**Granite-8B-Code-Instruct-128K** is an 8B-parameter long-context instruct model fine-tuned from *Granite-8B-Code-Base-128K* on a combination of the **permissively licensed** data used to train the original Granite code instruct models and synthetically generated code instruction datasets tailored for solving long-context problems. By exposing the model to both short- and long-context data, we aim to enhance its long-context capability without sacrificing code generation performance at short input context.

- **Developers:** IBM Research
- **GitHub Repository:** [ibm-granite/granite-code-models](https://github.com/ibm-granite/granite-code-models)
- **Paper:** [Scaling Granite Code Models to 128K Context](https://arxiv.org/abs/2407.13739)
- **Release Date:** July 18th, 2024
- **License:** [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0)

## Usage
### Intended use
The model is designed to respond to coding-related instructions over long-context input of up to 128K tokens and can be used to build coding assistants.

<!-- TO DO: Check starcoder2 instruct code example that includes the template https://huggingface.co/bigcode/starcoder2-15b-instruct-v0.1 -->

### Generation
This is a simple example of how to use the **Granite-8B-Code-Instruct-128K** model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # or "cpu"
model_path = "ibm-granite/granite-8B-Code-instruct-128k"
tokenizer = AutoTokenizer.from_pretrained(model_path)
# drop device_map if running on CPU
model = AutoModelForCausalLM.from_pretrained(model_path, device_map=device)
model.eval()
# change input text as desired
chat = [
    { "role": "user", "content": "Write a code to find the maximum value in a list of numbers." },
]
chat = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
# tokenize the text
input_tokens = tokenizer(chat, return_tensors="pt")
# transfer tokenized inputs to the device
for i in input_tokens:
    input_tokens[i] = input_tokens[i].to(device)
# generate output tokens
output = model.generate(**input_tokens, max_new_tokens=100)
# decode output tokens into text
output = tokenizer.batch_decode(output)
# loop over the batch to print; in this example the batch size is 1
for i in output:
    print(i)
```
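
Since the model accepts up to 128K tokens of context, the same pattern extends to long inputs. Below is a minimal sketch, continuing from the snippet above (it reuses `tokenizer`, `model`, and `device`), that packs an entire source file into a single user turn; the file path and question are hypothetical placeholders.

```python
# Long-context sketch: pack a large source file into one user turn.
# The path below is a hypothetical placeholder; any input within the
# 128K-token context budget can be supplied this way.
with open("path/to/large_source_file.py") as f:
    source = f.read()

chat = [
    { "role": "user", "content": source + "\n\nExplain what the code above does." },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
input_tokens = tokenizer(prompt, return_tensors="pt").to(device)
output = model.generate(**input_tokens, max_new_tokens=200)
print(tokenizer.batch_decode(output)[0])
```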

<!-- TO DO: Check this part -->
## Training Data
Granite Code Instruct models are trained on a mix of short- and long-context data as follows.
* Short-Context Instruction Data: [CommitPackFT](https://huggingface.co/datasets/bigcode/commitpackft), [BigCode-SC2-Instruct](https://huggingface.co/datasets/bigcode/self-oss-instruct-sc2-exec-filter-50k), [MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct), [MetaMathQA](https://huggingface.co/datasets/meta-math/MetaMathQA), [Glaive-Code-Assistant-v3](https://huggingface.co/datasets/glaiveai/glaive-code-assistant-v3), [Glaive-Function-Calling-v2](https://huggingface.co/datasets/glaiveai/glaive-function-calling-v2), [NL2SQL11](https://huggingface.co/datasets/bugdaryan/sql-create-context-instruction), [HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer), and [OpenPlatypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus), along with a synthetically generated dataset for API calling and multi-turn code interactions with execution feedback. We also include a collection of hardcoded prompts to ensure the model generates correct outputs given inquiries about its name or developers.
* Long-Context Instruction Data: A synthetically generated dataset created by bootstrapping repository-level, file-packed documents through Granite-8B-Code-Instruct to improve the model's long-context capability.

## Infrastructure
We train the Granite Code models using two of IBM's supercomputing clusters, Vela and Blue Vela, outfitted with NVIDIA A100 and H100 GPUs, respectively. These clusters provide a scalable and efficient infrastructure for training our models across thousands of GPUs.

## Ethical Considerations and Limitations
Granite code instruct models are primarily fine-tuned using instruction-response pairs across a specific set of programming languages. Thus, their performance may be limited on out-of-domain programming languages. In such cases, it is beneficial to provide few-shot examples to steer the model's output, as sketched below. Moreover, developers should perform safety testing and target-specific tuning before deploying these models in critical applications. The model also inherits ethical considerations and limitations from its base model. For more information, please refer to the *[Granite-8B-Code-Base-128K](https://huggingface.co/ibm-granite/granite-8B-Code-base-128k)* model card.
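
A few-shot prompt can be assembled with the same chat template by prepending worked examples as prior conversation turns. The turns below are purely illustrative placeholders, not taken from the training data.

```python
# Illustrative few-shot prompt for an out-of-domain language (here, COBOL).
# The example user/assistant turns are hypothetical and serve only to steer
# the style and format of the model's next response.
chat = [
    { "role": "user", "content": "Write a COBOL paragraph that adds two numbers." },
    { "role": "assistant", "content": "ADD-NUMBERS.\n    ADD NUM-A TO NUM-B GIVING NUM-SUM." },
    { "role": "user", "content": "Write a COBOL paragraph that multiplies two numbers." },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```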
granite-8b-code-instruct-128k.Q4_0.gguf ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7538a47af660b341befb2737a559221ac7f6b2c6e09e4a48be2ba6eabf64d629
size 4590894944
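
The three lines above are a Git LFS pointer for the ~4.6 GB Q4_0 GGUF weight file: the pointer records the file's SHA-256 digest and byte size rather than the data itself. Below is a minimal sketch of verifying a downloaded copy against this pointer; it assumes the file was saved under its repository name in the current directory.

```python
# Verify a downloaded GGUF file against the LFS pointer's oid and size.
# The local path is an assumption; adjust it to wherever the file lives.
import hashlib
import os

path = "granite-8b-code-instruct-128k.Q4_0.gguf"
expected_oid = "7538a47af660b341befb2737a559221ac7f6b2c6e09e4a48be2ba6eabf64d629"
expected_size = 4590894944

# cheap check first: byte size
assert os.path.getsize(path) == expected_size, "size mismatch"

# then the SHA-256 digest, streamed in 1 MiB blocks
digest = hashlib.sha256()
with open(path, "rb") as f:
    for block in iter(lambda: f.read(1 << 20), b""):
        digest.update(block)
assert digest.hexdigest() == expected_oid, "hash mismatch"
print("GGUF file matches the LFS pointer")
```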