---
license: apache-2.0
language:
- en
metrics:
- bleu
- rouge
tags:
- causal-lm
- code
- cypher
- graph
- neo4j
inference: false
widget:
- text: >-
    Show me the people who have Python and Cloud skills and have been in the
    company for at least 3 years.
  example_title: Example 1
- text: What is the IMDb rating of Pulp Fiction?
  example_title: Example 2
- text: >-
    Display the first 3 users followed by 'Neo4j' who have more than 10000
    followers.
  example_title: Example 3
base_model:
- stabilityai/stable-code-instruct-3b
base_model_relation: finetune
---
## Model Description
A specialized 3B-parameter model that beats state-of-the-art models such as GPT-4o at generating Cypher.
It is a fine-tune of https://huggingface.co/stabilityai/stable-code-instruct-3b trained on https://github.com/neo4j-labs/text2cypher/tree/main/datasets/synthetic_opus_demodbs to generate Cypher queries from natural-language questions for graph databases such as Neo4j.
## Usage
### Safetensors (recommended)
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the model in bfloat16
tokenizer = AutoTokenizer.from_pretrained("lakkeo/stable-cypher-instruct-3b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("lakkeo/stable-cypher-instruct-3b", torch_dtype=torch.bfloat16, trust_remote_code=True)

# Build the prompt from the model's chat template
messages = [
    {
        "role": "user",
        "content": "Show me the people who have Python and Cloud skills and have been in the company for at least 3 years."
    }
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

tokens = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.9,
    temperature=0.2,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens
outputs = tokenizer.batch_decode(tokens[:, inputs.input_ids.shape[-1]:], skip_special_tokens=False)[0]
print(outputs)
```
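To actually run the generated statement against a database, you can pass it to the official `neo4j` Python driver. This is a minimal sketch under stated assumptions: the URI and credentials are placeholders, `generated_cypher` stands in for the decoded model output above with special tokens stripped, and the node labels and properties in the example query are hypothetical.
```python
from neo4j import GraphDatabase

# Placeholder connection details: point these at your own Neo4j instance
URI = "bolt://localhost:7687"
AUTH = ("neo4j", "password")

# Stand-in for the decoded model output (special tokens stripped);
# the Person/Skill schema here is hypothetical
generated_cypher = (
    "MATCH (p:Person)-[:HAS_SKILL]->(s:Skill) "
    "WHERE s.name IN ['Python', 'Cloud'] AND p.years_in_company >= 3 "
    "RETURN p"
)

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    records, summary, keys = driver.execute_query(generated_cypher)
    for record in records:
        print(record)
```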
### GGUF
```python
from llama_cpp import Llama
# Load the GGUF model
print("Loading model...")
model = Llama(
    model_path=r"C:\Users\John\stable-cypher-instruct-3b.Q4_K_M.gguf",
    n_ctx=512,
    n_batch=512,
    n_gpu_layers=-1,  # Offload all layers to the GPU
    verbose=False
)

# Define your question
question = "Show me the people who have Python and Cloud skills and have been in the company for at least 3 years."

# Build the ChatML prompt by hand (mirroring what apply_chat_template produces)
full_prompt = f"<|im_start|>system\nCreate a Cypher statement to answer the following question:<|im_end|>\n<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant\n"

# Generate the response; sampling parameters belong to the call, not the constructor
print("Generating response...")
response = model(
    full_prompt,
    max_tokens=128,
    top_p=0.9,
    temperature=0.2,
    stop=["<|im_end|>", "<|im_start|>"],
    echo=False
)
# Extract and print the generated response
answer = response['choices'][0]['text'].strip()
print("\nQuestion:", question)
print("\nGenerated Cypher statement:")
print(answer)
```
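Alternatively, llama-cpp-python can build the prompt for you with `create_chat_completion`, which applies the chat template embedded in the GGUF. A sketch under the assumption that the embedded template matches the ChatML format above:
```python
from llama_cpp import Llama

model = Llama(
    model_path=r"C:\Users\John\stable-cypher-instruct-3b.Q4_K_M.gguf",
    n_ctx=512,
    n_gpu_layers=-1,
    verbose=False
)

# create_chat_completion applies the GGUF's embedded chat template,
# so no hand-built <|im_start|> prompt is needed
response = model.create_chat_completion(
    messages=[
        {"role": "system", "content": "Create a Cypher statement to answer the following question:"},
        {"role": "user", "content": "What is the IMDb rating of Pulp Fiction?"}
    ],
    max_tokens=128,
    top_p=0.9,
    temperature=0.2
)
print(response["choices"][0]["message"]["content"])
```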
## Performance
| Metric | stable-code-instruct-3b | GPT-4o | stable-cypher-instruct-3b |
| :----------: | :---------------------: | :--------: | :-----------------------: |
| BLEU-4 | 19.07 | 32.35 | **88.63** |
| ROUGE-1 | 39.49 | 69.17 | **95.09** |
| ROUGE-2 | 24.82 | 46.97 | **90.71** |
| ROUGE-L | 29.63 | 65.24 | **91.51** |
| Jaro-Winkler | 52.21 | 86.38 | **95.69** |
| Jaccard | 25.55 | 72.80 | **90.78** |
| Pass@1 | 0.00 | 0.00 | **51.80** |
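The surface-overlap metrics in this table can be approximated with the Hugging Face `evaluate` library. This sketch shows the general approach only; the actual reference set and the execution-based Pass@1 check are not reproduced here, and the prediction/reference pair below is a made-up toy example.
```python
import evaluate

# Toy prediction/reference pair; not from the real evaluation set
predictions = ["MATCH (m:Movie {title: 'Pulp Fiction'}) RETURN m.imdbRating"]
references = [["MATCH (m:Movie {title: 'Pulp Fiction'}) RETURN m.imdbRating"]]

bleu = evaluate.load("bleu")    # max_order=4 gives BLEU-4
rouge = evaluate.load("rouge")  # reports rouge1, rouge2 and rougeL

print(bleu.compute(predictions=predictions, references=references, max_order=4))
print(rouge.compute(predictions=predictions, references=references))
```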
### Example
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6504bb76423b46492e7f38c7/pweL4qgmFaknLBYp-CGHm.png)
### Eval params
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6504bb76423b46492e7f38c7/AT80-09XrHNz-dJs9TH3M.png)
## Reproducibility
This is the config file from LLaMA-Factory:
```json
{
"top.model_name": "Custom",
"top.finetuning_type": "lora",
"top.adapter_path": [],
"top.quantization_bit": "none",
"top.template": "default",
"top.rope_scaling": "none",
"top.booster": "none",
"train.training_stage": "Supervised Fine-Tuning",
"train.dataset_dir": "data",
"train.dataset": [
"cypher_opus"
],
"train.learning_rate": "2e-4",
"train.num_train_epochs": "5.0",
"train.max_grad_norm": "1.0",
"train.max_samples": "5000",
"train.compute_type": "fp16",
"train.cutoff_len": 256,
"train.batch_size": 16,
"train.gradient_accumulation_steps": 2,
"train.val_size": 0.1,
"train.lr_scheduler_type": "cosine",
"train.logging_steps": 10,
"train.save_steps": 100,
"train.warmup_steps": 20,
"train.neftune_alpha": 0,
"train.optim": "adamw_torch",
"train.resize_vocab": false,
"train.packing": false,
"train.upcast_layernorm": false,
"train.use_llama_pro": false,
"train.shift_attn": false,
"train.report_to": false,
"train.num_layer_trainable": 3,
"train.name_module_trainable": "all",
"train.lora_rank": 64,
"train.lora_alpha": 64,
"train.lora_dropout": 0.1,
"train.loraplus_lr_ratio": 0,
"train.create_new_adapter": false,
"train.use_rslora": false,
"train.use_dora": true,
"train.lora_target": "",
"train.additional_target": "",
"train.dpo_beta": 0.1,
"train.dpo_ftx": 0,
"train.orpo_beta": 0.1,
"train.reward_model": null,
"train.use_galore": false,
"train.galore_rank": 16,
"train.galore_update_interval": 200,
"train.galore_scale": 0.25,
"train.galore_target": "all"
}
```
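For anyone reproducing this outside the LLaMA-Factory GUI, the LoRA-related fields above map roughly onto a PEFT `LoraConfig`. This is a sketch under that assumption; `target_modules="all-linear"` is my stand-in for LLaMA-Factory's `lora_target` "all" behaviour, not a setting taken verbatim from the config:
```python
from peft import LoraConfig

# Rough PEFT equivalent of the LoRA fields in the LLaMA-Factory config above
lora_config = LoraConfig(
    task_type="CAUSAL_LM",
    r=64,                         # train.lora_rank
    lora_alpha=64,                # train.lora_alpha
    lora_dropout=0.1,             # train.lora_dropout
    use_dora=True,                # train.use_dora
    target_modules="all-linear",  # assumption: stands in for lora_target "all"
)
```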
I used llama.cpp to merge the LoRA and generate the quants.
The improvement over the base model is significant, but you will still need to fine-tune on your own company's syntax and entities.
I have been tinkering with the training parameters over a few training runs, but there is still room for improvement.
I'm open to the idea of making a full tutorial if there is enough interest in this project. |