
Model Description

A specialized 3B-parameter model that beats SOTA models such as GPT-4o at generating Cypher. It is a fine-tune of https://huggingface.co/stabilityai/stable-code-instruct-3b, trained on https://github.com/neo4j-labs/text2cypher/tree/main/datasets/synthetic_opus_demodbs, to generate Cypher queries from natural-language questions for graph databases such as Neo4j.

Usage

Safetensors (recommended)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("lakkeo/stable-cypher-instruct-3b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("lakkeo/stable-cypher-instruct-3b", torch_dtype=torch.bfloat16, trust_remote_code=True)

messages = [
    {
        "role": "user",
        "content": "Show me the people who have Python and Cloud skills and have been in the company for at least 3 years."
    }
]

# Build the prompt with the model's chat template
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

inputs = tokenizer([prompt], return_tensors="pt").to(model.device)

tokens = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    top_p=0.9,
    temperature=0.2,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens (everything after the prompt)
outputs = tokenizer.batch_decode(tokens[:, inputs.input_ids.shape[-1]:], skip_special_tokens=False)[0]
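
Since decoding uses skip_special_tokens=False, the decoded string can still contain the chat special tokens. A minimal cleanup sketch is shown below; the commented output is only illustrative (node labels and properties are hypothetical and depend on your graph schema):

# Strip the ChatML special tokens left in the decoded text
cypher = outputs.replace("<|im_end|>", "").replace("<|endoftext|>", "").strip()
print(cypher)
# Illustrative output only (schema names are hypothetical):
# MATCH (p:Person)-[:HAS_SKILL]->(s:Skill)
# WHERE s.name IN ['Python', 'Cloud'] AND p.years_at_company >= 3
# RETURN p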

GGUF

from llama_cpp import Llama

# Load the GGUF model
print("Loading model...")
model = Llama(
    model_path=r"C:\Users\John\stable-cypher-instruct-3b.Q4_K_M.gguf",
    n_ctx=512,
    n_batch=512,
    n_gpu_layers=-1,  # Offload all layers to the GPU
    verbose=False
)

# Define your question
question = "Show me the people who have Python and Cloud skills and have been in the company for at least 3 years."

# Create the full prompt (simulating the apply_chat_template function)
full_prompt = f"<|im_start|>system\nCreate a Cypher statement to answer the following question:<|im_end|>\n<|im_start|>user\n{question}<|im_end|>\n<|im_start|>assistant\n"

# Generate response
print("Generating response...")
response = model(
    full_prompt,
    max_tokens=128,
    top_p=0.9,
    temperature=0.2,
    stop=["<|im_end|>", "<|im_start|>"],
    echo=False
)

# Extract and print the generated response
answer = response['choices'][0]['text'].strip()
print("\nQuestion:", question)
print("\nGenerated Cypher statement:")
print(answer)
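
Once a statement is generated, it can be run against a live Neo4j instance. This is a minimal sketch using the official neo4j Python driver; the URI, credentials, and database contents are placeholders, not part of this model card:

from neo4j import GraphDatabase

# Connect to a local Neo4j instance (placeholder credentials)
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # `answer` is the Cypher statement generated above
    result = session.run(answer)
    for record in result:
        print(record.data())

driver.close()

Executing the generated statement and comparing the returned records is also the idea behind the Pass@1 metric reported below.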

Performance

| Metric       | stable-code-instruct-3b | gpt-4o | stable-cypher-instruct-3b |
|--------------|-------------------------|--------|---------------------------|
| BLEU-4       | 19.07                   | 32.35  | 88.63                     |
| ROUGE-1      | 39.49                   | 69.17  | 95.09                     |
| ROUGE-2      | 24.82                   | 46.97  | 90.71                     |
| ROUGE-L      | 29.63                   | 65.24  | 91.51                     |
| Jaro-Winkler | 52.21                   | 86.38  | 95.69                     |
| Jaccard      | 25.55                   | 72.80  | 90.78                     |
| Pass@1       | 0.00                    | 0.00   | 51.80                     |
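
The exact evaluation harness is not published here. As a rough sketch, the text-similarity metrics above can be reproduced with the Hugging Face evaluate library; the prediction/reference pair below is illustrative, not taken from the actual eval set:

import evaluate

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

# Hypothetical model output vs. ground-truth Cypher
predictions = ["MATCH (p:Person) RETURN p.name"]
references = ["MATCH (p:Person) RETURN p.name AS name"]

# BLEU-4 corresponds to max_order=4 (the default)
print(bleu.compute(predictions=predictions, references=[[r] for r in references], max_order=4))
# Returns rouge1, rouge2 and rougeL scores
print(rouge.compute(predictions=predictions, references=references))

Pass@1, by contrast, requires executing each generated statement and comparing the returned records, which is far stricter than surface-level similarity.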

Example

[Image: example generation]

Eval params

[Image: evaluation parameters]

Reproducibility

This is the config file from LLaMA Factory:

{
  "top.model_name": "Custom",
  "top.finetuning_type": "lora",
  "top.adapter_path": [],
  "top.quantization_bit": "none",
  "top.template": "default",
  "top.rope_scaling": "none",
  "top.booster": "none",
  "train.training_stage": "Supervised Fine-Tuning",
  "train.dataset_dir": "data",
  "train.dataset": [
    "cypher_opus"
  ],
  "train.learning_rate": "2e-4",
  "train.num_train_epochs": "5.0",
  "train.max_grad_norm": "1.0",
  "train.max_samples": "5000",
  "train.compute_type": "fp16",
  "train.cutoff_len": 256,
  "train.batch_size": 16,
  "train.gradient_accumulation_steps": 2,
  "train.val_size": 0.1,
  "train.lr_scheduler_type": "cosine",
  "train.logging_steps": 10,
  "train.save_steps": 100,
  "train.warmup_steps": 20,
  "train.neftune_alpha": 0,
  "train.optim": "adamw_torch",
  "train.resize_vocab": false,
  "train.packing": false,
  "train.upcast_layernorm": false,
  "train.use_llama_pro": false,
  "train.shift_attn": false,
  "train.report_to": false,
  "train.num_layer_trainable": 3,
  "train.name_module_trainable": "all",
  "train.lora_rank": 64,
  "train.lora_alpha": 64,
  "train.lora_dropout": 0.1,
  "train.loraplus_lr_ratio": 0,
  "train.create_new_adapter": false,
  "train.use_rslora": false,
  "train.use_dora": true,
  "train.lora_target": "",
  "train.additional_target": "",
  "train.dpo_beta": 0.1,
  "train.dpo_ftx": 0,
  "train.orpo_beta": 0.1,
  "train.reward_model": null,
  "train.use_galore": false,
  "train.galore_rank": 16,
  "train.galore_update_interval": 200,
  "train.galore_scale": 0.25,
  "train.galore_target": "all"
}

I used llama.cpp to merge the LoRA and generate the quants.
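
For reference, a LoRA merge can also be done in Python with peft before converting to GGUF. This is a sketch under assumed paths, not the exact llama.cpp workflow used here:

import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model the adapter was trained on
base = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stable-code-instruct-3b",
    torch_dtype=torch.float16,
    trust_remote_code=True,
)

# Attach the LoRA adapter (hypothetical local path) and bake it into the weights
model = PeftModel.from_pretrained(base, "path/to/cypher-lora")
merged = model.merge_and_unload()
merged.save_pretrained("stable-cypher-instruct-3b-merged")

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stable-code-instruct-3b", trust_remote_code=True)
tokenizer.save_pretrained("stable-cypher-instruct-3b-merged")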

The progress over the base model is significant, but you will still need to fine-tune on your company's syntax and entities. I've been tinkering with the training parameters over a few training runs, but there is room for improvement. I'm open to the idea of making a full tutorial if there is enough interest in this project.
