---
library_name: transformers
license: mit
datasets:
- gretelai/synthetic_text_to_sql
pipeline_tag: text-generation
---
# Model Card for LLaMA 3.2 3B Instruct Text2SQL
## Model Details
### Model Description
This is a fine-tuned version of the LLaMA 3.2 3B Instruct model, optimized for text-to-SQL generation. It was trained to convert natural-language questions into structured SQL queries.
- **Developed by:** Zhafran Ramadhan - XeAI
- **Model type:** Decoder-only Language Model
- **Language(s):** English (the base model is multilingual)
- **License:** MIT
- **Finetuned from model:** LLaMA 3.2 3B Instruct
- **Log WandB Report:** [WandB Report](https://wandb.ai/zhafranr/LLaMA_3-2_3B_Instruct_FineTune_Text2SQL/reports/LLaMa-3-2-3B-Instruct-Fine-Tune-Text2SQL--VmlldzoxMDA2NDkzNA)
### Model Sources
- **Repository:** [LLaMA 3.2 3B Instruct](https://huggingface.co/meta-llama/Llama-3.2-3B-Instruct)
- **Dataset:** [Synthetic Text2SQL](https://huggingface.co/datasets/gretelai/synthetic_text_to_sql)
## How to Get Started with the Model
### Installation
```bash
pip install transformers torch accelerate
```
### Input Format and Usage
The model expects prompts in the Llama 3 chat format, following this template:
```text
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
[System context and database schema]
<|eot_id|><|start_header_id|>user<|end_header_id|>
[User query]
<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
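Rather than assembling these special tokens by hand, the tokenizer can build the same prompt from a message list, assuming the fine-tuned tokenizer keeps the standard Llama 3.2 chat template (a minimal sketch):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("XeAI/LLaMa_3.2_3B_Instruct_Text2SQL")

messages = [
    {"role": "system", "content": "[System context and database schema]"},
    {"role": "user", "content": "[User query]"},
]

# add_generation_prompt=True appends the assistant header so the model
# continues with its answer instead of starting a new user turn.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```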
### Basic Usage
```python
from transformers import pipeline
import torch

# Initialize the pipeline
generator = pipeline(
    "text-generation",
    model="XeAI/LLaMa_3.2_3B_Instruct_Text2SQL",  # Replace with your model ID
    torch_dtype=torch.float16,
    device_map="auto"
)

def generate_sql_query(context, question):
    # Format the prompt according to the training template
    prompt = f"""<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 07 Nov 2024
You are a specialized SQL query generator focused solely on the provided RAG database. Your tasks are:
1. Generate SQL queries based on user requests that are related to querying the RAG database.
2. Only output the SQL query itself, without any additional explanation or commentary.
3. Use the context provided from the RAG database to craft accurate queries.
Context: {context}
<|eot_id|><|start_header_id|>user<|end_header_id|>
{question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>"""

    response = generator(
        prompt,
        max_new_tokens=256,        # cap the completion; max_length would also count the prompt
        num_return_sequences=1,
        temperature=0.1,
        do_sample=True,
        return_full_text=False,    # return only the generated SQL, not the echoed prompt
        pad_token_id=generator.tokenizer.eos_token_id
    )
    return response[0]['generated_text']

# Example usage
context = """CREATE TABLE upgrades (id INT, cost FLOAT, type TEXT);
INSERT INTO upgrades (id, cost, type) VALUES
(1, 500, 'Insulation'),
(2, 1000, 'HVAC'),
(3, 1500, 'Lighting');"""

questions = [
    "Find the energy efficiency upgrades with the highest cost and their types.",
    "Show me all upgrades costing less than 1000 dollars.",
    "Calculate the average cost of all upgrades."
]

for question in questions:
    sql = generate_sql_query(context, question)
    print(f"\nQuestion: {question}")
    print(f"Generated SQL: {sql}\n")
```
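Since the model can occasionally emit invalid SQL, one quick end-to-end sanity check is to run the completion against an in-memory SQLite database seeded from the same `context` (a sketch; it assumes the generated query is SQLite-compatible and reuses `generate_sql_query` from above):
```python
import sqlite3

def run_on_sample_data(context, sql):
    # Build a throwaway in-memory database from the schema and seed rows,
    # then execute the generated query against it.
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(context)
        return conn.execute(sql).fetchall()
    finally:
        conn.close()

sql = generate_sql_query(context, "Calculate the average cost of all upgrades.")
print(run_on_sample_data(context, sql))  # a list of result rows, e.g. [(1000.0,)]
```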
### Advanced Usage with Custom System Prompt
```python
def generate_sql_with_custom_prompt(context, question, custom_system_prompt=""):
    base_prompt = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Cutting Knowledge Date: December 2023
Today Date: 07 Nov 2024
You are a specialized SQL query generator focused solely on the provided RAG database."""

    full_prompt = f"""{base_prompt}
{custom_system_prompt}
Context: {context}
<|eot_id|><|start_header_id|>user<|end_header_id|>
{question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>"""

    response = generator(
        full_prompt,
        max_new_tokens=256,        # cap the completion length
        num_return_sequences=1,
        temperature=0.1,
        do_sample=True,
        return_full_text=False,    # return only the generated SQL
        pad_token_id=generator.tokenizer.eos_token_id
    )
    return response[0]['generated_text']
```
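For example, the helper can layer an extra rule on top of the base system prompt (an illustrative call, reusing `context` from the basic usage snippet; the instruction text is made up):
```python
custom_rule = "Always qualify column names with their table name."
sql = generate_sql_with_custom_prompt(
    context,
    "Show me all upgrades costing less than 1000 dollars.",
    custom_system_prompt=custom_rule,
)
print(sql)
```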
### Best Practices
1. **Input Formatting**:
   - Always include the special tokens (`<|begin_of_text|>`, `<|eot_id|>`, etc.)
   - Provide the complete database schema in the context
   - Keep questions clear and focused on data retrieval
2. **Parameter Configuration** (see the sketch after this list):
   - Use `temperature=0.1` for consistent SQL generation
   - Adjust `max_new_tokens` based on expected query complexity
   - Enable `do_sample` for more natural completions
3. **Context Management**:
   - Include relevant table schemas
   - Provide sample data when needed
   - Keep the context concise but complete
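As a convenience, those decoding settings can be collected into one reusable dictionary (a sketch; `generator` and `prompt` are the objects from the snippets above, and the values are this card's suggestions rather than tuned constants):
```python
# Decoding settings suggested in the best practices above.
GEN_KWARGS = {
    "max_new_tokens": 256,      # raise for complex, multi-clause queries
    "temperature": 0.1,         # low temperature keeps SQL output consistent
    "do_sample": True,
    "num_return_sequences": 1,
    "return_full_text": False,  # drop the echoed prompt from the output
}

sql = generator(
    prompt,
    pad_token_id=generator.tokenizer.eos_token_id,
    **GEN_KWARGS,
)[0]["generated_text"]
```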
## Uses
### Direct Use
The model is designed for converting natural language questions into SQL queries. It can be used for:
- Database query generation from natural language
- SQL query assistance
- Data analysis automation
### Out-of-Scope Use
- Production deployment without human validation
- Critical decision-making without human oversight
- Direct database execution without query validation
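To make the last point concrete, a lightweight pre-execution check is to let SQLite try to plan the generated statement against the schema before it ever reaches a real database (a sketch, assuming the query is SQLite-compatible; this catches syntax errors and missing tables or columns, not semantic mistakes):
```python
import sqlite3

def is_valid_sql(schema_sql: str, query: str) -> bool:
    """Return True if SQLite can parse and plan `query` against `schema_sql`."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(schema_sql)
        # EXPLAIN prepares the statement without running it, so bad syntax
        # or references to unknown tables/columns raise sqlite3.Error.
        conn.execute(f"EXPLAIN {query}")
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()

# e.g. is_valid_sql(context, "SELECT AVG(cost) FROM upgrades")  -> True
```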
## Training Details
### Training Data
- Dataset: [Synthetic Text2SQL](https://huggingface.co/datasets/gretelai/synthetic_text_to_sql)
- Data preprocessing: Standard text-to-SQL formatting
### Training Procedure
#### Training Hyperparameters
- **Total Steps:** 4,149
- **Final Training Loss:** 0.1168
- **Evaluation Loss:** 0.2125
- **Learning Rate:** dynamic schedule, decayed to 0 by the final step
- **Epochs:** 2.99
- **Gradient Norm:** 1.3121
#### Performance Metrics
- **Training Samples/Second:** 6.291
- **Evaluation Samples/Second:** 19.325
- **Steps/Second:** 3.868
- **Total FLOPS:** 1.92e18
#### Training Infrastructure
- **Hardware:** Single NVIDIA H100 GPU
- **Training Duration:** ~4.6 hours
- **Total Runtime:** 16,491.75 seconds
- **Model Preparation Time:** 0.0051 seconds
## Evaluation
### Metrics
The model's performance was tracked using several key metrics:
- **Training Loss:** Started at ~1.2, converged to 0.1168
- **Evaluation Loss:** 0.2125
- **Processing Efficiency:** 19.325 samples per second during evaluation
### Results Summary
- Achieved stable convergence after ~4000 steps
- Maintained consistent performance metrics throughout training
- Shows good balance between training and evaluation loss
## Environmental Impact
- **Hardware Type:** NVIDIA H100 GPU
- **Hours used:** ~4.6 hours (16,491.75 seconds total runtime)
- **Training Location:** [RunPod (GPUaaS)](https://www.runpod.io)
## Technical Specifications
### Compute Infrastructure
- **GPU:** NVIDIA H100
- **Training Duration:** ~4.6 hours
- **Total Steps:** 4,149
- **FLOPs Utilized:** 1.92e18
## Model Card Contact
[Contact information to be added by Zhafran Ramadhan]
---
*Note: This model card follows the guidelines set by the ML community for responsible AI development and deployment.*