
Inference
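The snippet below loads the model and tokenizer, fills an Alpaca-style prompt template with the temp_scan table schema and a natural-language question, and decodes the generated SQL query: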

import time
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("suriya7/Gpt-Neo-SQL")
model = AutoModelForCausalLM.from_pretrained("suriya7/Gpt-Neo-SQL")
BOS_TOKEN = "<sos>"  # start-of-sequence marker expected by the prompt template
alpaca_prompt = BOS_TOKEN + """You are an intelligent AI specialized in generating SQL queries. Your task is to assist in handling SQL queries to retrieve specific information from a database. Please provide the SQL query corresponding to the given instruction and input.

### Instruction:
CREATE TABLE temp_scan(
id SERIAL PRIMARY KEY,
email TEXT, 
frequency TEXT, 
status TEXT, 
n_lines BIGINT,
n_files BIGINT,
vulns INT
);

The schema for the temp_scan table is as follows:
email: Email address of the user who initiated the scan.
frequency: How often the scan is run (e.g., Once, Daily, Weekly, Monthly).
status: The current status of the scan (e.g., COMPLETED, RUNNING, SCANNING, FAILED, STARTED, CLONING, CLOCING).
n_lines: The number of lines of code scanned.
n_files: The number of files scanned.
vulns: The number of vulnerabilities found.

### Input:
{}

### Response:
"""
input_ques = "give me the scans that are running without vulnerabilities."

s = time.time()
prompt = alpaca_prompt.format(input_ques)
encodeds = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids

# Run on GPU when one is available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model.to(device)
inputs = encodeds.to(device)

# Increase max_new_tokens if longer queries get truncated
generated_ids = model.generate(
    inputs,
    max_new_tokens=256,
    temperature=0.1,
    top_p=0.90,
    do_sample=True,
    pad_token_id=50259,
    eos_token_id=50259,
    num_return_sequences=1,
)

# Strip the prompt from the decoded text and keep everything before <eos>
print(tokenizer.decode(generated_ids[0]).replace(prompt, '').split('<eos>')[0])
e = time.time()
print(f'time taken: {e - s}')
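
For the example question above, a well-formed answer would be a filter on running scans with zero vulnerabilities, e.g. SELECT * FROM temp_scan WHERE status = 'RUNNING' AND vulns = 0; (illustrative only; the actual model output may vary).

The steps above can also be wrapped into a small helper for repeated questions. This is a minimal sketch reusing the tokenizer, model, device, and alpaca_prompt loaded earlier; generate_sql is a hypothetical name, not part of the model card:

def generate_sql(question: str, max_new_tokens: int = 256) -> str:
    # Hypothetical helper: fills the prompt template, generates, and
    # returns the text before the <eos> marker.
    prompt = alpaca_prompt.format(question)
    input_ids = tokenizer(prompt, return_tensors="pt", truncation=True).input_ids.to(device)
    output_ids = model.generate(
        input_ids,
        max_new_tokens=max_new_tokens,
        temperature=0.1,
        top_p=0.90,
        do_sample=True,
        pad_token_id=50259,
        eos_token_id=50259,
    )
    decoded = tokenizer.decode(output_ids[0])
    return decoded.replace(prompt, '').split('<eos>')[0].strip()

print(generate_sql("how many scans have failed?"))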
Model size: 125M params · Tensor type: F32 · Format: Safetensors