---
library_name: transformers
tags:
  - unsloth
  - Sparky-SQL-Llama-3.2-1B
license: apache-2.0
datasets:
  - shreeyashm/SQL-Queries-Dataset
language:
  - en
base_model:
  - unsloth/Llama-3.2-1B
pipeline_tag: text-generation
---

# Model Card for Sparky-SQL-Llama-3.2-1B

## Model Details

### Model Description

The model was fine-tuned from the Llama-3.2-1B base model on SQL query data.

## How to Get Started with the Model

```python
from transformers import pipeline

model_id = "itsme-nishanth/Sparky-SQL-Llama-3.2-1B"
pipe = pipeline("text-generation", model_id, device="cuda")
messages = [
    {"role": "user", "content": "list down the product names and its type provided by vendor 'vanhelsing' from 'products' table?"},
]
print(pipe(messages, max_new_tokens=100)[0]["generated_text"][-1])  # print the assistant's response
```

- **Developed by:** Nishanth
- **Model type:** Llama
- **Language(s) (NLP):** English
- **License:** Apache License 2.0
- **Finetuned from model:** Llama-3.2-1B

## Training Details

### Training Data

The model was fine-tuned on the shreeyashm/SQL-Queries-Dataset.

### Training Procedure

#### Preprocessing

The dataset contained empty records, which were removed before training.
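The cleanup step above can be sketched as a simple filter. This is an illustrative example, not the exact preprocessing code; the field names (`question`, `query`) are assumptions about the dataset's columns.

```python
# Hypothetical sketch of dropping empty records before fine-tuning.
# Field names ("question", "query") are assumed, not confirmed by the model card.

def drop_empty_records(records):
    """Keep only records whose fields are all non-empty strings."""
    return [
        r for r in records
        if all(isinstance(v, str) and v.strip() for v in r.values())
    ]

sample = [
    {"question": "List all vendors.", "query": "SELECT name FROM vendors;"},
    {"question": "", "query": ""},  # empty record: dropped
]
cleaned = drop_empty_records(sample)
print(len(cleaned))  # → 1
```

The same filter can be applied lazily with `datasets.Dataset.filter` when the data is loaded through the `datasets` library.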

#### Training Hyperparameters

- **Training regime:**
  - `gradient_accumulation_steps = 4`
  - `warmup_steps = 5`
  - `max_steps = 60`
  - `learning_rate = 2e-4`
  - `fp16 = not is_bfloat16_supported()`
  - `bf16 = is_bfloat16_supported()`
  - `optim = "adamw_8bit"`
  - `weight_decay = 0.01`
  - `lr_scheduler_type = "linear"`
  - `seed = 3407`
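As a minimal sketch, the hyperparameters above can be collected into a dict suitable for unpacking into `transformers.TrainingArguments` (e.g. `TrainingArguments(**training_args, ...)`). The `bf16_supported` flag here is a plain placeholder standing in for Unsloth's `is_bfloat16_supported()` check.

```python
# Illustrative sketch only: the listed hyperparameters as a TrainingArguments-style
# dict. bf16_supported is a placeholder for unsloth's is_bfloat16_supported().

bf16_supported = False  # assumption: replace with is_bfloat16_supported() under Unsloth

training_args = {
    "gradient_accumulation_steps": 4,
    "warmup_steps": 5,
    "max_steps": 60,
    "learning_rate": 2e-4,
    "fp16": not bf16_supported,  # use fp16 only when bf16 is unavailable
    "bf16": bf16_supported,
    "optim": "adamw_8bit",
    "weight_decay": 0.01,
    "lr_scheduler_type": "linear",
    "seed": 3407,
}

print(training_args["max_steps"])  # → 60
```

Note that `fp16` and `bf16` are mutually exclusive: exactly one is enabled depending on hardware support, which matters on a Tesla T4 (no bf16).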

## Technical Specifications

### Hardware

- Google Colab (Tesla T4)

### Software

- Transformers
- Unsloth

## Model Card Contact

[email protected]