
Model Card for Hack337/WavGPT-1.0

Model Details

Model Description

  • Developed by: hack337
  • Model type: qwen2
  • Finetuned from model: Qwen/Qwen2-1.5B-Instruct (a quick way to verify this is sketched below)

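As an optional sanity check (a minimal sketch, not part of the original card), the adapter configuration can be loaded on its own to confirm which base model it targets:

from peft import PeftConfig

# Load only the adapter config (no weights) and inspect its base-model reference.
config = PeftConfig.from_pretrained("Hack337/WavGPT-1.0")
print(config.base_model_name_or_path)  # expected: Qwen/Qwen2-1.5B-Instruct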

How to Get Started with the Model

Use the code below to get started with the model.
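
The example assumes transformers and peft are installed, plus accelerate for device_map="auto" (an assumed environment setup, not pinned by this card):

pip install transformers peft accelerate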

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

device = "cuda"  # device for the tokenized inputs; the model itself is placed via device_map="auto"
model_path = "Hack337/WavGPT-1.0"

# Load the base model, then attach the WavGPT-1.0 adapter on top of it.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-1.5B-Instruct",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-1.5B-Instruct")
model = PeftModel.from_pretrained(model, model_path)

prompt = "Give me a short introduction to large language models."
messages = [
    # System prompt in Russian: "You are a very helpful assistant."
    {"role": "system", "content": "Вы очень полезный помощник."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    attention_mask=model_inputs.attention_mask,
    max_new_tokens=512
)
# Keep only the newly generated tokens (drop the echoed prompt).
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
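
Optionally, the adapter can be merged into the base weights so the result runs without peft at inference time. This is a hedged sketch using peft's merge_and_unload() (it assumes a LoRA-style adapter; the output directory name is illustrative):

merged_model = model.merge_and_unload()             # fold the adapter into the base weights
merged_model.save_pretrained("WavGPT-1.0-merged")   # hypothetical local directory
tokenizer.save_pretrained("WavGPT-1.0-merged")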

Framework versions

  • PEFT 0.11.1