---
license: other
language:
  - en
pipeline_tag: text-generation
datasets:
  - LDJnr/Puffin
---

# Model Card for Puffin-Phi V2

This is my first fine-tune of Phi on the Puffin dataset, and it seems to work fairly reliably!


## Model Details

### Model Sources

This model was trained on the Puffin dataset, made by LDJ, using a slightly modified version of the dataset that removed entries longer than 2,000 tokens, so there would be no early cutoffs during training, since Phi's context length is 2k. A minimal sketch of that kind of length filter is shown below.
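
The exact preprocessing script isn't included here, but a rough sketch of filtering out over-length entries might look like the following (the `conversations` field name and ShareGPT-style schema are assumptions about LDJnr/Puffin, not something this card documents):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("teknium/Puffin-Phi-v2", trust_remote_code=True)
puffin = load_dataset("LDJnr/Puffin", split="train")

def fits_in_context(example, max_tokens=2000):
    # Concatenate all turns and count tokens; drop the entry if it would be truncated
    # by Phi's 2k context window. Assumes a ShareGPT-style "conversations" field.
    text = "\n".join(turn["value"] for turn in example["conversations"])
    return len(tokenizer(text).input_ids) <= max_tokens

filtered = puffin.filter(fits_in_context)
```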

## Uses

Let me know!

## How to Get Started with the Model

Phi does not support `device_map="auto"`, and it does not seem to want to run inference in fp16, so use bf16.

Here is working inference code, though it can be improved:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

sysprompt = "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"

# Load in bf16 and move to CUDA manually, since device_map="auto" is not supported.
model = AutoModelForCausalLM.from_pretrained("teknium/Puffin-Phi-v2", trust_remote_code=True, torch_dtype=torch.bfloat16).to("cuda")
tokenizer = AutoTokenizer.from_pretrained("teknium/Puffin-Phi-v2", trust_remote_code=True)

# ShareGPT/Vicuna-style prompt: system prompt followed by USER/ASSISTANT turns.
inputs = tokenizer(f"{sysprompt}USER: Write a negative review for the website Twitter.\nASSISTANT:", return_tensors="pt", return_attention_mask=False).to("cuda")
outputs = model.generate(**inputs, max_length=128, do_sample=True, temperature=0.2, top_p=0.9, use_cache=True, repetition_penalty=1.2, eos_token_id=tokenizer.eos_token_id)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```

The prompt format is ShareGPT/Vicuna, so it uses the system prompt (the default is in the `sysprompt` variable above) and is then prompted like so:

```
USER: <prompt>
ASSISTANT:
```
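
For multi-turn conversations, a small helper along these lines (hypothetical, not shipped with the model) can assemble a prompt in the same format by appending prior turns before the new user message:

```python
SYSTEM = "The assistant gives helpful, detailed, and polite answers to the user's questions.\n"

# Hypothetical helper: builds a ShareGPT/Vicuna-style prompt from prior
# (user, assistant) turns plus a new user message.
def build_prompt(history, user_message, system=SYSTEM):
    prompt = system
    for user_turn, assistant_turn in history:
        prompt += f"USER: {user_turn}\nASSISTANT: {assistant_turn}\n"
    prompt += f"USER: {user_message}\nASSISTANT:"
    return prompt

prompt = build_prompt([], "Write a negative review for the website Twitter.")
```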

## Training Details

### Training Procedure

Trained with Axolotl. You can view the wandb runs for all my Puffin runs (this model is puffin-phi-4 on wandb) here: https://wandb.ai/teknium1/puffin-phi/runs/puffin-phi-4

## Evaluation

TODO