
Usage

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "mzbac/Phi-3-mini-4k-grammar-correction"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Load in bfloat16; device_map="auto" needs the accelerate package installed.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Two user turns: the correction instruction, then the text to fix inside triple backticks.
messages = [
    {
        "role": "user",
        "content": "Please correct, polish, or translate the text delimited by triple backticks to standard English.",
    },
    {
        "role": "user",
        "content": "Text=```neither 经理或员工 has been informed about the meeting```",
    },
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Stop generation at either the EOS token or Phi-3's <|end|> turn delimiter.
terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|end|>")]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.1,
)
# Decode the full sequence, prompt included (see the sample output below).
response = outputs[0]
print(tokenizer.decode(response))

# <s><|user|> Please correct, polish, or translate the text delimited by triple backticks to standard English.<|end|><|assistant|>
# <|user|> Text=```neither 经理或员工 has been informed about the meeting```<|end|>
# <|assistant|> Output=Neither the manager nor the employee has been informed about the meeting.<|end|>
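
The decode above prints the whole sequence, prompt included, as the sample output shows. To keep only the model's answer, a minimal sketch (standard transformers token slicing, not something mandated by this model card) is:

# Slice off the prompt tokens and decode only the newly generated text.
generated = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(generated, skip_special_tokens=True))
# Expected to print roughly: Output=Neither the manager nor the employee has been informed about the meeting.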