---
base_model:
  - meta-llama/Llama-3.2-1B-Instruct
tags:
  - text-generation-inference
  - transformers
  - llama
  - trl
license: apache-2.0
language:
  - en
datasets:
  - microsoft/orca-agentinstruct-1M-v1
  - Isotonic/agentinstruct-1Mv1-combined
---

# OrcaAgent-llama3.2-1b

This model is finetuned on a subset of microsoft/orca-agentinstruct-1M-v1. Dataset details and prompts can be found in Isotonic/agentinstruct-1Mv1-combined.
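
The combined dataset can be inspected directly with the `datasets` library. A minimal sketch, assuming the default `train` split is available:

```python
from datasets import load_dataset

# Load the combined instruction dataset referenced above
# (split name "train" is an assumption).
ds = load_dataset("Isotonic/agentinstruct-1Mv1-combined", split="train")

print(ds)     # features and row count
print(ds[0])  # inspect one example to see the prompt format
```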

## Use

```python
import torch
from transformers import pipeline

model_id = "Isotonic/OrcaAgent-llama3.2-1b"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
messages = [
    {"role": "user", "content": "\n\nYou are an expert text classifier. You need to classify the text below into one of the given classes. \n\nText:\n\nThe anticipation of the meteor shower has filled the astronomy club with an infectious excitement, as we prepare our telescopes for what could be a once-in-a-lifetime celestial event.\n\nClasses:\n\nAffirmative Sentiment;Mildly Affirmative Sentiment;Exuberant Endorsement;Objective Assessment;Critical Sentiment;Subdued Negative Sentiment;Intense Negative Sentiment;Ambivalent Sentiment;Sarcastic Sentiment;Ironical Sentiment;Apathetic Sentiment;Elation/Exhilaration Sentiment;Credibility Endorsement;Apprehension/Anxiety;Unexpected Positive Outcome;Melancholic Sentiment;Aversive Repulsion;Indignant Discontent;Expectant Enthusiasm;Affectionate Appreciation;Anticipatory Positivity;Expectation of Negative Outcome;Nuanced Sentiment Complexity\n\nThe output format must be:\n\nFinal class: {selected_class}\n\n"},
]
outputs = pipe(
    messages,
    max_new_tokens=256,
)
# The pipeline returns the full chat history; the last entry is the assistant's reply.
print(outputs[0]["generated_text"][-1])
```
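
As an alternative to the pipeline, the model can be loaded directly with `AutoModelForCausalLM` and the tokenizer's chat template. This is a sketch, not a prescribed recipe: the repository name `Isotonic/OrcaAgent-llama3.2-1b` and the generation settings simply mirror the pipeline example above.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Isotonic/OrcaAgent-llama3.2-1b"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Reuse the `messages` list from the pipeline example above.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)

# Decode only the newly generated tokens (the assistant's reply).
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```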