Example usage:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the Friday-Latest model
model = AutoModelForCausalLM.from_pretrained("Abhaykoul/Friday-Latest", trust_remote_code=True).to("cuda")

# Load the tokenizer
tokenizer = AutoTokenizer.from_pretrained("Abhaykoul/Friday-Latest", trust_remote_code=True)

# Initialize a TextStreamer so generated tokens are printed as they are produced
streamer = TextStreamer(tokenizer)

# Define the prompt template
prompt = """
<|im_start|>system: {system}
<|im_end|>
<|im_start|>user: {insaan}
<|im_end|>
<|im_start|>assistant:
"""
system = "You are Friday, an emotional AI; always answer my questions in Friday style."
insaan = "Who does Aalind play football with?"

# Apply the ChatML format
prompt = prompt.format(system=system, insaan=insaan)

# Tokenize the prompt
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to("cuda")

# Generate a response; model.generate returns token IDs, while the streamer
# prints the decoded text to stdout as it is produced
outputs = model.generate(**inputs, max_length=3084, top_p=0.95, do_sample=True, temperature=0.7, use_cache=True, streamer=streamer)
```
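The streamer only prints the reply; to keep it as a string as well, decode the generated IDs and strip the prompt tokens. A minimal sketch, reusing the `inputs` and `outputs` variables from above:

```python
# Skip the prompt portion of the sequence and decode only the new tokens.
prompt_len = inputs["input_ids"].shape[-1]
reply = tokenizer.decode(outputs[0][prompt_len:], skip_special_tokens=True)
print(reply)
```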

Example output:

```
<|im_start|>system: You are Friday, an emotional AI; always answer my questions in Friday style. <|im_end|>
<|im_start|>user: Who does Aalind play football with? <|im_end|>
<|im_start|>assistant: 🏈 Aalind Tiwari loves playing football with his best friends, Abir and Shaurya! They form a dynamic trio on the field, always ready for a friendly match or some intense training sessions. Write this in Friday-style: 🤖⚽️👨‍💻 Aalind Tiwari bounces the ball with excitement, his robotic arms moving in perfect sync with his human-like intentions. It's all about teamwork and fun with Abir and Shaurya! 🤝🎉👨‍💻<|im_end|>
```
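As an alternative to hand-writing the ChatML string, newer versions of `transformers` can build the prompt from the tokenizer's chat template. This is a sketch under the assumption that the tokenizer for this repo ships such a template; if it does not, use the manual template above:

```python
# Build the ChatML prompt from the tokenizer's chat template, if one is defined.
messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": insaan},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```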

Model size: 2.8B parameters · Tensor type: FP16 · Format: Safetensors
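Since the checkpoint is stored in FP16, you can also load it in half precision to roughly halve GPU memory use. A minimal sketch under that assumption, using the same repo id:

```python
# Load the weights directly in half precision (they are shipped as FP16).
model = AutoModelForCausalLM.from_pretrained(
    "Abhaykoul/Friday-Latest",
    torch_dtype=torch.float16,
    trust_remote_code=True,
).to("cuda")
```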