
llama3-alpaca Model

Description

The llama3-alpaca model is a Llama 3 based language model fine-tuned on Alpaca-style instruction data. It can be used for various natural language processing tasks, including instruction following, text generation, and completion.

Inference Code (Using unsloth)

from unsloth import FastLanguageModel
import torch

# Load the model and tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="mohamed1ai/llama3-alpaca",
    max_seq_length=2048,
    dtype=None,          # auto-detect the best dtype for the hardware
    load_in_4bit=True,   # 4-bit quantization to reduce memory use
)

FastLanguageModel.for_inference(model)  # Enable native 2x faster inference

# Define your prompt
prompt = "Continue the Fibonacci sequence."

# Tokenize the prompt and move it to the GPU
inputs = tokenizer(
    [prompt],
    return_tensors="pt"
).to("cuda")

# Generate output
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    use_cache=True
)

# Decode the generated output
generated_text = tokenizer.batch_decode(outputs)
print(generated_text)
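
Since the model name suggests Alpaca-style instruction tuning, wrapping the request in the standard Alpaca prompt template may yield better responses. A minimal sketch, assuming the usual instruction/response format was used during fine-tuning:

# The standard Alpaca prompt template. That this exact format was used
# during fine-tuning is an assumption based on the model name.
alpaca_prompt = """Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:
"""

# Fill in the instruction, then tokenize and generate as shown above
prompt = alpaca_prompt.format(
    instruction="Continue the Fibonacci sequence: 1, 1, 2, 3, 5, 8"
)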

Inference Code (HF model)

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("mohamed1ai/llama3-alpaca")
model = AutoModelForCausalLM.from_pretrained("mohamed1ai/llama3-alpaca")

# Define your prompt
prompt = "Continue the Fibonacci sequence."

# Tokenize the prompt
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Generate output
output = model.generate(input_ids, max_length=100, num_return_sequences=1)

# Decode the generated output
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
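
The snippet above loads full-precision weights on CPU by default. If a CUDA GPU is available, loading in half precision and moving the inputs to the model's device cuts memory use and speeds up generation. A minimal sketch, assuming a single-GPU setup (device_map="auto" additionally requires the accelerate package):

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("mohamed1ai/llama3-alpaca")
model = AutoModelForCausalLM.from_pretrained(
    "mohamed1ai/llama3-alpaca",
    torch_dtype=torch.float16,  # half precision: roughly half the memory of float32
    device_map="auto",          # place weights on the available GPU(s); needs accelerate
)

prompt = "Continue the Fibonacci sequence."
input_ids = tokenizer.encode(prompt, return_tensors="pt").to(model.device)

output = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))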