# RLM-mini
RLM-mini is a 7.2-billion-parameter model built by merging two foundational models. By combining models from different sources, it aims to inherit diverse linguistic features and training-data nuances, yielding improved performance across a wide range of natural language processing (NLP) tasks, including more robust understanding and generation for nuanced, context-heavy queries. The fine-tuning process integrates best practices and optimizations from both parent models, so RLM-mini maintains high accuracy while delivering responses efficiently.
It is a base model and requires fine-tuning.
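Because it is a base model, you will typically want to fine-tune it on your own data before use. The sketch below shows one minimal approach using the Hugging Face `Trainer` API on a plain-text corpus; the dataset file, hyperparameters, and output path are illustrative assumptions, not values published by the model author.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("rudrashah/RLM-mini")
model = AutoModelForCausalLM.from_pretrained("rudrashah/RLM-mini")

# Causal-LM collators need a pad token; fall back to EOS if none is set
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# "train.txt" is a placeholder corpus of your own training text
dataset = load_dataset("text", data_files={"train": "train.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="rlm-mini-finetuned",  # placeholder output path
        per_device_train_batch_size=1,
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```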
## Two Merged Models
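The two parent checkpoints are not identified on this card. Purely as an illustration of what a weight-averaging (linear) merge looks like, here is a minimal sketch; the parent model names are placeholders, and the actual merge recipe used for RLM-mini may differ.

```python
from transformers import AutoModelForCausalLM

# Placeholder names: the actual parent models of RLM-mini are not listed here
model_a = AutoModelForCausalLM.from_pretrained("org/parent-model-a")
model_b = AutoModelForCausalLM.from_pretrained("org/parent-model-b")

state_a = model_a.state_dict()
state_b = model_b.state_dict()

# Equal-weight linear interpolation of every parameter tensor;
# assumes both parents share the same architecture and tensor shapes
merged = {name: 0.5 * state_a[name] + 0.5 * state_b[name] for name in state_a}

model_a.load_state_dict(merged)
model_a.save_pretrained("merged-model")
```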
## Usage
### Direct Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained("rudrashah/RLM-mini")
model = AutoModelForCausalLM.from_pretrained("rudrashah/RLM-mini")

# Tokenize the prompt and generate a response of up to 250 tokens
input_token = tokenizer("How to make Pav Bhaji?", return_tensors="pt")
output = model.generate(**input_token, max_length=250)
output = tokenizer.decode(output[0])
print(output)
```
### Using Pipeline
```python
import torch
import transformers
from transformers import AutoTokenizer

model = "rudrashah/RLM-mini"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Build a chat-formatted prompt from the message list
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Create a text-generation pipeline in half precision
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response and print the generated text
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```