
Nakshatra: Human-like Conversational AI Prototype


Overview

Nakshatra is a groundbreaking prototype AI model that delivers responses 10x more human-like than the previous HelpingAI models. Designed by Abhay Koul (OEvortex), Nakshatra leverages advanced conversational techniques to deliver highly coherent, empathetic, and contextually aware interactions, making it a major leap forward in AI-human interaction.

  • Delivers near-human conversational quality and responsiveness.
  • Exhibits deep contextual understanding and emotional intelligence in interactions.
  • Aimed at providing more natural, emotionally intuitive dialogue experiences.

Methodology

Nakshatra employs a combination of the following techniques to achieve its remarkable conversational capabilities:

  • Supervised Learning: Trained with vast dialogue datasets, including those with emotional annotations, to ensure it can handle a wide range of conversational contexts.
  • Human-like Conversation Training: Fine-tuned to imitate natural human conversational patterns.
  • Prototype Optimization: This version is still in the prototype phase but showcases significant advancements in language coherence, tone, and emotional sensitivity.
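The supervised-learning step above can be illustrated with a minimal sketch: flattening one emotionally annotated dialogue sample into a training string. Note that the sample schema and emotion labels here are hypothetical, invented for illustration; the actual HelpingAI training data and its format are not published with this card.

```python
# Hypothetical emotionally annotated dialogue sample (illustrative only;
# the real training data schema is not documented in this card).
sample = {
    "turns": [
        {"role": "user", "content": "I failed my exam today.", "emotion": "sadness"},
        {"role": "assistant", "content": "I'm really sorry to hear that. Want to talk about it?", "emotion": "empathy"},
    ]
}

def to_training_text(sample):
    """Flatten annotated turns into one supervised training string,
    tagging each turn with its speaker role and emotion label."""
    lines = []
    for turn in sample["turns"]:
        lines.append(f"<{turn['role']}|{turn['emotion']}> {turn['content']}")
    return "\n".join(lines)

print(to_training_text(sample))
```

Training on strings like this is one common way to expose a model to both conversational structure and emotional context at once.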

Limitations

While Nakshatra represents a significant advancement in conversational AI, it is important to acknowledge its limitations:

  • Prototype Status: Nakshatra is currently in the prototype phase, which means it may not be fully optimized for all conversational scenarios. Users should be aware that further refinements and updates are expected.

  • Factual Accuracy: The model is designed to mimic human conversational styles and may generate responses that sound plausible but are factually incorrect. Users should verify critical information from reliable sources.

  • Contextual Limitations: Although Nakshatra exhibits deep contextual understanding, it may still struggle with complex or nuanced topics, leading to misunderstandings or irrelevant responses.

  • Bias and Ethical Considerations: Like all AI models, Nakshatra may inadvertently reflect biases present in the training data. Users should be mindful of this and approach interactions with a critical perspective.

  • Dependence on Input Quality: The quality of the model's responses is highly dependent on the clarity and context of the input it receives. Ambiguous or poorly structured queries may result in less coherent outputs.

Usage Code

import torch  
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the Nakshatra model  
model = AutoModelForCausalLM.from_pretrained("OEvortex/Nakshatra", trust_remote_code=True)  
# Load the tokenizer  
tokenizer = AutoTokenizer.from_pretrained("OEvortex/Nakshatra", trust_remote_code=True)  

# Define the chat input  
chat = [  
    { "role": "system", "content": "You are Nakshatra, a human-like conversational AI. Answer in the most human way possible, and provide concise, to-the-point answers." },  
    { "role": "user", "content": "Introduce yourself!" }  
]

inputs = tokenizer.apply_chat_template(  
    chat,  
    add_generation_prompt=True,  
    return_tensors="pt"  
).to(model.device)

# Generate text  
outputs = model.generate(  
    inputs,  
    max_new_tokens=256,  
    do_sample=True,  
    temperature=0.6,  
    top_p=0.9,  
    eos_token_id=tokenizer.eos_token_id  
)

response = outputs[0][inputs.shape[-1]:]  
print(tokenizer.decode(response, skip_special_tokens=True))
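The `temperature` and `top_p` arguments above control how the next token is sampled. The toy logits below are made up for illustration, but the math is the standard temperature-scaled softmax followed by nucleus (top-p) filtering:

```python
import math

def sample_distribution(logits, temperature=0.6, top_p=0.9):
    """Apply temperature scaling, softmax, then nucleus (top-p) filtering,
    returning the renormalized distribution over the kept token indices."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep the smallest set of highest-probability tokens whose mass >= top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= top_p:
            break
    norm = sum(probs[i] for i in kept)
    return {i: probs[i] / norm for i in kept}

dist = sample_distribution([2.0, 1.0, 0.2, -1.0])
print(dist)
```

Lower temperature sharpens the distribution toward the top token, and lower `top_p` discards more of the unlikely tail, so the settings above (0.6 / 0.9) bias generation toward coherent but still varied responses.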

Using the Model with GGUF

# %pip install -U 'webscout[local]' -q  

from webscout.Local.utils import download_model  
from webscout.Local.model import Model  
from webscout.Local.thread import Thread  
from webscout.Local import formats  
from webscout.Local.samplers import SamplerSettings  

# Download the model  
repo_id = "OEvortex/Nakshatra"  
filename = "nakshatra-q4_k_m.gguf"  
model_path = download_model(repo_id, filename, token=None)  

# Load the model  
model = Model(model_path, n_gpu_layers=40)  

# Define the system prompt  
system_prompt = "You are Nakshatra, a human-like conversational AI. Answer in the most human way possible."

# Create a chat format with your system prompt  
nakshatra_format = formats.llama3.copy()  
nakshatra_format['system_content'] = system_prompt  

# Define your sampler settings (optional)  
sampler = SamplerSettings(temp=0.7, top_p=0.9)  

# Create a Thread with the custom format and sampler  
thread = Thread(model, nakshatra_format, sampler=sampler)  

# Start interacting with the model  
thread.interact(header="🌟 Nakshatra - Human-like AI Prototype 🚀", color=True)  
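The `formats.llama3` dict above wraps each turn in Llama 3 chat markers. As a sketch of what that prompt looks like on the wire, here is the standard Llama 3 special-token layout built by hand (the exact keys and spacing used internally by webscout may differ):

```python
def format_llama3(system_prompt, user_message):
    """Build a Llama 3-style chat prompt using the standard special tokens."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = format_llama3(
    "You are Nakshatra, a human-like conversational AI.",
    "Introduce yourself!",
)
print(prompt)
```

Generation continues from the trailing assistant header, which is why the format ends there rather than with another `<|eot_id|>`.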
Model size: 5.85B params · Tensor type: FP16 (Safetensors)