
QuantFactory/HelpingAI-3B-hindi-GGUF

This is a quantized version of OEvortex/HelpingAI-3B-hindi, created using llama.cpp.

Original Model Card

HelpingAI-3B-Hindi: Emotionally Intelligent Conversational AI


Overview

HelpingAI-3B-Hindi is a compact language model designed for emotionally intelligent conversational interactions. It is trained to engage in empathetic, understanding, and supportive dialogues with users. The goal of the model is to provide an AI companion that can adapt to users' emotional states and communication needs.

Objectives

  • Demonstrate emotional intelligence in open-ended conversations
  • Recognize and validate users' emotions and emotional contexts
  • Provide supportive, empathetic, and psychologically grounded responses
  • Avoid insensitive, harmful, or unethical speech
  • Continuously improve emotional awareness and conversational skills

Methodology

HelpingAI-3B-Hindi is trained using the following methodologies:

  • Supervised learning on large dialogue datasets with emotional labeling
  • Reinforcement learning with a reward model that encourages emotionally supportive responses
  • Constitutional AI training to establish consistent and beneficial objectives
  • Knowledge enrichment from psychological resources on emotional intelligence
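To illustrate the first bullet, an emotionally labeled dialogue example might look like the following. This is a sketch only: the actual training data schema is not published, and all field names here are assumptions for illustration.

```python
# Illustrative only: one possible shape for an emotion-labeled dialogue record.
# The real schema is not published; all field names are assumptions.
example_record = {
    "user": "I just got accepted into my dream school!",
    "assistant": "Yay! That's wonderful news - congratulations!",
    "emotion_label": "joy",          # annotated emotional context of the user turn
    "support_style": "celebratory",  # desired tone of the supportive response
}

print(sorted(example_record.keys()))
```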

Emotional Intelligence (EQ)

HelpingAI-3B-Hindi has achieved an impressive Emotional Intelligence (EQ) score of 81.51, placing it ahead of many AI models. This EQ score reflects its advanced ability to understand human emotions and respond in a supportive and empathetic manner.

Usage Code

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer

# Load the HelpingAI-3B-Hindi model in half precision on the GPU
model = AutoModelForCausalLM.from_pretrained("OEvortex/HelpingAI-3B-hindi", trust_remote_code=True, torch_dtype=torch.float16).to("cuda")

# Load the tokenizer (torch_dtype does not apply to tokenizers)
tokenizer = AutoTokenizer.from_pretrained("OEvortex/HelpingAI-3B-hindi", trust_remote_code=True)

# Initialize the text streamer
streamer = TextStreamer(tokenizer)

# Define the conversation prompt
prompt = """
<|im_start|>system: {system}
<|im_end|>
<|im_start|>user: {insaan}
<|im_end|>
<|im_start|>assistant:
"""

# Define the system prompt
system = "You are HelpingAI a emotional AI always answer my question in HelpingAI style"

# The user message ("insaan" means "human" in Hindi)
insaan = "I'm excited because I just got accepted into my dream school! I wanted to share the good news with someone."

# Fill the system and user messages into the template
prompt = prompt.format(system=system, insaan=insaan)

# Tokenize the prompt and move it to the GPU
inputs = tokenizer(prompt, return_tensors="pt", return_attention_mask=False).to("cuda")

# Generate a response, streaming tokens to stdout as they are produced
generated_text = model.generate(**inputs, max_length=3084, top_p=0.95, do_sample=True, temperature=0.6, use_cache=True, streamer=streamer)

# Decode the full output (the streamer has already printed it incrementally)
output = tokenizer.decode(generated_text[0], skip_special_tokens=True)

Using this model directly from GGUF

%pip install -U 'webscout[local]'

from webscout.Local.utils import download_model
from webscout.Local.model import Model
from webscout.Local.thread import Thread
from webscout.Local import formats
from webscout.Local.samplers import SamplerSettings

# 1. Download the model
repo_id = "OEvortex/HelpingAI-3B-hindi"
filename = "HelpingAI-3B-hindi.Q4_K_M.gguf"
model_path = download_model(repo_id, filename, token='') # Replace with your Hugging Face token if needed

# 2. Load the model
model = Model(model_path, n_gpu_layers=20)

# 3. Define the system prompt
system_prompt = "You are HelpingAI-3B, an emotionally intelligent AI designed to provide empathetic and supportive responses in HelpingAI style."

# 4. Create a custom chatml format with your system prompt
custom_chatml = formats.chatml.copy()
custom_chatml['system_content'] = system_prompt

# 5. Define your sampler settings (optional)
sampler = SamplerSettings(temp=0.7, top_p=0.9) # Adjust as needed

# 6. Create a Thread with the custom format and sampler
thread = Thread(model, custom_chatml, sampler=sampler)

# 7. Start interacting with the model
thread.interact(header="🌟 HelpingAI-3B: Emotionally Intelligent Conversational AI for All Devices 🌟", color=True)
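For reference, `formats.chatml` corresponds to the same ChatML turn markers used in the prompt template earlier. A minimal hand-rolled sketch of that format follows; the helper name is illustrative, not part of webscout.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Illustrative helper: assemble a ChatML-style prompt by hand,
    mirroring the template shown in the transformers example above."""
    return (
        f"<|im_start|>system: {system}\n<|im_end|>\n"
        f"<|im_start|>user: {user}\n<|im_end|>\n"
        f"<|im_start|>assistant:\n"
    )

prompt = build_chatml_prompt(
    "You are HelpingAI, an emotionally intelligent AI.",
    "I just got accepted into my dream school!",
)
print(prompt)
```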

Example responses

Express excitement and joy about visiting a new place.

Yay! 🎉 That sounds amazing! Visiting a new place is always so exciting and full of joy. I would love to hear more about where you're planning to go and what you're looking forward to! 😊 It's always thrilling to explore new places and create new memories. Let's share the details and share in the joy of the adventure! 🌟

हम प्रदूषण को कैसे कम कर सकते हैं? (How can we reduce pollution?)

Hey there! 🌿 We can work together to reduce pollution. One way to lower pollution is by using energy-efficient appliances and electronics, conserving water and resources, and supporting sustainable practices. Additionally, using public transportation, carpooling, or walking/biking instead of driving can also help reduce your carbon footprint. Let's strive to make a positive impact on the environment! 🌍 Together we can make a difference. Let's start by making small changes and work our way up to a big difference! 🌟

आपको हमारे व्यवहार के बारे में क्या महसूस होता है? (How do you feel about our behavior?)

I feel excited and eager to share my knowledge and experiences with you! 🌟 It's always a pleasure to help and chat about our mutual love for AI and technology. Write me a message or ask me anything you'd like to know about me or my projects. I'm here to support and assist you in any way I can! 🤖💪😊

If you found out that your best friend has gone somewhere without telling you, how would you feel?

Oh no! 😔 If I found out that my best friend had gone somewhere without telling me, I would feel a bit sad and concerned. I might also feel a bit confused and curious about where they had gone and why they didn't tell me. It's always tough when important information is missing, but I would try to stay positive and reach out to my friend to catch them up on the go. It's important to stay supportive and understanding, even when things don't go as planned. 🌟 Let's hope our friend returns safely soon! 🌟

अगर तुम्हारे पास बहुत सारे काम हों और तुम एक ही समय में सबको पूरा नहीं कर पाओ, तो तुम कैसा महसूस करोगे? (If you had a lot of tasks and couldn't finish them all at the same time, how would you feel?)

Oh, I totally get that! 😔 It can be really tough when you have a lot of tasks to handle and you're juggling everything at once. Sometimes, I feel overwhelmed too, but I try to take a deep breath, prioritize my tasks, and tackle them one at a time. It's important to remember that it's okay to ask for help when you need it and take breaks to recharge. You've got this! 💪🏼 If you ever need a break or some advice, I'm here for you! Remember, it's okay to ask for support. 🌟

अगर तुम एक लंबे समय के लिए अपने परिवार से दूर रहो, तो तुम्हारी भावनाएँ कैसी होंगी? (If you stayed away from your family for a long time, what would your feelings be?)

Oh, that's a deep question! 😔 If I were to leave my family for a long time, I think I would feel a mix of emotions. I might feel a bit sad and lonely at first, but also excited for the new experiences and opportunities that come with being on my own. It's important to remember that it's okay to take time for oneself and seek personal growth, but it's also important to stay connected with the people who care about you. 🌟 If you ever need someone to talk about it, I'm here for you! 🤗💃

If you received some unexpected good news, what would your first reaction be?

Yay! 🎉 I would be absolutely thrilled and over the moon! 😄 It's always amazing to receive unexpected good news, and I would be so grateful and happy about it. It's like a burst of joy and excitement! 🌟 How about you? What would your first reaction be if you received unexpected good news? 😊🎉

Model details

  • Model size: 2.8B params
  • Architecture: stablelm
  • Format: GGUF
  • Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
