RAG
Hi All
I am new to LLMs, and I am looking for a small model that can be used for RAG.
While using this model for RAG, I am getting the error below. Please help me solve this issue, or suggest another model that can be used for this purpose.
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
service_context = ServiceContext.from_defaults(chunk_size=256,llm=model, embed_model=model)
Error:
File "C:\Users\xxxxxxxx\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\torch\nn\modules\module.py", line 1688, in getattr
raise AttributeError(f"'{type(self).name}' object has no attribute '{name}'")
AttributeError: 'MistralForCausalLM' object has no attribute 'system_prompt'
Thanks
Ram
Hi,
Could you share the complete code you are trying to run?
Based on the information you provided, it looks like you are attempting to set or access a system_prompt attribute on a MistralForCausalLM object, which does not exist.
I would also recommend fine-tuning this model for RAG tasks.
Hi Edwko
Below are the complete code and the error I am getting.
Thanks for the support
Ram
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from llama_index.readers.web import BeautifulSoupWebReader
from llama_index.core.response.notebook_utils import display_response
from llama_index.llms.huggingface import HuggingFaceLLM
from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding
from llama_index.core import VectorStoreIndex,SimpleDirectoryReader,ServiceContext
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForCausalLM.from_pretrained("OuteAI/Lite-Oute-1-300M-Instruct").to(device)
tokenizer = AutoTokenizer.from_pretrained("OuteAI/Lite-Oute-1-300M-Instruct")
def generate_response(message: str, temperature: float = 0.4, repetition_penalty: float = 1.12) -> str:
    # Apply the chat template and convert to PyTorch tensors
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": message}
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(device)
    # Generate the response
    output = model.generate(
        input_ids,
        max_length=512,
        temperature=temperature,
        repetition_penalty=repetition_penalty,
        do_sample=True
    )
    # Decode the generated output
    generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
    return generated_text
message = "I'd like to learn about language models. Can you break down the concept for me?"
response = generate_response(message)
url = "https://www.theverge.com/2023/9/29/23895675/ai-bot-social-network-openai-meta-chatbots"
print("Before reading the url ")
documents = BeautifulSoupWebReader().load_data([url])
print("imported the text file ")
print("before embed_model")
embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
print("After embed_model")
service_context = ServiceContext.from_defaults(chunk_size=256,llm=model, embed_model=model)
=====================================================================================================
stdout :
Before reading the url
imported the text file
before embed_model
After embed_model
C:\Users\ramabadran\learn\llm_test20.py:54: DeprecationWarning: Call to deprecated class method from_defaults. (ServiceContext is deprecated, please use llama_index.settings.Settings instead.) -- Deprecated since version 0.10.0.
  service_context = ServiceContext.from_defaults(chunk_size=256,llm=model, embed_model=model)
Traceback (most recent call last):
File "C:\Users\ramabadran\learn\llm_test20.py", line 54, in
service_context = ServiceContext.from_defaults(chunk_size=256,llm=model, embed_model=model)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ramabadran\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\deprecated\classic.py", line 285, in wrapper_function
return wrapped_(*args_, **kwargs_)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\ramabadran\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\llama_index\core\service_context.py", line 179, in from_defaults
llm.system_prompt = llm.system_prompt or system_prompt
^^^^^^^^^^^^^^^^^
File "C:\Users\ramabadran\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\LocalCache\local-packages\Python312\site-packages\torch\nn\modules\module.py", line 1688, in getattr
raise AttributeError(f"'{type(self).name}' object has no attribute '{name}'")
AttributeError: 'MistralForCausalLM' object has no attribute 'system_prompt'
I don't think the issue lies with the model itself, but rather with how you're using the LlamaIndex library.
I am not very familiar with this library, but after taking a quick look at their documentation, it seems that you need to load the model using their wrapper.
You can find more information on how to do this in their documentation: https://docs.llamaindex.ai/en/stable/examples/llm/huggingface/
You can try loading the model like this:
from llama_index.llms.huggingface import HuggingFaceLLM
model = HuggingFaceLLM(model_name="OuteAI/Lite-Oute-1-300M-Instruct")
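Since the deprecation warning in your output also says that ServiceContext has been replaced by Settings, I think the embedding model and chunk size can be wired up the same way. This is only a sketch based on a quick read of their docs; I haven't tested it with this exact model:

from llama_index.core import Settings
from llama_index.llms.huggingface import HuggingFaceLLM
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Wrap both models with LlamaIndex classes instead of passing the raw
# transformers model; these wrappers carry the system_prompt attribute
# that ServiceContext.from_defaults was trying to read.
Settings.llm = HuggingFaceLLM(
    model_name="OuteAI/Lite-Oute-1-300M-Instruct",
    tokenizer_name="OuteAI/Lite-Oute-1-300M-Instruct",
)
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")
Settings.chunk_size = 256  # replaces the chunk_size argument of ServiceContext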
It looks like you are using the "Usage with HuggingFace transformers" example that I provided for this model.
model = AutoModelForCausalLM.from_pretrained("OuteAI/Lite-Oute-1-300M-Instruct")
However, the MistralForCausalLM class does not have a system_prompt attribute, which is exactly what ServiceContext.from_defaults tries to read in the traceback above.
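Putting it together, the rest of your script would then build the index from the wrapped models instead of calling ServiceContext. Again, just a rough sketch of how I would expect it to look (the query string is made up, and I haven't run this against the Lite-Oute model):

from llama_index.core import VectorStoreIndex
from llama_index.readers.web import BeautifulSoupWebReader

# With Settings.llm / Settings.embed_model configured as above,
# the index picks them up automatically.
url = "https://www.theverge.com/2023/9/29/23895675/ai-bot-social-network-openai-meta-chatbots"
documents = BeautifulSoupWebReader().load_data([url])
index = VectorStoreIndex.from_documents(documents)

query_engine = index.as_query_engine()
response = query_engine.query("What is this article about?")  # example query, made up
print(response)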