Inconsistent function calling with Mistral-7B-Instruct-v0.3
Hi,
I am developing an assistant application using the Mistral-7B-Instruct-v0.3 model. The assistant can answer questions using RAG and can also call a function when required. However, I am facing a problem with function calling. I have four simple functions and have assigned them as a toolset to the model.
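For reference, the tools list is a set of JSON schemas roughly like this (simplified sketch: only the two functions that appear in the outputs below are shown, and the descriptions are placeholders):

tools = [
    {
        "type": "function",
        "function": {
            "name": "start_transport_request",
            "description": "Import transport requests from a source system into a target system",
            "parameters": {
                "type": "object",
                "properties": {
                    "transport_requests": {"type": "array", "items": {"type": "string"}},
                    "source_system": {"type": "string"},
                    "target_system": {"type": "string"},
                },
                "required": ["transport_requests", "source_system", "target_system"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "do_transport_request",
            "description": "Confirm the imported transport requests",
            "parameters": {
                "type": "object",
                "properties": {"confirmed": {"type": "boolean"}},
                "required": ["confirmed"],
            },
        },
    },
    # ... the other two functions are omitted here
]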
Sometimes it generates the text below, which is wrong:
To import the transport requests SM1K9001232 and SM1K90181 from xyz Qa to abc production, use the following command:
start_transport_request(transport_requests=["SM1K9001232", "SM1K90181"], source_system="xyz Qa", target_system="abc production")
After importing the transport requests, you can confirm them using the do_transport_request function:
do_transport_request(confirmed=True)
But sometimes it is correct, as below:
[{"name": "start_transport_request", "arguments": {"transport_requests": ["SM1K9001232", "SM1K90181"], "source_system": "xyz Qa", "target_system": "abc production"}}]
Please find the code below:
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tool_call_model_name = "mistralai/Mistral-7B-Instruct-v0.3"
tool_call_tokenizer = AutoTokenizer.from_pretrained(tool_call_model_name)
tool_call_model = AutoModelForCausalLM.from_pretrained(tool_call_model_name)

_question = "import SM1K9001232, SM1K90181 from XYZ to ABC"
messages = [
    {"role": "system", "content": """
Use the following pieces of context to answer the question at the end.
Please follow these rules:
1. If you decide to invoke a function, DO NOT add any additional text to the answer.
"""},
    {"role": "user", "content": _question},
]

# tools is the list of the four function schemas (sketched above)
input_ids = tool_call_tokenizer.apply_chat_template(
    messages,
    tools=tools,
    add_generation_prompt=True,
    return_tensors="pt",
).to(tool_call_model.device)
attention_mask = torch.ones_like(input_ids)

outputs = tool_call_model.generate(
    input_ids,
    max_new_tokens=1000,
    eos_token_id=tool_call_tokenizer.eos_token_id,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    attention_mask=attention_mask,
)
response = outputs[0][input_ids.shape[-1]:]
answer = tool_call_tokenizer.decode(response, skip_special_tokens=True)
print(answer)
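I am aware that generation is sampled (do_sample=True, temperature=0.6, top_p=0.9), so some run-to-run variation is expected even with an identical prompt. For comparison, a greedy run would look like this:

# deterministic decoding for comparison: removes sampling randomness
outputs = tool_call_model.generate(
    input_ids,
    max_new_tokens=1000,
    eos_token_id=tool_call_tokenizer.eos_token_id,
    do_sample=False,
    attention_mask=attention_mask,
)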
I am using the same question in the prompt every time. What may cause this problem?
Thank you.
Orkun Gedik