I cannot call the functions. Try this request payload:
"messages": [
{
"content": "You are a helpful AI assistant.\n For coding tasks, prioritize using "functions" in "tools" to solve the problem, with the function call named in "name" and its parameters defined in "parameters".\n Reply "TERMINATE" in the end when everything is done.\n ",
"role": "system"
},
{
"content": "query the latest economic data and major events",
"role": "user"
}
],
"model": "gpt-3.5-turbo-1106",
"stream": false,
"tools": [
{
"type": "function",
"function": {
"description": "run cell in ipython and return the execution result.",
"name": "python",
"parameters": {
"type": "object",
"properties": {
"cell": {
"type": "string",
"description": "Valid Python cell to execute."
}
},
"required": [
"cell"
]
}
}
},
{
"type": "function",
"function": {
"description": "run a shell script and return the execution result.",
"name": "sh",
"parameters": {
"type": "object",
"properties": {
"script": {
"type": "string",
"description": "Valid Python cell to execute."
}
},
"required": [
"script"
]
}
}
},
{
"type": "function",
"function": {
"description": "query the latest economic data and major events, and return the execution results.",
"name": "economic_data_cursor",
"parameters": {
"type": "object",
"properties": {},
"required": []
}
}
}
]
}
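For reference, here is a minimal sketch of sending that payload with the openai Python package (v1 client). The base URL and API key are placeholders for whatever OpenAI-compatible endpoint you are targeting; only the "python" tool is spelled out, since the others follow the same shape.

```python
# A sketch of issuing the request above with the openai Python package
# (v1 client). The base URL and API key are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="placeholder")

tools = [
    {
        "type": "function",
        "function": {
            "name": "python",
            "description": "Run a cell in IPython and return the execution result.",
            "parameters": {
                "type": "object",
                "properties": {
                    "cell": {
                        "type": "string",
                        "description": "Valid Python cell to execute.",
                    }
                },
                "required": ["cell"],
            },
        },
    },
    # The "sh" and "economic_data_cursor" tools follow the same shape.
]

response = client.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "query the latest economic data and major events"},
    ],
    tools=tools,
)

# Any tool calls the model makes appear on the first choice's message.
print(response.choices[0].message.tool_calls)
```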
Hi, for the GGUF models, you'll need to use llama-cpp-python for inference. You can refer to our sample code here for how to perform inference with GGUF models using llama-cpp-python.
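A minimal sketch of direct inference with llama-cpp-python might look like the following; the model path is a placeholder, and functionary-specific prompt handling should follow the linked sample code rather than this generic call.

```python
# A minimal sketch of direct GGUF inference with llama-cpp-python.
# The model path is a placeholder; functionary-specific prompt handling
# should follow the linked sample code.
from llama_cpp import Llama

llm = Llama(
    model_path="./functionary.gguf",  # placeholder path to your GGUF file
    n_ctx=4096,                       # context window; adjust to your model
)

output = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant."},
        {"role": "user", "content": "query the latest economic data and major events"},
    ],
)
print(output["choices"][0]["message"])
```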
The integration of functionary GGUF models into llama-cpp-python's OpenAI-compatible server is still in progress in this PR. Once it is merged, you will be able to start your own server with llama-cpp-python and send requests via the OpenAI API. For now, you will have to perform inference directly, without a server, using the provided sample code.
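Once the integration lands, the workflow will likely look something like the sketch below; the server command, port, and served model name are illustrative, not final.

```python
# Illustrative only: once the PR is merged, a server could be started with
#   python -m llama_cpp.server --model ./functionary.gguf
# and queried with the standard OpenAI client pointed at the local endpoint.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="functionary",  # placeholder; the served model name may differ
    messages=[
        {"role": "user", "content": "query the latest economic data and major events"},
    ],
)
print(response.choices[0].message)
```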
Thank you!