Can you provide a raw example of the input for function calling?

by RonanMcGovern

Since vLLM and TGI don't allow use of the Mistral library, it would be useful to be able to assemble the prompt for function calling manually (via Jinja templating).

To help with this, could you provide an example of the raw input that goes to the model for function calling, e.g. how the stringified prompt is prepared? Thanks

Use the Mistral library ~ it will show you how to use it!

Thanks @LeroyDyer, yes, I see that, but I want to run a vLLM server and pass in the prompt via chat templating, so I need to know the specific raw prompt format. Thanks

@RonanMcGovern I found the following example in the Mistral docs: https://docs.mistral.ai/guides/tokenization/

<s>[AVAILABLE_TOOLS][{"type": "function", "function": {"name": "get_current_weather", "description": "Get the current weather", "parameters": {"type": "object", "properties": {"location": {"type": "string", "description": "The city and state, e.g. San Francisco, CA"}, "format": {"type": "string", "enum": ["celsius", "fahrenheit"], "description": "The temperature unit to use. Infer this from the users location."}}, "required": ["location", "format"]}}}][/AVAILABLE_TOOLS][INST]What's the weather like today in Paris[/INST]

The special tokens are the same as those in the tokenizer files on the model card. I also tried the prompt above with this model, and the output was [{"name": "get_current_weather", "arguments": {"location": "Paris, FR", "format": "celsius"}}]

I hope that helps!

That does help, many thanks, Ronan
