
ToolACE-8B

ToolACE-8B is a fine-tuned version of LLaMA-3.1-8B-Instruct, trained on our ToolACE dataset tailored for tool usage. ToolACE-8B achieves state-of-the-art performance on the Berkeley Function-Calling Leaderboard (BFCL), rivaling the latest GPT-4 models.

ToolACE is an automatic agentic pipeline designed to generate Accurate, Complex, and divErse tool-learning data. ToolACE leverages a novel self-evolution synthesis process to curate a comprehensive pool of 26,507 diverse APIs. Dialogs are then generated through the interplay of multiple agents, guided by a formalized thinking process. To ensure data accuracy, we implement a dual-layer verification system combining rule-based and model-based checks. More details can be found in our paper on arXiv: ToolACE: Winning the Points of LLM Function Calling.
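The rule-based layer of the verification system is easy to illustrate. The sketch below is our own illustration, not code from the ToolACE release: `validate_call` and its schema layout are hypothetical, modeled on the tool format used later in this card. It checks a candidate function call against a tool schema for missing required parameters, unknown parameters, and basic type mismatches:

```python
def validate_call(call: dict, schema: dict) -> list[str]:
    """Return a list of rule violations (an empty list means the call passes)."""
    errors = []
    props = schema["arguments"]["properties"]
    required = schema["arguments"].get("required", [])
    # Rule 1: every required parameter must be present
    for param in required:
        if param not in call["arguments"]:
            errors.append(f"missing required parameter: {param}")
    # Rule 2: every supplied parameter must be known and of the declared type
    type_map = {"string": str, "integer": int}
    for name, value in call["arguments"].items():
        if name not in props:
            errors.append(f"unknown parameter: {name}")
        elif not isinstance(value, type_map.get(props[name]["type"], object)):
            errors.append(f"wrong type for parameter: {name}")
    return errors

schema = {
    "name": "sales_growth.calculate",
    "arguments": {
        "type": "dict",
        "properties": {
            "company": {"type": "string"},
            "years": {"type": "integer"},
        },
        "required": ["company", "years"],
    },
}
call = {"name": "sales_growth.calculate",
        "arguments": {"company": "XYZ", "years": 3}}
print(validate_call(call, schema))  # -> []
```

A model-based check would then complement rules like these, e.g. by judging whether the call actually addresses the dialog's question.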

[Figure: overview of the ToolACE data-generation pipeline]

Here are the winning scores of ToolACE on BFCL-v3.

[Figure: ToolACE scores on BFCL-v3]

Usage

Here we provide a code snippet using apply_chat_template to show how to load the tokenizer and model, and how to generate function calls for a given set of functions.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Team-ACE/ToolACE-8B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype='auto',
    device_map='auto'
)


# You can modify the prompt for your task
system_prompt = """You are an expert in composing functions. You are given a question and a set of possible functions. Based on the question, you will need to make one or more function/tool calls to achieve the purpose.
If none of the functions can be used, point it out. If the given question lacks the parameters required by the function, also point it out.
You should only return the function call in tools call sections.

If you decide to invoke any of the function(s), you MUST put it in the format of [func_name1(params_name1=params_value1, params_name2=params_value2...), func_name2(params)]
You SHOULD NOT include any other text in the response.
Here is a list of functions in JSON format that you can invoke.\n{functions}\n
"""

# User query
query = "Find me the sales growth rate for company XYZ for the last 3 years and also the interest coverage ratio for the same duration."

# Available tools in JSON format (OpenAI-style)
tools = [
    {
        "name": "financial_ratios.interest_coverage", "description": "Calculate a company's interest coverage ratio given the company name and duration",
        "arguments": {
            "type": "dict",
            "properties": {
                "company_name": {
                    "type": "string",
                    "description": "The name of the company."
                }, 
                "years": {
                    "type": "integer",
                    "description": "Number of past years to calculate the ratio."
                }
            }, 
            "required": ["company_name", "years"]
        }
    },
    {
        "name": "sales_growth.calculate",
        "description": "Calculate a company's sales growth rate given the company name and duration",
        "arguments": {
            "type": "dict", 
            "properties": {
                "company": {
                    "type": "string",
                    "description": "The company that you want to get the sales growth rate for."
                }, 
                "years": {
                    "type": "integer",
                    "description": "Number of past years for which to calculate the sales growth rate."
                }
            }, 
            "required": ["company", "years"]
        }
    },
    {
        "name": "weather_forecast",
        "description": "Retrieve a weather forecast for a specific location and time frame.",
        "arguments": {
            "type": "dict",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city that you want to get the weather for."
                }, 
                "days": {
                    "type": "integer",
                    "description": "Number of days for the forecast."
                }
            },
            "required": ["location", "days"]
        }
    }
]

messages = [
    {'role': 'system', 'content': system_prompt.format(functions=tools)},
    {'role': 'user', 'content': query}
]

inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))

You should then see the following function calls in the output:

[sales_growth.calculate(company="XYZ", years=3), financial_ratios.interest_coverage(company_name="XYZ", years=3)]
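Since the model is instructed to emit its calls as a Python-style list of `func_name(param=value, ...)` expressions, the output can be parsed back into structured calls for dispatch. A minimal sketch (our own helper, not part of the release) using Python's standard `ast` module:

```python
import ast

def parse_tool_calls(text: str):
    """Parse output like '[f(a=1), g(b="x")]' into (name, kwargs) pairs."""
    tree = ast.parse(text.strip(), mode="eval")
    calls = []
    for node in tree.body.elts:  # one Call node per generated tool call
        # ast.unparse handles dotted names such as sales_growth.calculate
        name = ast.unparse(node.func)
        kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in node.keywords}
        calls.append((name, kwargs))
    return calls

output = '[sales_growth.calculate(company="XYZ", years=3), financial_ratios.interest_coverage(company_name="XYZ", years=3)]'
print(parse_tool_calls(output))
```

Each (name, kwargs) pair can then be routed to the matching API implementation.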

Citation

If you think ToolACE is useful in your work, please cite our paper:

@misc{liu2024toolacewinningpointsllm,
      title={ToolACE: Winning the Points of LLM Function Calling}, 
      author={Weiwen Liu and Xu Huang and Xingshan Zeng and Xinlong Hao and Shuai Yu and Dexun Li and Shuai Wang and Weinan Gan and Zhengying Liu and Yuanqing Yu and Zezhong Wang and Yuxian Wang and Wu Ning and Yutai Hou and Bin Wang and Chuhan Wu and Xinzhi Wang and Yong Liu and Yasheng Wang and Duyu Tang and Dandan Tu and Lifeng Shang and Xin Jiang and Ruiming Tang and Defu Lian and Qun Liu and Enhong Chen},
      year={2024},
      eprint={2409.00920},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2409.00920}, 
}