arxiv:2406.15877

BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions

Published on Jun 22
Submitted by terryyz on Jun 25
#3 Paper of the day
Abstract

Automated software engineering has been greatly empowered by recent advances in Large Language Models (LLMs) for programming. While current benchmarks show that LLMs can perform various software engineering tasks like human developers, the majority of their evaluations are limited to short, self-contained algorithmic tasks. Solving challenging and practical programming tasks requires the capability to use diverse function calls as tools, e.g., to implement functionality for data analysis and web development. Moreover, using multiple tools to solve a task requires compositional reasoning to accurately understand complex instructions. Fulfilling both of these characteristics poses a great challenge for LLMs. To assess how well LLMs can solve challenging and practical programming tasks, we introduce BigCodeBench, a benchmark that challenges LLMs to invoke multiple function calls as tools from 139 libraries across 7 domains in 1,140 fine-grained programming tasks. For rigorous evaluation, each programming task includes an average of 5.6 test cases with an average branch coverage of 99%. In addition, we propose a natural-language-oriented variant, BigCodeBench-Instruct, which automatically transforms the original docstrings into short instructions containing only essential information. Our extensive evaluation of 60 LLMs shows that LLMs are not yet capable of following complex instructions to use function calls precisely, with scores of at most 60%, significantly lower than the human performance of 97%. These results underscore the need for further advancements in this area.
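The abstract describes tasks that are verified by executing a model's generated code against test cases with high branch coverage. As a rough, hypothetical illustration of that kind of execution-based check (the task content, function names, and harness below are invented for this sketch and are not BigCodeBench's actual schema or evaluation code), a candidate completion can be executed and then run against unittest-style test cases:

```python
import unittest

# Hypothetical stand-in for a generated solution to one task.
CANDIDATE_CODE = """
def task_func(values):
    # Toy example: return the sorted unique values.
    return sorted(set(values))
"""

# Hypothetical stand-in for a task's test cases.
TEST_CODE = """
import unittest

class TestTaskFunc(unittest.TestCase):
    def test_dedup(self):
        self.assertEqual(task_func([3, 1, 3, 2]), [1, 2, 3])

    def test_empty(self):
        self.assertEqual(task_func([]), [])
"""

def run_task(candidate_code: str, test_code: str) -> bool:
    """Execute the candidate and its tests in a shared namespace;
    return True only if every test case passes."""
    namespace = {}
    exec(candidate_code, namespace)  # define task_func
    exec(test_code, namespace)       # define the test class against it
    suite = unittest.TestLoader().loadTestsFromTestCase(namespace["TestTaskFunc"])
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()

print(run_task(CANDIDATE_CODE, TEST_CODE))  # True when all tests pass
```

A real harness would additionally sandbox the execution and measure branch coverage of the tests, but the pass/fail core is this kind of run-the-tests check, aggregated over all tasks and candidates.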

Community

Paper author Paper submitter
edited Jun 25

Really cool work @terryyz !! Congrats 🔥
It would be great to link the leaderboard and dataset to the paper. To do this, you just need to add the link 'arxiv.org/abs/2406.15877' to the README file.


There's a simple summary of this paper up here - feedback is welcome! https://www.aimodels.fyi/papers/arxiv/bigcodebench-benchmarking-code-generation-diverse-function-calls



Models citing this paper 0



Datasets citing this paper 4

Spaces citing this paper 6

Collections including this paper 8