---
license: mit
configs:
- config_name: default
  data_files:
  - split: main
    path: "instances/"
---
# DEVAI dataset
<p align="center" width="100%">
<img src="dataset_stats.png" align="center" width="70%"/>
</p>

**DEVAI** is a benchmark of 55 realistic AI development tasks. It is accompanied by extensive manual annotations, including a total of 365 hierarchical user requirements.
This dataset enables rich reinforcement signals for better automated AI software development.
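
Assuming the dataset is hosted under this card's repository (`DEVAI-benchmark/DEVAI`, an assumption inferred from this page), a minimal loading sketch with the Hugging Face `datasets` library looks like this:

```python
# Minimal loading sketch. The repository id is an assumption based on
# this dataset card; the "main" split comes from the YAML config above.
from datasets import load_dataset

devai = load_dataset("DEVAI-benchmark/DEVAI", split="main")
print(len(devai))   # expected: 55 tasks
print(devai[0])     # one annotated task instance
```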

Here is an example task.
<p align="center" width="100%">
<img src="task51.png" align="center" width="60%"/>
</p>
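
To make the annotation structure concrete, a single task instance could be represented roughly as below. This is a hypothetical sketch: the field names (`query`, `requirements`, `criteria`, `prerequisites`) are illustrative assumptions, not the dataset's exact schema.

```python
# Hypothetical sketch of one annotated task (field names are
# illustrative assumptions, not the dataset's exact schema).
example_task = {
    "query": "Build and evaluate a model for the task described ...",
    "requirements": [
        {"requirement_id": 0,
         "criteria": "The dataset is loaded and preprocessed.",
         "prerequisites": []},
        {"requirement_id": 1,
         "criteria": "The trained model and its metrics are saved.",
         # Hierarchical: this requirement depends on requirement 0.
         "prerequisites": [0]},
    ],
}
```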

We apply three state-of-the-art automated software development systems to DEVAI, namely MetaGPT, GPT-Pilot, and OpenHands. The table below shows preliminary statistics.
<p align="center" width="100%">
<img src="developer_stats.png" align="center" width="60%"/>
</p>

We perform a manual evaluation to judge whether each requirement is satisfied by the solutions produced by these systems.
<p align="center" width="100%">
<img src="human_evaluation.png" align="center" width="60%"/>
</p>
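
As a sketch of how such per-requirement judgments can be aggregated, the function below computes a task's satisfaction rate, optionally crediting a requirement only when its prerequisites are also met. It assumes the hypothetical schema above plus a boolean `satisfied` label per requirement.

```python
# Aggregation sketch, assuming the hypothetical schema above plus a
# boolean "satisfied" label per requirement from the (manual) evaluator.
def satisfaction_rate(requirements, respect_prerequisites=True):
    satisfied = {r["requirement_id"] for r in requirements if r["satisfied"]}
    hits = 0
    for r in requirements:
        ok = r["requirement_id"] in satisfied
        if respect_prerequisites:
            # Credit a requirement only if all of its prerequisites hold.
            ok = ok and all(p in satisfied for p in r["prerequisites"])
        hits += ok
    return hits / len(requirements)
```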

An automated evaluation program that can potentially replace manual evaluation is available in our [GitHub release](https://github.com/metauto-ai/Devai).
Find more details in our [paper]().