---
license: mit
---
# DEVAI dataset
<p align="center" width="100%">
<img src="dataset_stats.png" align="center" width="70%"/>
</p>

**DEVAI** is a benchmark of 55 realistic AI development tasks. It is richly annotated by hand, including a total of 365 hierarchical user requirements.
This dataset enables rich reinforcement signals for better automated AI software development.

Here is an example of our tasks.
<p align="center" width="100%">
<img src="task51.png" align="center" width="60%"/>
</p>
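To illustrate the hierarchical requirement structure, the sketch below models a task whose requirements depend on one another. All field and function names here are hypothetical, chosen only to mirror the description above, and do not reflect the actual DEVAI schema:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    # Hypothetical schema: an id, a description, and the ids of
    # prerequisite requirements this one depends on.
    rid: int
    text: str
    depends_on: list = field(default_factory=list)

@dataclass
class Task:
    name: str
    requirements: list

def satisfiable_order(task):
    """Return requirement ids in an order that respects the dependency
    hierarchy (a simple topological sort); raise on a cycle."""
    done, order = set(), []
    pending = {r.rid: r for r in task.requirements}
    while pending:
        ready = [r for r in pending.values()
                 if all(d in done for d in r.depends_on)]
        if not ready:
            raise ValueError("cyclic requirement hierarchy")
        for r in ready:
            done.add(r.rid)
            order.append(r.rid)
            del pending[r.rid]
    return order

task = Task("train a classifier", [
    Requirement(0, "load the dataset"),
    Requirement(1, "train the model", depends_on=[0]),
    Requirement(2, "report test accuracy", depends_on=[1]),
])
print(satisfiable_order(task))  # [0, 1, 2]
```

A dependency-respecting order like this is what makes hierarchical requirements useful as stepwise reward signals: later requirements only matter once their prerequisites are met.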
16

We apply three state-of-the-art automated software development systems to DEVAI: MetaGPT, GPT-Pilot, and OpenHands. The table below shows preliminary statistics.
<p align="center" width="100%">
<img src="developer_stats.png" align="center" width="60%"/>
</p>

We perform a manual evaluation to judge whether each requirement is satisfied by the solution each of these systems produces.
<p align="center" width="100%">
<img src="human_evaluation.png" align="center" width="60%"/>
</p>
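Given per-requirement verdicts, a per-solution satisfaction rate follows directly. A minimal sketch with made-up judgments (not actual DEVAI results):

```python
# Hypothetical per-requirement verdicts for one system's solution to one task.
judgments = {0: True, 1: True, 2: False}  # requirement id -> satisfied?

satisfied = sum(judgments.values())
rate = satisfied / len(judgments)
print(f"{satisfied}/{len(judgments)} requirements satisfied ({rate:.0%})")
# prints "2/3 requirements satisfied (67%)"
```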
26

An automated evaluation program that can potentially replace manual evaluation is available in our [GitHub release](https://github.com/metauto-ai/Devai).
Find more details in our [paper]().