OpenDevin: An Open Platform for AI Software Developers as Generalist Agents
Abstract
Software is one of the most powerful tools that we humans have at our disposal; it allows a skilled programmer to interact with the world in complex and profound ways. At the same time, thanks to improvements in large language models (LLMs), there has been rapid development of AI agents that interact with and effect change in their surrounding environments. In this paper, we introduce OpenDevin, a platform for the development of powerful and flexible AI agents that interact with the world in ways similar to those of a human developer: by writing code, interacting with a command line, and browsing the web. We describe how the platform allows for the implementation of new agents, safe interaction with sandboxed environments for code execution, coordination between multiple agents, and incorporation of evaluation benchmarks. Based on our currently incorporated benchmarks, we evaluate agents on 15 challenging tasks, including software engineering (e.g., SWE-Bench) and web browsing (e.g., WebArena), among others. Released under the permissive MIT license, OpenDevin is a community project spanning academia and industry with more than 1.3K contributions from over 160 contributors, and it will continue to improve going forward.
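To make the agent abstraction described above concrete, here is a minimal, self-contained sketch of the general pattern such a platform follows: an agent maps the current interaction state to the next action, and a control loop executes that action in a (sandboxed) runtime and records the resulting observation. All class and function names below are illustrative assumptions for this sketch, not the actual OpenDevin API.

```python
# Illustrative sketch only -- names are assumptions, not the actual OpenDevin API.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Action:
    """An agent decision, e.g., a shell command to run inside the sandbox."""
    command: str


@dataclass
class Observation:
    """Result returned by the environment after an action (e.g., command output, page text)."""
    content: str


@dataclass
class State:
    """Rolling interaction history the agent conditions on at each step."""
    history: List[Tuple[Action, Observation]] = field(default_factory=list)


class EchoAgent:
    """Toy agent: maps the current state to the next action."""

    def step(self, state: State) -> Action:
        # A real agent would call an LLM here, conditioned on the serialized history.
        return Action(command="echo hello from the sandbox")


def run_episode(agent: EchoAgent, max_steps: int = 3) -> State:
    """Minimal control loop: the agent proposes an action, the runtime executes it,
    and the resulting observation is appended to the state."""
    state = State()
    for _ in range(max_steps):
        action = agent.step(state)
        observation = Observation(content=f"ran: {action.command}")  # stand-in for real sandboxed execution
        state.history.append((action, observation))
    return state


if __name__ == "__main__":
    final_state = run_episode(EchoAgent())
    print(len(final_state.history), "steps executed")
```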
Community
Thanks for taking the time to publish a paper on OpenDevin.
What improvements are planned for OpenDevin's evaluation framework, particularly when it comes to extending benchmarks in future versions?
We're always happy to add more evaluation benchmarks! If there are any that you'd like to see, please take a look at our evaluation harness at https://github.com/OpenDevin/OpenDevin/tree/main/evaluation
and connect with us on Slack, GitHub, etc.: https://github.com/OpenDevin/OpenDevin/?tab=readme-ov-file#-join-our-community
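For readers curious what contributing a benchmark typically involves, here is a rough sketch of the common pattern: load benchmark instances, let the agent produce a prediction for each, and score predictions with the benchmark's own checker. This is an illustrative assumption about the general shape of such harnesses, not the actual OpenDevin evaluation API.

```python
# Illustrative pattern only -- not the actual OpenDevin evaluation harness API.
from typing import Callable, Dict, List


def evaluate(
    instances: List[Dict],
    solve: Callable[[Dict], str],
    score: Callable[[Dict, str], bool],
) -> float:
    """Run an agent ("solve") over benchmark instances and report the resolved rate."""
    resolved = 0
    for instance in instances:
        prediction = solve(instance)      # e.g., a patch or answer produced by the agent
        if score(instance, prediction):   # e.g., run the benchmark's own tests or checker
            resolved += 1
    return resolved / max(len(instances), 1)


if __name__ == "__main__":
    toy_instances = [{"task": "return 2+2", "expected": "4"}]
    rate = evaluate(
        toy_instances,
        solve=lambda inst: "4",
        score=lambda inst, pred: pred == inst["expected"],
    )
    print(f"resolved rate: {rate:.2f}")
```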
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Agentless: Demystifying LLM-based Software Engineering Agents (2024)
- CRAB: Cross-environment Agent Benchmark for Multimodal Language Model Agents (2024)
- Agent-E: From Autonomous Web Navigation to Foundational Design Principles in Agentic Systems (2024)
- CodeNav: Beyond tool-use to using real-world codebases with LLM agents (2024)
- Code Agents are State of the Art Software Testers (2024)