OpenWebVoyager: Building Multimodal Web Agents via Iterative Real-World Exploration, Feedback and Optimization
Abstract
The rapid development of large language and multimodal models has sparked significant interest in using proprietary models, such as GPT-4o, to develop autonomous agents capable of handling real-world scenarios like web navigation. Although recent open-source efforts have tried to equip agents with the ability to explore environments and continuously improve over time, they build text-only agents in synthetic environments where reward signals are clearly defined. Such agents struggle to generalize to realistic settings that require multimodal perception and lack ground-truth reward signals. In this paper, we introduce an open-source framework designed to facilitate the development of multimodal web agents that can autonomously explore the real world and improve themselves. We first train the base model with imitation learning to acquire basic abilities. We then let the agent explore the open web and collect feedback on its trajectories. The agent then further improves its policy by learning from well-performing trajectories, as judged by another general-purpose model. This exploration-feedback-optimization cycle can continue for several iterations. Experimental results show that our web agent successfully improves itself after each iteration, demonstrating strong performance across multiple test sets.
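The exploration-feedback-optimization cycle described above can be sketched as a simple loop. The sketch below is illustrative only: the function names (`explore`, `judge`, `retrain`) and the random scoring stand-in are assumptions, not the paper's actual API; in the real framework the judge would be a general-purpose model such as GPT-4o scoring real web trajectories, and retraining would be supervised fine-tuning on the retained trajectories.

```python
import random

def explore(policy, tasks):
    """Roll out the current policy on web tasks, returning trajectories."""
    return [{"task": t, "actions": policy(t)} for t in tasks]

def judge(trajectory):
    """Stand-in for a general-purpose judge (e.g. GPT-4o): score 0..1.

    Here simulated with a random number; a real judge would inspect the
    trajectory's screenshots and actions against the task goal.
    """
    return random.random()

def retrain(policy, good_trajectories):
    """Stand-in for fine-tuning the agent on well-performing trajectories."""
    # A real implementation would run supervised fine-tuning here.
    return policy

def iterate(policy, tasks, iterations=3, threshold=0.5):
    """One exploration-feedback-optimization cycle per iteration."""
    for _ in range(iterations):
        trajectories = explore(policy, tasks)                       # exploration
        good = [t for t in trajectories if judge(t) >= threshold]   # feedback
        policy = retrain(policy, good)                              # optimization
    return policy

# Base policy obtained via imitation learning (here a fixed toy policy).
base_policy = lambda task: ["CLICK", "TYPE", "ANSWER"]
final_policy = iterate(base_policy, ["book a flight", "find a recipe"])
```

The key design point is that no ground-truth reward is assumed: the filter in the feedback step relies entirely on the external judge's assessment of each trajectory.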
Community
OpenWebVoyager presents a pioneering approach to building truly autonomous, multimodal web agents. Moving beyond traditional text-only models, this open-source framework enables agents to explore the actual web, learn from real-world feedback, and optimize autonomously over time. By combining imitation learning with a cycle of exploration, feedback, and optimization, OpenWebVoyager empowers agents to navigate complex, real-world scenarios with continually improving efficiency and accuracy.
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- ExACT: Teaching AI Agents to Explore with Reflective-MCTS and Exploratory Learning (2024)
- Web Agents with World Models: Learning and Leveraging Environment Dynamics in Web Navigation (2024)
- AgentBank: Towards Generalized LLM Agents via Fine-Tuning on 50000+ Interaction Trajectories (2024)
- E2CL: Exploration-based Error Correction Learning for Embodied Agents (2024)
- NNetscape Navigator: Complex Demonstrations for Web Agents Without a Demonstrator (2024)