---
license: apache-2.0
datasets:
- ai2lumos/lumos_maths_plan_iterative
language:
- en
tags:
- language-agent
- maths
- reasoning
- planning
---
# 🪄 Agent Lumos: Unified and Modular Training for Open-Source Language Agents
<p align="center">
🌐<a href="https://allenai.github.io/lumos">[Website]</a>
📝<a href="https://arxiv.org/abs/2311.05657">[Paper]</a>
🤗<a href="https://huggingface.co/datasets?sort=trending&search=ai2lumos">[Data]</a>
🤗<a href="https://huggingface.co/models?sort=trending&search=ai2lumos">[Model]</a>
🤗<a href="https://huggingface.co/spaces/ai2lumos/lumos_data_demo">[Demo]</a>
</p>
We introduce 🪄**Lumos**, Language Agents with **Unified** Formats, **Modular** Design, and **Open-Source** LLMs. **Lumos** unifies a suite of complex interactive tasks and achieves competitive performance with GPT-4/3.5-based and larger open-source agents.

**Lumos** has the following features:
* 🧩 **Modular Architecture**:
  - 🧩 **Lumos** consists of planning, grounding, and execution modules built on LLAMA-2-7B/13B and off-the-shelf APIs (see the sketch after this list).
  - 🤝 **Lumos** utilizes a unified data format that encompasses multiple task types, thereby enabling the developed agent framework to conveniently support a range of interactive tasks.
* 🌍 **Diverse Training Data**:
  - 🌍 **Lumos** is trained with ~56K diverse, high-quality subgoal/action annotations converted with GPT-4 from ground-truth reasoning steps in existing benchmarks.
  - ⚒️ **Lumos** data can be instrumental for future research on developing open-source agents for complex interactive tasks.
* 🚀 **Competitive Performance**:
  - 🚀 **Lumos** matches or even outperforms **GPT-series** agents on the web and complex QA tasks Mind2Web and HotpotQA, and **larger open agents** on maths and multimodal tasks.
  - 🚀 **Lumos** exceeds contemporaneous agents that have been **fine-tuned** with in-domain HotpotQA, Mind2Web and ScienceQA annotations, such as **FireAct**, **AgentLM**, and **AutoAct**.
  - 🚀 **Lumos** performs better than open agent baseline formulations, including **chain-of-thoughts** and **integrated** training.
  - 🚀 **Lumos** surpasses larger open LLM agents and domain-specific agents on the unseen tasks WebShop and InterCode_SQL.
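To make the modular design concrete, below is a minimal conceptual sketch of the Lumos-I loop. It is an illustration under assumptions, not the project's actual API: `plan_module`, `ground_module`, and `exec_module` are hypothetical placeholders standing in for the LLAMA-2-based planning/grounding modules and the tool executor.

```python
# Hypothetical sketch of the Lumos-Iterative (Lumos-I) control flow.
# plan_module / ground_module / exec_module are placeholder callables,
# not the official interfaces.
def lumos_iterative(task, plan_module, ground_module, exec_module, max_steps=10):
    history = []  # (subgoal, actions, result) tuples fed back to the planner
    for _ in range(max_steps):
        subgoal = plan_module(task, history)             # propose the next subgoal
        if subgoal is None:                              # planner signals completion
            break
        actions = ground_module(task, subgoal, history)  # map the subgoal to low-level actions
        result = exec_module(actions)                    # run actions with off-the-shelf tools/APIs
        history.append((subgoal, actions, result))
    return history
```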
## Model Overview
`lumos_maths_plan_iterative-13B` is a **planning** module checkpoint finetuned on the **maths** task in the **Lumos-Iterative (Lumos-I)** formulation.

The training annotations are summarized below:
| Training Data | Number |
|---|---|
|[`lumos_maths_plan_iterative`](https://huggingface.co/datasets/ai2lumos/lumos_maths_plan_iterative)|19778|
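A minimal loading sketch with Hugging Face `transformers` is shown below. This is an assumption-laden illustration rather than an official usage snippet: the prompt is hypothetical, and the authoritative input format comes from the `lumos_maths_plan_iterative` training annotations.

```python
# Minimal sketch (assumed usage, not an official snippet) of loading this
# planning checkpoint with Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ai2lumos/lumos_maths_plan_iterative-13B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Hypothetical prompt; the real template should be taken from the
# lumos_maths_plan_iterative training data.
prompt = "Please provide a reasonable subgoal-based plan to solve the given task.\nTask: Solve the equation 2x + 6 = 16."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```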
## Citation
If you find this work relevant to your research, please feel free to cite our work!
```
@article{yin2023lumos,
  title={Agent Lumos: Unified and Modular Training for Open-Source Language Agents},
  author={Yin, Da and Brahman, Faeze and Ravichander, Abhilasha and Chandu, Khyathi and Chang, Kai-Wei and Choi, Yejin and Lin, Bill Yuchen},
  journal={arXiv preprint arXiv:2311.05657},
  year={2023}
}
```