arXiv:2407.16312

MOMAland: A Set of Benchmarks for Multi-Objective Multi-Agent Reinforcement Learning

Published on Jul 23 · Submitted by ffelten on Jul 25
Abstract

Many challenging tasks such as managing traffic systems, electricity grids, or supply chains involve complex decision-making processes that must balance multiple conflicting objectives and coordinate the actions of various independent decision-makers (DMs). One perspective for formalising and addressing such tasks is multi-objective multi-agent reinforcement learning (MOMARL). MOMARL broadens reinforcement learning (RL) to problems with multiple agents each needing to consider multiple objectives in their learning process. In reinforcement learning research, benchmarks are crucial in facilitating progress, evaluation, and reproducibility. The significance of benchmarks is underscored by the existence of numerous benchmark frameworks developed for various RL paradigms, including single-agent RL (e.g., Gymnasium), multi-agent RL (e.g., PettingZoo), and single-agent multi-objective RL (e.g., MO-Gymnasium). To support the advancement of the MOMARL field, we introduce MOMAland, the first collection of standardised environments for multi-objective multi-agent reinforcement learning. MOMAland addresses the need for comprehensive benchmarking in this emerging field, offering over 10 diverse environments that vary in the number of agents, state representations, reward structures, and utility considerations. To provide strong baselines for future research, MOMAland also includes algorithms capable of learning policies in such settings.

Community

Paper author · Paper submitter

MOMAland is the first multi-objective multi-agent RL library! In this setting, each agent learns policies while balancing multiple (conflicting) objectives.

(GIF: walkers_pf.gif)

Essentially, MOMAland extends PettingZoo to multi-objective rewards, or equivalently MO-Gymnasium to multi-agent settings. The library is designed to stay as close as possible to PettingZoo, so existing utilities, e.g. some of its wrappers, can be reused.

(Image: farama_libs-1.png)
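
Since the API mirrors PettingZoo's parallel API, a basic interaction loop is almost identical; the one difference is that each agent's reward is a NumPy vector with one entry per objective. Here is a minimal sketch with a random policy; the import path and environment id below are illustrative placeholders, so check the documentation for the real names:

```python
# Minimal random-policy rollout, assuming a PettingZoo-style parallel API.
# The import path and environment id are illustrative placeholders --
# see https://momaland.farama.org/ for the actual environment list.
from momaland.envs.multiwalker_stability import momultiwalker_stability_v0

env = momultiwalker_stability_v0.parallel_env()
observations, infos = env.reset(seed=42)

while env.agents:
    # Sample a random action for every live agent, exactly as in PettingZoo.
    actions = {agent: env.action_space(agent).sample() for agent in env.agents}
    observations, rewards, terminations, truncations, infos = env.step(actions)
    # Unlike PettingZoo, rewards[agent] is a vector: one entry per objective.

env.close()
```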

The library currently contains a dozen environments, wrappers for handling vector-valued rewards, and a few learning algorithms. These provide strong baselines for this emerging field of research!
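
For example, when preferences over objectives are known and linear, a reward wrapper can scalarise the vector reward so that off-the-shelf MARL algorithms run unchanged. A sketch, assuming a LinearizeReward-style wrapper; the exact class name, import path, and signature are assumptions to verify against the momaland.utils docs:

```python
import numpy as np

# Assumed names: the wrapper path and the environment id are placeholders.
from momaland.utils.parallel_wrappers import LinearizeReward
from momaland.envs.multiwalker_stability import momultiwalker_stability_v0

env = momultiwalker_stability_v0.parallel_env()
# One weight vector per agent; its length must match the number of objectives.
weights = {agent: np.array([0.7, 0.3]) for agent in env.possible_agents}
env = LinearizeReward(env, weights)  # step() now returns scalar rewards
```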

It also raises open challenges. In cooperative settings with unknown preferences among objectives, solution concepts resemble those of single-agent MORL. In general settings with known preferences, they align with single-objective MARL. But in general settings with unknown preferences, good solution concepts are still an open question!
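
To make the known-preferences reduction concrete (a standard observation from utility-based MORL, not specific to this library): if each agent $i$ has a known linear utility $u_i(\mathbf{v}) = \mathbf{w}_i^\top \mathbf{v}$ over its vector-valued return, then

$$
u_i\!\left(\mathbb{E}\!\left[\textstyle\sum_t \gamma^t \mathbf{r}_{i,t}\right]\right)
= \mathbb{E}\!\left[\textstyle\sum_t \gamma^t \mathbf{w}_i^\top \mathbf{r}_{i,t}\right],
$$

because linear utilities commute with expectation, so the problem reduces to scalar-reward MARL. For nonlinear or unknown $u_i$ the two sides can differ (the SER vs. ESR distinction), which is exactly where the open questions sit.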

Excited to try it out? You can install MOMAland with a simple `pip install momaland`.

Documentation page for more information: https://momaland.farama.org/
Code: https://github.com/Farama-Foundation/momaland
Paper: https://arxiv.org/abs/2407.16312
