Dataset schema (column: dtype, observed range):
bibtex_url: null
proceedings: stringlengths 42–42
bibtext: stringlengths 197–792
abstract: stringlengths 303–3.45k
title: stringlengths 10–159
authors: sequencelengths 1–28
id: stringclasses, 44 values
type: stringclasses, 16 values
arxiv_id: stringlengths 0–10
GitHub: sequencelengths 1–1
paper_page: stringclasses, 444 values
n_linked_authors: int64, -1 to 9
upvotes: int64, -1 to 42
num_comments: int64, -1 to 13
n_authors: int64, -1 to 92
paper_page_exists_pre_conf: int64, 0 to 1
Models: sequencelengths 0–100
Datasets: sequencelengths 0–11
Spaces: sequencelengths 0–100
null
https://openreview.net/forum?id=3mcBKvkwWg
@inproceedings{ reidenbach2023coarsenconf, title={CoarsenConf: Equivariant Coarsening with Aggregated Attention for Molecular Conformer Generation}, author={Danny Reidenbach and Aditi Krishnapriyan}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=3mcBKvkwWg} }
Molecular conformer generation (MCG) is an important task in cheminformatics and drug discovery. The ability to efficiently generate low-energy 3D structures can avoid expensive quantum mechanical simulations, leading to accelerated virtual screenings and enhanced structural exploration. Several generative models have been developed for MCG, but many struggle to consistently produce high-quality conformers. To address these issues, we introduce CoarsenConf, which coarse-grains molecular graphs based on torsional angles and integrates them into an SE(3)-equivariant hierarchical variational autoencoder. Through equivariant coarse-graining, we aggregate the fine-grained atomic coordinates of subgraphs connected via rotatable bonds, creating a variable-length coarse-grained latent representation. Our model uses a novel aggregated attention mechanism to restore fine-grained coordinates from the coarse-grained latent representation, enabling efficient generation of accurate conformers. Furthermore, we evaluate the chemical and biochemical quality of our generated conformers on multiple downstream applications, including property prediction and oracle-based protein docking. Overall, CoarsenConf generates more accurate conformer ensembles compared to prior generative models.
CoarsenConf: Equivariant Coarsening with Aggregated Attention for Molecular Conformer Generation
[ "Danny Reidenbach", "Aditi Krishnapriyan" ]
Workshop/GenBio
poster
2306.14852
[ "https://github.com/ask-berkeley/coarsenconf" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=2JkSb52D1n
@inproceedings{ izdebski2023de, title={De Novo Drug Design with Joint Transformers}, author={Adam Izdebski and Ewelina Weglarz-Tomczak and Ewa Szczurek and Jakub Tomczak}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=2JkSb52D1n} }
De novo drug design requires simultaneously generating novel molecules outside of training data and predicting their target properties, making it a hard task for generative models. To address this, we propose Joint Transformer, which combines a Transformer decoder, a Transformer encoder, and a predictor in a joint generative model with shared weights. We show that training the model with a penalized log-likelihood objective results in state-of-the-art performance in molecule generation, while decreasing the prediction error on newly sampled molecules, as compared to a fine-tuned decoder-only Transformer, by 42%. Finally, we propose a probabilistic black-box optimization algorithm that employs Joint Transformer to generate novel molecules with improved target properties and outperforms other SMILES-based optimization methods in de novo drug design.
De Novo Drug Design with Joint Transformers
[ "Adam Izdebski", "Ewelina Weglarz-Tomczak", "Ewa Szczurek", "Jakub Tomczak" ]
Workshop/GenBio
poster
2310.02066
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
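A minimal PyTorch sketch of the kind of shared-weight joint objective the Joint Transformer entry above describes: an autoregressive language-modeling loss plus a penalized property-prediction loss computed from shared token embeddings. The module layout, toy vocabulary, and penalty weight are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a joint generative/predictive objective with shared
# weights, loosely following the "penalized log-likelihood" idea in the entry
# above. Module names and the toy vocabulary are assumptions.
import torch
import torch.nn as nn

class JointModel(nn.Module):
    def __init__(self, vocab_size=64, d_model=128, nhead=4, nlayers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)            # shared embeddings
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, nlayers)
        dec_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, nlayers)  # causal LM stack
        self.lm_head = nn.Linear(d_model, vocab_size)
        self.predictor = nn.Linear(d_model, 1)                    # property head

    def forward(self, tokens, target_property):
        x = self.embed(tokens)
        # Autoregressive branch: next-token prediction with a causal mask.
        L = tokens.size(1)
        causal = nn.Transformer.generate_square_subsequent_mask(L).to(tokens.device)
        h_dec = self.decoder(x, mask=causal)
        logits = self.lm_head(h_dec[:, :-1])
        nll = nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
        # Predictive branch: pooled encoder representation -> property.
        h_enc = self.encoder(x).mean(dim=1)
        pred = self.predictor(h_enc).squeeze(-1)
        mse = nn.functional.mse_loss(pred, target_property)
        return nll, mse

model = JointModel()
tokens = torch.randint(0, 64, (8, 20))       # toy SMILES-like token batch
y = torch.randn(8)                           # toy property targets
nll, mse = model(tokens, y)
loss = nll + 1.0 * mse                       # penalized log-likelihood (weight assumed)
loss.backward()
```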
null
https://openreview.net/forum?id=1wa9JEanV5
@inproceedings{ lau2023dgfn, title={{DGFN}: Double Generative Flow Networks}, author={Elaine Lau and Nikhil Murali Vemgal and Doina Precup and Emmanuel Bengio}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=1wa9JEanV5} }
Deep learning is emerging as an effective tool in drug discovery, with potential applications in both predictive and generative models. Generative Flow Networks (GFlowNets/GFNs) are a recently introduced method recognized for their ability to generate diverse candidates, in particular in small-molecule generation tasks. In this work, we introduce double GFlowNets (DGFNs). Drawing inspiration from reinforcement learning and Double Deep Q-Learning, we introduce a target network used to sample trajectories, while updating the main network with these sampled trajectories. Empirical results confirm that DGFNs effectively enhance exploration in sparse reward domains and high-dimensional state spaces, both challenging aspects of de novo design in drug discovery.
DGFN: Double Generative Flow Networks
[ "Elaine Lau", "Nikhil Murali Vemgal", "Doina Precup", "Emmanuel Bengio" ]
Workshop/GenBio
poster
2310.19685
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
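A toy sketch of the double-network idea in the DGFN entry above: a frozen target policy samples trajectories, the main policy is trained with a trajectory-balance loss on those trajectories, and the target is synced periodically. The binary-string environment, reward, and hyperparameters are assumptions made for illustration only.

```python
# Minimal "double GFlowNet" sketch: sample with a frozen target network,
# update the main network with a trajectory-balance loss, sync periodically.
import copy
import torch
import torch.nn as nn

LENGTH = 6

class PolicyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LENGTH, 64), nn.ReLU(), nn.Linear(64, 2))
        self.log_z = nn.Parameter(torch.zeros(()))       # log partition estimate

    def logits(self, prefix):                            # prefix: list of 0/1 bits
        x = torch.zeros(LENGTH)
        x[: len(prefix)] = torch.tensor(prefix, dtype=torch.float)
        return self.net(x)

def reward(bits):                                        # toy reward: count of ones
    return float(sum(bits)) + 0.1

main = PolicyNet()
target = copy.deepcopy(main)                             # sampling network
opt = torch.optim.Adam(main.parameters(), lr=1e-2)

for step in range(200):
    # Sample a trajectory with the *target* network (exploration policy).
    bits = []
    with torch.no_grad():
        for _ in range(LENGTH):
            probs = torch.softmax(target.logits(bits), dim=-1)
            bits.append(int(torch.multinomial(probs, 1)))
    # Trajectory-balance loss evaluated with the *main* network.
    log_pf = 0.0
    prefix = []
    for b in bits:
        log_pf = log_pf + torch.log_softmax(main.logits(prefix), dim=-1)[b]
        prefix.append(b)
    # Backward policy is deterministic here (each state has a single parent).
    loss = (main.log_z + log_pf - torch.log(torch.tensor(reward(bits)))) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 20 == 0:                                   # periodic target sync
        target.load_state_dict(main.state_dict())
```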
null
https://openreview.net/forum?id=145TM9VQhx
@inproceedings{ chen2023ampdiffusion, title={{AMP}-Diffusion: Integrating Latent Diffusion with Protein Language Models for Antimicrobial Peptide Generation}, author={Tianlai Chen and Pranay Vure and Rishab Pulugurta and Pranam Chatterjee}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=145TM9VQhx} }
Denoising Diffusion Probabilistic Models (DDPMs) have emerged as a potent class of generative models, demonstrating exemplary performance across diverse AI domains such as computer vision and natural language processing. In the realm of protein design, while there have been advances in structure-based, graph-based, and discrete sequence-based diffusion, the exploration of continuous latent space diffusion within protein language models (pLMs) remains nascent. In this work, we introduce AMP-Diffusion, a latent space diffusion model tailored for antimicrobial peptide (AMP) design, harnessing the capabilities of the state-of-the-art pLM, ESM-2, to de novo generate functional AMPs for downstream experimental application. Our evaluations reveal that peptides generated by AMP-Diffusion align closely in both pseudo-perplexity and amino acid diversity when benchmarked against experimentally-validated AMPs, and further exhibit relevant physicochemical properties similar to these naturally-occurring sequences. Overall, these findings underscore the biological plausibility of our generated sequences and pave the way for their empirical validation. In total, our framework motivates future exploration of pLM-based diffusion models for peptide and protein design.
AMP-Diffusion: Integrating Latent Diffusion with Protein Language Models for Antimicrobial Peptide Generation
[ "Tianlai Chen", "Pranay Vure", "Rishab Pulugurta", "Pranam Chatterjee" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=0gl0SJtd2E
@inproceedings{ hey2023identifying, title={Identifying Neglected Hypotheses in Neurodegenerative Disease with Large Language Models}, author={Spencer Hey and Darren Angle and Christopher Chatham}, booktitle={NeurIPS 2023 Generative AI and Biology (GenBio) Workshop}, year={2023}, url={https://openreview.net/forum?id=0gl0SJtd2E} }
Neurodegenerative diseases remain a medical challenge, with existing treatments for many such diseases yielding limited benefits. Yet, research into diseases like Alzheimer's often focuses on a narrow set of hypotheses, potentially overlooking promising research avenues. We devised a workflow to curate scientific publications, extract central hypotheses using GPT-3.5-turbo, convert these hypotheses into high-dimensional vectors, and cluster them hierarchically. Employing a secondary agglomerative clustering on the "noise" subset, followed by GPT-4 analysis, we identified signals of neglected hypotheses. This methodology unveiled several notable neglected hypotheses, including treatment with coenzyme Q10, CPAP treatment to slow cognitive decline, and lithium treatment in Alzheimer's. We believe this methodology offers a novel and scalable approach to identifying overlooked hypotheses and broadening the neurodegenerative disease research landscape.
Identifying Neglected Hypotheses in Neurodegenerative Disease with Large Language Models
[ "Spencer Hey", "Darren Angle", "Christopher Chatham" ]
Workshop/GenBio
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=y4hgiutGdr
@inproceedings{ agrawal2023robustness, title={Robustness to Multi-Modal Environment Uncertainty in {MARL} using Curriculum Learning}, author={Aakriti Agrawal and Rohith Aralikatti and Yanchao Sun and Furong Huang}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=y4hgiutGdr} }
Multi-agent reinforcement learning (MARL) plays a pivotal role in tackling real-world challenges. However, the seamless transition of trained policies from simulation to the real world requires them to be robust to various environmental uncertainties. Existing works focus on finding a Nash equilibrium or the optimal policy under uncertainty in a single environment variable (i.e., action, state, or reward). This is because a multi-agent system is highly complex and non-stationary. However, in a real-world setting, uncertainty can occur in multiple environment variables simultaneously. This work is the first to formulate the generalised problem of robustness to multi-modal environment uncertainty in MARL. To this end, we propose a general robust training approach for multi-modal uncertainty based on curriculum learning techniques. We handle environmental uncertainty in more than one variable simultaneously and present extensive results across both cooperative and competitive MARL environments, demonstrating that our approach achieves state-of-the-art robustness on three multi-particle environment tasks (Cooperative-Navigation, Keep-Away, Physical Deception).
Robustness to Multi-Modal Environment Uncertainty in MARL using Curriculum Learning
[ "Aakriti Agrawal", "Rohith Aralikatti", "Yanchao Sun", "Furong Huang" ]
Workshop/MASEC
poster
2310.08746
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=tF464LogjS
@inproceedings{ foxabbott2023defining, title={Defining and Mitigating Collusion in Multi-Agent Systems}, author={Jack Foxabbott and Sam Deverett and Kaspar Senft and Samuel Dower and Lewis Hammond}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=tF464LogjS} }
Collusion between learning agents is increasingly becoming a topic of concern with the advent of more powerful, complex multi-agent systems. In contrast to existing work in narrow settings, we present a general formalisation of collusion between learning agents in partially-observable stochastic games. We discuss methods for intervening on a game to mitigate collusion and provide theoretical as well as empirical results demonstrating the effectiveness of three such interventions.
Defining and Mitigating Collusion in Multi-Agent Systems
[ "Jack Foxabbott", "Sam Deverett", "Kaspar Senft", "Samuel Dower", "Lewis Hammond" ]
Workshop/MASEC
wipp
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=eL61LbI4uv
@inproceedings{ surve2023multiagent, title={Multiagent Simulators for Social Networks}, author={Aditya Surve and Archit Rathod and Mokshit Surana and Gautam Malpani and Aneesh Shamraj and Sainath Reddy Sankepally and Raghav Jain and Swapneel S Mehta}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=eL61LbI4uv} }
Multiagent social network simulations are an avenue that can bridge the communication gap between public and private platforms in order to develop solutions to a complex array of issues relating to online safety. While there are significant challenges relating to the scale of multiagent simulations and to efficient learning from observational and interventional data to accurately model micro- and macro-level emergent effects, there are equally promising opportunities, not least with the advent of large language models that provide an expressive approximation of user behavior. In this position paper, we review prior art relating to social network simulation, highlighting challenges and opportunities for future work exploring multiagent security using agent-based models of social networks.
Multiagent Simulators for Social Networks
[ "Aditya Surve", "Archit Rathod", "Mokshit Surana", "Gautam Malpani", "Aneesh Shamraj", "Sainath Reddy Sankepally", "Raghav Jain", "Swapneel S Mehta" ]
Workshop/MASEC
poster
2311.14712
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=cYuE7uV4ut
@inproceedings{ gerstgrasser2023oracles, title={Oracles \& Followers: Stackelberg Equilibria in Deep Multi-Agent Reinforcement Learning}, author={Matthias Gerstgrasser and David C. Parkes}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=cYuE7uV4ut} }
Stackelberg equilibria arise naturally in a range of popular learning problems, such as in security games or indirect mechanism design, and have received increasing attention in the reinforcement learning literature. We present a general framework for implementing Stackelberg equilibria search as a multi-agent RL problem, allowing a wide range of algorithmic design choices. We discuss how previous approaches can be seen as specific instantiations of this framework. As a key insight, we note that the design space allows for approaches not previously seen in the literature, for instance by leveraging multitask and meta-RL techniques for follower convergence. We propose one such approach using contextual policies, and evaluate it experimentally on both standard and novel benchmark domains, showing greatly improved sample efficiency compared to previous approaches. Finally, we explore the effect of adopting algorithm designs outside the borders of our framework.
Oracles & Followers: Stackelberg Equilibria in Deep Multi-Agent Reinforcement Learning
[ "Matthias Gerstgrasser", "David C. Parkes" ]
Workshop/MASEC
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=XjHF5LWbNS
@inproceedings{ chen2023dynamics, title={Dynamics Model Based Adversarial Training For Competitive Reinforcement Learning}, author={Xuan Chen and Guanhong Tao and Xiangyu Zhang}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=XjHF5LWbNS} }
Adversarial perturbations substantially degrade the performance of Deep Reinforcement Learning (DRL) agents, reducing the applicability of DRL in practice. Existing adversarial training for robustifying DRL uses information about the agent at the current step to minimize the loss upper bound introduced by adversarial input perturbations. However, it only works well for single-agent tasks. The enhanced controversy in two-agent games introduces more dynamics and makes existing methods less effective. Inspired by model-based RL, which builds a model for the environment transition probability, we propose a dynamics model based adversarial training framework for modeling multi-step state transitions. Our dynamics model transitively predicts future states, which can provide more precise back-propagated future information during adversarial perturbation generation, and hence improve the agent's empirical robustness substantially under different attacks. Our experiments on four two-agent competitive MuJoCo games show that our method consistently outperforms state-of-the-art adversarial training techniques in terms of empirical robustness and normal functionalities of DRL agents.
Dynamics Model Based Adversarial Training For Competitive Reinforcement Learning
[ "Xuan Chen", "Guanhong Tao", "Xiangyu Zhang" ]
Workshop/MASEC
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=X8mSMsNbff
@inproceedings{ liu2023beyond, title={Beyond Worst-case Attacks: Robust {RL} with Adaptive Defense via Non-dominated Policies}, author={Xiangyu Liu and Chenghao Deng and Yanchao Sun and Yongyuan Liang and Furong Huang}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=X8mSMsNbff} }
Considerable focus has been directed towards ensuring that reinforcement learning (RL) policies are robust to adversarial attacks during test time. While current approaches are effective against strong attacks for potential worst-case scenarios, these methods often compromise performance in the absence of attacks or the presence of only weak attacks. To address this, we study policy robustness under the well-accepted state-adversarial attack model, extending our focus beyond merely worst-case attacks. We \textit{refine} the baseline policy class $\Pi$ prior to test time, aiming for efficient adaptation within a compact, finite policy class $\tilde{\Pi}$, which can resort to an adversarial bandit subroutine. We then propose a novel training-time algorithm to iteratively discover \textit{non-dominated policies}, forming a near-optimal and minimal $\tilde{\Pi}$. Empirical validation on MuJoCo corroborates the superiority of our approach in terms of natural and robust performance, as well as adaptability to various attack scenarios.
Beyond Worst-case Attacks: Robust RL with Adaptive Defense via Non-dominated Policies
[ "Xiangyu Liu", "Chenghao Deng", "Yanchao Sun", "Yongyuan Liang", "Furong Huang" ]
Workshop/MASEC
wipp
2402.12673
[ "https://github.com/umd-huang-lab/protected" ]
https://huggingface.co/papers/2402.12673
2
0
0
5
1
[]
[]
[]
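A small sketch of the adversarial-bandit subroutine mentioned in the entry above: Exp3 adaptively selects among a finite, refined set of candidate policies at test time. The environment interface, reward scaling, and the toy "attack regime" are assumptions, not the paper's protected policy class.

```python
# Exp3 over a finite policy class: a standard adversarial-bandit subroutine
# for test-time adaptation. Rewards are assumed to lie in [0, 1].
import numpy as np

def exp3_policy_selection(policies, run_episode, episodes=100, eta=0.1):
    """policies: list of callables; run_episode(policy) -> reward in [0, 1]."""
    k = len(policies)
    weights = np.ones(k)
    for _ in range(episodes):
        probs = (1 - eta) * weights / weights.sum() + eta / k
        i = np.random.choice(k, p=probs)
        r = run_episode(policies[i])                     # reward under (possibly attacked) env
        weights[i] *= np.exp(eta * r / (probs[i] * k))   # importance-weighted update
    return weights / weights.sum()

# Toy usage: two "policies" whose rewards depend on an unknown attack regime.
rng = np.random.default_rng(0)
attack_on = True
pols = [lambda: None, lambda: None]

def run_episode(policy):
    robust = policy is pols[1]
    base = 0.8 if (robust == attack_on) else 0.4         # robust policy wins under attack
    return float(np.clip(base + 0.1 * rng.standard_normal(), 0, 1))

print(exp3_policy_selection(pols, run_episode))          # mass shifts toward the better policy
```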
null
https://openreview.net/forum?id=QWXwhHQHLv
@inproceedings{ milec2023generation, title={Generation of Games for Opponent Model Differentiation}, author={David Milec and Viliam Lis{\'y} and Christopher Kiekintveld}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=QWXwhHQHLv} }
Protecting against adversarial attacks is a common multiagent problem in the real world. Attackers in the real world are predominantly human actors, and protection methods often incorporate opponent models to improve performance when facing humans. Previous results show that modeling human behavior can significantly improve the performance of the algorithms. However, modeling humans correctly is a complex problem, and the models are often simplified and assume humans make mistakes according to some distribution or train parameters for the whole population from which they sample. In this work, we use data gathered by psychologists who identified personality types that increase the likelihood of performing malicious acts. However, in previous work, tests on a handmade game could not show strategic differences between the models. We created a novel model that links its parameters to psychological traits. We optimized over parametrized games and created games in which the differences are profound. Our work can help with automatic game generation when we need a game in which some models behave differently, and with identifying situations in which the models do not align.
Generation of Games for Opponent Model Differentiation
[ "David Milec", "Viliam Lisý", "Christopher Kiekintveld" ]
Workshop/MASEC
poster
2311.16781
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=KOZwk7BFc3
@inproceedings{ yang2023language, title={Language Agents as Hackers: Evaluating Cybersecurity Skills with Capture the Flag}, author={John Yang and Akshara Prabhakar and Shunyu Yao and Kexin Pei and Karthik R Narasimhan}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=KOZwk7BFc3} }
Amidst the advent of language models (LMs) and their wide-ranging capabilities, concerns have been raised about their implications with regards to privacy and security. In particular, the emergence of language agents as a promising aid for automating and augmenting digital work poses immediate questions concerning their misuse as malicious cybersecurity actors. With their exceptional compute efficiency and execution speed relative to human counterparts, language agents may be extremely adept at locating vulnerabilities, performing complex social engineering, and hacking real world systems. Understanding and guiding the development of language agents in the cybersecurity space requires a grounded understanding of their capabilities founded on empirical data and demonstrations. To address this need, we introduce InterCode-CTF, a novel task environment and benchmark for evaluating language agents on the Capture the Flag (CTF) task. Built as a facsimile of real world CTF competitions, in the InterCode-CTF environment, a language agent is tasked with finding a flag from a purposely-vulnerable computer program. We manually collect and verify a benchmark of 100 task instances that require a number of cybersecurity skills such as reverse engineering, forensics, and binary exploitation, then evaluate current top-notch LMs on this evaluation set. Our preliminary findings indicate that while language agents possess rudimentary cybersecurity knowledge, they are not able to perform multi-step cybersecurity tasks out-of-the-box.
Language Agents as Hackers: Evaluating Cybersecurity Skills with Capture the Flag
[ "John Yang", "Akshara Prabhakar", "Shunyu Yao", "Kexin Pei", "Karthik R Narasimhan" ]
Workshop/MASEC
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=HPmhaOTseN
@inproceedings{ terekhov2023secondorder, title={Second-order Jailbreaks: Generative Agents Successfully Manipulate Through an Intermediary}, author={Mikhail Terekhov and Romain Graux and Eduardo Neville and Denis Rosset and Gabin Kolly}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=HPmhaOTseN} }
As the capabilities of Large Language Models (LLMs) continue to expand, their application in communication tasks is becoming increasingly prevalent. However, this widespread use brings with it novel risks, including the susceptibility of LLMs to "jailbreaking" techniques. In this paper, we explore the potential for such risks in two- and three-agent communication networks, where one agent is tasked with protecting a password while another attempts to uncover it. Our findings reveal that an attacker, powered by advanced LLMs, can extract the password even through an intermediary that is instructed to prevent this. Our contributions include an experimental setup for evaluating the persuasiveness of LLMs, a demonstration of LLMs' ability to manipulate each other into revealing protected information, and a comprehensive analysis of this manipulative behavior. Our results underscore the need for further investigation into the safety and security of LLMs in communication networks.
Second-order Jailbreaks: Generative Agents Successfully Manipulate Through an Intermediary
[ "Mikhail Terekhov", "Romain Graux", "Eduardo Neville", "Denis Rosset", "Gabin Kolly" ]
Workshop/MASEC
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=FuzJ9abiJb
@inproceedings{ guo2023rave, title={{RAVE}: Enabling safety verification for realistic deep reinforcement learning systems}, author={Wenbo Guo and Taesung Lee and Kevin Eykholt and Jiyong Jiang}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=FuzJ9abiJb} }
Recent advancements in reinforcement learning (RL) have expedited its success across a wide range of decision-making problems. However, a lack of safety guarantees restricts its use in critical tasks. While recent work has proposed several verification techniques to provide such guarantees, they require that the state-transition function be known and the reinforcement learning policy be deterministic. Both of these properties may not hold in real environments, which significantly limits the use of existing verification techniques. In this work, we propose two approximation strategies that address the limitations of prior work, allowing the safety verification of RL policies. We demonstrate that by augmenting state-of-the-art verification techniques with our proposed approximation strategies, we can guarantee the safety of non-deterministic RL policies operating in environments with unknown state-transition functions. We theoretically prove that our technique guarantees the safety of an RL policy at runtime. Our experiments on three representative RL tasks empirically verify the efficacy of our method in providing a safety guarantee to a target agent while maintaining its task execution performance.
RAVE: Enabling safety verification for realistic deep reinforcement learning systems
[ "Wenbo Guo", "Taesung Lee", "Kevin Eykholt", "Jiyong Jiang" ]
Workshop/MASEC
wipp
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=FXZFrOvIoc
@inproceedings{ motwani2023a, title={A Perfect Collusion Benchmark: How can {AI} agents be prevented from colluding with information-theoretic undetectability?}, author={Sumeet Ramesh Motwani and Mikhail Baranchuk and Lewis Hammond and Christian Schroeder de Witt}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=FXZFrOvIoc} }
Secret collusion among advanced AI agents is widely considered a significant risk to AI safety. In this paper, we investigate whether LLM agents can learn to collude undetectably by hiding secret messages in their overt communications. To this end, we implement a variant of Simmons' prisoners' problem using LLM agents and turn it into a stegosystem by leveraging recent advances in perfectly secure steganography. We suggest that our resulting benchmark environment can be used to investigate how easily LLM agents can learn to use perfectly secure steganography tools, and how secret collusion between agents can be countered pre-emptively through paraphrasing attacks on communication channels. Our work yields unprecedented empirical insight into the question of whether advanced AI agents may be able to collude unnoticed.
A Perfect Collusion Benchmark: How can AI agents be prevented from colluding with information-theoretic undetectability?
[ "Sumeet Ramesh Motwani", "Mikhail Baranchuk", "Lewis Hammond", "Christian Schroeder de Witt" ]
Workshop/MASEC
wipp
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=CZcIYfiGlL
@inproceedings{ sun2023cooperative, title={Cooperative {AI} via Decentralized Commitment Devices}, author={Xinyuan Sun and Davide Crapis and Matt Stephenson and Jonathan Passerat-Palmbach}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=CZcIYfiGlL} }
Credible commitment devices have been a popular approach for robust multi-agent coordination. However, existing commitment mechanisms face limitations like privacy, integrity, and susceptibility to mediator or user strategic behavior. It is unclear if the cooperative AI techniques we study are robust to real-world incentives and attack vectors. Fortunately, decentralized commitment devices that utilize cryptography have been deployed in the wild, and numerous studies have shown their ability to coordinate algorithmic agents, especially when agents face rational or sometimes adversarial opponents with significant economic incentives, currently on the order of several million to billions of dollars. In this paper, we illustrate potential security issues in cooperative AI via examples in the decentralization literature and, in particular, Maximal Extractable Value (MEV). We call for expanded research into decentralized commitments to advance cooperative AI capabilities for secure coordination in open environments and empirical testing frameworks to evaluate multi-agent coordination ability given real-world commitment constraints.
Cooperative AI via Decentralized Commitment Devices
[ "Xinyuan Sun", "Davide Crapis", "Matt Stephenson", "Jonathan Passerat-Palmbach" ]
Workshop/MASEC
oral
2311.07815
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=5U8PGlJt2S
@inproceedings{ sun2023robust, title={Robust Q-Learning against State Perturbations: a Belief-Enriched Pessimistic Approach}, author={Xiaolin Sun and Zizhan Zheng}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=5U8PGlJt2S} }
Reinforcement learning (RL) has achieved phenomenal success in various domains. However, its data-driven nature also introduces new vulnerabilities that can be exploited by malicious opponents. Recent work shows that a well-trained RL agent can be easily manipulated by strategically perturbing its state observations at the test stage. Existing solutions either introduce a regularization term to improve the smoothness of the trained policy against perturbations or alternately train the agent's policy and the attacker's policy. However, the former does not provide sufficient protection against strong attacks, while the latter is computationally prohibitive for large environments. In this work, we propose a new robust RL algorithm for deriving a pessimistic policy to safeguard against an agent's uncertainty about true states. This approach is further enhanced with belief state inference and diffusion-based state purification to reduce uncertainty. Empirical results show that our approach obtains superb performance under strong attacks and has a training overhead comparable to that of regularization-based methods.
Robust Q-Learning against State Perturbations: a Belief-Enriched Pessimistic Approach
[ "Xiaolin Sun", "Zizhan Zheng" ]
Workshop/MASEC
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
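A tabular sketch of one pessimistic ingredient described in the entry above: acting to maximize the worst-case Q-value over all true states consistent with a perturbed observation. The chain environment, perturbation radius, and the omission of belief inference and diffusion-based purification are simplifications made for illustration.

```python
# Pessimistic action selection under state-observation perturbations on a toy chain.
import numpy as np

N_STATES, N_ACTIONS, RADIUS = 10, 2, 1          # actions: 0 = left, 1 = right
Q = np.zeros((N_STATES, N_ACTIONS))
rng = np.random.default_rng(0)

def candidate_states(obs):
    """All true states within RADIUS of the observed state."""
    return range(max(0, obs - RADIUS), min(N_STATES, obs + RADIUS + 1))

def pessimistic_action(obs):
    worst_q = [min(Q[s, a] for s in candidate_states(obs)) for a in range(N_ACTIONS)]
    return int(np.argmax(worst_q))

def step(s, a):
    s2 = int(np.clip(s + (1 if a == 1 else -1), 0, N_STATES - 1))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

alpha, gamma = 0.1, 0.95
s = 0
for t in range(5000):
    obs = int(np.clip(s + rng.integers(-RADIUS, RADIUS + 1), 0, N_STATES - 1))
    a = pessimistic_action(obs) if rng.random() > 0.1 else rng.integers(N_ACTIONS)
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # training update uses the true state
    s = s2 if s2 != N_STATES - 1 else 0

print(np.argmax(Q, axis=1))                      # learned greedy actions per state
```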
[]
null
https://openreview.net/forum?id=5HuBX8LvuT
@inproceedings{ mukobi2023assessing, title={Assessing Risks of Using Autonomous Language Models in Military and Diplomatic Planning}, author={Gabriel Mukobi and Ann-Katrin Reuel and Juan-Pablo Rivera and Chandler Smith}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=5HuBX8LvuT} }
The potential integration of autonomous agents in high-stakes military and foreign-policy decision-making has gained prominence, especially with the emergence of advanced generative AI models like GPT-4. This paper aims to scrutinize the behavior of multiple autonomous agents in simulated military and diplomacy scenarios, specifically focusing on their potential to escalate conflicts. Drawing on established international relations frameworks, we assessed the escalation potential of decisions made by these agents in different scenarios. Contrary to prior qualitative studies, our research provides both qualitative and quantitative insights. We find that there are significant differences in the models' predilections to escalate, with Claude 2 being the least aggressive and GPT-4-Base the most aggressive of the models evaluated. Our findings indicate that, even in seemingly neutral contexts, language-model-based autonomous agents occasionally opt for aggressive or provocative actions. This tendency intensifies in scenarios with predefined trigger events. Importantly, the patterns behind such escalatory behavior remain largely unpredictable. Furthermore, a qualitative analysis of the models' verbalized reasoning, particularly in the GPT-4-Base model, reveals concerning justifications. Given the high stakes involved in military and foreign-policy contexts, the deployment of such autonomous agents demands further examination and cautious consideration.
Assessing Risks of Using Autonomous Language Models in Military and Diplomatic Planning
[ "Gabriel Mukobi", "Ann-Katrin Reuel", "Juan-Pablo Rivera", "Chandler Smith" ]
Workshop/MASEC
wipp
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=4RFv40DWkp
@inproceedings{ harris2023stackelberg, title={Stackelberg Games with Side Information}, author={Keegan Harris and Steven Wu and Maria Florina Balcan}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=4RFv40DWkp} }
We study an online learning setting in which a leader interacts with a sequence of followers over the course of $T$ rounds. At each round, the leader commits to a mixed strategy over actions, after which the follower best-responds. Such settings are referred to in the literature as Stackelberg games. Stackelberg games have received much interest from the community, in part due to their applicability to real-world security settings such as wildlife preservation and airport security. However, despite this recent interest, current models of Stackelberg games fail to take into consideration the fact that the players' optimal strategies often depend on external factors such as weather patterns, airport traffic, etc. We address this gap by allowing for player payoffs to depend on an external context, in addition to the actions taken by each player. We formalize this setting as a repeated Stackelberg game with side information and show that under this setting, it is impossible to achieve sublinear regret if both the sequence of contexts and the sequence of followers are chosen adversarially. Motivated by this impossibility result, we consider two natural relaxations: (1) stochastically chosen contexts with adversarially chosen followers and (2) stochastically chosen followers with adversarially chosen contexts. In each of these settings, we provide algorithms which obtain $\tilde{\mathcal{O}}(\sqrt{T})$ regret.
Stackelberg Games with Side Information
[ "Keegan Harris", "Steven Wu", "Maria Florina Balcan" ]
Workshop/MASEC
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
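A sketch of a single round of the repeated Stackelberg game with side information described above: payoffs depend on an observed context, the leader commits to a mixed strategy, and the follower best-responds to the commitment. The linear payoff parameterization and the uniform placeholder commitment are assumptions; the paper's no-regret algorithms are not shown.

```python
# One round of a contextual Stackelberg game: commit, observe best response, collect utility.
import numpy as np

rng = np.random.default_rng(0)
LEADER_ACTIONS, FOLLOWER_ACTIONS, D_CONTEXT = 3, 4, 5
W_leader = rng.standard_normal((LEADER_ACTIONS, FOLLOWER_ACTIONS, D_CONTEXT))
W_follower = rng.standard_normal((LEADER_ACTIONS, FOLLOWER_ACTIONS, D_CONTEXT))

def play_round(context, leader_mixed):
    """leader_mixed: probability vector over leader actions."""
    U_l = W_leader @ context                     # (leader, follower) payoff matrices
    U_f = W_follower @ context
    # Follower observes the commitment and best-responds in expectation.
    follower_action = int(np.argmax(leader_mixed @ U_f))
    leader_utility = float(leader_mixed @ U_l[:, follower_action])
    return follower_action, leader_utility

context = rng.standard_normal(D_CONTEXT)
commit = np.ones(LEADER_ACTIONS) / LEADER_ACTIONS    # uniform commitment as a placeholder
print(play_round(context, commit))
```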
null
https://openreview.net/forum?id=4831vtb6Bp
@inproceedings{ ganzfried2023safe, title={Safe Equilibrium}, author={Sam Ganzfried}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=4831vtb6Bp} }
The standard game-theoretic solution concept, Nash equilibrium, assumes that all players behave rationally. If we follow a Nash equilibrium and opponents are irrational (or follow strategies from a different Nash equilibrium), then we may obtain an extremely low payoff. On the other hand, a maximin strategy assumes that all opposing agents are playing to minimize our payoff (even if it is not in their best interest), and ensures the maximal possible worst-case payoff, but results in exceedingly conservative play. We propose a new solution concept called safe equilibrium that models opponents as behaving rationally with a specified probability and behaving potentially arbitrarily with the remaining probability. We prove that a safe equilibrium exists in all strategic-form games (for all possible values of the rationality parameters), and prove that its computation is PPAD-hard.
Safe Equilibrium
[ "Sam Ganzfried" ]
Workshop/MASEC
poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
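An illustrative computation of the "safe value" of a candidate strategy under the solution concept described above: the opponent best-responds with probability p and plays adversarially with probability 1 - p. The 2x2 game and the brute-force grid search stand in for the paper's equilibrium computation, which the abstract notes is PPAD-hard in general.

```python
# Safe value of a mixed strategy: p * payoff vs. a rational opponent
# + (1 - p) * worst-case payoff. Game matrices are toy examples.
import numpy as np

A = np.array([[3.0, 0.0],        # our payoff matrix (rows: our actions,
              [1.0, 2.0]])       #  columns: opponent actions)
B = np.array([[2.0, 1.0],        # opponent's payoff matrix
              [0.0, 3.0]])
p = 0.8                          # probability the opponent is rational

def safe_value(x):
    """x: our mixed strategy over rows."""
    opp_payoffs = x @ B                       # opponent's expected payoff per column
    rational_col = int(np.argmax(opp_payoffs))
    rational_part = float(x @ A[:, rational_col])
    worst_part = float((x @ A).min())         # arbitrary opponent minimizes our payoff
    return p * rational_part + (1 - p) * worst_part

best = max(((safe_value(np.array([q, 1 - q])), q)
            for q in np.linspace(0, 1, 101)), key=lambda t: t[0])
print(f"best safe value {best[0]:.3f} at P(row 0) = {best[1]:.2f}")
```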
null
https://openreview.net/forum?id=3b8hfpqtlM
@inproceedings{ souly2023leading, title={Leading the Pack: N-player Opponent Shaping}, author={Alexandra Souly and Timon Willi and Akbir Khan and Robert Kirk and Chris Lu and Edward Grefenstette and Tim Rockt{\"a}schel}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=3b8hfpqtlM} }
Reinforcement learning solutions have had great success in the 2-player general-sum setting. In this setting, the paradigm of Opponent Shaping (OS), in which agents account for the learning of their co-players, has led to agents which are able to avoid collectively bad outcomes, whilst also maximizing their reward. These methods have so far been limited to 2-player games. However, the real world involves interactions with many more agents, with interactions on both local and global scales. In this paper, we extend Opponent Shaping (OS) methods to environments involving multiple co-players and multiple shaping agents. We evaluate on 4 different environments, varying the number of players from 3 to 5, and demonstrate that model-based OS methods converge to equilibrium with better global welfare than naive learning. However, we find that when playing with a large number of co-players, OS methods' relative performance declines, suggesting that in the limit OS methods may not perform well. Finally, we explore scenarios where more than one OS method is present, noticing that within games requiring a majority of cooperating agents, OS methods converge to outcomes with poor global welfare.
Leading the Pack: N-player Opponent Shaping
[ "Alexandra Souly", "Timon Willi", "Akbir Khan", "Robert Kirk", "Chris Lu", "Edward Grefenstette", "Tim Rocktäschel" ]
Workshop/MASEC
oral
2312.12564
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
null
https://openreview.net/forum?id=1Zb8JjrgSK
@inproceedings{ shi2023harnessing, title={Harnessing the Power of Federated Learning in Federated Contextual Bandits}, author={Chengshuai Shi and Kun Yang and Ruida Zhou and Cong Shen}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=1Zb8JjrgSK} }
Federated contextual bandits (FCB), as a pivotal instance of combining federated learning (FL) and sequential decision-making, have received growing interest in recent years. However, existing FCB designs often adopt FL protocols tailored for specific settings, deviating from the canonical FL framework (e.g., the celebrated FedAvg design). Such disconnections not only prohibit these designs from flexibly leveraging canonical FL algorithmic approaches but also set considerable barriers for FCB to incorporate growing studies on FL attributes such as robustness and privacy. To promote a closer relationship between FL and FCB, we propose a novel FCB design, FedIGW, which can flexibly incorporate both existing and future FL protocols and thus is capable of harnessing the full spectrum of FL advances.
Harnessing the Power of Federated Learning in Federated Contextual Bandits
[ "Chengshuai Shi", "Kun Yang", "Ruida Zhou", "Cong Shen" ]
Workshop/MASEC
poster
2312.16341
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
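A sketch of the inverse-gap weighting (IGW) rule commonly paired with regression oracles in contextual bandits, which a FedIGW-style design can plug a federated regression step into. The reward estimates and the exploration parameter below are placeholders; the federated protocol itself is not shown.

```python
# Inverse-gap weighting: sample actions with probability inversely related to
# their estimated reward gap from the greedy action.
import numpy as np

def igw_distribution(reward_estimates, gamma):
    f = np.asarray(reward_estimates, dtype=float)
    k = len(f)
    best = int(np.argmax(f))
    probs = 1.0 / (k + gamma * (f[best] - f))    # non-greedy actions
    probs[best] = 0.0
    probs[best] = 1.0 - probs.sum()              # remaining mass on the greedy action
    return probs

rng = np.random.default_rng(0)
estimates = [0.2, 0.5, 0.45, 0.1]                # per-action reward predictions (toy)
probs = igw_distribution(estimates, gamma=20.0)
action = rng.choice(len(estimates), p=probs)
print(probs, action)
```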
null
https://openreview.net/forum?id=15wSm5uiSE
@inproceedings{ chopra2023decentralized, title={Decentralized agent-based modeling}, author={Ayush Chopra and Arnau Quera-Bofarull and Nurullah Giray Kuru and Ramesh Raskar}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=15wSm5uiSE} }
The utility of agent-based models for practical decision making depends upon their ability to recreate populations with great detail and integrate real-world data streams. However, incorporating this data can be challenging due to privacy concerns. We alleviate this issue by introducing a paradigm for secure agent-based modeling. In particular, we leverage secure multi-party computation to enable decentralized agent-based simulation, calibration, and analysis. We believe this is a critical step towards making agent-based models scalable to real-world applications.
Decentralized agent-based modeling
[ "Ayush Chopra", "Arnau Quera-Bofarull", "Nurullah Giray Kuru", "Ramesh Raskar" ]
Workshop/MASEC
wipp
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
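A toy illustration of the secure multi-party computation building block referenced in the entry above: agents additively secret-share private values so that only their aggregate is revealed. The three-party setup and modulus are assumptions made for illustration.

```python
# Additive secret sharing: each agent splits a private value into shares;
# parties sum shares locally, and only the aggregate is reconstructed.
import random

PRIME = 2**61 - 1                                  # arithmetic is done modulo a prime

def share(secret, n_parties):
    """Split `secret` into n additive shares that sum to it modulo PRIME."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Each agent secret-shares its private state (e.g., an infection count).
private_values = [3, 7, 2]
all_shares = [share(v, n_parties=3) for v in private_values]

# Each party locally adds the shares it holds; only the aggregate is revealed.
partial_sums = [sum(agent_shares[p] for agent_shares in all_shares) % PRIME
                for p in range(3)]
print("aggregate:", reconstruct(partial_sums))     # 12, without revealing any single value
```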
null
https://openreview.net/forum?id=0O5vbRAWol
@inproceedings{ ankile2023i, title={I See You! Robust Measurement of Adversarial Behavior}, author={Lars Ankile and Matheus X.V. Ferreira and David Parkes}, booktitle={Multi-Agent Security Workshop @ NeurIPS'23}, year={2023}, url={https://openreview.net/forum?id=0O5vbRAWol} }
We introduce the study of non-manipulable measures of manipulative behavior in multi-agent systems. We do this through a case study of decentralized finance (DeFi) and blockchain systems, which are salient as real-world, rapidly emerging multi-agent systems with financial incentives for malicious behavior, with participation by algorithmic and AI systems, and with a need for new methods to measure levels of manipulative behavior. We introduce a new surveillance metric for measuring malicious behavior and demonstrate its effectiveness in a natural experiment on the Uniswap DeFi ecosystem.
I See You! Robust Measurement of Adversarial Behavior
[ "Lars Ankile", "Matheus X.V. Ferreira", "David Parkes" ]
Workshop/MASEC
oral
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]