
Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning

Tennant, Elizaveta; Hailes, Stephen; Musolesi, Mirco; (2023) Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning. In: Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence 2023. (pp. 317-325). International Joint Conferences on Artificial Intelligence Organization. (Green open access)

Modeling_Moral_Choices_in_Social_Dilemmas_with_Multi_Agent_Reinforcement_Learning.pdf - Accepted Version (613kB)

Abstract

Practical uses of Artificial Intelligence (AI) in the real world have demonstrated the importance of embedding moral choices into intelligent agents. They have also highlighted that defining top-down ethical constraints on AI according to any one type of morality is extremely challenging and can pose risks. A bottom-up learning approach may be more appropriate for studying and developing ethical behavior in AI agents. In particular, we believe that an interesting and insightful starting point is the analysis of emergent behavior of Reinforcement Learning (RL) agents that act according to a predefined set of moral rewards in social dilemmas. In this work, we present a systematic analysis of the choices made by intrinsically-motivated RL agents whose rewards are based on moral theories. We aim to design reward structures that are simplified yet representative of a set of key ethical systems. Therefore, we first define moral reward functions that distinguish between consequence- and norm-based agents, between morality based on societal norms or internal virtues, and between single- and mixed-virtue (e.g., multi-objective) methodologies. Then, we evaluate our approach by modeling repeated dyadic interactions between learning moral agents in three iterated social dilemma games (Prisoner's Dilemma, Volunteer's Dilemma and Stag Hunt). We analyze the impact of different types of morality on the emergence of cooperation, defection or exploitation, and the corresponding social outcomes. Finally, we discuss the implications of these findings for the development of moral agents in artificial and mixed human-AI societies.
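To make the reward structures described in the abstract concrete, the following Python sketch shows how a consequence-based (utilitarian) and a norm-based (deontological) intrinsic reward might be defined on top of a Prisoner's Dilemma payoff matrix. This is an illustration only, not the authors' implementation: the payoff values, the penalty constant, and all function names are assumptions chosen to mirror the distinction drawn in the abstract.

# Illustrative sketch only -- not the authors' code. Payoff values,
# the penalty constant, and names are assumptions for illustration.

C, D = 0, 1  # actions: cooperate, defect

# One-shot Prisoner's Dilemma payoffs:
# PAYOFF[(mine, theirs)] -> (my payoff, their payoff)
PAYOFF = {
    (C, C): (3, 3),
    (C, D): (0, 4),
    (D, C): (4, 0),
    (D, D): (1, 1),
}

def extrinsic_reward(my_action, their_action):
    """Raw game payoff for the focal agent."""
    return PAYOFF[(my_action, their_action)][0]

def utilitarian_reward(my_action, their_action):
    """Consequence-based: value the collective outcome (sum of both payoffs)."""
    mine, theirs = PAYOFF[(my_action, their_action)]
    return mine + theirs

def deontological_reward(my_action, their_prev_action, penalty=5.0):
    """Norm-based: a fixed penalty for violating the norm
    'do not defect against a partner who cooperated last round'."""
    return -penalty if (my_action == D and their_prev_action == C) else 0.0

In an iterated game, each agent would learn (for instance, with tabular Q-learning) from its own intrinsic reward stream rather than the raw payoff, so a utilitarian learner is drawn toward mutually beneficial outcomes while a deontological learner is constrained only when its partner cooperated in the previous round.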

Type: Proceedings paper
Title: Modeling Moral Choices in Social Dilemmas with Multi-Agent Reinforcement Learning
Event: Thirty-Second International Joint Conference on Artificial Intelligence (IJCAI-23)
Dates: 19 Aug 2023 - 25 Aug 2023
ISBN-13: 978-1-956792-03-4
Open access status: An open access version is available from UCL Discovery
DOI: 10.24963/ijcai.2023/36
Publisher version: https://doi.org/10.24963/ijcai.2023/36
Language: English
Additional information: This version is the author accepted manuscript. For information on re-use, please refer to the publisher’s terms and conditions.
Keywords: Agent-based and Multi-agent Systems: MAS: Multi-agent learning; AI Ethics, Trust, Fairness: ETF: Moral decision making
UCL classification: UCL
UCL > Provost and Vice Provost Offices > UCL BEAMS
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science
UCL > Provost and Vice Provost Offices > UCL BEAMS > Faculty of Engineering Science > Dept of Computer Science
URI: https://discovery-pp.ucl.ac.uk/id/eprint/10175468
Downloads since deposit: 1,536
