Game-Theoretic Reinforcement Learning for Stable Equilibrium in Competitive-Cooperative Decision Systems
DOI: https://doi.org/10.71465/fias777

Keywords: Game theory, Nash equilibrium, Multi-agent reinforcement learning, Actor-critic, Stability

Abstract
Collaborative systems often involve both cooperative and competitive interactions, making equilibrium stability a key challenge. This study develops a game-theoretic reinforcement learning framework that integrates Nash equilibrium constraints into policy optimization. A multi-agent actor-critic model is augmented with equilibrium regularization to guide agents toward stable joint strategies. Evaluation is conducted on 8,600 mixed-motive tasks, including bidding, resource sharing, and competitive planning scenarios. The proposed method improves equilibrium convergence rate by 31.5% and reduces oscillatory behaviour in policies by 27.9% compared to standard multi-agent RL approaches. Additionally, social welfare metrics increase by 18.6%, indicating better global outcomes. The results highlight the importance of incorporating game-theoretic principles into collaborative decision learning.
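The equilibrium regularization described above can be illustrated with a small sketch. The abstract does not specify the regularizer's exact form, so the following is an assumption: it uses *exploitability* (each agent's incentive to deviate to a best response) as the equilibrium penalty for a two-player matrix game, since exploitability is zero exactly at a Nash equilibrium and can be added to an actor-critic loss to push joint policies toward stable strategies. The function name and payoff matrices are illustrative, not taken from the paper.

```python
import numpy as np

def exploitability(A, B, x, y):
    """Total deviation incentive for a two-player matrix game.

    A, B : payoff matrices for players 1 and 2 (rows = player 1's actions).
    x, y : mixed strategies (probability vectors) for players 1 and 2.
    Returns >= 0, with equality exactly at a Nash equilibrium.
    """
    v1 = x @ A @ y                  # player 1's current expected payoff
    v2 = x @ B @ y                  # player 2's current expected payoff
    br1 = np.max(A @ y)             # player 1's best-response value vs y
    br2 = np.max(x @ B)             # player 2's best-response value vs x
    return (br1 - v1) + (br2 - v2)  # candidate equilibrium regularizer

# Matching pennies: uniform mixing is the unique Nash equilibrium.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A                               # zero-sum payoffs for player 2
uniform = np.array([0.5, 0.5])
biased = np.array([0.9, 0.1])

print(exploitability(A, B, uniform, uniform))  # zero at equilibrium
print(exploitability(A, B, biased, uniform))   # positive off equilibrium
```

In a multi-agent actor-critic setup, a term like this (weighted by a coefficient) would be added to each agent's policy loss, so that gradient updates trade off individual return against joint stability.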
License
Copyright (c) 2026 Michael J. Smith, Daniel Nguyen, Sarah Thompson (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.