Robust Memory Update Mechanisms Against Poisoning Attacks in Multi-Agent Reinforcement Learning
DOI:
https://doi.org/10.71465/fair756

Keywords:
Multi-agent reinforcement learning, memory poisoning, adversarial defense, replay buffer security, collaborative learning robustness

Abstract
In multi-agent reinforcement learning (MARL), shared replay buffers and inter-agent memory exchange create an attack surface for memory poisoning. This work proposes a confidence-weighted memory validation mechanism integrated into the experience-sharing pipeline. Each memory entry is assigned a trust score derived from temporal consistency and reward deviation metrics, and a Bayesian filtering process excludes anomalous transitions before they propagate to peer agents. Experiments were conducted on cooperative navigation and resource allocation benchmarks with 12–24 agents. Under a 20% poisoning injection rate, baseline MARL performance dropped by 37.8%, whereas the proposed defense limited degradation to 11.4%, and convergence time improved by 23.6% compared to anomaly-blind training. The method effectively mitigates adversarial memory contamination in collaborative reinforcement learning environments.
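The abstract does not include an implementation, so the following is a minimal sketch of how a validation step of this kind could be wired into an experience-sharing pipeline. All names (trust_score, posterior_clean, filter_for_sharing) and all numeric parameters (likelihoods, weights, thresholds) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def trust_score(reward, next_state, predicted_next_state,
                recent_rewards, reward_weight=0.5, eps=1e-8):
    """Trust score in [0, 1] combining temporal consistency and reward deviation.

    Hypothetical formulation: the paper names the two metrics but not
    how they are combined.
    """
    # Temporal consistency: agreement between the stored next state and a
    # one-step prediction (e.g., from a learned dynamics model the agent keeps).
    consistency = 1.0 / (1.0 + np.linalg.norm(next_state - predicted_next_state))
    # Reward deviation: z-score of the reward against recent history, mapped
    # through exp(-z) so large deviations shrink the score toward zero.
    mu = np.mean(recent_rewards)
    sigma = np.std(recent_rewards) + eps
    deviation = np.exp(-abs(reward - mu) / sigma)
    return reward_weight * deviation + (1.0 - reward_weight) * consistency


def posterior_clean(score, prior_clean=0.9, threshold=0.5,
                    p_high_given_clean=0.85, p_high_given_poisoned=0.25):
    """One Bayesian update of P(clean | score) from a binarized trust score.

    The likelihoods and prior here are placeholders, not reported values.
    """
    high = score >= threshold
    l_clean = p_high_given_clean if high else 1.0 - p_high_given_clean
    l_poison = p_high_given_poisoned if high else 1.0 - p_high_given_poisoned
    num = l_clean * prior_clean
    return num / (num + l_poison * (1.0 - prior_clean))


def filter_for_sharing(transitions, predictions, recent_rewards,
                       accept_posterior=0.8):
    """Keep only transitions whose posterior P(clean) clears the bar
    before they are propagated to peer agents."""
    shared = []
    for (s, a, r, s_next), s_pred in zip(transitions, predictions):
        score = trust_score(r, s_next, s_pred, recent_rewards)
        if posterior_clean(score) >= accept_posterior:
            shared.append((s, a, r, s_next))
    return shared


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    recent = rng.normal(1.0, 0.2, size=100)             # typical recent rewards
    clean = (np.zeros(4), 0, 1.05, np.ones(4) * 0.5)    # plausible transition
    poisoned = (np.zeros(4), 0, 9.0, np.ones(4) * 5.0)  # inflated reward/state
    preds = [np.ones(4) * 0.5, np.ones(4) * 0.5]        # model's next-state guesses
    kept = filter_for_sharing([clean, poisoned], preds, recent)
    print(f"shared {len(kept)} of 2 transitions")       # expect 1 (poisoned dropped)
```

Binarizing the trust score keeps the Bayesian update to a single closed-form step; a fuller treatment would model the score likelihoods per agent and update the prior as poisoning evidence accumulates.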
License
Copyright (c) 2026 James Thompson, Oliver Bennett, Daniel Carter (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.