Socially-Aware Robot Navigation via Deep Reinforcement Learning: A Critical Review and Future Directions
DOI: https://doi.org/10.71465/fias745
Keywords: Autonomous Mobile Robots, Deep Reinforcement Learning, Human-Robot Interaction, Social Compliance, Socially-Aware Navigation
Abstract
The integration of Autonomous Mobile Robots (AMRs) into human-centric environments presents a formidable scientific challenge: achieving navigation that is not only safe and efficient but also socially compliant. Traditional path-planning algorithms, designed for structured and static worlds, fundamentally fail to address the complex, interactive, and socially governed dynamics of human crowds. Deep Reinforcement Learning (DRL) has emerged as a powerful paradigm for learning adaptive navigation policies through continuous interaction. This paper provides a comprehensive and critical review of the state of the art in DRL for socially-aware robot navigation. We begin by framing the core problem as a governing trilemma among safety, efficiency, and social compliance. We then critically analyze the inherent limitations of classical navigation paradigms in social contexts, focusing in particular on the "Freezing Robot Problem." The core of this review is a deep dive into the thematic debates shaping modern DRL-based approaches, including architectural philosophies, reward engineering across physics-based and psychology-based domains, and sim-to-real transfer strategies. Finally, we outline a roadmap for next-generation social navigation, emphasizing Large Language Model (LLM) integration and explainable AI.
License
Copyright (c) 2026 Frontiers in Interdisciplinary Applied Science

This work is licensed under a Creative Commons Attribution 4.0 International License.