Adaptive Role Allocation via Hierarchical Reinforcement Learning in Collaborative Agent Systems

Authors

  • Michael Johnson, Department of Computer Science, Stanford University, Stanford, CA 94305, USA
  • Emily Carter, Department of Computer Science, Stanford University, Stanford, CA 94305, USA
  • David Thompson, Department of Computer Science, Stanford University, Stanford, CA 94305, USA

DOI:

https://doi.org/10.71465/fra770

Keywords:

Hierarchical reinforcement learning, Role allocation, Multi-agent systems, PPO, Task coordination

Abstract

Collaborative agent systems often suffer from inefficient coordination because roles are assigned statically in dynamic environments. This study investigates adaptive role allocation using a hierarchical reinforcement learning (HRL) framework, in which a high-level controller assigns roles and low-level policies execute task-specific actions. The approach is trained with proximal policy optimization (PPO) augmented by a role-transition regularization term that stabilizes switching behavior. Experiments are conducted on a benchmark of 9,200 multi-step decision tasks, including scheduling and distributed planning scenarios. Results show that the proposed method improves the task success rate from 73.6% to 86.8% and reduces redundant interactions by 24.5% compared with fixed-role baselines. In addition, convergence is accelerated by 19%, indicating more efficient policy learning. These findings suggest that hierarchical role modeling is effective for improving coordination efficiency in complex decision workflows.
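The training objective described in the abstract can be illustrated with a minimal sketch: a standard PPO clipped surrogate loss plus a penalty on role switches chosen by the high-level controller. This is an illustrative reconstruction, not the authors' implementation; the function and parameter names (`ppo_role_loss`, `switch_coef`) are hypothetical, and the regularizer is assumed to be a simple penalty on the frequency of role transitions.

```python
import numpy as np

def ppo_role_loss(ratios, advantages, roles, clip_eps=0.2, switch_coef=0.1):
    """PPO clipped surrogate loss with a role-transition regularizer (sketch).

    ratios:      pi_new(a|s) / pi_old(a|s) for each timestep, shape [T]
    advantages:  estimated advantages, shape [T]
    roles:       integer role id assigned by the high-level controller per step
    clip_eps:    PPO clipping range
    switch_coef: weight of the role-switch penalty (hypothetical name)
    """
    ratios = np.asarray(ratios, dtype=float)
    advantages = np.asarray(advantages, dtype=float)
    roles = np.asarray(roles)

    # Standard PPO clipped objective (maximized, so the loss is its negative).
    unclipped = ratios * advantages
    clipped = np.clip(ratios, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    surrogate = np.minimum(unclipped, clipped).mean()

    # Role-transition regularizer: penalize the fraction of timesteps at which
    # the assigned role changes, discouraging erratic switching.
    switches = float((roles[1:] != roles[:-1]).mean()) if len(roles) > 1 else 0.0

    return -surrogate + switch_coef * switches
```

In this form, a trajectory with no role changes incurs no penalty, while frequent switching adds up to `switch_coef` to the loss, nudging the high-level controller toward stable assignments.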

Published

2026-04-01