Adaptive Role Allocation via Hierarchical Reinforcement Learning in Collaborative Agent Systems
DOI: https://doi.org/10.71465/fra770

Keywords: Hierarchical reinforcement learning, Role allocation, Multi-agent systems, PPO, Task coordination

Abstract
Collaborative agent systems often suffer from inefficient coordination due to static role assignment in dynamic environments. This study investigates adaptive role allocation using a hierarchical reinforcement learning (HRL) framework, where a high-level controller assigns roles and low-level policies execute task-specific actions. The approach is trained using proximal policy optimization (PPO) with a role-transition regularization term to stabilize switching behavior. Experiments are conducted on a benchmark of 9,200 multi-step decision tasks, including scheduling and distributed planning scenarios. Results show that the proposed method improves task success rate from 73.6% to 86.8% and reduces redundant interactions by 24.5% compared to fixed-role baselines. In addition, convergence speed is accelerated by 19%, indicating more efficient policy learning. The findings suggest that hierarchical role modeling is effective for improving coordination efficiency in complex decision workflows.
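The abstract describes a PPO objective augmented with a role-transition regularization term that discourages the high-level controller from switching roles too frequently. The paper does not give the exact form of this term, so the sketch below is only an illustration under simple assumptions: the penalty is taken to be proportional to the fraction of decision steps at which the assigned role changes, and the function names (`role_switch_penalty`, `ppo_clip_objective`, `regularized_loss`) and the coefficient `beta` are hypothetical.

```python
import numpy as np

def role_switch_penalty(roles, beta=0.1):
    """Assumed role-transition regularizer: beta times the fraction of
    consecutive decision steps at which the high-level controller
    changed the assigned role."""
    roles = np.asarray(roles)
    switches = np.count_nonzero(roles[1:] != roles[:-1])
    return beta * switches / max(len(roles) - 1, 1)

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Standard PPO clipped surrogate objective (to be maximized).
    `ratio` is pi_new(a|s) / pi_old(a|s); `advantage` is the estimated
    advantage for each step."""
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage)

def regularized_loss(ratio, advantage, roles, beta=0.1):
    """Loss to minimize: negative mean clipped surrogate plus the
    role-switch penalty, stabilizing role assignment over time."""
    surrogate = ppo_clip_objective(ratio, advantage).mean()
    return -surrogate + role_switch_penalty(roles, beta)
```

Under this formulation, a trajectory whose role sequence is stable (e.g. `[0, 0, 0, 1, 1, 1]`, one switch) incurs a smaller penalty than an oscillating one (e.g. `[0, 1, 0, 1, 0, 1]`, five switches), nudging the high-level policy toward persistent role assignments.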
License
Copyright (c) 2026 Michael Johnson, Emily Carter, David Thompson (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.