Cooperative Multi-Agent Systems Cognitive Modeling
-
Abstract
The project aims to develop models for analyzing the motivations and behaviors of artificial intelligence (AI) agents and to understand their diverse relationships as they cooperate and adapt to the needs and behaviors of humans and other AI agents. The project's novelty is its focus on modeling cooperative multi-agent systems (MAS) from a cognitive science perspective and investigating how agents reach consensus and integrate human needs through a shared needs-oriented trust network during interaction. The project's impacts are significant because the proposed cooperative MAS models will help artificial social systems (such as multi-robot systems and self-driving cars) integrate into human society and work harmoniously with people, supporting sustainable human development. Moreover, the success of this project could enable cognitive modeling for cooperability-aware MAS in advanced AI architectures and software, leading to new technologies and applications in the computing, communications, electronics, aerospace, transportation, agriculture, and defense industries. It has the potential to revolutionize AI and robotics technology.
NSF Support Link for More Details: FRR: Cooperative Multi-Agent Systems Cognitive Modeling
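To make the consensus idea concrete, the sketch below shows one simple way agents could pool their estimates of a shared need through trust-weighted averaging (a DeGroot-style update). This is only an illustrative toy, not the project's actual trust-network model; the function name and the fixed iteration count are assumptions for the example.

```python
import numpy as np

def trust_consensus(opinions, trust, steps=50):
    """Iterate x <- T @ x with row-normalized trust weights (DeGroot-style).

    opinions: (n,) initial per-agent estimates of a shared need level
    trust:    (n, n) nonnegative trust each row-agent places in every agent
    """
    T = np.asarray(trust, dtype=float)
    T = T / T.sum(axis=1, keepdims=True)  # each agent's trust sums to 1
    x = np.asarray(opinions, dtype=float)
    for _ in range(steps):
        x = T @ x  # replace each opinion with a trust-weighted average
    return x
```

With uniform mutual trust, two agents holding estimates 0 and 1 converge to the shared value 0.5; skewed trust weights would pull the consensus toward the more trusted agent.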
Bayesian Strategy Network based Reinforcement Learning
-
A Reinforcement Learning Model based on the Bayesian Strategy Network for Robot Locomotion & Planning
The proposed research aims to develop a new reinforcement learning (RL) model based on the Bayesian Strategy Network (BSN) for robot locomotion and planning. By combining AI and cognitive robotics technology, the model can support robots in developing diverse strategies and skills, enabling them to adapt to complex environments and perform various tasks efficiently.
Objective: A cognitive robotic model for robot locomotion and planning. This research will develop a novel cognitive robotic model based on the BSN and a Deep RL architecture to improve convergence speed and sample efficiency. Furthermore, we will implement our model on a real robot, such as the Unitree Go2 robot dog, to achieve dynamic and complex tasks.
Reference Paper: Bayesian Strategy Networks Based Soft Actor-Critic Learning
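The core BSN idea of factoring one policy into chained sub-strategies can be sketched as follows. This is a minimal toy, assuming linear strategy heads and greedy action selection; the class names and the choice to feed each strategy's output into the next are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class StrategyNode:
    """One sub-strategy: a tiny linear head producing action probabilities."""
    def __init__(self, in_dim, out_dim, rng):
        self.W = rng.normal(scale=0.1, size=(out_dim, in_dim))

    def forward(self, x):
        return softmax(self.W @ x)

class BayesianStrategyChain:
    """Factor pi(a|s) = prod_i pi_i(a_i | s, a_{i-1}) as a chain of strategies."""
    def __init__(self, state_dim, action_dims, rng):
        self.nodes = []
        in_dim = state_dim
        for d in action_dims:
            self.nodes.append(StrategyNode(in_dim, d, rng))
            in_dim = state_dim + d  # next strategy also sees this one's choice

    def act(self, state):
        x = state
        actions, joint = [], 1.0
        for node in self.nodes:
            p = node.forward(x)
            a = int(np.argmax(p))      # greedy pick for the demo
            actions.append(a)
            joint *= p[a]              # chain rule over the strategy graph
            onehot = np.zeros(len(p))
            onehot[a] = 1.0
            x = np.concatenate([state, onehot])
        return actions, joint
```

A real BSN-SAC agent would replace the linear heads with trained neural networks and sample stochastically, but the chain-rule factorization over sub-strategies is the same.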
Innate-Values-driven Reinforcement Learning (IVRL)
-
Abstract
Innate values describe agents' intrinsic motivations, which reflect their inherent interests and preferences for pursuing goals and drive them to develop diverse skills that satisfy their various needs. Traditional reinforcement learning (RL) learns from interaction based on reward feedback from the environment. In real scenarios, however, rewards are generated by agents' innate value systems, which vary vastly across individuals based on their needs and requirements. In other words, considering the AI agent as a self-organizing system, developing its awareness by balancing internal and external utilities according to its needs in different tasks is a crucial problem for individuals learning to support others and integrate into the community safely and harmoniously in the long term. To address this gap, we propose a new RL model termed innate-values-driven RL (IVRL), based on combined motivation models and expected utility theory, to mimic an agent's complex behaviors as they evolve through decision-making and learning.
Objective: We want to improve IVRL further and develop a more comprehensive system that personalizes individual characteristics to achieve various tasks, tested in several standard MAS testbeds such as StarCraft II, OpenAI Gym, and Unity. Especially in multi-object and multi-agent interaction scenarios, building AI agents' awareness to balance group utilities against system costs and satisfy group members' needs during cooperation is a crucial problem for individuals learning to support their community and integrate into human society in the long term. Furthermore, implementing IVRL in real-world systems, such as human-robot interaction, multi-robot systems, and self-driving cars, would be challenging and exciting.
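The balance between external feedback and innate values described above can be sketched as a reward-shaping rule: blend the environment reward with a needs-weighted intrinsic utility. This is a minimal sketch under expected-utility assumptions; the function name, the linear blend, and the `beta` trade-off parameter are illustrative choices, not the IVRL paper's exact formulation.

```python
import numpy as np

def ivrl_reward(extrinsic, need_utilities, need_weights, beta=0.5):
    """Blend environment reward with a needs-weighted intrinsic utility.

    extrinsic:      scalar reward from the environment
    need_utilities: per-need satisfaction utilities u_i (e.g., in [0, 1])
    need_weights:   agent-specific preference weights, normalized internally
    beta:           trade-off between external feedback and innate values
    """
    w = np.asarray(need_weights, dtype=float)
    w = w / w.sum()  # normalize so intrinsic utility is an expected value
    intrinsic = float(w @ np.asarray(need_utilities, dtype=float))
    return beta * extrinsic + (1.0 - beta) * intrinsic
```

Because the weights are agent-specific, two agents receiving the same environment feedback can learn different behaviors, which is exactly the individual-variation effect the abstract describes.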
Edge Computing based Human-Robot Cognitive Fusion
-
Abstract
This research introduces a novel edge cognitive computing architecture that integrates human experts and edge intelligent robots collaborating within the same framework, forming the next generation of medical and smart healthcare systems. It can achieve seamless remote diagnosis, round-the-clock symptom monitoring, emergency warning, therapy alteration, and advanced assistance.
Objective: We want to implement our methods on real robots, such as the Unitree Go2, and test them in the AWS Wavelength framework. Furthermore, we want to apply the architecture in a real hospital medical system to test its robustness and effectiveness.
Reference Paper: Edge Computing based Human-Robot Cognitive Fusion: A Medical Case Study in the Autism Spectrum Disorder Therapy
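The monitoring-and-warning behavior described above can be illustrated with a toy edge-side triage rule: score how far a patient's vitals drift from their baseline, then decide whether the case stays local, is queued for a remote clinician, or triggers an emergency warning. The scoring function and the two thresholds are hypothetical examples, not the paper's clinical decision logic.

```python
from statistics import fmean

def anomaly_score(vitals, baseline):
    """Mean relative deviation of current vitals from the patient's baseline."""
    return fmean(abs(v - b) / max(abs(b), 1e-6)
                 for v, b in zip(vitals, baseline))

def triage(vitals, baseline, warn=0.5, review=0.2):
    """Edge-side routing: 'emergency' pages a human expert immediately,
    'remote_review' queues the case for a remote clinician, and
    'local' is handled by the edge robot and synced in batch."""
    s = anomaly_score(vitals, baseline)
    if s >= warn:
        return "emergency"
    if s >= review:
        return "remote_review"
    return "local"
```

Keeping this decision on the edge node is what allows round-the-clock monitoring with low latency, while only the escalated cases consume remote clinician attention.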
Bridging the Sim-RL Agent to a Real-Time Smart Manufacturing Robotic Control
-
Abstract
With the rapid development of deep reinforcement learning (RL) technology, it has demonstrated excellent potential and is becoming one of the most promising solutions in robotics. However, in the smart manufacturing domain, little research has addressed dynamic adaptive control mechanisms for optimizing complex processes. This research provides a framework for enhanced adaptive real-time robotic control in smart manufacturing. The system architecture combines Unity's simulation environment with ROS2 for seamless digital twin synchronization, leveraging transfer learning to efficiently adapt trained models across tasks.
Objective: We want to build a more comprehensive and robust architecture, test it on a real robot arm, and implement it in real manufacturing scenarios.
Reference Paper: Digital Twin-Enabled Real-Time Control in Robotic Additive Manufacturing via Soft Actor-Critic Reinforcement Learning
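The transfer-learning step mentioned above can be sketched in miniature: copy the simulator-trained shared feature layer into a new policy and fine-tune only the task-specific head. The two-layer network, class names, and freeze flag are illustrative assumptions for the sketch, not the paper's actual SAC architecture.

```python
import numpy as np

class PolicyNet:
    """Tiny two-layer policy: shared feature layer + task-specific head."""
    def __init__(self, obs_dim, act_dim, rng):
        self.feat = rng.normal(scale=0.1, size=(16, obs_dim))
        self.head = rng.normal(scale=0.1, size=(act_dim, 16))
        self.frozen = False

    def forward(self, obs):
        h = np.tanh(self.feat @ obs)   # shared features learned in simulation
        return self.head @ h           # task-specific action preferences

def transfer(source, target, freeze_features=True):
    """Reuse the simulator-trained feature layer for a new task's policy."""
    target.feat = source.feat.copy()   # carry sim knowledge into the new task
    target.frozen = freeze_features    # fine-tune only the head if frozen
    return target
```

In the digital-twin setting, the source network would be trained entirely in the Unity simulation, and only the lightweight head adaptation would run against the physical process, which is what makes real-time adaptation feasible.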
Sponsors


