Our research in robotics focuses on developing autonomous systems that can adapt to diverse environments, interact seamlessly with humans, and navigate smoothly in crowded spaces. We leverage reinforcement learning and simulation-based training to enable robots to perform complex tasks safely and efficiently.
Research Areas
Reinforcement Learning
Exploring robot skill learning with reinforcement learning (RL), including model-based RL and safety-aware approaches, to enable fast and reliable skill acquisition in manipulation and navigation tasks.
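To make the model-based RL idea concrete, here is a minimal, self-contained sketch: an agent fits a linear dynamics model from random interaction with a toy 1D point-mass task, then plans actions by random shooting against that learned model. The task, cost weights, and all names are illustrative placeholders, not the group's actual code.

```python
# Hypothetical toy sketch of model-based RL: learn dynamics, then plan with it.
import numpy as np

rng = np.random.default_rng(0)
DT = 0.1  # integration step

def true_step(state, action):
    """Ground-truth dynamics (unknown to the agent): 1D point mass, state = [pos, vel]."""
    pos, vel = state
    vel = vel + DT * action
    pos = pos + DT * vel
    return np.array([pos, vel])

# 1) Collect random transitions and fit a linear dynamics model s' ~ [s, a] @ A.
states, actions, next_states = [], [], []
s = np.zeros(2)
for _ in range(500):
    a = rng.uniform(-1.0, 1.0)
    s_next = true_step(s, a)
    states.append(s); actions.append([a]); next_states.append(s_next)
    s = s_next if abs(s_next[0]) < 5 else np.zeros(2)

X = np.hstack([np.array(states), np.array(actions)])  # inputs  [pos, vel, a]
Y = np.array(next_states)                             # targets [pos', vel']
A, *_ = np.linalg.lstsq(X, Y, rcond=None)             # learned dynamics model

def model_step(state, action):
    return np.hstack([state, action]) @ A

# 2) Plan with random shooting: sample action sequences, roll them out in the
#    learned model, and execute the first action of the lowest-cost sequence.
def plan(state, horizon=10, n_candidates=200):
    seqs = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon))
    costs = np.zeros(n_candidates)
    for i, seq in enumerate(seqs):
        s = state.copy()
        for a in seq:
            s = model_step(s, a)
            costs[i] += s[0] ** 2 + 0.1 * s[1] ** 2  # drive position and velocity to zero
    return seqs[np.argmin(costs)][0]

# 3) Control loop on the true system, replanning at every step.
s = np.array([2.0, 0.0])
for t in range(50):
    a = plan(s)
    s = true_step(s, a)
print("final state:", s)  # should end near the origin
```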
World Models & Vision-Language-Action (VLA) Models
Investigating the integration of world models with vision-language-action policies, enabling robots to interpret visual and linguistic inputs and translate them into robust, generalizable actions across diverse scenarios.
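The sketch below shows, at a very high level, how a vision-language-action policy might be wired to a latent world model: encode the image and the instruction, roll the latent state forward with a learned dynamics module, and decode an action. Every module size and name here is a hypothetical placeholder rather than a description of an actual system.

```python
# Heavily simplified, illustrative VLA + world-model wiring (PyTorch).
import torch
import torch.nn as nn

class TinyVLAWithWorldModel(nn.Module):
    def __init__(self, vocab_size=1000, latent_dim=64, action_dim=7):
        super().__init__()
        # Vision encoder: small CNN over RGB observations.
        self.vision = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, latent_dim),
        )
        # Language encoder: embed instruction tokens and mean-pool.
        self.lang = nn.Embedding(vocab_size, latent_dim)
        # World model: predicts the next latent state from (observation latent, previous action).
        self.dynamics = nn.GRUCell(latent_dim + action_dim, latent_dim)
        # Action head: fuses the latent state with the instruction to produce an action.
        self.action_head = nn.Sequential(
            nn.Linear(2 * latent_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, image, tokens, prev_action, prev_latent=None):
        z_img = self.vision(image)             # (B, latent_dim)
        z_txt = self.lang(tokens).mean(dim=1)  # (B, latent_dim)
        if prev_latent is None:
            prev_latent = torch.zeros_like(z_img)
        # World-model update: fold the previous action into the latent state.
        latent = self.dynamics(torch.cat([z_img, prev_action], dim=-1), prev_latent)
        action = self.action_head(torch.cat([latent, z_txt], dim=-1))
        return action, latent

# Usage with dummy data: batch of 2 images, 5-token instructions, 7-DoF actions.
model = TinyVLAWithWorldModel()
img = torch.randn(2, 3, 64, 64)
tok = torch.randint(0, 1000, (2, 5))
a_prev = torch.zeros(2, 7)
action, latent = model(img, tok, a_prev)
print(action.shape, latent.shape)  # torch.Size([2, 7]) torch.Size([2, 64])
```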
Super-Dense Crowd Navigation & Socially-Aware Locomotion
Developing methods for safe, natural, and efficient robot navigation in human-dense environments. This research combines multi-human trajectory forecasting, reinforcement learning, and uncertainty-aware control to achieve socially-aware locomotion and seamless interaction with complex, dynamic surroundings.
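As a toy illustration of how forecasting and uncertainty-aware control can fit together, the sketch below forecasts each pedestrian with a constant-velocity model plus a growing uncertainty radius, then picks the robot velocity that trades goal progress against predicted collision risk. All numbers, weights, and function names are illustrative assumptions, not the group's method.

```python
# Hypothetical uncertainty-aware crowd-navigation sketch (forecast + velocity selection).
import numpy as np

DT, HORIZON = 0.2, 10            # planning step and lookahead steps
ROBOT_RADIUS, HUMAN_RADIUS = 0.3, 0.3

def forecast(humans, sigma_growth=0.05):
    """Constant-velocity forecasts with a per-step positional uncertainty radius."""
    tracks = []
    for pos, vel in humans:
        steps = [(pos + vel * DT * (k + 1), sigma_growth * (k + 1))
                 for k in range(HORIZON)]
        tracks.append(steps)
    return tracks

def cost(v_cmd, robot_pos, goal, tracks):
    """Proximity/collision penalty over the horizon plus terminal distance to goal."""
    c, p = 0.0, robot_pos.copy()
    for k in range(HORIZON):
        p = p + v_cmd * DT
        for steps in tracks:
            h_pos, sigma = steps[k]
            safe_dist = ROBOT_RADIUS + HUMAN_RADIUS + 2.0 * sigma  # inflate by forecast uncertainty
            gap = np.linalg.norm(p - h_pos) - safe_dist
            if gap < 0:
                c += 100.0               # predicted collision
            else:
                c += 1.0 / (gap + 0.1)   # soft proximity penalty
    return c + 5.0 * np.linalg.norm(p - goal)

def choose_velocity(robot_pos, goal, humans, v_max=1.0):
    """Sample candidate velocities and keep the lowest-cost one."""
    tracks = forecast(humans)
    angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)
    speeds = [0.0, 0.5 * v_max, v_max]
    candidates = [s * np.array([np.cos(a), np.sin(a)]) for a in angles for s in speeds]
    return min(candidates, key=lambda v: cost(v, robot_pos, goal, tracks))

# Usage: robot at the origin, goal ahead, two pedestrians crossing its path.
humans = [(np.array([2.0, 0.5]), np.array([-0.5, 0.0])),
          (np.array([1.5, -1.0]), np.array([0.0, 0.6]))]
v = choose_velocity(np.zeros(2), goal=np.array([4.0, 0.0]), humans=humans)
print("chosen velocity:", v)
```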