Junhong Xu


About Me

I am currently a Postdoctoral Researcher at the University of Texas at Austin, working with Roberto Martin-Martin and Peter Stone. Prior to UT Austin, I was a Senior Research Scientist at Nuro.ai, where I worked on reinforcement learning for generative policies (diffusion and flow-matching). I obtained my PhD from Indiana University, Bloomington, advised by Lantao Liu in the Vehicle Autonomy and Intelligence Lab.

My research focuses on enhancing the safety and efficiency of robotic systems at deployment time. During my PhD, I leveraged model uncertainty to develop planning algorithms that behave conservatively in high-uncertainty regions, improving deployment-time robustness. I also worked on generating diverse and trainable environments for reinforcement learning to achieve generalization over environment distributions. More recently, my work has centered on generative models for robot decision-making.

News

  • August, 2025 I am very excited to return to academia and join UT Austin as a Postdoctoral Researcher working with Roberto Martin-Martin and Peter Stone.

  • October, 2024 One paper was accepted by CoRL 2024. It enables generating realistic and challenging environments for training generalizable robot navigation policies.

  • October, 2024 Our work on using a diffusion model to imagine unknown regions, providing more informative context for the downstream planner, was accepted by IROS 2024.

  • July, 2024 We published a new blog post on how Nuro.ai combines safe RL and imitation learning for self-driving.

  • April, 2024 Boundary-Aware Value Function Generation for Safe Stochastic Motion Planning was accepted by the International Journal of Robotics Research (IJRR). This work combines mesh-based and meshless function approximators to accelerate the solution of the second-order Hamilton-Jacobi-Bellman (HJB) equation without sacrificing strict boundary conditions on critical state dimensions.

  • January, 2024 One paper was accepted by the International Journal of Robotics Research (IJRR). It proposes a novel kernel-based approach to solving stochastic optimal control problems and applies it to autonomous navigation on unstructured terrains.

  • October, 2023 A new arXiv paper on enhancing the sampling-based MPC algorithm (MPPI) with uncertainty propagation.

  • October, 2023 Our work on combining informative prior policies trained with goal-conditioned RL with a bounded-rational game-theoretic framework was accepted by IROS 2023.

  • August, 2023 I successfully defended my PhD dissertation titled “Robust Motion Planning and Control for Autonomous Robots Under Uncertainty”.

  • October, 2022 One paper on learning a causal model of the robot’s dynamics was accepted by ICRA 2023.