I am a Senior Researcher in the Robot Learning Group at Microsoft Research, Redmond, and I am affiliated with the MSR Reinforcement Learning Group. I am a practical theoretician interested in developing foundations for designing principled algorithms that can tackle real-world challenges. My research studies structural properties of sequential decision-making problems, especially in robotics, and aims to improve the learning efficiency of autonomous agents. My recent work focuses on reinforcement learning and imitation learning from offline data. Previously, I worked on online learning, Gaussian processes, and integrated motion planning and control.
I received my PhD in Robotics from Georgia Tech in 2020, where I was advised by Byron Boots at the Institute for Robotics and Intelligent Machines. During my PhD studies, I interned at Microsoft Research AI, Redmond, in Summer 2019, working with Alekh Agarwal and Andrey Kolobov, and at NVIDIA Research, Seattle, in Summer 2018, working with Nathan Ratliff and Dieter Fox.
Before Georgia Tech, I received my M.S. in Mechanical Engineering (2013) and dual B.S. degrees in Mechanical Engineering and Electrical Engineering (2011) from National Taiwan University (NTU). During that period, I was advised by Han-Pang Huang, director of the NTU Robotics Laboratory, and my research spanned learning dynamical systems, force/impedance control, kernel methods, and approximation theory, with applications ranging from manipulators, grasping, and exoskeletons to brain-computer interfaces and humanoids.
I was fortunate to receive the Outstanding Paper Award Runner-Up (ICML 2022), Best Paper Award (OptRL Workshop @ NeurIPS 2019), Best Student Paper and Best Systems Paper Finalist (RSS 2019), Best Paper Award (AISTATS 2018), Best Systems Paper Finalist (RSS 2018), the NVIDIA Graduate Fellowship, and the Google PhD Fellowship (declined).