I am a Senior Researcher in the Robot Learning Group at Microsoft Research, Redmond, and I am affiliated with the MSR Reinforcement Learning Group. I am a practical theoretician, interested in developing theoretical foundations for designing principled algorithms that can efficiently tackle real-world challenges. My research studies learning efficiency, structural properties, and uncertainty in sequential decision making, especially in robotics problems. My recent work focuses on reinforcement learning, imitation learning, and lifelong learning. Previously, I worked on online learning, Gaussian processes, and integrated motion planning and control.
I received my PhD in Robotics from Georgia Tech in 2020, where I was advised by Byron Boots at the Institute for Robotics and Intelligent Machines. During my PhD studies, I interned at Microsoft Research AI, Redmond, in Summer 2019, working with Alekh Agarwal and Andrey Kolobov, and at NVIDIA Research, Seattle, in Summer 2018, working with Nathan Ratliff and Dieter Fox.
Before Georgia Tech, I received my M.S. in Mechanical Engineering in 2013 and dual B.S. degrees in Mechanical Engineering and Electrical Engineering in 2011, all from National Taiwan University (NTU). During that period, I was advised by Han-Pang Huang, who directs the NTU Robotics Laboratory, and my research included learning dynamical systems, force/impedance control, kernel methods, and approximation theory, with applications ranging from manipulators, grasping, and exoskeletons to brain-computer interfaces and humanoids.
I have been fortunate to receive the Outstanding Paper Runner-Up Award (ICML 2022), the Best Paper Award (OptRL Workshop @ NeurIPS 2019), the Best Student Paper Award and Best Systems Paper finalist (RSS 2019), the Best Paper Award (AISTATS 2018), Best Systems Paper finalist (RSS 2018), the NVIDIA Graduate Fellowship, and the Google PhD Fellowship (declined).