I am a Senior Researcher in MSR AI Frontiers, and I am affiliated with the MSR Reinforcement Learning Group. I am a practical theoretician interested in developing foundations for designing principled algorithms that can tackle real-world challenges. My research studies structural properties of sequential decision-making problems, especially in robotics, and aims to improve the learning efficiency of autonomous agents. My recent work focuses on developing agents that can learn from general feedback, a framework that unifies Learning from Language Feedback (LLF), reinforcement learning (RL), and imitation learning (IL). Previously, I worked on online learning, Gaussian processes, and integrated motion planning and control.
I received my PhD in Robotics from Georgia Tech in 2020, where I was advised by Byron Boots at the Institute for Robotics and Intelligent Machines. During my PhD, I interned at Microsoft Research AI, Redmond, in Summer 2019, working with Alekh Agarwal and Andrey Kolobov, and at NVIDIA Research, Seattle, in Summer 2018, working with Nathan Ratliff and Dieter Fox.
Before Georgia Tech, I received my M.S. in Mechanical Engineering in 2013 and double B.S. degrees in Mechanical Engineering and Electrical Engineering in 2011 from National Taiwan University (NTU). During that period, I was advised by Han-Pang Huang, who directs the NTU Robotics Laboratory, and my research spanned learning dynamical systems, force/impedance control, kernel methods, and approximation theory, with applications ranging from manipulators, grasping, exoskeletons, and brain-computer interfaces to humanoids.
I am fortunate to have received the Outstanding Paper Award, Runner-Up (ICML 2022); Best Paper Award (OptRL Workshop @ NeurIPS 2019); Best Student Paper and Best Systems Paper, Finalist (RSS 2019); Best Paper (AISTATS 2018); Best Systems Paper, Finalist (RSS 2018); the NVIDIA Graduate Fellowship; and a Google PhD Fellowship (declined).
Trace is the New AutoDiff – Unlocking Efficient Optimization of Computational Workflows. ICML 2024 AutoRL Workshop, 2024
C.-A. Cheng*, A. Nie*, and A. Swaminathan*
Importance of Directional Feedback for LLM-based Optimizers. NeurIPS 2023 Foundation Models for Decision Making Workshop, 2023
A. Nie, C.-A. Cheng, A. Kolobov, and A. Swaminathan
Simple Data Sharing for Multi-Tasked Goal-Oriented Problems. Goal-Conditioned Reinforcement Learning Workshop at NeurIPS 2023, 2023
Y. Fan, J. Li, A. Swaminathan, A. Modi, and C.-A. Cheng
Learning Multi-task Action Abstractions as Sequence Compression Problem. Spotlight, CoRL 2023 Workshop on Pre-training for Robot Learning, 2023
R. Zheng, C.-A. Cheng, F. Huang, and A. Kolobov