Research

Understanding Flexible Intelligence

My research asks how intelligent systems, both biological and artificial, learn to act flexibly in structured but uncertain environments. I integrate reinforcement learning, probabilistic inference, and systems neuroscience to uncover principles of adaptive control.

I’m particularly interested in how hierarchical, compositional, and self-organizing structure emerges from experience: how brains and algorithms learn reusable subskills, infer latent goals, and adapt strategies on the fly. This perspective bridges levels, from motor control and neural dynamics to machine learning architectures and embodied robotics.

Ultimately, I aim to develop a unified theory of flexible behavior that both explains neural computation and inspires more adaptive AI.


Research Trajectory

My PhD work (with Larry Abbott and Mark Churchland at Columbia) focused on how motor cortex implements flexible, high-dimensional control. This work revealed that neural dynamics can reorganize compositionally, expressing different “subskills” as needed.

At the Kempner Institute, my research extends these ideas to hierarchical reinforcement learning and robotics, building algorithms and models that learn and adapt with the same compositional flexibility.

Looking forward, I aim to connect these approaches across levels—from neural circuits to robot policies—to uncover general principles of intelligent behavior.