AI Research Scientist, Meta
San Francisco Bay Area
I am an AI Research Scientist at Meta in the San Francisco Bay Area, where I work on body tracking and generative motion models for AR/VR. My research sits at the intersection of geometric methods and generative deep learning: specifically, how to produce structured, physically valid motion for articulated bodies. I’m increasingly interested in extending this work toward humanoid robots and embodied AI.
I defended my thesis at the University of Toronto in January 2023, jointly affiliated with the STARS Lab (Jonathan Kelly) and LAMOR at the University of Zagreb (Ivan Petrović). My thesis developed geometric approaches to inverse kinematics and motion planning, drawing on differential geometry and generative models. Before Meta, I spent a year at Samsung Research America in Montreal working on visual-language models for composed image search. I’m happy to hear from people working on related problems.
A transformer-based model for temporally consistent body pose estimation from egocentric headset cameras, paired with an auto-labeling system that scales training to tens of millions of unlabeled frames. Supports both keypoint and parametric body representations.
The first learned generative IK solver that generalizes across robot embodiments. Uses a distance-graph robot representation with a graph neural network to produce diverse solutions in parallel for unseen manipulators.
A reformulation of inverse kinematics as low-rank Euclidean distance matrix completion, solved on the Riemannian manifold of fixed-rank Gram matrices. Outperforms classical numerical solvers, particularly for multi-end-effector problems.
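To illustrate the low-rank structure this formulation exploits (a minimal sketch, not the paper's solver): the Gram matrix of n points in R^3 has rank at most 3, and the squared Euclidean distance matrix is a simple linear function of it. Recovering point positions from partial distances then amounts to completing a rank-3 Gram matrix.

```python
import numpy as np

# Minimal illustration of the low-rank structure behind EDM-based IK.
# Six points in R^3 stand in for joint positions; this is a toy example,
# not the Riemannian solver itself.
rng = np.random.default_rng(0)
P = rng.standard_normal((6, 3))

G = P @ P.T                          # Gram matrix: rank <= 3 by construction
d = np.diag(G)
D = d[:, None] + d[None, :] - 2 * G  # squared Euclidean distance matrix

print(np.linalg.matrix_rank(G))                            # 3
print(np.allclose(D[0, 1], np.sum((P[0] - P[1]) ** 2)))    # True
```

Because the rank bound is the ambient dimension rather than the number of points, the completion problem lives on a low-dimensional manifold of fixed-rank Gram matrices, which is what makes a Riemannian optimization approach natural here.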
Distance-graph representations of robotic manipulators for inverse kinematics, motion planning, and learning with graph neural networks. The reference implementation behind several of the IK papers above.
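As a rough sketch of the distance-graph idea (illustrative only; variable names and the 2-link arm are my own, not the library's API): nodes are characteristic points on the robot, edges carry distances fixed by the geometry, and any valid configuration must realize those distances.

```python
import numpy as np

# Toy distance graph for a 2-link planar arm: nodes are the base, elbow,
# and end effector; edges store the fixed link lengths. IK then asks for
# point positions consistent with these distances and a goal constraint.
l1, l2 = 1.0, 0.5
theta1, theta2 = np.pi / 4, np.pi / 3  # a sample configuration

p0 = np.zeros(2)                                           # base
p1 = p0 + l1 * np.array([np.cos(theta1), np.sin(theta1)])  # elbow
p2 = p1 + l2 * np.array([np.cos(theta1 + theta2),
                         np.sin(theta1 + theta2)])         # end effector

edges = {(0, 1): l1, (1, 2): l2}  # the fixed edges of the distance graph
pts = [p0, p1, p2]
for (i, j), dist in edges.items():
    assert np.isclose(np.linalg.norm(pts[i] - pts[j]), dist)
```

The appeal of this view is that the edge set is the same for every configuration, so it doubles as a natural input graph for learning with graph neural networks.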