I'm interested in how visual perception, action, and social inference are coordinated to support learning and communication. To get at this, I draw inspiration from how people draw.
I received my PhD in cognitive psychology from Princeton in 2016, and my AB in neurobiology from Harvard in 2010.
Some ideas seem perfectly clear in our minds. And yet communication with natural language — our primary mode of conveying these ideas — is anything but!
I'm interested in this interplay between natural and formal language, between the fuzzy and the sharp. I use computational models and behavioral experiments to explore how logic and language interact.
I'm interested in problems at the intersection of language and vision. I develop probabilistic and neural network models with the joint objective of explaining human behavioral data and building AI with more human-like representations and capacities.
Was: a graduate student in Psychology. Now: Siri Machine Learning Data Scientist at Apple.
Was: a post-doc. Now: Algorithm Engineer at Lumo Bodytech.