I'm interested in vague language, causal reasoning, discourse relations, and sentence meaning representation.
My research interests are in operationalizing language for machine learning. This involves (1) building machines that can learn from (and teach with) language and interactive dialog, and (2) using language as a structured and compositional guide for learning.
My research goal is to advance the way we interact with computers, taking inspiration from how humans communicate and collaborate. In particular, I’m interested in making computers capable of handling ambiguous inputs and specifications by leveraging prior knowledge, context and interaction.
My research focuses on unsupervised learning: how can we understand and guide systems that learn general-purpose representations of the world?
I’m interested in how our understanding of cognitive mechanisms can be used to inform AI systems, specifically language models.
My current research interests are in Bayesian deep learning, sampling methods, and inference in generative models.
My research is aimed at understanding the key ingredients of sample-efficient reinforcement learning. Currently, I examine how information theory might be a useful tool for facilitating this understanding, with a focus on abstractions and natural language.
I'm interested in understanding structure in images and videos, with a focus on contrastive representation learning, computational algebraic topology, and probabilistic graphical modelling.
I'm working on methods to better understand the structure of events in vision and language, with a particular emphasis on video applications and language grounding.
My research aims to apply cutting-edge ML technologies to climate change-related ecological problems, while also developing better ML methods for working with complex ecological and genomic data.
I'm interested in natural language processing applied to robotics, specifically methods for adapting to users online and learning through interaction.
I use probabilistic and neural network models to understand human language production and comprehension. In particular, I concentrate on the role context plays in this process.
I'm interested in understanding how neural models process language to build more flexible and safe NLP systems.
My research interests are in models that learn across modalities and in the ways to build "ethical" language models.
I am interested in visually grounded language understanding and generation, NLP probing and analysis, and graph representation learning, among other things.
Judy Fan (post-doc) -- Assistant Professor of Psychology at UC San Diego
Robert Hawkins (graduate student in Psychology) -- Post-doc at the Princeton Neuroscience Institute
Michael Henry Tessler (graduate student in Psychology) -- Post-doc at MIT
Desmond Ong (graduate student in Psychology) -- Assistant Professor of Information Systems and Analytics at the National University of Singapore and Research Scientist at the A*STAR Artificial Intelligence Initiative (A*AI)
Judith Degen (post-doc) -- Assistant Professor of Linguistics at Stanford
Leon Bergen (post-doc) -- Assistant Professor of Linguistics at UCSD
Greg Scontras (post-doc) -- Assistant Professor of Linguistics at UC Irvine
Justine Kao (graduate student in Psychology) -- Siri Machine Learning Data Scientist at Apple
Andreas Stuhlmüller (graduate student and post-doc) -- Founder of Ought
Long Ouyang (graduate student and post-doc)
Daniel Hawthorne (graduate student in Psychology) -- Co-founder and CTO at Datawallet
Daniel Ly (post-doc) -- Senior Algorithms and Machine Learning Engineer at Seismic
Siddharth Narayanaswamy (post-doc) -- Senior Researcher at Oxford
Daniel Lassiter (post-doc) -- Assistant Professor of Linguistics at Stanford
Joseph Austerweil (post-doc) -- Assistant Professor of Psychology at the University of Wisconsin-Madison
Thomas Icard (graduate student in Philosophy) -- Assistant Professor of Philosophy at Stanford