Computation and Cognition: the Probabilistic Approach (Psych 204/CS 428, Fall 2017)
How can we understand intelligent behavior as computation? This course will introduce probabilistic modeling through probabilistic programs, and will explore the probabilistic approach to modeling human and artificial cognition. Examples will be drawn from areas including concept learning, causal reasoning, social cognition, and language understanding.
Instructor: Noah Goodman (ngoodman at stanford)
TAs: Robert Hawkins (rxdh at stanford) & Erin Bennett (erindb at stanford)
Meeting time: T/Th, 1:30-2:50pm
Meeting place: History Corner (Building 200), Room 305
Office hours:
- w/ NG: W, 2-3:30pm, Room 356 Jordan Hall, Building 420
- w/ EB: M, 2-3pm, Room 316 Jordan Hall, Building 420
- w/ RH: book here, meet in Room 358 Jordan Hall, Building 420
We will use Canvas to post announcements, collect assignments, and host discussion among students. We encourage students to post questions on Canvas instead of emailing the instructors directly; we also hope students will attempt to answer each other's questions (TAs will verify the answers). Trying to explain a concept to someone else is often the best way to check your own knowledge.
Assignments and grading
Students (both registered and auditing) will be expected to do assigned readings before class. Registered students will be graded based on:
- 20% Class & Canvas participation.
- 30% Homework.
- 50% Final project (including proposal, update, presentation, and paper).
Assignments should be submitted to Canvas in .pdf form; fixed-width font appreciated for code (e.g. using the editor at http://webppl.org). Homework assignments will be graded using letter grades:
- A: All solutions correct & reasoning clear
- B: Assignment complete, with a few errors
- C: Assignment incomplete, or pervasive errors throughout
- F: Assignment was not attempted
Readings for each week will be linked from the calendar below. Readings will be drawn from the web-book Probabilistic Models of Cognition and selected research papers. (In some cases the papers will require a SUNet ID to access. See the instructor in case of trouble.)
There are no formal prerequisites for this class. However, this is a graduate-level course: it will move quickly and have technical content. Students should already be familiar with the basics of probability and programming.
In addition to the assigned readings below, here are notes from a few related short courses that might prove useful:
- The Design and Implementation of Probabilistic Programming Languages
- PPAML Summer School 2016
- Bayesian Statistics for Psychologists
- Modeling Agents with Probabilistic Programs
- Probabilistic Language Understanding
Week of September 26
Introduction. Simulation, computation, and generative models. Probability and belief.
- Generative Models
- Optional: Concepts in a probabilistic language of thought. Goodman, Tenenbaum, Gerstenberg (2015).
- Optional: How to grow a mind: structure, statistics, and abstraction. Tenenbaum, Kemp, Griffiths, and Goodman (2011).
- Optional: Simulation as an engine of physical scene understanding. Hamrick, Battaglia, Tenenbaum (2013).
- Optional: Sources of uncertainty in intuitive physics. Smith and Vul (2012).
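To make the idea of a generative model concrete, here is a minimal sketch in Python (the course itself uses WebPPL; the causal structure and probabilities below are made up for illustration). Running the program forward defines a distribution over possible worlds, which we can approximate by repeated simulation:

```python
import random

random.seed(0)

def flip(p):
    """Bernoulli draw: True with probability p."""
    return random.random() < p

def sample_world():
    # A toy causal model in the spirit of the ProbMods examples (hypothetical):
    # two independent causes, each of which can produce a cough.
    cold = flip(0.2)
    lung_disease = flip(0.01)
    cough = (cold and flip(0.5)) or (lung_disease and flip(0.5)) or flip(0.01)
    return {"cold": cold, "lung_disease": lung_disease, "cough": cough}

# The marginal probability of any event falls out of forward simulation.
samples = [sample_world() for _ in range(10000)]
p_cough = sum(s["cough"] for s in samples) / len(samples)
print(f"P(cough) ~ {p_cough:.3f}")
```

The point is not the particular numbers but the representation: the program *is* the model, and sampling from it answers probabilistic queries.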
Week of October 3
Conditioning and inference. Causal vs. statistical dependency. Patterns of inference.
Homework: Exercises on Conditioning and Patterns of Inference.
- Patterns of Inference
- Optimal predictions in everyday cognition. Griffiths and Tenenbaum (2006).
- Optional: Causal Reasoning Through Intervention. Hagmayer, Sloman, Lagnado, and Waldmann (2006).
- Optional: Children's causal inferences from indirect evidence: Backwards blocking and Bayesian reasoning in preschoolers. Sobel, Tenenbaum, Gopnik (2004).
- Optional: Bayesian models of object perception. Kersten and Yuille (2003).
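Conditioning turns a forward model into an inference engine. Here is a Python sketch (WebPPL's `Infer` does this for you; the model and numbers are hypothetical) of the simplest conditioning algorithm, rejection sampling, which also exhibits the "explaining away" pattern of inference:

```python
import random

random.seed(1)

def flip(p):
    return random.random() < p

def model():
    # Hypothetical two-cause model: either cause (or background noise)
    # can produce the observed effect.
    cold = flip(0.2)
    lung_disease = flip(0.01)
    cough = (cold and flip(0.5)) or (lung_disease and flip(0.5)) or flip(0.01)
    return cold, lung_disease, cough

runs = [model() for _ in range(200000)]

# Rejection sampling: keep only runs consistent with the observation cough=True.
coughing = [(c, l) for c, l, k in runs if k]
p_cold = sum(c for c, _ in coughing) / len(coughing)

# Explaining away: additionally observing the alternative cause lowers P(cold).
both = [c for c, l in coughing if l]
p_cold_given_ld = sum(both) / len(both)

print(f"P(cold | cough) ~ {p_cold:.2f}")                        # far above the prior 0.2
print(f"P(cold | cough, lung disease) ~ {p_cold_given_ld:.2f}")  # drops back toward it
```

The two causes are a priori independent, yet become statistically dependent once their common effect is observed: a causal vs. statistical dependency distinction in four lines of code.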
Week of October 10
Bayesian data analysis. Discussion on levels of analysis.
Homework: Exercises on Bayesian data analysis, also work on project proposals (see below).
- Bayesian data analysis
- Chapter 1 of "The adaptive character of thought." Anderson (1990).
- Optional: Inferring Subjective Prior Knowledge: An Integrative Bayesian Approach. Tauber & Steyvers (2013).
- Optional: Descriptive vs. optimal Bayesian modeling (blog post). Frank (2015).
- Optional: Chapter 1 of "Vision." Marr (1982).
- Optional: Ten Years of Rational Analysis. Chater, Oaksford (1999).
- Optional: The Knowledge Level. Newell (1982).
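The core move in Bayesian data analysis is to treat model parameters as random variables and compute their posterior given behavioral data. Here is a minimal Python sketch (the data are made up; real analyses would use WebPPL or a probabilistic-programming package) that computes a posterior over a coin's weight on a grid:

```python
# Posterior over a coin's weight theta after observing k heads in n flips,
# computed on a grid (illustrative data, uniform prior).
n, k = 10, 7
grid = [i / 100 for i in range(101)]
prior = [1.0] * len(grid)                        # uniform prior over theta
like = [t**k * (1 - t)**(n - k) for t in grid]   # binomial likelihood (up to a constant)
unnorm = [p * l for p, l in zip(prior, like)]
Z = sum(unnorm)                                  # normalizing constant
posterior = [u / Z for u in unnorm]

post_mean = sum(t * p for t, p in zip(grid, posterior))
print(f"posterior mean of theta ~ {post_mean:.3f}")  # near (k+1)/(n+2) ~ 0.667
```

Grid approximation is exact enough for one parameter; the readings discuss what changes when the "parameters" are the psychologically interesting quantities in a cognitive model.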
Week of October 17
Project proposals due on Friday!
- Algorithms for Inference
- PPAML Summer School 2016: Approximate Inference Algorithms.
- Optional: The Design and Implementation of Probabilistic Programming Languages.
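As a taste of the inference algorithms covered this week, here is a Metropolis-Hastings sketch in Python (the target distribution is a hypothetical coin-weight posterior, 7 heads in 10 made-up flips with a uniform prior; WebPPL's MCMC implements the same scheme generically):

```python
import random

random.seed(2)

# Unnormalized target density: theta^7 * (1 - theta)^3 on (0, 1).
def score(t):
    return t**7 * (1 - t)**3 if 0 < t < 1 else 0.0

theta = 0.5
chain = []
for _ in range(50000):
    proposal = theta + random.gauss(0, 0.1)   # symmetric random-walk proposal
    # Accept with probability min(1, score ratio); otherwise stay put.
    if random.random() < score(proposal) / score(theta):
        theta = proposal
    chain.append(theta)

mh_mean = sum(chain[5000:]) / len(chain[5000:])  # drop burn-in
print(f"posterior mean of theta ~ {mh_mean:.2f}")
```

Note that only the *unnormalized* score is ever evaluated; this is why MCMC applies to models whose normalizing constant is intractable.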
Week of October 24
Resource-rational process models. Sequences of observations. Learning as inference.
- Rational use of cognitive resources: Levels of analysis between the computational and the algorithmic. Griffiths, Lieder, Goodman (2015).
- One and Done? Optimal Decisions From Very Few Samples. Vul, Goodman, Griffiths, Tenenbaum (2014).
- Models for sequences of observations
- Learning as Conditional Inference
- Optional: Burn-in, bias, and the rationality of anchoring. Lieder, Griffiths, and Goodman (2012).
- Optional: Perceptual multistability as Markov chain Monte Carlo inference. Gershman, Vul, Tenenbaum (2009).
- Optional: A more rational model of categorization. Sanborn, Griffiths, Navarro (2006).
- Optional: Theory acquisition as stochastic search. Ullman, Goodman, and Tenenbaum (2010).
- Optional: Exemplar models as a mechanism for performing Bayesian inference. Shi, Griffiths, Feldman, Sanborn (2010).
Week of October 31
Learning compositional hypotheses. Learning continuous functions. Deep probabilistic models.
- A rational analysis of rule-based concept learning. Goodman, Tenenbaum, Feldman, and Griffiths (2008).
- Optional: Rules and similarity in concept learning. Tenenbaum (2000).
- Optional: Learning Structured Generative Concepts. Stuhlmueller, Tenenbaum, and Goodman (2010).
Week of November 7
Hierarchical models. Occam's razor.
- Hierarchical Models
- Occam's Razor
- Structure and strength in causal induction. Griffiths and Tenenbaum (2005).
- Optional: Bayesian modeling of human concept learning. Tenenbaum (1999).
- Optional: Word learning as Bayesian inference. Tenenbaum and Xu (2000).
- Optional: Word learning as Bayesian inference: Evidence from preschoolers. Xu and Tenenbaum (2005).
- Optional: Learning overhypotheses. Kemp, Perfors, and Tenenbaum (2006).
- Optional: Object name learning provides on-the-job training for attention. Smith, Jones, Landau, Gershkoff-Stowe, and Samuelson (2002).
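Bayes Occam's razor can be seen in a few lines of Python. This sketch follows the logic of the number-game analysis in Tenenbaum (1999); the particular hypotheses and data are illustrative:

```python
# Two hypotheses about which numbers (1-100) belong to a hidden concept.
hypotheses = {
    "multiples_of_10": [n for n in range(1, 101) if n % 10 == 0],
    "even_numbers":    [n for n in range(1, 101) if n % 2 == 0],
}
prior = {h: 0.5 for h in hypotheses}
data = [10, 20, 30]

def likelihood(obs, extension):
    # Size principle (strong sampling): each observation is drawn uniformly
    # from the hypothesis's extension, so smaller consistent hypotheses
    # assign higher probability to the data.
    if any(d not in extension for d in obs):
        return 0.0
    return (1 / len(extension)) ** len(obs)

unnorm = {h: prior[h] * likelihood(data, ext) for h, ext in hypotheses.items()}
Z = sum(unnorm.values())
posterior = {h: v / Z for h, v in unnorm.items()}
print(posterior)  # "multiples_of_10" wins, though both hypotheses fit the data
```

Both hypotheses are consistent with {10, 20, 30}, but the smaller one is strongly preferred: the razor comes from the likelihood, with no complexity penalty built into the prior.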
Week of November 14
Mixture models. Unbounded mixture models.
Project update (preliminary paper) due on Friday!
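A mixture model posits a latent cluster assignment for each observation. The Python sketch below (hypothetical two-component Gaussian mixture; parameters chosen for illustration) shows both directions: sampling data forward, and inverting the model to compute the posterior "responsibility" of a cluster for a data point:

```python
import math
import random

random.seed(3)

# A two-component Gaussian mixture (illustrative parameters).
weights = [0.5, 0.5]
means, sd = [-2.0, 2.0], 1.0

def sample():
    # Generative direction: pick a cluster, then sample from its Gaussian.
    z = 0 if random.random() < weights[0] else 1
    return random.gauss(means[z], sd)

def normpdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def resp(x):
    # Inference direction: posterior probability that x came from cluster 1
    # (the quantity the E-step of EM computes).
    p = [w * normpdf(x, m, sd) for w, m in zip(weights, means)]
    return p[1] / sum(p)

data = [sample() for _ in range(5)]
print(f"resp(2.0) = {resp(2.0):.3f}, resp(-2.0) = {resp(-2.0):.3f}, resp(0.0) = {resp(0.0):.3f}")
```

Unbounded mixtures replace the fixed list of components with a prior over partitions (e.g. a Dirichlet process), so the number of clusters is itself inferred from data.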
Week of November 21
Thanksgiving -- no class!
Week of November 28
Social cognition. Natural language pragmatics and semantics.
Project peer-reviews due on Friday!
- Modeling Agents with Probabilistic Programs sections III and IV.
- Probabilistic Language Understanding
- Pragmatic language interpretation as probabilistic inference. Goodman and Frank (2016).
- Optional: Inference about Inference
- Optional: Goal Inference as Inverse Planning. Baker, Tenenbaum, Saxe (2007).
- Optional: Cause and intent: Social reasoning in causal learning. Goodman, Baker, Tenenbaum (2009).
- Optional: Reasoning about Reasoning by Nested Conditioning: Modeling Theory of Mind with Probabilistic Programs. Stuhlmueller and Goodman (2013).
- Optional: Young children use statistical sampling to infer the preferences of other people. Kushnir, Xu, and Wellman (2010).
- Optional: Teaching games: statistical sampling assumptions for learning in pedagogical situations. Shafto and Goodman (2008).
- Optional: A rational account of pedagogical reasoning: Teaching by, and learning from, examples. Shafto, Goodman, and Griffiths (2014).
- Optional: Quantifying pragmatic inference in language games. Frank and Goodman (2012).
- Optional: Probabilistic Semantics and Pragmatics: Uncertainty in Language and Thought. Goodman and Lassiter (2015).
- Optional: Knowledge and implicature: Modeling language understanding as social cognition. Goodman and Stuhlmueller (2013).
- Optional: Nonliteral understanding of number words. Kao, Wu, Bergen, Goodman (2014). (See also the model on Forest.)
- Optional: The strategic use of noise in pragmatic reasoning. Bergen and Goodman (to appear).
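The rational speech acts (RSA) framework treats language understanding as recursive social reasoning. Here is a minimal Python sketch of the standard reference-game example in the spirit of Frank and Goodman (2012) (the objects, utterances, and substring-based semantics are the usual toy setup, not anyone's actual implementation):

```python
objects = ["blue_square", "blue_circle", "green_square"]
utterances = ["blue", "green", "square", "circle"]

def literal(utt, obj):
    """Literal semantics: does the word truthfully describe the object?"""
    return 1.0 if utt in obj else 0.0

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def L0(utt):
    # Literal listener: uniform prior over objects consistent with the utterance.
    return normalize({o: literal(utt, o) for o in objects})

def S1(obj, alpha=1.0):
    # Pragmatic speaker: soft-maximizes the utterance's informativeness
    # about the intended object.
    return normalize({u: L0(u)[obj] ** alpha for u in utterances})

def L1(utt):
    # Pragmatic listener: Bayesian inference over the speaker's intended object.
    return normalize({o: S1(o)[utt] for o in objects})

print(L1("blue"))
```

Hearing "blue", the pragmatic listener favors the blue square over the blue circle: a speaker who meant the circle would more likely have said "circle", since that word is unambiguous.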
Week of December 5
Each project team will present a short summary. We'll go in reverse alphabetical order (by last name).
Extended class on 12/5 for presentations, no class on 12/7.
Your final project is an opportunity to get in-depth experience applying the techniques we've discussed in class to a question that interests you. In choosing a project, you should draw on your own background, interests, and strengths. You do not have to work on a project that relates directly to the topics covered in the classes and readings: other topics that pursue the general idea of probabilistic models of cognition are fine, and you should try to work on a project that captures your interests within that fairly broad scope. Working on existing research projects is okay if you bring the techniques and ideas of the class to bear.
You are encouraged (but not required) to do projects in small groups of two or three people.
Projects will generally contain both a probabilistic model of some aspect of human cognition and a behavioral experiment testing the model. Some ways you can go:
- Directly replicate the experiment and model in an existing paper. This is the most concrete way to go if you are new to both experiments and models.
- Replicate an existing experiment (or possibly use existing data) that has not been modeled and consider different probabilistic models for the data.
- Extend the experiment and model in an existing paper in a new direction.
- Something brand new: choose an interesting phenomenon in human cognition; do an experiment and model it!
In all cases, you are encouraged to consider multiple models (for example, several variants of your theory) and pay careful attention to data analysis (for example, by doing Bayesian model selection).
With the approval of the instructor, a project could focus on AI rather than human behavior: use an idea we've discussed in class to implement an interesting new AI system. Similarly, projects could focus on inference and infrastructure in PPLs: building a better algorithm, implementing a useful automatic analysis of programs, etc.
A list of class projects done in previous versions of the course can be found here:
Your proposal should be no more than one page long (single spaced). Make sure that you cover the background, key question, and methods of your project. The background should include the topic and the context of your project, including other research in this area. The specific question you are planning to ask through your project should be clearly stated. You should briefly describe the methods you plan to use (your experimental design, your modeling approach, your data analysis, and so on).
Upload your proposal to Canvas as a PDF file by midnight on 10/20/2017.
Several weeks before your project presentation you should turn in a preliminary version of your paper. This should be a complete outline of all sections, with a full draft of the introduction and of the background and related-work sections. In addition, it should include preliminary results from your modeling and/or experiments. Altogether, this will probably come to about 3 pages.
These preliminary results will be peer-reviewed: everyone will be asked to read and comment on one other team's project update.
Each person or team will have 7 minutes (5-minute presentation + 2 minutes for questions). We will go in reverse alphabetical order (by last name). The presentations should describe your question, methods, and results at a high level.
Presentations should be in Google Drive Presentation format: upload the presentation to this folder.
For students who don’t like working in Google Presentations: you can make your presentation in PowerPoint and convert it. Google Drive can upload (and convert) slides in the following formats: .ppt (if newer than Microsoft® Office 95), .pptx, .pptm, .pps, .ppsx, .ppsm, .pot, .potx, .potm, .odp. Once you’ve uploaded your slides, check the conversion for errors. Presumably one could also make the slides in Keynote, convert to PPT, and then convert to Google Slides, but we suspect the errors would compound.
Your final project should be described in the format of a conference paper, following the guidelines of paper submissions to the Cognitive Science Society conference. The easiest way is to download the pre-formatted template:
In particular, your paper should be no more than six pages long. Your paper should cover the background behind your project, the questions you are asking, your methods and results, and your interpretation of these results.
Email your paper to the instructor as a PDF file by midnight on Tuesday, Dec. 12th.