Jeffrey Johnston, PhD

Research Description

We work to understand how the collective activity of large populations of neurons supports learning and behavior. We focus in particular on the neural basis of generalization to novel situations, as well as on what goes wrong when behavioral errors occur. To investigate these topics, we use a mixture of techniques, including mathematical theory, artificial neural networks, and statistical analysis of neural data. Throughout this work, we collaborate closely with experimental labs.

Current Projects

Ongoing projects include: 
1. Understanding when and why neurons become functionally specialized in both artificial and biological neural networks. For instance, why do some tasks seem to rely on specialized groups of neurons while others seem to rely on neurons with mixed selectivity? To answer this question, we use artificial neural networks and other machine learning models, as well as mathematical theories of learning in those systems.  
2. Understanding the representational and dynamical causes of behavioral errors. For instance, why can't we keep more than a few items in working memory? What is the bottleneck on this memory process? This project involves statistical analysis of neural data, normative mathematical theories, and both static and dynamic neural network models.  
3. Understanding how the body participates in memory and computation during natural and laboratory behavior. For instance, how can changes in pose be used as a form of working memory? What are the limitations of this strategy? When is it used? This project involves recurrent neural network modeling, mathematical theory, and the analysis of neural and behavioral data.  

Academic community service and committee membership:

Project mentor for Neuromatch Academy Computational Neuroscience (2022–); reviewer for COSYNE (2023–); ad hoc reviewer for Nature Human Behaviour, Neuron, Cell Reports, eLife, and iScience.