
Research Overview
How do we efficiently encode, store, and update internal neural representations of the world, and generalize from them to guide decisions and behavior in real time?

My research program focuses on understanding how neural circuit architecture (both in vivo and in silico) can be organized to enact robust, flexible computations that serve these goals in the face of constantly changing environmental contexts.

To date, I have used the human (and non-human primate) visual system to probe these questions, as it is arguably the most elaborate, adaptive neural system we know of. Looking forward, I aim to apply the foundational computational principles I have studied in the sensory areas of the brain to expand the ways in which humans construct and engage with intelligent engineered systems.
Reduced discriminability of visual features under crowding in hierarchical cortical circuits

- Used multichannel in vivo recording methods to collect large physiological datasets of the brain’s responses to visual crowding.
- Fit custom machine learning pipelines to analyze and quantify perceptual effects of crowding in the neural code.
- Built novel computational model to explain how hierarchical computations limit perceptual performance and generalization under crowding.
- Trained neural population decoders to predict animals’ trial-by-trial judgements, highlighting a reliance on specific brain regions.
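The population-decoding step above can be sketched in a few lines: train a logistic-regression readout on simulated spike counts to predict a binary choice on each trial. Everything here (the simulated data, variable names, and hyperparameters) is illustrative, not the actual pipeline or datasets from this project.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate population responses: n_trials x n_neurons spike counts whose
# mean rates shift slightly depending on the animal's binary choice.
n_trials, n_neurons = 400, 50
choice = rng.integers(0, 2, n_trials)               # 0/1 judgement per trial
tuning = rng.normal(0, 1, n_neurons)                # choice-related signal per neuron
rates = 5 + 0.5 * np.outer(2 * choice - 1, tuning)  # mean rate depends on choice
X = rng.poisson(np.clip(rates, 0.1, None)).astype(float)
X = (X - X.mean(0)) / (X.std(0) + 1e-9)             # z-score each neuron

# Logistic-regression decoder fit by gradient descent with a small ridge penalty
w, b = np.zeros(n_neurons), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))              # predicted P(choice = 1)
    grad_w = X.T @ (p - choice) / n_trials + 1e-3 * w
    grad_b = (p - choice).mean()
    w -= 0.5 * grad_w
    b -= 0.5 * grad_b

accuracy = ((X @ w + b > 0) == choice).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

In practice the decoder's weight vector `w` can also be inspected to ask which neurons (or, with recordings from several areas, which regions) carry the choice-predictive signal.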
Crowding limits visual discriminability similarly in humans and macaque monkeys
- Designed rigorous experiment to collect first evidence that crowded vision operates similarly in humans and monkeys.
- Fit quantitative models to identify the cause of discrimination errors.
- Developed novel task paradigm to infer individuals’ sensory decision templates.
- Constructed machine learning models and computational simulations to show that neural changes in primary visual cortex limit crowded perception.

Distinct spatiotemporal mechanisms in V1 microcircuits gate information about visual context

- Designed novel experiment to measure impulse response of neurons in primary visual cortex to changes in surrounding stimuli.
- Discovered multiple neural mechanisms with distinct feature tuning, spatial tuning, and temporal dynamics.
- Built computational model to show how these mechanisms can gate and enhance sensory information routed to higher visual areas for perception.
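One classic way this kind of contextual gating is modeled in V1 is divisive normalization, where surround drive divisively suppresses the center response. The sketch below is that textbook mechanism, not the specific model built in this project; the function and parameter values are illustrative.

```python
def gated_response(center, surround, sigma=0.5, w=1.0):
    """Divisive normalization: surround drive divisively gates the center
    response, so the same center input yields a weaker output when the
    surrounding context is strong."""
    return center / (sigma + center + w * surround)

center_drive = 1.0
no_surround = gated_response(center_drive, surround=0.0)    # context absent
with_surround = gated_response(center_drive, surround=2.0)  # strong context
print(no_surround, with_surround)
```

Removing or changing the surround releases the center response, which is one simple way a circuit can route more (or less) sensory information downstream depending on visual context.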
Inferring trial-by-trial changes in perceptual decisions
- Derived theoretical framework for tracking changes in performance based on behavioral data.
- Benchmark tests demonstrated that this method recovers observer models with a wide range of trial-by-trial behavior.
- Model fits to human and nonhuman primate data exceeded state-of-the-art stationary observer models.
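The core idea of tracking nonstationary performance can be illustrated with a much simpler stand-in than the framework derived in this project: a forgetting beta filter that discounts old outcomes so its estimate of P(correct) can follow drift across trials. The simulated observer, decay constant, and all names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a nonstationary observer whose P(correct) drifts across trials
n_trials = 1000
p_true = np.linspace(0.55, 0.90, n_trials)
correct = rng.random(n_trials) < p_true             # 0/1 outcome per trial

# Forgetting beta filter: exponentially discount old evidence
a, b, decay = 1.0, 1.0, 0.98                        # pseudo-counts + forgetting rate
p_hat = np.empty(n_trials)
for t, y in enumerate(correct):
    a = decay * a + y                               # discounted correct count
    b = decay * b + (1 - y)                         # discounted error count
    p_hat[t] = a / (a + b)                          # running estimate of P(correct)

err = np.abs(p_hat[100:] - p_true[100:]).mean()
print(f"mean tracking error after burn-in: {err:.3f}")
```

A stationary observer model would fit a single P(correct) to all trials; any method that tracks the trial-by-trial trajectory, even this crude one, can capture learning and lapses that the stationary fit averages away.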

Estimating second-order channels used in visual texture discrimination

- Developed efficient noise masking paradigm to probe the scale of second-stage channels in human texture perception.
- Fit quantitative models of observer channels using maximum likelihood estimation.
- Results led to rethinking classical multi-stage models of form vision: behavior was better explained with a nonlinearity added before the decision stage.
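Maximum likelihood fitting of an observer model, as in the second bullet, follows a standard recipe: define a psychometric function, write the Bernoulli log-likelihood of the observed responses, and optimize over its parameters. The Weibull form, simulated 2AFC data, and parameter values below are a generic illustration, not the channel models fit in this project.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Simulated 2AFC discrimination data: accuracy rises with stimulus contrast
contrasts = np.repeat(np.array([0.02, 0.04, 0.08, 0.16, 0.32]), 100)

def p_correct(c, threshold, slope):
    """Weibull psychometric function for 2AFC (0.5 guess rate)."""
    return 0.5 + 0.5 * (1 - np.exp(-(c / threshold) ** slope))

true_thr, true_slope = 0.08, 1.5
responses = rng.random(contrasts.size) < p_correct(contrasts, true_thr, true_slope)

def neg_log_likelihood(params):
    thr, slope = params
    if thr <= 0 or slope <= 0:
        return np.inf                               # keep parameters in range
    p = np.clip(p_correct(contrasts, thr, slope), 1e-6, 1 - 1e-6)
    return -np.sum(responses * np.log(p) + (~responses) * np.log(1 - p))

fit = minimize(neg_log_likelihood, x0=[0.05, 1.0], method="Nelder-Mead")
thr_hat, slope_hat = fit.x
print(f"estimated threshold: {thr_hat:.3f}, slope: {slope_hat:.2f}")
```

Comparing the maximized likelihoods of competing observer models (e.g., with and without an extra nonlinearity before the decision stage) is what supports conclusions like the one in the last bullet.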
Adaptation recovers discriminability in crowded scenes
- Designed novel tests of visual adaptation on the perception of cluttered scenes.
- Fit machine learning models to decode neural data and generalize across environments.
- Results highlighted novel domains where sensory adaptation improves neural and perceptual performance in real-world settings.

Adaptation alters the structure of shared variability in neuronal networks

- Applied dimensionality reduction techniques to identify common modes of fluctuation within and between neuronal populations.
- Partitioned variability into shared and private factors.
- Revealed that adaptation restructures network activity into low-rank modes in a stimulus-dependent manner.
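The shared/private partition above can be sketched with a toy population whose trial-to-trial variability contains one shared low-rank mode (a common gain fluctuation) plus independent private noise. An eigendecomposition of the trial-by-trial covariance then separates the shared mode from the private variance floor. The simulation and all names are illustrative, not the recorded data or the specific dimensionality reduction method used in this project.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate trial-to-trial variability: one shared mode + private noise
n_trials, n_neurons = 2000, 30
loadings = rng.normal(0, 1, n_neurons)              # coupling to the shared mode
latent = rng.normal(0, 1, n_trials)                 # shared fluctuation per trial
private = rng.normal(0, 1, (n_trials, n_neurons))   # independent noise per neuron
responses = np.outer(latent, loadings) + private

# Eigendecomposition of the covariance separates shared from private variance
cov = np.cov(responses, rowvar=False)
evals, evecs = np.linalg.eigh(cov)                  # ascending eigenvalues
shared_frac = evals[-1] / evals.sum()               # variance in the top mode
top_mode = evecs[:, -1]
alignment = abs(top_mode @ loadings) / np.linalg.norm(loadings)
print(f"shared variance fraction: {shared_frac:.2f}, "
      f"mode alignment: {alignment:.2f}")
```

Factor analysis does the same partition more explicitly (low-rank shared loadings plus a diagonal private term); comparing the recovered shared modes before and after adaptation is what reveals the stimulus-dependent restructuring described above.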