In the Vision Computation Lab, we combine a number of versatile and powerful techniques into a general program for determining how biological vision processes visual information, and how to translate this knowledge into visual brain-computer interfaces and artificial vision systems that are robust, efficient, and suitable for real-time processing to meet human needs in real-world settings. We also collaborate closely with experimental neuroscience labs at the University of Utah and at Stanford University to collect and study electrophysiological data from the brain and the retina.

To accomplish these objectives, our lab pursues the following two research tracks in vision science and vision computation in concert:


Our lab seeks to understand the computational and neural basis of dynamic information processing in the brain through a combined experimental, computational, and theoretical approach.

We aim to understand how cognitive and behavioral information is dynamically encoded in sensory response modulations, and how these modulations translate into behavior and perception. Our focus is on addressing the computational and analytical challenges of encoding and decoding dynamic neural systems that vary on fast timescales, using high-dimensional, sparse neuronal data.

Through collaborations with experimental and systems neuroscience labs, we collect physiological and behavioral data using single-unit and multi-electrode recordings, causal experiments, and psychophysical studies. These comprehensive, simultaneous measurements produce large, high-dimensional datasets that we use to build mathematical models that quantitatively characterize dynamic stimulus-response relationships. We exploit, extend, and develop computational and theoretical frameworks from multiple disciplines, including statistical signal processing, statistical machine learning, dimensionality reduction, statistical inference, information theory, and computational modeling, driven by and tailored to our experimental data.
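As a toy illustration of this kind of stimulus-response modeling, the sketch below fits a ridge-regularized linear receptive-field estimate (a whitened spike-triggered average) to simulated spike counts from a hypothetical linear-nonlinear-Poisson neuron. All dimensions, parameters, and names here are illustrative assumptions, not the lab's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experiment (illustrative only): white-noise stimulus frames and
# spike counts from a hypothetical linear-nonlinear-Poisson (LNP) neuron.
n_samples, n_pixels = 5000, 64
stimulus = rng.standard_normal((n_samples, n_pixels))

# Ground-truth "receptive field": a Gaussian bump centered on pixel 32.
true_rf = np.exp(-0.5 * ((np.arange(n_pixels) - 32) / 4.0) ** 2)
rate = np.exp(0.3 * (stimulus @ true_rf) - 1.0)   # exponential nonlinearity
spikes = rng.poisson(rate)                        # sparse spike counts

# Ridge-regularized linear estimate of the receptive field:
#   w = (S^T S + lambda * I)^(-1) S^T y   (a whitened spike-triggered average)
lam = 10.0
w_hat = np.linalg.solve(stimulus.T @ stimulus + lam * np.eye(n_pixels),
                        stimulus.T @ spikes)

# The estimate recovers the location and shape of the true receptive field.
print(int(np.argmax(w_hat)), round(float(np.corrcoef(w_hat, true_rf)[0, 1]), 2))
```

The ridge penalty `lam` stabilizes the estimate when the stimulus covariance is poorly conditioned or the spike data are sparse, which is the typical regime for the recordings described above.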

The ongoing studies pursued by our lab include:

  • modeling the dynamic encoding and decoding of visual information during eye movements
  • understanding the computational and neural mechanisms underlying visuospatial perception during eye movements
  • creating dynamic modeling frameworks for characterizing time-varying information conveyed by sparse neuronal responses
  • identifying low-dimensional subspaces to build robust computational models for analyzing high-dimensional, sparse neural data
  • developing model-based estimation methods for mapping the high-dimensional spatiotemporal receptive fields of neurons in the visual cortex
  • deciphering the computational principles of object motion processing in the retina and the brain in the presence of eye movements
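To make the subspace-identification item concrete, here is a minimal PCA sketch on simulated population activity driven by a few shared latent factors; the neuron count, factor count, and noise level are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population recording: 100 neurons whose activity is driven
# by only 3 shared latent factors plus independent per-neuron noise.
n_timepoints, n_neurons, n_latents = 500, 100, 3
latents = rng.standard_normal((n_timepoints, n_latents))
loading = rng.standard_normal((n_latents, n_neurons))
activity = latents @ loading + 0.2 * rng.standard_normal((n_timepoints, n_neurons))

# PCA via SVD of the mean-centered activity matrix.
centered = activity - activity.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
var_explained = s**2 / np.sum(s**2)

# The top 3 principal components capture nearly all of the variance,
# revealing the low-dimensional subspace underlying the population.
print(round(float(var_explained[:3].sum()), 3))
```

In practice, methods such as factor analysis or Gaussian-process factor models handle spiking noise more gracefully than plain PCA, but the dimensionality-reduction idea is the same.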



In our computational vision research, we aspire to develop artificial vision solutions and neural prosthetic devices based on the strategies that the retina and the brain use to perform complicated computations such as eye movement control or visual perception in the presence of eye or observer movement.

As a computational neuroscience lab working at the interface of brain physiology and engineering, one of our translational goals is to enable major advances in the design and implementation of visual prosthetics. Our effort to understand what information is encoded in neural responses, and how to decode the information relevant to visual perception on the actual timescale of behavior, is critical for developing visual and oculomotor prostheses and visual brain-machine interfaces that aim to restore natural visual behavior.

Moreover, we develop computational modeling frameworks and related deep network architectures for motion detection, segmentation, and tracking in the presence of observer motion. This research develops visual motion computation algorithms for mobile observers, such as moving cameras, inspired by the robustness of human visual perception during eye movements, to improve the robustness, accuracy, and runtime of these algorithms in real-world scenarios.
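A minimal sketch of this idea under toy assumptions (synthetic frames, pure-translation camera motion, phase correlation for ego-motion estimation; none of this is the lab's actual algorithm): estimate and undo the global camera shift, then difference the aligned frames so that only independently moving objects remain.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic scene: a textured background; the "camera" translates between
# frames while a small bright object moves independently.
H, W = 64, 64
background = rng.random((H, W))

def shift(img, dy, dx):
    """Circular shift, a toy stand-in for pure camera translation."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

frame1 = background.copy()
frame1[10:14, 10:14] = 2.0                # object at its first position

frame2 = shift(background, 3, 5)          # camera moves by (3, 5)
frame2[30:34, 40:44] = 2.0                # object has moved independently

# 1) Estimate global (ego) motion by phase correlation.
cross_power = np.fft.fft2(frame1) * np.conj(np.fft.fft2(frame2))
cross_power /= np.abs(cross_power) + 1e-12
response = np.abs(np.fft.ifft2(cross_power))
py, px = np.unravel_index(np.argmax(response), response.shape)
dy, dx = -py % H, -px % W                 # unwrap peak to the forward shift

# 2) Undo the camera motion, then difference: residual change = the object.
aligned = shift(frame2, -dy, -dx)
motion = np.abs(aligned - frame1) > 0.5

print((int(dy), int(dx)), np.argwhere(motion).min(axis=0))
```

The motion mask flags both the object's old and new positions while the ego-motion of the entire background cancels out, which is the core robustness property the paragraph above refers to.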


These projects are funded by the NIH National Eye Institute (NEI), the National Science Foundation (NSF), and the National Aeronautics and Space Administration (NASA).