Computer vision solutions have largely evolved independently of biological visual processing and perception, much as the camera evolved independently of our understanding of the eye's biology and function. Despite many advances in computer vision and imaging techniques, existing solutions remain at a preliminary stage compared with biological vision when simultaneous reliability, efficiency, adaptability, robustness, autonomy, and accuracy are required in real-world scenarios.

In the Vision Computation Lab, we combine a number of versatile and powerful techniques into a general program for determining how biological vision processes visual information, and how to translate this knowledge into computer vision systems that are flexible, robust, efficient, and suitable for real-time processing to meet human needs in real-world settings. We also collaborate closely with experimental neuroscience labs at the University of Utah and at Stanford University to collect and study electrophysiological data from the brain and the retina.

Our two-way approach accelerates progress in both biological and artificial vision. It advances our understanding of the neural computations, circuits, and mechanisms underlying natural visual processing and perception, and it advances the state of the art in computer vision and imaging, whose solutions are typically either narrowly task-specific or overly general, by enabling concurrent autonomy, adaptability, efficiency, and robustness in complex visual tasks within a single computational framework.


To accomplish these objectives, our lab focuses on the following two research tracks in vision science and technology in a concerted manner:


Our lab seeks to understand the computational underpinnings of the brain's information processing through a combined experimental and theoretical approach that includes visual stimulation, multielectrode recording, intracellular recording, current stimulation, and computational modeling. These precise, simultaneous measurements produce large datasets that we use to build mathematical models that quantitatively characterize the visual information conveyed by neuronal responses.
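As a minimal, illustrative sketch of this kind of quantitative characterization (not our actual analysis pipeline), the example below simulates a hypothetical neuron as a linear-nonlinear cascade driven by white-noise stimuli and recovers its linear filter from spiking responses with a spike-triggered average; all parameters and the receptive-field shape are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical neuron: a 1-D Gaussian receptive field (linear stage),
# half-wave rectification (nonlinear stage), and Poisson spiking.
n_pixels, n_frames = 20, 50000
true_rf = np.exp(-0.5 * ((np.arange(n_pixels) - 10) / 2.0) ** 2)
true_rf /= np.linalg.norm(true_rf)

stimulus = rng.standard_normal((n_frames, n_pixels))  # white-noise frames
drive = stimulus @ true_rf                            # linear filtering
rate = np.maximum(drive, 0.0)                         # rectifying nonlinearity
spikes = rng.poisson(rate)                            # Poisson spike counts

# Spike-triggered average: for white-noise input, the spike-weighted mean
# stimulus recovers the linear filter up to scale.
sta = (spikes[:, None] * stimulus).sum(axis=0) / spikes.sum()
sta /= np.linalg.norm(sta)

similarity = abs(sta @ true_rf)  # cosine similarity with the true filter
print(round(similarity, 2))
```

With enough white-noise frames, the estimated filter aligns closely with the true one; in practice, more structured stimuli and richer models (e.g., with feedback or adaptation) are needed, which is precisely what motivates the modeling work above.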

Our research exploits, extends, and develops computational and theoretical techniques from multiple disciplines, including statistical signal processing, statistical machine learning, dimensionality reduction, statistical inference, information theory, and dynamical systems to study the brain's visual functions.

The ongoing studies pursued by our lab include:

  • understanding the nonlinear neural code of visual sensory processing, which enables adaptive, predictive, and efficient information processing in the retina and the visual cortex
  • creating nonstationary modeling frameworks for characterizing time-varying information representing context- or task-dependent covariates
  • identifying a lower dimensional linear or nonlinear subspace representing high-dimensional visual information in a statistically or information-theoretically optimal sense
  • deciphering the computational principles of object motion processing in the presence of eye movements in the retina and the brain
  • developing model-based estimation methods for mapping the spatiotemporal receptive field of cortical neurons during saccadic eye movements
  • investigating the encoding and decoding of visual information during eye movements
  • understanding the computational mechanisms underlying our stable visual perception during eye movements
  • constructing multilayer network models of visual motion representation in spiking neurons
  • extending feedforward computational models and related deep network architectures to implement nonlinear and feedback operations for capturing more complex dynamical processes in visual coding
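To make the subspace-identification study above concrete, here is a minimal, hypothetical sketch: a simulated population of neurons whose responses are driven by a few shared latent signals, with the low-dimensional subspace recovered by principal component analysis via the SVD. The population size, latent dimensionality, and noise level are all assumptions for illustration; the lab's actual work also considers nonlinear and information-theoretic criteria.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: 100 neurons driven by 3 shared latent signals
# plus small private noise, so responses live near a 3-D linear subspace.
n_neurons, n_trials, n_latents = 100, 2000, 3
latents = rng.standard_normal((n_trials, n_latents))
loading = rng.standard_normal((n_latents, n_neurons))
responses = latents @ loading + 0.1 * rng.standard_normal((n_trials, n_neurons))

# PCA via SVD of the mean-centered response matrix: leading singular
# directions span the shared low-dimensional subspace.
centered = responses - responses.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
var_explained = (s ** 2) / (s ** 2).sum()

# The first 3 components should capture nearly all response variance.
print(round(float(var_explained[:3].sum()), 2))
```

When the generative structure is nonlinear, linear methods like this underestimate the shared structure, which motivates the nonlinear and information-theoretically optimal subspace methods listed above.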



By incorporating our knowledge of the computational properties of neural systems, we introduce novel computer vision approaches that combine autonomy, adaptability, efficiency, and robustness in real-world settings. Our current focus is on novel computational schemes and algorithms for visual motion processing, using neuro-inspired models of the brain's motion processing from the retina to the cortex. This approach aims to enable a paradigm shift in intelligent motion computing for robotics and brain-machine interfaces, toward autonomous, adaptive, and robust systems that operate in real-world settings.

More specifically, we develop computational modeling frameworks and related deep network architectures for motion detection, segmentation, and tracking in the presence of observer motion. Despite many advances in motion analysis, techniques for moving observers remain at a preliminary stage compared with those for static observers in terms of reliability, efficiency, robustness, and runtime in real-world scenarios. Our biological visual system, by contrast, performs comparable motion detection and discrimination reliably at every waking moment, compensating for constant eye and head movements. This project will develop a novel, smart motion computation system for mobile observers, such as moving cameras, based on recent findings about the retina's real-time motion computations and circuitry and about changes in visual processing during goal-directed visual attention; the system's performance will be tested in a variety of real-world scenarios.
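The core difficulty the paragraph above describes, separating an object's motion from the observer's own motion, can be illustrated with a deliberately simple, classical baseline (not our neuro-inspired system): estimate the global camera translation by phase correlation, compensate for it, and then difference frames so only independently moving content remains. The frame size, shifts, and object are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def estimate_shift(a, b):
    """Estimate the integer translation taking frame b to frame a via
    phase correlation (peak of the normalized cross-power spectrum)."""
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12
    peak = np.unravel_index(np.argmax(np.fft.ifft2(cross).real), a.shape)
    return tuple(p if p < s // 2 else p - s for p, s in zip(peak, a.shape))

# Synthetic scene: a textured background viewed by a "camera" that shifts
# by (3, 5) pixels between frames, plus one independently moving object.
size = 64
background = rng.standard_normal((size, size))
frame0 = background.copy()
frame1 = np.roll(background, shift=(3, 5), axis=(0, 1))
frame1[20:24, 40:44] += 5.0  # object moving independently of the background

# 1) Estimate the observer's (global) motion.
dy, dx = estimate_shift(frame1, frame0)

# 2) Compensate for it, then difference: residual motion marks the object.
compensated = np.roll(frame0, shift=(dy, dx), axis=(0, 1))
residual = np.abs(frame1 - compensated)
ys, xs = np.where(residual > 2.5)
print((dy, dx), (ys.min(), ys.max(), xs.min(), xs.max()))
```

This translation-only baseline breaks down under rotation, parallax, and non-rigid scenes, which is exactly where retina-inspired motion circuits and attention-dependent processing are expected to help.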

Moreover, we aim to apply the resulting developments to the following artificial vision applications, improving the power, performance, and accuracy of existing solutions by multiple orders of magnitude:

  • visual simultaneous localization, mapping, and tracking in dynamic environments
  • image stabilization and camera motion compensation
  • video compression
  • smart visual sensor design


These projects have been funded by the National Science Foundation (NSF) and the National Aeronautics and Space Administration (NASA).