Research

Overview

In the Vision Computation Lab, we combine a number of versatile and powerful techniques into a general program for determining how biological vision processes visual information, and how to translate this knowledge into computational and artificial vision systems that are robust, flexible, and suitable for real-time processing. We also collaborate closely with experimental neuroscience labs at the University of Utah and at Stanford University to collect and study electrophysiological data from the brain and the retina.

To accomplish these objectives, our lab pursues the following two research tracks in vision science and vision computation in a concerted manner:

VISUAL CODING AND COMPUTATION

Our lab seeks to understand the computational and neural basis of the brain's dynamic information processing through a combined experimental, computational and theoretical approach.

We aim to understand how cognitive and behavioral information is dynamically encoded in sensory response modulations, and how these modulations translate into behavior and perception. We focus on the computational and analytical challenges of encoding and decoding dynamic neural systems that vary on fast timescales, using high-dimensional, sparse neuronal data.

Through collaborations with experimental and systems neuroscience labs, we collect physiological and behavioral data using single-unit and multi-electrode recordings, causal experiments, and psychophysical studies. These comprehensive, simultaneous measurements produce large, high-dimensional datasets that we use to build mathematical models quantitatively characterizing dynamic stimulus-response relationships. We exploit, extend, and develop computational and theoretical frameworks from multiple disciplines, including statistical signal processing, statistical machine learning, dimensionality reduction, statistical inference, information theory, and computational modeling, driven by and tailored to the experimental data.
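
As a concrete illustration of the dimensionality-reduction theme, here is a minimal sketch in Python (NumPy only). It simulates sparse, high-dimensional spike counts whose shared variability is driven by a few latent factors, then recovers a low-dimensional subspace with PCA. All dimensions, rates, and variable names are illustrative assumptions, not our actual analysis pipeline.

    import numpy as np

    # Minimal sketch: recover a low-dimensional subspace from sparse,
    # high-dimensional spike counts with PCA. All dimensions, rates,
    # and names below are illustrative assumptions.
    rng = np.random.default_rng(0)
    n_neurons, n_trials, n_latents = 100, 500, 3

    # A few latent factors drive shared variability across the population.
    latents = rng.normal(size=(n_trials, n_latents))         # trials x latents
    loading = 0.5 * rng.normal(size=(n_latents, n_neurons))  # latents x neurons

    # Sparse Poisson spike counts: low baseline plus latent drive.
    rates = np.exp(-1.0 + latents @ loading)                 # trials x neurons
    counts = rng.poisson(rates)

    # PCA via eigendecomposition of the neuron-by-neuron covariance.
    centered = counts - counts.mean(axis=0)
    cov = centered.T @ centered / (n_trials - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)                   # ascending order
    order = np.argsort(eigvals)[::-1]
    print("variance explained by top 3 PCs:",
          (eigvals[order][:3] / eigvals.sum()).round(3))

    # Project single trials onto the estimated low-dimensional subspace.
    projection = centered @ eigvecs[:, order[:n_latents]]    # trials x 3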

The ongoing studies pursued by our lab include:

  • modeling the dynamic encoding and decoding of visual information during eye movements
  • understanding the computational and neural mechanisms underlying our visuospatial perception during eye movements
  • creating dynamic model frameworks for characterizing time-varying information conveyed by sparse neuronal responses
  • identifying low-dimensional subspaces to develop robust computational models for analyzing high-dimensional, sparse neural data
  • developing model-based estimation methods for mapping the high-dimensional, dynamic spatiotemporal receptive fields of neurons in the visual cortex (a minimal sketch follows this list)
  • deciphering the computational principles of object motion processing in the presence of eye movements in the retina and the brain
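
The receptive-field mapping item above lends itself to a worked example. Below is a minimal sketch in Python (NumPy only) of the classic spike-triggered average (STA) estimator: it simulates a linear-nonlinear-Poisson neuron driven by Gaussian white noise and recovers the neuron's temporal filter from its spikes. The filter shape, rates, and variable names are illustrative assumptions; the sketch shows the baseline technique that model-based estimators build on.

    import numpy as np

    # Minimal sketch: map a temporal receptive field with the
    # spike-triggered average (STA). A linear-nonlinear-Poisson (LNP)
    # neuron is simulated with an assumed biphasic filter; for Gaussian
    # white-noise stimuli the STA is proportional to that filter.
    rng = np.random.default_rng(1)
    n_bins, n_lags = 200_000, 25

    # Assumed ground-truth temporal filter (biphasic kernel).
    t = np.arange(n_lags)
    true_filter = np.sin(2 * np.pi * t / n_lags) * np.exp(-t / 8.0)

    # White-noise stimulus and LNP spike generation.
    stimulus = rng.normal(size=n_bins)
    drive = np.convolve(stimulus, true_filter)[:n_bins]  # causal filtering
    spikes = rng.poisson(np.exp(-2.0 + drive))           # exponential nonlinearity

    # STA: average the stimulus segment preceding each spike.
    sta = np.zeros(n_lags)
    for lag in range(n_lags):
        sta[lag] = spikes[n_lags:] @ stimulus[n_lags - lag : n_bins - lag]
    sta /= spikes[n_lags:].sum()

    corr = np.corrcoef(sta, true_filter)[0, 1]
    print(f"correlation between STA and true filter: {corr:.3f}")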

COMPUTATIONAL AND ARTIFICIAL VISION

In our computational vision research, our goal is to develop algorithmic and modeling solutions that enhance the robustness and flexibility of artificial vision systems by applying the principles and algorithms identified in neural systems to computational applications.

As a computational neuroscience lab working at the interface of brain physiology and engineering, one of our translational goals is to advance the design and implementation of visual prosthetics. Our effort to understand what information is encoded in neural responses, and how to decode the information relevant to visual perception on the timescale of natural behavior, is critical for developing visual and oculomotor prostheses and visual brain-machine interfaces, which aim to restore natural visual behavior.

Moreover, we develop computational modeling frameworks for motion detection, segmentation, and tracking in the presence of observer motion. Inspired by the robustness of human visual perception during eye movements, this research designs visual motion computation algorithms for mobile observers, such as moving cameras, improving the robustness, accuracy, and runtime of these algorithms in real-world scenarios.
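
To make the problem concrete, here is a minimal sketch in Python of motion detection under observer motion, assuming OpenCV (cv2) and NumPy are available. It computes dense optical flow between two frames, approximates camera (ego) motion as the median flow vector, and flags pixels whose residual motion is large. The median-translation ego-motion model, the function name, and all parameter values are simplifying assumptions for illustration, not our algorithm.

    import cv2
    import numpy as np

    # Minimal sketch: detect object motion from a moving camera.
    # Ego-motion is approximated as the median flow vector, a
    # deliberate simplification of a parametric ego-motion model.
    def moving_object_mask(prev_gray, next_gray, threshold=2.0):
        # Farneback dense flow: (pyr_scale, levels, winsize,
        # iterations, poly_n, poly_sigma, flags).
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, next_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)

        # Crude ego-motion estimate: global translation = median flow.
        ego = np.median(flow.reshape(-1, 2), axis=0)

        # Pixels with large residual motion are flagged as moving objects.
        residual = np.linalg.norm(flow - ego, axis=2)
        return residual > threshold

    # Hypothetical usage with two consecutive grayscale frames:
    # prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
    # nxt = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
    # mask = moving_object_mask(prev, nxt)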

These projects are funded by the NIH National Eye Institute (NEI), the National Science Foundation (NSF), and the National Aeronautics and Space Administration (NASA).