Research

Overview

In the Vision Computation Lab, we combine versatile and powerful techniques into a general program for determining how the biological visual system processes visual information, and how to translate this knowledge into computer vision systems that are flexible, robust, efficient, and suited to real-time processing that meets human needs in real-world settings. We also collaborate closely with experimental neuroscience labs at the University of Utah and at Stanford University to collect and study electrophysiological data from the brain and the retina.


To accomplish these objectives, our lab pursues the following two research tracks in vision science and computation in concert:

VISUAL CODING AND COMPUTATION

Our lab seeks to understand the computational and neural basis of the brain's dynamic information processing through a combined experimental, computational, and theoretical approach.

Through collaboration with experimental and systems neuroscience labs, we collect physiological and behavioral data using multi-electrode recording, causal experiments, and psychophysical studies. These comprehensive, simultaneous measurements produce large, high-dimensional datasets that we use to build mathematical models that quantitatively characterize dynamic stimulus-response relationships. We exploit, extend, and develop computational and theoretical frameworks from multiple disciplines, including statistical signal processing, statistical machine learning, dimensionality reduction, statistical inference, information theory, and computational modeling; these frameworks are driven by and tailored to our experimental data.
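
As one concrete illustration of the dimensionality-reduction step, the sketch below applies principal component analysis to simulated population activity generated from a few latent signals; every dimension, name, and number here is invented for illustration and is not drawn from our actual datasets.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical population recording: the activity of 80 neurons in 200 time
# bins, generated from only 3 latent signals plus noise, mimicking the
# low-dimensional structure often reported in real population data.
n_neurons, n_bins, n_latent = 80, 200, 3
latents = np.cumsum(rng.normal(size=(n_latent, n_bins)), axis=1)  # slow drifts
loadings = rng.normal(size=(n_neurons, n_latent))
activity = loadings @ latents + 0.5 * rng.normal(size=(n_neurons, n_bins))

# PCA via singular value decomposition of the mean-centered data matrix.
centered = activity - activity.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
var_explained = S ** 2 / np.sum(S ** 2)

# The first 3 components should dominate; Vt[:3] is the population trajectory
# in the recovered low-dimensional subspace.
print("variance explained by the first 5 PCs:", np.round(var_explained[:5], 3))
```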

We use the resulting data and tools to understand specifically how cognitive and behavioral information is dynamically encoded in sensory response modulations, and how these modulations translate into behavior and perception. Our focus is on the computational and analytical challenges of encoding and decoding dynamic neural systems that vary on fast timescales, using high-dimensional, sparse neuronal data.
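
One minimal example of such an encoding model, under entirely hypothetical assumptions, is a Poisson generalized linear model that links a time-varying stimulus to binned spike counts through a temporal filter. The sketch below simulates data from such a model and refits the filter by maximizing the Poisson log-likelihood; the stimulus, filter shape, and learning rate are all stand-ins for real recordings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a one-dimensional stimulus in 10 ms bins and a neuron's
# binned spike counts. In practice both would come from a recording.
T, lags = 5000, 25                       # time bins, length of temporal filter
stimulus = rng.normal(size=T)

# Design matrix: row t holds the last `lags` stimulus samples, so the model
# can capture the neuron's temporal integration window.
X = np.zeros((T, lags))
for k in range(lags):
    X[k:, k] = stimulus[: T - k]

true_filter = np.exp(-np.arange(lags) / 5.0) * np.sin(np.arange(lags) / 3.0)
rate = np.exp(X @ true_filter - 2.0)     # exponential output nonlinearity
spikes = rng.poisson(rate)               # simulated spike counts

# Fit the filter by gradient ascent on the Poisson log-likelihood
# sum(y * (Xw + b) - exp(Xw + b)), with the offset b fixed for simplicity.
w = np.zeros(lags)
for _ in range(2000):
    w += 0.1 * X.T @ (spikes - np.exp(X @ w - 2.0)) / T

print("correlation between fitted and true filter:",
      round(np.corrcoef(w, true_filter)[0, 1], 3))
```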

The ongoing studies pursued by our lab include:

  • creating dynamic model frameworks for characterizing time-varying information conveyed by sparse neuronal responses
  • identifying low-dimensional subspaces to develop robust computational models for analyzing high-dimensional, sparse neural data
  • developing model-based estimation methods for mapping the high-dimensional spatiotemporal receptive fields of neurons in the visual cortex (a simplified sketch follows this list)
  • modeling the dynamic encoding and decoding of visual information during eye movements
  • understanding the computational and neural mechanisms underlying our visuospatial perception during eye movements
  • developing convolutional neural network-based models and related deep network architectures that can implement dynamic, nonlinear computations where multiple neural pathways are integrated
  • deciphering the computational principles of object motion processing in the retina and the brain in the presence of eye movements
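
For the receptive-field mapping item above, the lab's work uses model-based estimation methods; as a simplified stand-in, the sketch below recovers a simulated neuron's spatiotemporal receptive field with the classical spike-triggered average under a white-noise stimulus. All dimensions, the ground-truth filter, and the simulated spiking model are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical recording: 8x8 white-noise frames shown for T bins, plus the
# neuron's spike count in each bin. Real data would replace all of this.
T, H, W, depth = 20000, 8, 8, 6          # depth = temporal extent of the RF
frames = rng.normal(size=(T, H, W))

# Ground-truth spatiotemporal RF (used only to simulate spikes): a Gaussian
# blob whose amplitude varies over the preceding `depth` frames.
yy, xx = np.mgrid[:H, :W]
blob = np.exp(-((yy - 4) ** 2 + (xx - 3) ** 2) / 4.0)
time_course = np.array([0.1, 0.4, 1.0, 0.4, -0.2, -0.1])
rf = time_course[:, None, None] * blob   # rf[k] weights the frame k+1 bins back

drive = np.zeros(T)
for t in range(depth, T):
    drive[t] = np.sum(frames[t - depth:t] * rf[::-1])
spikes = rng.poisson(np.exp(0.3 * drive - 1.0))

# Spike-triggered average: for each lag, average the frames preceding spikes,
# weighted by spike count. For white noise this recovers the linear RF up to
# a scale factor.
counts = spikes[depth:]
sta = np.array([
    np.tensordot(counts, frames[depth - 1 - lag : T - 1 - lag], axes=(0, 0))
    / counts.sum()
    for lag in range(depth)
])

print("spatial correlation of the STA with the true blob, per lag:",
      np.round([np.corrcoef(sta[k].ravel(), blob.ravel())[0, 1]
                for k in range(depth)], 2))
```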

  

COMPUTATIONAL AND ARTIFICIAL VISION

In our computational vision research, we aspire to develop artificial vision solutions and neural prosthetic devices based on the strategies the retina and the brain use to perform complicated computations, such as eye movement control or visual perception during eye or observer movement.

As a computational neuroscience lab working at the interface of brain physiology and engineering, one of our translational goals is to enable a major advance in the design and implementation of visual prosthetics. Our effort to understand what information is encoded in neural responses, and how to decode the information relevant to visual perception on the actual timescale of behavior, is critical for developing visual prostheses and brain-machine interfaces that aim to restore natural visual behavior.

Moreover, we develop computational model frameworks and related deep network architectures for motion detection, segmentation, and tracking in the presence of observer motion. Building on the robustness of human visual perception during eye movements, this line of work designs visual motion computation algorithms for mobile observers, such as moving cameras, to improve the robustness, accuracy, and runtime of these algorithms in real-world scenarios.
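
As a toy illustration of the moving-camera setting, and not of our actual algorithms, the sketch below compensates for global camera translation with phase correlation before frame differencing, so that an independently moving object stands out. The scene, camera shift, object, and threshold are all synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def global_shift(a, b):
    """Integer (dy, dx) such that np.roll(a, (dy, dx), axis=(0, 1)) ~ b,
    estimated by phase correlation."""
    R = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    corr = np.fft.ifft2(R / (np.abs(R) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    dy -= corr.shape[0] * (dy > corr.shape[0] // 2)   # wrap to signed shifts
    dx -= corr.shape[1] * (dx > corr.shape[1] // 2)
    return dy, dx

# Synthetic scene: textured background, the camera pans by (3, 5) pixels,
# and an independently moving 6x6 object appears in the second frame.
background = rng.normal(size=(128, 128))
prev = background.copy()
curr = np.roll(background, (3, 5), axis=(0, 1))
curr[40:46, 70:76] += 4.0                             # the moving object

# Compensate for camera motion, then difference the aligned frames.
dy, dx = global_shift(prev, curr)
aligned = np.roll(prev, (dy, dx), axis=(0, 1))
residual = np.abs(curr - aligned)
mask = residual > 5 * residual.mean()

print("estimated camera shift:", (dy, dx))
ys, xs = np.nonzero(mask)
print("object bounding box:", ys.min(), ys.max(), xs.min(), xs.max())
```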

We aim to apply the resulting developments to the following artificial vision applications to improve the robustness, dynamics, and accuracy of existing solutions:

  • visual and oculomotor prostheses
  • visual brain-machine interfacing
  • dynamic, robust computer vision algorithms for moving cameras
  • smart visual motion sensor design

 


These projects have been funded by the National Science Foundation (NSF) and the National Aeronautics and Space Administration (NASA).