Proto-object based visual saliency model with a motion-sensitive channel

Jamal Lottier Molin, Alexander F. Russell, Stefan Mihalas, Ernst Niebur, Ralph Etienne-Cummings

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

The human visual system has the inherent capability of using selective attention to rapidly process visual information across visual scenes. Early models of visual saliency are purely feature-based and compute visual attention only for static scenes. To model the human visual system more faithfully, however, it is important to also account for temporal change within the scene when computing visual saliency. We present a biologically plausible model of dynamic visual attention that computes saliency as a function of proto-objects modulated by an independent motion-sensitive channel. This motion-sensitive channel extracts motion information via biologically plausible temporal filters that model simple-cell receptive fields. Using Kullback-Leibler (KL) divergence measurements, we show that this model predicts eye fixations significantly better than chance. Furthermore, in our experiments the model outperforms the Itti (2005) dynamic saliency model and does not differ significantly in performance from the graph-based dynamic visual saliency model.
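To make the pipeline described above concrete, the sketch below illustrates, under stated assumptions, one way a static proto-object saliency map could be modulated by a motion channel built from a causal temporal filter, and how fixation prediction could be scored with a KL-divergence metric. This is not the authors' implementation: the exponential filter shape, the multiplicative modulation rule, and the histogram-based KL score (saliency at human fixations versus uniformly sampled control locations) are illustrative assumptions.

```python
"""Minimal sketch (assumed details, not the paper's code): motion-modulated
proto-object saliency and a KL-divergence fixation-prediction score."""
import numpy as np

def motion_channel(frames, tau=2.0):
    # Causal exponential temporal filter (a crude stand-in for a simple-cell
    # temporal receptive field); motion energy = |current frame - filtered past|.
    weights = np.exp(-np.arange(len(frames) - 1, -1, -1) / tau)
    weights /= weights.sum()
    low_pass = np.tensordot(weights, frames, axes=(0, 0))
    return np.abs(frames[-1] - low_pass)

def modulated_saliency(proto_saliency, motion, alpha=1.0):
    # Assumed modulation rule: multiplicative gain on the proto-object map.
    motion = motion / (motion.max() + 1e-12)
    return proto_saliency * (1.0 + alpha * motion)

def kl_fixation_score(saliency, fixations, n_random=1000, bins=20, seed=None):
    # KL divergence between saliency histograms sampled at human fixations
    # and at uniformly random control locations (higher = better prediction).
    rng = np.random.default_rng(seed)
    h, w = saliency.shape
    sal_fix = saliency[fixations[:, 0], fixations[:, 1]]
    sal_rand = saliency[rng.integers(0, h, n_random), rng.integers(0, w, n_random)]
    edges = np.linspace(saliency.min(), saliency.max(), bins + 1)
    p = np.histogram(sal_fix, edges)[0] + 1e-9
    q = np.histogram(sal_rand, edges)[0] + 1e-9
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.random((5, 64, 64))         # synthetic video clip
    proto = rng.random((64, 64))             # stand-in proto-object saliency map
    sal = modulated_saliency(proto, motion_channel(frames))
    fix = rng.integers(0, 64, size=(50, 2))  # synthetic fixation coordinates
    print("KL score:", kl_fixation_score(sal, fix, seed=1))
```

In practice the proto-object map would come from the grouping mechanism of the proto-object saliency model and the fixations from eye-tracking data; the synthetic arrays here only make the sketch self-contained.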

Original language: English (US)
Title of host publication: 2013 IEEE Biomedical Circuits and Systems Conference, BioCAS 2013
Pages: 25-28
Number of pages: 4
DOIs
State: Published - Dec 1 2013
Event: 2013 IEEE Biomedical Circuits and Systems Conference, BioCAS 2013 - Rotterdam, Netherlands
Duration: Oct 31 2013 - Nov 2 2013

Publication series

Name: 2013 IEEE Biomedical Circuits and Systems Conference, BioCAS 2013

Other

Other: 2013 IEEE Biomedical Circuits and Systems Conference, BioCAS 2013
Country/Territory: Netherlands
City: Rotterdam
Period: 10/31/13 - 11/2/13

ASJC Scopus subject areas

  • Hardware and Architecture
  • Biomedical Engineering
  • Electrical and Electronic Engineering
