Tracker fusion for robustness in visual feature tracking

Kentaro Toyama, Gregory Hager

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Task-directed vision obviates the need for general image comprehension by focusing attention only on features that contribute useful information to the task at hand. Window-based visual tracking fits into this paradigm, as motion tracking becomes a problem of local search in a small image region. While the gains in speed from such methods allow for real-time feature tracking on off-the-shelf hardware, they lose robustness by giving up a more global perspective: window-based feature trackers are prone to problems such as distraction, illumination change, and fast feature motion. To add robustness to feature tracking, we present 'tracker fusion,' in which multiple trackers simultaneously track the same feature while watching for various problematic circumstances, and their estimates are combined in a meaningful way. By categorizing the different situations in which mistracking occurs, finding appropriate trackers to deal with each such situation, and fusing the resulting trackers together, we construct robust feature trackers that maintain the speed of simple window-based trackers yet afford greater resistance to mistracking.
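
The abstract describes an architecture rather than a single algorithm, but its two ingredients — window-based local search and confidence-weighted fusion of several trackers — can be sketched in code. The sketch below is illustrative only and is not the authors' implementation: the class names, the SSD residual-to-confidence heuristic, the constant-velocity fallback, and all parameters (search_radius, min_conf) are assumptions made for the example.

import numpy as np

class SSDTracker:
    # Window-based tracker: exhaustive sum-of-squared-differences (SSD)
    # search in a small region around the previous position -- the
    # "local search in a small image region" of the abstract.
    def __init__(self, template, search_radius=8):
        self.template = template.astype(float)
        self.radius = search_radius

    def track(self, frame, prev_pos):
        h, w = self.template.shape
        best_pos, best_ssd = prev_pos, float("inf")
        for dy in range(-self.radius, self.radius + 1):
            for dx in range(-self.radius, self.radius + 1):
                y, x = prev_pos[0] + dy, prev_pos[1] + dx
                if y < 0 or x < 0 or y + h > frame.shape[0] or x + w > frame.shape[1]:
                    continue
                patch = frame[y:y + h, x:x + w].astype(float)
                ssd = float(np.sum((patch - self.template) ** 2))
                if ssd < best_ssd:
                    best_ssd, best_pos = ssd, (y, x)
        # Assumed heuristic: a large residual hints at occlusion, distraction,
        # or an illumination change, so this tracker's confidence drops.
        conf = 1.0 / (1.0 + best_ssd / (h * w * 255.0))
        return best_pos, conf

class MotionPredictor:
    # Constant-velocity predictor: blind to image content, so it is immune
    # to distraction and lighting changes, but it drifts; a fallback tracker.
    def __init__(self):
        self.prev = None

    def track(self, frame, prev_pos):
        if self.prev is None:
            vy, vx = 0, 0
        else:
            vy, vx = prev_pos[0] - self.prev[0], prev_pos[1] - self.prev[1]
        self.prev = prev_pos
        return (prev_pos[0] + vy, prev_pos[1] + vx), 0.3  # modest fixed confidence

def fuse(estimates, min_conf=0.1):
    # Confidence-weighted average of the surviving estimates; trackers that
    # report failure (confidence below min_conf) are excluded entirely.
    kept = [(p, c) for p, c in estimates if c >= min_conf]
    if not kept:
        return None  # every tracker failed; the caller must reacquire the feature
    total = sum(c for _, c in kept)
    y = sum(p[0] * c for p, c in kept) / total
    x = sum(p[1] * c for p, c in kept) / total
    return int(round(y)), int(round(x))

# Tiny demo on synthetic data: a textured frame shifted by (2, 1).
rng = np.random.default_rng(0)
frame0 = rng.integers(0, 255, (64, 64)).astype(np.uint8)
pos = (20, 20)
template = frame0[20:28, 20:28]
frame1 = np.roll(frame0, (2, 1), axis=(0, 1))

trackers = [SSDTracker(template), MotionPredictor()]
estimates = [t.track(frame1, pos) for t in trackers]
print(fuse(estimates))  # roughly (22, 21): the confident SSD tracker dominates

In the paper's framing, each constituent tracker is chosen to cover a specific mistracking situation (distraction, illumination change, fast motion), and the fusion step is what preserves the speed of the individual window-based searches while adding robustness.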

Original language: English (US)
Title of host publication: Proceedings of SPIE - The International Society for Optical Engineering
Editors: Paul S. Schenker, Gerard T. McKee
Pages: 38-49
Number of pages: 12
Volume: 2589
State: Published - 1995
Externally published: Yes
Event: Sensor Fusion and Networked Robotics VIII - Philadelphia, PA, USA
Duration: Oct 23 1995 - Oct 24 1995

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Condensed Matter Physics

Cite this

Toyama, K., & Hager, G. (1995). Tracker fusion for robustness in visual feature tracking. In P. S. Schenker, & G. T. McKee (Eds.), Proceedings of SPIE - The International Society for Optical Engineering (Vol. 2589, pp. 38-49)

@inproceedings{d3019e9751f2405986b571d18153ad62,
title = "Tracker fusion for robustness in visual feature tracking",
author = "Kentaro Toyama and Gregory Hager",
year = "1995",
language = "English (US)",
isbn = "0819419532",
volume = "2589",
pages = "38--49",
editor = "Schenker, {Paul S.} and McKee, {Gerard T.}",
booktitle = "Proceedings of SPIE - The International Society for Optical Engineering",

}

TY  - GEN
T1  - Tracker fusion for robustness in visual feature tracking
AU  - Toyama, Kentaro
AU  - Hager, Gregory
PY  - 1995
Y1  - 1995
UR  - http://www.scopus.com/inward/record.url?scp=0029519286&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=0029519286&partnerID=8YFLogxK
M3  - Conference contribution
AN  - SCOPUS:0029519286
SN  - 0819419532
SN  - 9780819419538
VL  - 2589
SP  - 38
EP  - 49
BT  - Proceedings of SPIE - The International Society for Optical Engineering
A2  - Schenker, Paul S.
A2  - McKee, Gerard T.
ER  -