Robot navigation using image sequences

Christopher Rasmussen, Gregory D. Hager

Research output: Contribution to conference › Paper › peer-review


Abstract

We describe a framework for robot navigation that exploits the continuity of image sequences. Tracked visual features both guide the robot and provide predictive information about subsequent features to track. Our hypothesis is that image-based techniques will allow accurate motion without a precise geometric model of the world, while using predictive information will add speed and robustness. A basic component of our framework is called a scene, which is the set of image features stable over some segment of motion. When the scene changes, it is appended to a stored sequence. As the robot moves, correspondences and dissimilarities between current, remembered, and expected scenes provide cues to join and split scene sequences, forming a map-like directed graph. Visual servoing on features in successive scenes is used to traverse a path between robot and goal map locations. In our framework, a human guide serves as a scene recognition oracle during a map-learning phase; thereafter, assuming a known starting position, the robot can independently determine its location without general scene recognition ability. A prototype implementation of this framework uses color patches, sum-of-squared-differences (SSD) subimages, or image projections of rectangles as features.
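To make the scene and scene-graph ideas above concrete, the following minimal Python sketch illustrates one way they could be represented: a Scene as a set of tracked patch features, an SSD comparison between subimages, a crude scene-correspondence test, and a directed graph whose edges link scenes that follow one another or that have been joined after a match. All names, the matching criterion, and the threshold value are illustrative assumptions, not the authors' implementation.

import numpy as np
from dataclasses import dataclass, field

@dataclass
class Feature:
    """A tracked feature: a small image patch plus its image position (illustrative)."""
    patch: np.ndarray            # grayscale subimage used for SSD comparison
    position: tuple              # (row, col) of the patch center

@dataclass
class Scene:
    """A set of features that remained stable over one segment of motion."""
    features: list = field(default_factory=list)

def ssd(patch_a, patch_b):
    """Sum-of-squared-differences between two equally sized patches."""
    diff = patch_a.astype(float) - patch_b.astype(float)
    return float(np.sum(diff * diff))

def scenes_match(current, remembered, threshold=500.0):
    """Illustrative correspondence test: every remembered feature must have a
    low-SSD counterpart somewhere in the current scene."""
    for f_mem in remembered.features:
        best = min((ssd(f_mem.patch, f_cur.patch) for f_cur in current.features),
                   default=float("inf"))
        if best > threshold:
            return False
    return True

class SceneMap:
    """Directed graph over scenes; edges record traversable transitions."""
    def __init__(self):
        self.scenes = []     # node id -> Scene
        self.edges = {}      # node id -> set of successor node ids

    def append(self, scene, prev_id=None):
        """Append a new scene to a sequence, linking it after prev_id."""
        node_id = len(self.scenes)
        self.scenes.append(scene)
        self.edges[node_id] = set()
        if prev_id is not None:
            self.edges[prev_id].add(node_id)
        return node_id

    def join(self, from_id, to_id):
        """Join sequences when a current scene matches a remembered one."""
        self.edges[from_id].add(to_id)

In use, new scenes would be appended as the robot moves, and scenes_match against remembered scenes would trigger join edges, producing the map-like directed graph described in the abstract; the fixed SSD threshold here stands in for whatever matching criterion an actual system would use.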

Original language: English (US)
Pages: 938-943
Number of pages: 6
State: Published - Dec 1 1996
Externally published: Yes
Event: Proceedings of the 1996 13th National Conference on Artificial Intelligence. Part 2 (of 2) - Portland, OR, USA
Duration: Aug 4 1996 - Aug 8 1996


ASJC Scopus subject areas

  • Software
  • Artificial Intelligence

