V-GPS(SLAM): Vision-based inertial system for mobile robots

Darius Burschka, Gregory D. Hager

Research output: Contribution to journal › Conference article › peer-review


Abstract

In this paper we present a novel vision-based approach to Simultaneous Localization and Mapping (SLAM). We discuss it in the context of estimating the 6 DoF pose of a mobile robot from the perception of a monocular camera using a minimum set of three natural landmarks. In contrast to our previously presented V-GPS system, which navigates based on a set of known landmarks, the current approach allows the required information about the landmarks to be estimated on-the-fly during the exploration of an unknown environment. The method is applicable to indoor and outdoor environments. The calculation is done from the image positions of a set of natural landmarks that are tracked in a continuous video stream at frame rate. An automatic hand-off process updates the landmark set to compensate for occlusions and for reconstruction accuracy that decreases with the distance to an imaged landmark. A generic sensor model allows the system to be configured with a variety of physical sensors, including monocular perspective cameras, omni-directional cameras, and laser range finders.
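The abstract describes recovering a 6 DoF pose from the bearings of a minimum set of three natural landmarks through a generic sensor model. The sketch below is not the authors' V-GPS(SLAM) implementation; it only illustrates, under assumed function and variable names and with landmark positions taken as already known (rather than reconstructed on-the-fly), how such a pose could be estimated from unit viewing directions by nonlinear least squares.

```python
# Hedged sketch only: illustrative names, not the authors' algorithm or API.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation


def sensor_model(points_cam):
    """Generic sensor model: map 3D points in the camera frame to unit viewing
    directions (covers perspective as well as omni-directional cameras)."""
    return points_cam / np.linalg.norm(points_cam, axis=1, keepdims=True)


def estimate_pose(landmarks_w, measured_dirs, pose0=None):
    """Estimate a 6 DoF pose (rotation vector + translation, world -> camera)
    aligning predicted and measured landmark directions."""
    if pose0 is None:
        pose0 = np.zeros(6)

    def residuals(pose):
        rvec, t = pose[:3], pose[3:]
        points_cam = Rotation.from_rotvec(rvec).apply(landmarks_w) + t
        return (sensor_model(points_cam) - measured_dirs).ravel()

    return least_squares(residuals, pose0).x


if __name__ == "__main__":
    # Three landmarks (the minimum set mentioned in the abstract) and a
    # synthetic ground-truth pose; in the real system the landmark positions
    # would themselves be reconstructed while tracking.
    landmarks = np.array([[2.0, 0.5, 5.0], [-1.0, 1.0, 6.0], [0.5, -1.5, 4.0]])
    true_pose = np.array([0.05, -0.02, 0.1, 0.3, -0.1, 0.2])
    dirs = sensor_model(
        Rotation.from_rotvec(true_pose[:3]).apply(landmarks) + true_pose[3:])
    print(estimate_pose(landmarks, dirs))  # should be close to true_pose
```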

Original language: English (US)
Pages (from-to): 409-415
Number of pages: 7
Journal: Proceedings - IEEE International Conference on Robotics and Automation
Volume: 2004
Issue number: 1
DOIs
State: Published - 2004
Event: Proceedings - 2004 IEEE International Conference on Robotics and Automation - New Orleans, LA, United States
Duration: Apr 26, 2004 - May 1, 2004

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering
