On specifying and performing visual tasks with qualitative object models

Gregory D. Hager, Zachary Dodds

Research output: Contribution to journal › Article

Abstract

Since its genesis, vision-based control has aimed to develop general-purpose, high-accuracy systems for manipulating objects. While much of the scientific and technological infrastructure needed to accomplish this aim is now in place, several stumbling blocks remain. One continuing issue is accuracy and its relationship to system calibration. We describe a generative task structure for vision-based control of motion that admits a simple, geometric approach to task specification. At the same time, this approach allows us to state precisely what types of miscalibration lead to errors in task performance. A second hurdle has been the programmability of hand-eye systems. We argue, however, that a structured object representation sufficient for flexible hand-eye coordination is achievable. The result is a high-level, object-centered language for expressing hand-eye tasks.

Original language: English (US)
Pages (from-to): 636-643
Number of pages: 8
Journal: Proceedings - IEEE International Conference on Robotics and Automation
Volume: 1
DOIs
State: Published - Jan 1 2000

ASJC Scopus subject areas

  • Software
  • Control and Systems Engineering
  • Artificial Intelligence
  • Electrical and Electronic Engineering

