A moving grid framework for geometric deformable models

Xiao Han, Chenyang Xu, Jerry L. Prince

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Geometric deformable models based on the level set method have become very popular in the last decade. To overcome an inherent limitation in accuracy while maintaining computational efficiency, adaptive grid techniques using local grid refinement have been developed for use with these models. This strategy, however, requires a very complex data structure, yields large numbers of contour points, and is inconsistent with the implementation of topology-preserving geometric deformable models (TGDMs). In this paper, we investigate the use of an alternative adaptive grid technique, the moving grid method, with geometric deformable models. In addition to the development of a consistent moving grid geometric deformable model framework, our main contributions include the introduction of a new grid nondegeneracy constraint, the design of a new grid adaptation criterion, and the development of novel numerical methods and an efficient implementation scheme. The overall method is simpler to implement than grid refinement, requiring no large, complex, hierarchical data structures. It also offers the added benefit of automatically reducing the number of contour vertices in the final results. After presenting the algorithm, we demonstrate its performance using both simulated and real images.
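For readers unfamiliar with the underlying machinery: geometric deformable models evolve a contour implicitly as the zero level set of a function φ governed by a PDE of the form φ_t + F|∇φ| = 0. The sketch below illustrates only this standard level set evolution on a fixed uniform grid with a first-order Godunov upwind scheme; it is not the paper's moving grid method, and all names and parameters are illustrative.

```python
import numpy as np

def evolve(phi, F, h, dt, n_steps):
    """Evolve phi_t + F * |grad phi| = 0 for constant speed F > 0,
    using the first-order Godunov upwind scheme on a uniform grid."""
    for _ in range(n_steps):
        dxm = (phi - np.roll(phi, 1, axis=0)) / h    # backward x-difference
        dxp = (np.roll(phi, -1, axis=0) - phi) / h   # forward x-difference
        dym = (phi - np.roll(phi, 1, axis=1)) / h
        dyp = (np.roll(phi, -1, axis=1) - phi) / h
        # Godunov upwind gradient magnitude for F > 0
        grad = np.sqrt(np.maximum(dxm, 0.0) ** 2 + np.minimum(dxp, 0.0) ** 2
                       + np.maximum(dym, 0.0) ** 2 + np.minimum(dyp, 0.0) ** 2)
        phi = phi - dt * F * grad
    return phi

# Initial contour: circle of radius 0.5, embedded as a signed
# distance function on the square [-1, 1]^2.
n = 129
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi0 = np.sqrt(X ** 2 + Y ** 2) - 0.5

h = x[1] - x[0]
dt = 0.2 * h                       # CFL-stable step for F = 1
phi = evolve(phi0, F=1.0, h=h, dt=dt, n_steps=50)

# Under unit outward speed the circle grows by F * (50 * dt) = 0.15625,
# so the zero level set should sit near radius 0.65625.
row = phi[n // 2:, n // 2]         # phi along the positive x-axis (y = 0)
k = int(np.argmax(row >= 0.0))     # first grid point outside the contour
r = x[n // 2:]
radius = r[k - 1] - row[k - 1] * (r[k] - r[k - 1]) / (row[k] - row[k - 1])
```

On a uniform grid the accuracy of the recovered contour is tied to the grid spacing h; the adaptive and moving grid techniques discussed in the abstract aim to improve this accuracy near the contour without refining the grid everywhere.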

Original language: English (US)
Pages (from-to): 63-79
Number of pages: 17
Journal: International Journal of Computer Vision
Volume: 84
Issue number: 1
DOIs
State: Published - Aug 2009

Keywords

  • Adaptive grid method
  • Deformation moving grid
  • Geometric deformable model
  • Level set method
  • Topology preservation

ASJC Scopus subject areas

  • Software
  • Computer Vision and Pattern Recognition
  • Artificial Intelligence
