MetaMorphs: Deformable shape and texture models

Xiaolei Huang, Dimitris Metaxas, Ting Chen

Research output: Contribution to journal › Conference article › peer-review

80 Scopus citations


We present a new class of deformable models, MetaMorphs, which consist of both shape and interior texture. The model deformations are derived from both boundary and region information in a common variational framework. This framework represents a generalization of previous model-based segmentation approaches. The shapes of the new models are represented implicitly as "images" in the higher dimensional space of distance transforms. The interior textures are captured using a nonparametric kernel-based approximation of the intensity probability density functions (p.d.f.s) inside the models. The deformations that MetaMorph models can undergo are defined using a space warping technique - the cubic B-spline based Free Form Deformations (FFD). When using the models for boundary finding in images, we derive the model dynamics from an energy functional consisting of both edge energy terms and intensity/texture energy terms. This way, the models deform under the influence of forces derived from both boundary and regional information. The proposed MetaMorph deformable models converge efficiently, have a large attraction range, and are robust to image noise and inhomogeneities. Various examples of finding object boundaries in noisy images with complex textures demonstrate the potential of the proposed technique.
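To make the abstract's two core ingredients concrete, the following is a minimal pure-Python sketch of (a) the implicit shape representation as a signed distance map and (b) the nonparametric kernel-based estimate of the intensity p.d.f. inside the model. All names, the toy image, and the brute-force distance computation are illustrative assumptions for exposition; they are not the authors' implementation, which additionally couples these terms to FFD-driven model dynamics.

```python
import math

# Hypothetical toy image and model mask (illustrative only, not from the paper).
W = H = 9
# Synthetic intensities: a bright disk on a darker, mildly varying background.
image = [[0.9 if (x - 4) ** 2 + (y - 4) ** 2 <= 6 else 0.1 + 0.02 * ((x + y) % 3)
          for x in range(W)] for y in range(H)]
# Current model interior: a 3x3 square (deliberately smaller than the true object).
inside = [[1 if 3 <= x <= 5 and 3 <= y <= 5 else 0 for x in range(W)] for y in range(H)]

def signed_distance(inside):
    """Brute-force signed distance map: negative inside the model, positive
    outside; the zero level set is the model boundary.  This is the sense in
    which the shape is an 'image' in the space of distance transforms."""
    pts_in = [(x, y) for y in range(H) for x in range(W) if inside[y][x]]
    pts_out = [(x, y) for y in range(H) for x in range(W) if not inside[y][x]]
    phi = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            others = pts_out if inside[y][x] else pts_in
            d = min(math.hypot(x - ox, y - oy) for ox, oy in others)
            phi[y][x] = -d if inside[y][x] else d
    return phi

def interior_pdf(image, inside, sigma=0.05):
    """Nonparametric (Gaussian-kernel) estimate of the intensity p.d.f. from
    the pixels currently inside the model; sigma is an assumed bandwidth."""
    samples = [image[y][x] for y in range(H) for x in range(W) if inside[y][x]]
    n = len(samples)
    def p(i):
        return sum(math.exp(-0.5 * ((i - s) / sigma) ** 2)
                   for s in samples) / (n * sigma * math.sqrt(2 * math.pi))
    return p

phi = signed_distance(inside)
p = interior_pdf(image, inside)
# The interior model assigns high likelihood to the bright object intensity,
# so region forces would pull the boundary outward over bright pixels.
assert p(0.9) > p(0.1)
# The distance map is negative inside the model and positive outside it.
assert phi[4][4] < 0 < phi[0][0]
```

In the full method, an evolving FFD warps this distance map while the region energy compares each candidate pixel's intensity against the estimated interior p.d.f., which is what gives the models their large attraction range.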

Original language: English (US)
Pages (from-to): I496-I503
Journal: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
State: Published - 2004
Event: 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2004 - Washington, DC, United States
Duration: Jun 27, 2004 - Jul 2, 2004

All Science Journal Classification (ASJC) codes

  • Software
  • Computer Vision and Pattern Recognition

