Automatic joint classification and segmentation of whole cell 3D images

Rajesh Narasimha, Hua Ouyang, Alexander Gray, Steven W. McLaughlin, Sriram Subramaniam

Research output: Contribution to journal › Article

Abstract

We present a machine learning tool for automatic texton-based joint classification and segmentation of mitochondria in MNT-1 cells imaged using ion-abrasion scanning electron microscopy (IA-SEM). For diagnosing signatures that may be unique to cellular states such as cancer, automatic tools with minimal user intervention need to be developed for the analysis and mining of high-throughput data from these large-volume data sets (typically ∼2 GB/cell). Challenges for such a tool in 3D electron microscopy arise from the low contrast and signal-to-noise ratios (SNR) inherent to biological imaging. Our approach is based on block-wise classification of images into a trained list of regions. Given manually labeled images, our goal is to learn models that can localize novel instances of the regions in test datasets. Since datasets obtained using electron microscopes are intrinsically noisy, we improve the SNR of the data prior to automatic segmentation by applying a 2D texture-preserving filter to each slice of the 3D dataset. We investigate texton-based region features in this work. Classification is performed using a k-nearest neighbor (k-NN) classifier, support vector machines (SVMs), adaptive boosting (AdaBoost), and histogram matching with a NN classifier. In addition, we study the computational complexity vs. segmentation accuracy tradeoff of these classifiers. Segmentation results demonstrate that our approach, using minimal training data, performs close to semi-automatic methods based on the variational level-set method and to manual segmentation carried out by an experienced user. Using our method, which we show to have minimal user intervention and high classification accuracy, we investigate quantitative parameters such as the volume of the cytoplasm occupied by mitochondria, the difference between the surface areas of the inner and outer membranes, and the mean mitochondrial width, quantities potentially relevant to distinguishing cancer cells from normal cells. To test the accuracy of our approach, these quantities are compared against manually computed counterparts. We also demonstrate the extension of these methods to the segmentation of 3D images obtained using electron tomography.
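The abstract describes block-wise classification: each image block is reduced to texture features, and a k-NN classifier assigns each block to a trained region class. As an illustration only — the paper's actual texton features and texture-preserving filter are not reproduced here — a minimal sketch of that pipeline might look like the following, where the per-block mean/std "features" and all function names are simplified placeholders, not the authors' code:

```python
import numpy as np

def block_features(image, block=8):
    """Split a 2D image into non-overlapping blocks and compute simple
    texture statistics (mean, std) per block as stand-in features."""
    h, w = image.shape
    feats, coords = [], []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = image[i:i + block, j:j + block]
            feats.append([patch.mean(), patch.std()])
            coords.append((i, j))
    return np.array(feats), coords

def knn_classify(train_feats, train_labels, test_feats, k=3):
    """Label each test block by majority vote among its k nearest
    training blocks in feature space (Euclidean distance)."""
    preds = []
    for f in test_feats:
        dists = np.linalg.norm(train_feats - f, axis=1)
        nearest = train_labels[np.argsort(dists)[:k]]
        preds.append(int(np.argmax(np.bincount(nearest))))
    return np.array(preds)
```

A block-wise labeling like this yields a coarse segmentation directly, since each predicted label maps back to its block's coordinates in the image; the paper compares several classifiers (k-NN, SVM, AdaBoost) in exactly this role, trading computational cost against segmentation accuracy.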

Original language: English (US)
Pages (from-to): 1067-1079
Number of pages: 13
Journal: Pattern Recognition
Volume: 42
Issue number: 6
DOI: https://doi.org/10.1016/j.patcog.2008.08.009
State: Published - Jun 2009
Externally published: Yes

Fingerprint

  • Mitochondria
  • Classifiers
  • Signal to noise ratio
  • Adaptive boosting
  • Abrasion
  • Electron microscopy
  • Tomography
  • Support vector machines
  • Learning systems
  • Computational complexity
  • Electron microscopes
  • Textures
  • Cells
  • Throughput
  • Membranes
  • Imaging techniques
  • Scanning electron microscopy
  • Electrons
  • Ions

Keywords

  • Automated techniques
  • Cancer detection
  • Classification
  • Machine learning
  • Mitochondria
  • Segmentation
  • Texture features

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
  • Signal Processing

Cite this

Narasimha, R., Ouyang, H., Gray, A., McLaughlin, S. W., & Subramaniam, S. (2009). Automatic joint classification and segmentation of whole cell 3D images. Pattern Recognition, 42(6), 1067-1079. https://doi.org/10.1016/j.patcog.2008.08.009

@article{fdadf44b0dc44fecb5c6b75920981a72,
title = "Automatic joint classification and segmentation of whole cell 3D images",
abstract = "We present a machine learning tool for automatic texton-based joint classification and segmentation of mitochondria in MNT-1 cells imaged using ion-abrasion scanning electron microscopy (IA-SEM). For diagnosing signatures that may be unique to cellular states such as cancer, automatic tools with minimal user intervention need to be developed for analysis and mining of high-throughput data from these large volume data sets (typically ∼ 2 GB / cell). Challenges for such a tool in 3D electron microscopy arise due to low contrast and signal-to-noise ratios (SNR) inherent to biological imaging. Our approach is based on block-wise classification of images into a trained list of regions. Given manually labeled images, our goal is to learn models that can localize novel instances of the regions in test datasets. Since datasets obtained using electron microscopes are intrinsically noisy, we improve the SNR of the data for automatic segmentation by implementing a 2D texture-preserving filter on each slice of the 3D dataset. We investigate texton-based region features in this work. Classification is performed by k-nearest neighbor (k-NN) classifier, support vector machines (SVMs), adaptive boosting (AdaBoost) and histogram matching using a NN classifier. In addition, we study the computational complexity vs. segmentation accuracy tradeoff of these classifiers. Segmentation results demonstrate that our approach using minimal training data performs close to semi-automatic methods using the variational level-set method and manual segmentation carried out by an experienced user. Using our method, which we show to have minimal user intervention and high classification accuracy, we investigate quantitative parameters such as volume of the cytoplasm occupied by mitochondria, differences between the surface area of inner and outer membranes and mean mitochondrial width which are quantities potentially relevant to distinguishing cancer cells from normal cells. To test the accuracy of our approach, these quantities are compared against manually computed counterparts. We also demonstrate extension of these methods to segment 3D images obtained using electron tomography.",
keywords = "Automated techniques, Cancer detection, Classification, Machine learning, Mitochondria, Segmentation, Texture features",
author = "Rajesh Narasimha and Hua Ouyang and Alexander Gray and McLaughlin, {Steven W.} and Sriram Subramaniam",
year = "2009",
month = "6",
doi = "10.1016/j.patcog.2008.08.009",
language = "English (US)",
volume = "42",
pages = "1067--1079",
journal = "Pattern Recognition",
issn = "0031-3203",
publisher = "Elsevier Limited",
number = "6",

}
