Robust formant tracking in noise

Ian C. Bruce, Neel V. Karkhanis, Eric D. Young, Murray B. Sachs

Research output: Contribution to journal › Article › peer-review

Abstract

While many algorithms exist for accurate extraction of formant frequencies from a speech waveform, these algorithms are not typically shown to be robust in the presence of highly transient background noise such as competing speech waveforms. Preliminary results are presented from an algorithm using time-varying adaptive filters that appears to be robust in the presence of white Gaussian noise or a single competing speaker over a large range of signal-to-noise ratios (quiet to -6 dB). Use of a synthesized sentence, for which the actual formant frequencies are known, permits quantitative assessment of the algorithm's accuracy as a function of signal-to-noise ratio.
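The paper's own time-varying adaptive-filter algorithm is not detailed in this abstract. As a hedged illustration of the evaluation idea it describes (tracking formants of a synthesized signal whose true formant frequencies are known), the sketch below uses a conventional LPC root-finding formant estimator, not the authors' method: a signal is synthesized through resonators at known formant frequencies, and the estimator is checked against those ground-truth values. All function names, the sampling rate, and the formant values are illustrative assumptions.

```python
import numpy as np

def resonator(x, f, fs, r=0.97):
    """Filter x through a two-pole resonator centred at f Hz (pole radius r)."""
    w = 2 * np.pi * f / fs
    a1, a2 = 2 * r * np.cos(w), -r * r
    y = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        y[n] = x[n]
        if n >= 1:
            y[n] += a1 * y[n - 1]
        if n >= 2:
            y[n] += a2 * y[n - 2]
    return y

def lpc(x, order):
    """Autocorrelation-method LPC coefficients via the Levinson-Durbin recursion."""
    n = len(x)
    r = np.correlate(x, x, mode="full")[n - 1 : n + order]
    a = np.zeros(order + 1)
    a[0], e = 1.0, r[0]
    for i in range(1, order + 1):
        k = -np.dot(a[:i], r[i:0:-1]) / e          # reflection coefficient
        a[: i + 1] = a[: i + 1] + k * a[: i + 1][::-1]
        e *= 1.0 - k * k                            # update prediction error
    return a

def estimate_formants(x, fs, order):
    """Return candidate formant frequencies from the angles of the LPC pole pairs."""
    roots = [z for z in np.roots(lpc(x, order)) if z.imag > 0.01]
    return sorted(np.angle(z) * fs / (2 * np.pi) for z in roots)

# Synthesize a vowel-like signal with known formants (ground truth, as in the
# abstract's synthesized-sentence evaluation), then estimate and compare.
fs = 8000
true_formants = [700, 1220, 2600]       # illustrative /a/-like values, in Hz
x = np.zeros(512)
x[0] = 1.0                              # unit-impulse excitation of the cascade
for f in true_formants:
    x = resonator(x, f, fs)

est = estimate_formants(x, fs, order=2 * len(true_formants))
print([round(f) for f in est])          # close to [700, 1220, 2600]
```

Because the true formants are fixed by construction, estimation error can be reported directly; the paper extends this idea by adding noise or a competing talker at controlled signal-to-noise ratios and measuring accuracy as a function of SNR.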

Original language: English (US)
Pages (from-to): 281-284
Number of pages: 4
Journal: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 1
DOIs
State: Published - 2002

ASJC Scopus subject areas

  • Software
  • Signal Processing
  • Electrical and Electronic Engineering
