Part of the dissertation Pitch of Voiced Speech in the Short-Time Fourier Transform: Algorithms, Ground Truths, and Evaluation Methods.
© 2020, Bastian Bechtold, Jade Hochschule & Carl von Ossietzky Universität Oldenburg, Germany.

A Replication Dataset for Fundamental Frequency Estimation

Estimating the fundamental frequency of speech remains an active area of research, with varied applications in speech recognition, speaker identification, and speech compression. A vast number of algorithms for estimating this quantity have been proposed over the years, and a number of speech and noise corpora have been developed for evaluating their performance. The present dataset contains estimated fundamental frequency tracks of 25 algorithms on six speech corpora, mixed with noise from two noise corpora at nine signal-to-noise ratios between -20 and 20 dB SNR, as well as an additional evaluation of synthetic harmonic tone complexes in white noise.

The dataset also contains pre-calculated performance measures, both novel and traditional, in reference to each speech corpus' ground truth, to the algorithms' own clean-speech estimates, and to our own consensus truth. It can thus serve as the basis for a new comparison study, to replicate existing studies on a larger dataset, or as a reference for developing new fundamental frequency estimation algorithms. All source code and data are available to download, and the dataset is entirely reproducible, albeit requiring about one year of processor time.
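
As an illustration of how the noisy conditions might be constructed, the following sketch mixes a speech recording with a noise signal at a prescribed SNR. It is a minimal sketch, assuming a power-based SNR definition; the function name and details are illustrative, not the dataset's exact code:

    import numpy as np

    def mix_at_snr(speech, noise, snr_db):
        """Mix noise into speech at a prescribed signal-to-noise ratio."""
        noise = noise[:len(speech)]  # truncate the noise to the speech length
        speech_power = np.mean(speech ** 2)
        noise_power = np.mean(noise ** 2)
        # gain that scales the noise to lie snr_db below the speech power:
        gain = np.sqrt(speech_power / (noise_power * 10 ** (snr_db / 10)))
        return speech + gain * noise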

The Replication Dataset and the scripts necessary for calculating it are available on Zenodo:

http://doi.org/10.5281/zenodo.3904389

The source code necessary for calculating the Replication Dataset is available on Github:

https://github.com/bastibe/Replication-Dataset-Scripts

The dataset contains fundamental frequency estimates for all speech recordings in the following corpora: CMU-ARCTIC [1], FDA [2], KEELE [3], MOCHA-TIMIT [4], PTDB-TUG [5], and TIMIT [6].

Additionally, it contains fundamental frequency estimates for a number of the above speech recordings mixed with noises from the NOISEX [7] and QUT-NOISE [8] corpora, as well as synthetic tone complexes in white noise.
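
The synthetic stimuli can be pictured as follows. This is a minimal sketch, assuming equal-amplitude harmonics and a generic sample rate; the actual stimuli may differ in harmonic weighting, duration, and level:

    import numpy as np

    def harmonic_complex(f0, duration, samplerate=48000, n_harmonics=10):
        """Synthesize an equal-amplitude harmonic tone complex with
        fundamental frequency f0 (in Hz) and the given duration (in s)."""
        t = np.arange(int(duration * samplerate)) / samplerate
        return sum(np.sin(2 * np.pi * k * f0 * t) for k in range(1, n_harmonics + 1))

    # such a complex could then be embedded in white noise with a mixing
    # function like the mix_at_snr sketch above:
    # noisy = mix_at_snr(harmonic_complex(100, 1), np.random.randn(48000), 0)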

Fundamental frequency estimates are calculated with the fundamental frequency estimation algorithms cited in references [9-30].

For all combinations of speech and noise, a set of performance measures is calculated, both traditional ones, such as the gross pitch error and voicing detection error rates discussed below, and the novel measures introduced in the dissertation.

Example Results

The following graphs show a few key metrics that can be extracted from the Replication Dataset. The graphs displayed here are by no means complete, but they should highlight the fidelity of the data available in the dataset. Please refer to the dissertation itself for a more in-depth evaluation of algorithms, databases, and evaluation methods.

Gross Pitch Errors and Octave Errors

The first graph gives an overview of the accuracy of fundamental frequency estimation algorithms in terms of gross pitch errors. The shaded areas are the mean gross pitch errors of each algorithm, across all databases, graphed against the signal-to-noise ratio (SNR). The upwards triangles show high octave errors, the downwards triangles show low octave errors.

The graph highlights how the accuracy of all algorithms deteriorates around 0 dB SNR. At positive SNRs, most algorithms can estimate the fundamental frequency of speech with few errors. At negative SNRs, errors rise very steeply. In general, error rates above 10-20% are considered too high to be useful.

Interestingly, it is the area around 0 dB SNR that shows the highest occurrence of octave errors. Octave errors are caused by the algorithm mistaking a higher or lower harmonic of the speech sound for the fundamental. Thus, they occur most frequently where the general harmonic pattern is still visible, but the details are partly obscured. This type of detailed analysis is only possible due to the high fidelity of the Replication Dataset.
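
In code, the gross pitch error and the two octave error rates could be computed along the following lines. This is a minimal sketch, assuming that unvoiced frames are marked with a fundamental frequency of zero and that both tracks share the same time grid; the 20% deviation threshold is the common convention, not necessarily the dissertation's exact definition:

    import numpy as np

    def gross_pitch_error(f0_true, f0_est, threshold=0.2):
        """Fraction of mutually voiced frames whose estimate deviates
        from the ground truth by more than the threshold (commonly 20%)."""
        voiced = (f0_true > 0) & (f0_est > 0)  # voiced in both tracks
        deviation = np.abs(f0_est[voiced] / f0_true[voiced] - 1)
        return np.mean(deviation > threshold)

    def octave_error_rates(f0_true, f0_est, tolerance=0.2):
        """Fractions of mutually voiced frames where the estimate lies
        near double (high) or near half (low) the true fundamental."""
        voiced = (f0_true > 0) & (f0_est > 0)
        ratio = f0_est[voiced] / f0_true[voiced]
        high = np.mean(np.abs(ratio / 2 - 1) < tolerance)  # one octave too high
        low = np.mean(np.abs(ratio * 2 - 1) < tolerance)   # one octave too low
        return high, low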

Voicing Detection Errors

The second graph shows voicing detection errors. The solid line is the same gross pitch error against SNR as before. This time, pluses show voicing false positives, the percentage of frames that the algorithm estimated as voiced, but that were unvoiced in reality. Crosses show false negatives, frames that were estimated as unvoiced, but were voiced in reality. Finally, circles show the percentage of signals without any estimates. If an algorithm does not include a voicing detection, no circles, pluses, or crosses are shown.

These evaluations are important, as such errors are not captured by the ubiquitous gross pitch error metric. The gross pitch error only evaluates true positives, frames where both the algorithm and the ground truth agree that speech is voiced. In reality, however, voicing detection errors are incorrect pitch estimates, just like gross pitch errors. This highlights how the gross pitch error measure alone can mislead.

Furthermore, there exists a tradeoff between voicing detection errors and gross pitch errors: the more frames an algorithm considers voiced, the more opportunities for gross pitch errors. It is thus very tempting to simply mask uncertain estimates as unvoiced. As the graph shows, however, these voicing detection errors are highly problematic for many algorithms, yet are often not reported in algorithm evaluations. The Replication Dataset contains a number of new performance measures to investigate these phenomena in unprecedented detail.
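
Expressed as code, these voicing detection measures could be calculated as follows; again a minimal sketch under the same assumption that unvoiced frames are marked as zero:

    import numpy as np

    def voicing_errors(f0_true, f0_est):
        """Voicing false positives and false negatives, as fractions of
        all frames in the utterance."""
        true_voiced = f0_true > 0
        est_voiced = f0_est > 0
        false_positives = np.mean(est_voiced & ~true_voiced)  # estimated voiced, truly unvoiced
        false_negatives = np.mean(~est_voiced & true_voiced)  # estimated unvoiced, truly voiced
        return false_positives, false_negatives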

Differences Between Speech Corpora

The third example graph shows the differences between various speech corpora commonly used for evaluating fundamental frequency estimation algorithms. Each line is the difference between one corpus' accuracy and the mean accuracy across corpora. If a line is positive, errors for this corpus are greater than the average; if it is negative, errors are lower. CMU-ARCTIC is marked with upwards triangles, FDA with downwards triangles, KEELE with leftwards triangles, MOCHA-TIMIT with rightwards triangles, PTDB-TUG with stars, and TIMIT with diamonds.

For some algorithms, the lines gather around the zero line, which means the algorithm's accuracy is the same in all corpora. For others, however, single corpora fare much better or worse than others. If a single corpus has much lower error rates than all other corpora, it is likely that the algorithm was trained on that particular corpus. If one corpus shows much higher error rates, it might contain a few particular voices that an algorithm was not optimized for.

Strikingly, the differences shown in this graph are very large, often exceeding ten or twenty percent gross pitch error, which is far beyond the margin of improvement usually claimed for new algorithms. This type of comparison is only possible with a dataset as large as the Replication Dataset, and it reveals highly significant differences that a smaller dataset could not resolve.
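
A comparison of this kind amounts to subtracting, for each algorithm and SNR, the mean error across corpora from each corpus' error. The following pandas sketch illustrates the idea; the file name and column names are hypothetical assumptions about how the pre-calculated measures might be organized, not the dataset's actual layout:

    import pandas as pd

    # hypothetical table with one gross pitch error per
    # (algorithm, corpus, snr) combination:
    errors = pd.read_csv('gross_pitch_errors.csv')

    # mean accuracy across corpora for each algorithm and SNR:
    mean_gpe = errors.groupby(['algorithm', 'snr'])['gpe'].transform('mean')

    # positive deviations mean above-average errors for that corpus:
    errors['deviation'] = errors['gpe'] - mean_gpe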

Summary

The full dissertation contains even more detailed analyses of these algorithms, corpora, and evaluation measures, including theoretical accuracy limits, fine pitch errors, estimation biases, voicing detection tradeoffs, ground truth comparisons, noise and noise corpus comparisons, significance tests of differences between algorithms, and fundamental frequency biases and dependencies.

It shows a number of differences in unprecedented detail, some of them for the very first time, which is only possible due to the vast amount of data available in the Replication Dataset.

References

  1. John Kominek and Alan W Black. CMU ARCTIC database for speech synthesis, 2003.
  2. Paul C Bagshaw, Steven Hiller, and Mervyn A Jack. Enhanced Pitch Tracking and the Processing of F0 Contours for Computer Aided Intonation Teaching. In EUROSPEECH, 1993.
  3. F Plante, Georg F Meyer, and William A Ainsworth. A Pitch Extraction Reference Database. In Fourth European Conference on Speech Communication and Technology, pages 837–840, Madrid, Spain, 1995.
  4. Alan Wrench. MOCHA MultiCHannel Articulatory database: English, November 1999.
  5. Gregor Pirker, Michael Wohlmayr, Stefan Petrik, and Franz Pernkopf. A Pitch Tracking Corpus with Evaluation on Multipitch Tracking Scenario. In Interspeech, 2011.
  6. John S. Garofolo, Lori F. Lamel, William M. Fisher, Jonathan G. Fiscus, David S. Pallett, Nancy L. Dahlgren, and Victor Zue. TIMIT Acoustic-Phonetic Continuous Speech Corpus, 1993.
  7. Andrew Varga and Herman J.M. Steeneken. Assessment for automatic speech recognition: II. NOISEX-92: A database and an experiment to study the effect of additive noise on speech recognition systems. Speech Communication, 12(3):247–251, July 1993.
  8. David B. Dean, Sridha Sridharan, Robert J. Vogt, and Michael W. Mason. The QUT-NOISE-TIMIT corpus for the evaluation of voice activity detection algorithms. Proceedings of Interspeech 2010, 2010.
  9. Man Mohan Sondhi. New methods of pitch extraction. Audio and Electroacoustics, IEEE Transactions on, 16(2):262–266, 1968.
  10. Myron J. Ross, Harry L. Shaffer, Asaf Cohen, Richard Freudberg, and Harold J. Manley. Average magnitude difference function pitch extractor. Acoustics, Speech and Signal Processing, IEEE Transactions on, 22(5):353–362, 1974.
  11. Na Yang, He Ba, Weiyang Cai, Ilker Demirkol, and Wendi Heinzelman. BaNa: A Noise Resilient Fundamental Frequency Detection Algorithm for Speech and Music. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(12):1833–1848, December 2014.
  12. A. Michael Noll. Cepstrum Pitch Determination. The Journal of the Acoustical Society of America, 41(2):293–309, 1967.
  13. Jong Wook Kim, Justin Salamon, Peter Li, and Juan Pablo Bello. CREPE: A Convolutional Representation for Pitch Estimation. arXiv:1802.06182 [cs, eess, stat], February 2018.
  14. Masanori Morise, Fumiya Yokomori, and Kenji Ozawa. WORLD: A Vocoder-Based High-Quality Speech Synthesis System for Real-Time Applications. IEICE Transactions on Information and Systems, E99.D(7):1877–1884, 2016.
  15. Kun Han and DeLiang Wang. Neural Network Based Pitch Tracking in Very Noisy Speech. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(12):2158–2168, December 2014.
  16. Pegah Ghahremani, Bagher BabaAli, Daniel Povey, Korbinian Riedhammer, Jan Trmal, and Sanjeev Khudanpur. A pitch extraction algorithm tuned for automatic speech recognition. In Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pages 2494–2498. IEEE, 2014.
  17. Lee Ngee Tan and Abeer Alwan. Multi-band summary correlogram-based pitch detection for noisy speech. Speech Communication, 55(7-8):841–856, September 2013.
  18. Jesper Kjær Nielsen, Tobias Lindstrøm Jensen, Jesper Rindom Jensen, Mads Græsbøll Christensen, and Søren Holdt Jensen. Fast fundamental frequency estimation: Making a statistically efficient estimator computationally efficient. Signal Processing, 135:188–197, June 2017.
  19. Sira Gonzalez and Mike Brookes. PEFAC - A Pitch Estimation Algorithm Robust to High Levels of Noise. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 22(2):518–530, February 2014.
  20. Paul Boersma. Accurate short-term analysis of the fundamental frequency and the harmonics-to-noise ratio of a sampled sound. In Proceedings of the institute of phonetic sciences, volume 17, pages 97–110, Amsterdam, 1993.
  21. David Talkin. A robust algorithm for pitch tracking (RAPT). Speech coding and synthesis, 495:518, 1995.
  22. Byung Suk Lee and Daniel PW Ellis. Noise robust pitch tracking by subband autocorrelation classification. In Interspeech, pages 707–710, 2012.
  23. Wei Chu and Abeer Alwan. SAFE: a statistical algorithm for F0 estimation for both clean and noisy speech. In INTERSPEECH, pages 2590–2593, 2010.
  24. Xuejing Sun. Pitch determination and voice quality analysis using subharmonic-to-harmonic ratio. In Acoustics, Speech, and Signal Processing (ICASSP), 2002 IEEE International Conference on, volume 1, page I-333. IEEE, 2002.
  25. John Markel. The SIFT algorithm for fundamental frequency estimation. IEEE Transactions on Audio and Electroacoustics, 20(5):367–377, December 1972.
  26. Thomas Drugman and Abeer Alwan. Joint Robust Voicing Detection and Pitch Estimation Based on Residual Harmonics. In Interspeech, pages 1973–1976, 2011.
  27. Hideki Kawahara, Masanori Morise, Toru Takahashi, Ryuichi Nisimura, Toshio Irino, and Hideki Banno. TANDEM-STRAIGHT: A temporally stable power spectral representation for periodic signals and applications to interference-free spectrum, F0, and aperiodicity estimation. In Acoustics, Speech and Signal Processing, 2008. ICASSP 2008. IEEE International Conference on, pages 3933–3936. IEEE, 2008.
  28. Arturo Camacho. SWIPE: A sawtooth waveform inspired pitch estimator for speech and music. PhD thesis, University of Florida, 2007.
  29. Kavita Kasi and Stephen A. Zahorian. Yet Another Algorithm for Pitch Tracking. In IEEE International Conference on Acoustics Speech and Signal Processing, pages I–361–I–364, Orlando, FL, USA, May 2002. IEEE.
  30. Alain de Cheveigné and Hideki Kawahara. YIN, a fundamental frequency estimator for speech and music. The Journal of the Acoustical Society of America, 111(4):1917, 2002.