Theses, Chapters

[1] Michael Mandel, Justin Salamon, and Daniel P.W. Ellis, editors. Proceedings of the Detection and Classification of Acoustic Scenes and Events 2019 Workshop (DCASE2019). New York University, NY, USA, October 2019. [ bib | DOI ]
[2] Michael I Mandel, Shoko Araki, and Tomohiro Nakatani. Multichannel clustering and classification approaches. In Emmanuel Vincent, Tuomas Virtanen, and Sharon Gannot, editors, Audio Source Separation and Speech Enhancement, chapter 12. Wiley, 2018. [ bib ]
[3] Michael I Mandel and Jon P Barker. Multichannel spatial clustering using model-based source separation. In Shinji Watanabe, Marc Delcroix, Florian Metze, and John R. Hershey, editors, New Era for Robust Speech Recognition: Exploiting Deep Learning, chapter 3. Springer, 2017. [ bib | DOI ]
[4] Xiong Xiao, Shinji Watanabe, Hakan Erdogan, Michael Mandel, Liang Lu, John R. Hershey, Michael L. Seltzer, Guoguo Chen, Yu Zhang, and Dong Yu. Discriminative beamforming with phase-aware neural networks for speech enhancement and recognition. In Shinji Watanabe, Marc Delcroix, Florian Metze, and John R. Hershey, editors, New Era for Robust Speech Recognition: Exploiting Deep Learning, chapter 4. Springer, 2017. [ bib | DOI ]
[5] Johanna Devaney, Michael I Mandel, Douglas Turnbull, and George Tzanetakis, editors. Proceedings of the 17th International Society for Music Information Retrieval Conference (ISMIR). New York, 2016. [ bib | http ]
[6] Thierry Bertin-Mahieux, Douglas Eck, and Michael I. Mandel. Automatic tagging of audio: The state-of-the-art. In Wenwu Wang, editor, Machine Audition: Principles, Algorithms and Systems, chapter 14, pages 334--352. IGI Publishing, 2010. [ bib ]
[7] Michael I. Mandel. Binaural Model-Based Source Separation and Localization. PhD thesis, Columbia University, February 2010. [ bib | .pdf ]
When listening in noisy and reverberant environments, human listeners are able to focus on a particular sound of interest while ignoring interfering sounds. Computer listeners, however, can only perform highly constrained versions of this task. While automatic speech recognition systems and hearing aids work well in quiet conditions, source separation is necessary for them to be able to function in these challenging situations.

This dissertation introduces a system that separates more than two sound sources from reverberant, binaural mixtures based on the sources' locations. Each source is modelled probabilistically using information about its interaural time and level differences at every frequency, with parameters learned using an expectation maximization (EM) algorithm. The system is therefore called Model-based EM Source Separation and Localization (MESSL). This EM algorithm alternates between refining its estimates of the model parameters (location) for each source and refining its estimates of the regions of the spectrogram dominated by each source. In addition to successfully separating sources, the algorithm estimates model parameters from a mixture that have direct psychoacoustic relevance and can usually only be measured for isolated sources. One of the key features enabling this separation is a novel probabilistic localization model that can be evaluated at individual time-frequency points and over arbitrarily-shaped regions of the spectrogram.

The localization performance of the systems introduced here is comparable to that of humans in both anechoic and reverberant conditions, with a 40% lower mean absolute error than four comparable algorithms. When target and masker sources are mixed at similar levels, MESSL's separations have signal-to-distortion ratios 2.0 dB higher than four comparable separation algorithms and estimated speech quality 0.19 mean opinion score units higher. When target and masker sources are mixed anechoically at very different levels, MESSL's performance is comparable to humans', but in similar reverberant mixtures it only achieves 20--25% of human performance. While MESSL successfully rejects enough of the direct-path portion of the masking source in reverberant mixtures to improve energy-based signal-to-noise ratio results, it has difficulty rejecting enough reverberation to improve automatic speech recognition results significantly. This problem is shared by other comparable separation systems.

Journal

[1] Viet Anh Trinh and Michael I Mandel. Directly comparing the listening strategies of humans and machines. IEEE Transactions on Audio, Speech, and Language Processing, 29:312--323, 2021. [ bib | DOI ]
Automatic speech recognition (ASR) has reached human performance on many clean speech corpora, but it remains worse than human listeners in noisy environments. This paper investigates whether this difference in performance might be due to a difference in the time-frequency regions that each listener utilizes in making their decisions and how these "important" regions change for ASRs using different acoustic models (AMs) and language models (LMs). We define important regions as time-frequency points in a spectrogram that tend to be audible when the listener correctly recognizes that utterance in noise. The evidence from this study indicates that a neural network AM attends to regions that are more similar to those of humans (capturing certain high-energy regions) than those of a traditional Gaussian mixture model (GMM) AM. Our analysis also shows that the neural network AM has not yet captured all the cues that human listeners utilize, such as certain transitions between silence and high speech energy. We also find that differences in important time-frequency regions tend to track differences in accuracy on specific words in a test sentence, suggesting a connection. Because of this connection, adapting an ASR to attend to the same regions humans use might improve its generalization in noise.
[2] Michael I Mandel, Vikas Grover, Mengxuan Zhao, Jiyoung Choi, and Valerie Shafer. The bubble-noise technique for speech perception research. Perspectives of the ASHA Special Interest Groups, 4(6):1653--1666, 2019. [ bib | DOI ]
Purpose: The “bubble noise” technique has recently been introduced as a method to identify the regions in time-frequency maps (that is, spectrograms) of speech that are especially important for listeners in speech recognition. This technique identifies regions of “importance” that are specific to the speech stimulus and the listener, thus permitting these regions to be compared across different listener groups. For example, in cross-linguistic and second language (L2) speech perception, this method identifies differences in regions of importance in accomplishing decisions of phoneme category membership. This paper describes the application of bubble noise to the study of language learning for three different language pairs: Hindi-English bilinguals' perception of the /v/-/w/ contrast in American English, native English speakers' perception of the tense/lax contrast for Korean fricatives and affricates, and native English speakers' perception of Mandarin lexical tone. Conclusion: We demonstrate that this technique provides insight on what information in the speech signal is important for native/first-language listeners compared to non-native/L2 listeners. Furthermore, the method can be used to examine whether L2 speech perception training is effective in bringing the listener's attention to the important cues.
[3] Michael I Mandel, Sarah E Yoho, and Eric W Healy. Measuring time-frequency importance functions of speech with bubble noise. Journal of the Acoustical Society of America, 140:2542--2553, 2016. [ bib | DOI | Code | .pdf ]
Listeners can reliably perceive speech in noisy conditions, but it is not well understood what specific features of speech they use to do this. This paper introduces a data-driven framework to identify the time-frequency locations of these features. Using the same speech utterance mixed with many different noise instances, the framework is able to compute the importance of each time-frequency point in the utterance to its intelligibility. The mixtures have approximately the same global signal-to-noise ratio at each frequency, but very different recognition rates. The difference between these intelligible vs unintelligible mixtures is the alignment between the speech and spectro-temporally modulated noise, providing different combinations of “glimpses” of speech in each mixture. The current results reveal the locations of these important noise-robust phonetic features in a restricted set of syllables. Classification models trained to predict whether individual mixtures are intelligible based on the location of these glimpses can generalize to new conditions, successfully predicting the intelligibility of novel mixtures. They are able to generalize to novel noise instances, novel productions of the same word by the same talker, novel utterances of the same word spoken by different talkers, and, to some extent, novel consonants.
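As a rough illustration of the correlational analysis described in this abstract, the sketch below fits a regularized logistic regression to bubble-noise trials, treating each trial's audibility mask as features and recognition correctness as the label; the learned weights then act as a per-utterance time-frequency importance map. It is a minimal stand-in for the paper's framework, and all names and shapes are illustrative assumptions.

```python
# Sketch of the per-point importance analysis described above (not the
# authors' code): each trial pairs an audibility (glimpse) mask for one
# utterance with whether the listener recognized it. A regularized
# logistic regression over the flattened masks yields a weight per
# time-frequency point, interpretable as an importance map.
import numpy as np
from sklearn.linear_model import LogisticRegression

def importance_map(audibility_masks, correct, C=0.1):
    """audibility_masks: (n_trials, n_freq, n_time) binary glimpse masks.
    correct: (n_trials,) 1 if the mixture was recognized correctly."""
    n_trials, n_freq, n_time = audibility_masks.shape
    X = audibility_masks.reshape(n_trials, -1)
    clf = LogisticRegression(C=C, max_iter=1000)
    clf.fit(X, correct)
    # Positive weights mark points whose audibility predicts correct
    # recognition, i.e., regions important for this utterance.
    return clf.coef_.reshape(n_freq, n_time), clf
```

Many mixtures of the same utterance with different noise instances are needed before such weights become stable enough to interpret.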
[4] Hugo Larochelle, Michael I Mandel, Razvan Pascanu, and Yoshua Bengio. Learning algorithms for the classification restricted boltzmann machine. Journal of Machine Learning Research, 13:643--669, March 2012. [ bib | .pdf ]
Recent developments have demonstrated the capacity of restricted Boltzmann machines (RBM) to be powerful generative models, able to extract useful features from input data or construct deep artificial neural networks. In such settings, the RBM only yields a preprocessing or an initialization for some other model, instead of acting as a complete supervised model in its own right. In this paper, we argue that RBMs can provide a self-contained framework for developing competitive classifiers. We study the Classification RBM (ClassRBM), a variant on the RBM adapted to the classification setting. We study different strategies for training the ClassRBM and show that competitive classification performances can be reached when appropriately combining discriminative and generative training objectives. Since training according to the generative objective requires the computation of a generally intractable gradient, we also compare different approaches to estimating this gradient and address the issue of obtaining such a gradient for problems with very high dimensional inputs. Finally, we describe how to adapt the ClassRBM to two special cases of classification problems, namely semi-supervised and multitask learning.
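Because the ClassRBM's class posterior is available in closed form from its free energy, it can act as a standalone classifier; the snippet below sketches that predictive distribution under the standard ClassRBM parameterization (weight and bias names are mine, not necessarily the paper's notation).

```python
# Sketch of the ClassRBM predictive distribution p(y|x), which can be
# computed exactly and makes the model a self-contained classifier
# (parameter names follow the usual ClassRBM formulation).
import numpy as np

def softplus(a):
    return np.logaddexp(0.0, a)

def classrbm_predict_proba(x, W, U, c, d):
    """x: (n_visible,) input; W: (n_hidden, n_visible) input weights;
    U: (n_hidden, n_classes) class weights; c: (n_hidden,) hidden biases;
    d: (n_classes,) class biases. Returns p(y|x) over classes."""
    # Negative free energy of each class configuration.
    scores = d + softplus(c[:, None] + U + (W @ x)[:, None]).sum(axis=0)
    scores -= scores.max()                     # numerical stability
    p = np.exp(scores)
    return p / p.sum()
```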
[5] Ron Weiss, Michael I. Mandel, and Daniel P. W. Ellis. Combining localization cues and source model constraints for binaural source separation. Speech Communication, 53(5):606--621, May 2011. [ bib | DOI | .pdf ]
We describe a system for separating multiple sources from a two-channel recording based on interaural cues and prior knowledge of the statistics of the underlying source signals. The proposed algorithm effectively combines information derived from low level perceptual cues, similar to those used by the human auditory system, with higher level information related to speaker identity. We combine a probabilistic model of the observed interaural level and phase differences with a prior model of the source statistics and derive an EM algorithm for finding the maximum likelihood parameters of the joint model. The system is able to separate more sound sources than there are observed channels in the presence of reverberation. In simulated mixtures of speech from two and three speakers the proposed algorithm gives a signal-to-noise ratio improvement of 1.7 dB over a baseline algorithm which uses only interaural cues. Further improvement is obtained by incorporating eigenvoice speaker adaptation to enable the source model to better match the sources present in the signal. This improves performance over the baseline by 2.7 dB when the speakers used for training and testing are matched. However, the improvement is minimal when the test data is very different from that used in training.
[6] Michael I. Mandel, Razvan Pascanu, Douglas Eck, Yoshua Bengio, Luca M. Aiello, Rossano Schifanella, and Filippo Menczer. Contextual tag inference. ACM Transactions on Multimedia Computing, Communications and Applications, 7S(1):32:1--32:18, October 2011. [ bib | DOI | .pdf ]
This paper examines the use of two kinds of context to improve the results of content-based music taggers: the relationships between tags and between the clips of songs that are tagged. We show that users agree more on tags applied to clips temporally "closer" to one another; that conditional restricted Boltzmann machine models of tags can more accurately predict related tags when they take context into account; and that when training data is "smoothed" using context, support vector machines can better rank these clips according to the original, unsmoothed tags and do this more accurately than three standard multi-label classifiers.
[7] Johanna Devaney, Michael I. Mandel, Daniel P. W. Ellis, and Ichiro Fujinaga. Automatically extracting performance data from recordings of trained singers. Psychomusicology: Music, Mind & Brain, 21(1--2):108--136, 2012. [ bib | .pdf ]
Recorded music offers a wealth of information for studying performance practice. This paper examines the challenges of automatically extracting performance information from audio recordings of the singing voice and discusses our technique for automatically extracting information such as note timings, intonation, vibrato rates, and dynamics. An experiment is also presented that focuses on the tuning of semitones in solo soprano performances of Schubert's “Ave Maria” by non-professional and professional singers. We found a small decrease in size of intervals with a leading tone function only in the non-professional group.
[8] Michael I. Mandel, Scott Bressler, Barbara Shinn-Cunningham, and Daniel P. W. Ellis. Evaluating source separation algorithms with reverberant speech. IEEE Transactions on Audio, Speech, and Language Processing, 18(7):1872--1883, 2010. [ bib | DOI | .pdf ]
This paper examines the performance of several source separation systems on a speech separation task for which human intelligibility has previously been measured. For anechoic mixtures, automatic speech recognition (ASR) performance on the separated signals is quite similar to human performance. In reverberation, however, while signal separation has some benefit for ASR, the results are still far below those of human listeners facing the same task. Performing this same experiment with a number of oracle masks created with a priori knowledge of the separated sources motivates a new objective measure of separation performance, the DERTM (Direct-path, Early echo, and Reverberation, of the Target and Masker), which is closely related to the ASR results. This measure indicates that while the non-oracle algorithms successfully reject the direct-path signal from the masking source, they reject less of its reverberation, explaining the disappointing ASR performance.
[9] Michael I. Mandel, Ron J. Weiss, and Daniel P. W. Ellis. Model-based expectation maximization source separation and localization. IEEE Transactions on Audio, Speech, and Language Processing, 18(2):382--394, February 2010. [ bib | DOI | .pdf ]
This paper describes a system, referred to as model-based expectation-maximization source separation and localization (MESSL), for separating and localizing multiple sound sources from an underdetermined reverberant two-channel recording. By clustering individual spectrogram points based on their interaural phase and level differences, MESSL generates masks that can be used to isolate individual sound sources. We first describe a probabilistic model of interaural parameters that can be evaluated at individual spectrogram points. By creating a mixture of these models over sources and delays, the multi-source localization problem is reduced to a collection of single source problems. We derive an expectation-maximization algorithm for computing the maximum-likelihood parameters of this mixture model, and show that these parameters correspond well with interaural parameters measured in isolation. As a byproduct of fitting this mixture model, the algorithm creates probabilistic spectrogram masks that can be used for source separation. In simulated anechoic and reverberant environments, separations using MESSL produced on average a signal-to-distortion ratio 1.6 dB greater and perceptual evaluation of speech quality (PESQ) results 0.27 mean opinion score units greater than four comparable algorithms.
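A heavily simplified sketch of the kind of EM clustering MESSL performs is given below: it models only interaural phase differences with one fixed candidate delay per source, whereas the actual system also models interaural level differences, searches over delays, and uses frequency-dependent parameters. It is meant only to make the E-step/M-step structure concrete.

```python
# Highly simplified sketch of MESSL-style EM clustering (phase only):
# the E-step scores how well the observed interaural phase difference at
# each spectrogram point matches each source's candidate delay, and the
# M-step re-estimates per-source variances and priors from the soft
# assignments. This is an illustration, not the published algorithm.
import numpy as np

def messl_like_em(ipd, freqs, delays, n_iter=20):
    """ipd: (F, T) interaural phase differences; freqs: (F,) in Hz;
    delays: (S,) one candidate delay per source, in seconds."""
    S = len(delays)
    var = np.ones(S)
    prior = np.full(S, 1.0 / S)
    for _ in range(n_iter):
        # E-step: phase residual under each source's delay, wrapped to
        # (-pi, pi], scored under a zero-mean Gaussian.
        resid = np.angle(np.exp(1j * (ipd[None] - 2 * np.pi *
                                      freqs[None, :, None] * delays[:, None, None])))
        logp = (np.log(prior)[:, None, None]
                - 0.5 * resid ** 2 / var[:, None, None]
                - 0.5 * np.log(2 * np.pi * var[:, None, None]))
        logp -= logp.max(axis=0, keepdims=True)
        post = np.exp(logp)
        post /= post.sum(axis=0, keepdims=True)      # soft masks, (S, F, T)
        # M-step: update variances and priors from the soft assignments.
        var = (post * resid ** 2).sum(axis=(1, 2)) / post.sum(axis=(1, 2))
        prior = post.mean(axis=(1, 2))
    return post
```

The per-point posteriors returned at the end are the kind of probabilistic spectrogram masks the abstract refers to.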
[10] Michael I. Mandel and Daniel P. W. Ellis. A web-based game for collecting music metadata. Journal of New Music Research, 37(2):151--165, 2008. [ bib | DOI | .pdf ]
We have designed a web-based game, MajorMiner, that makes collecting descriptions of musical excerpts fun, easy, useful, and objective. Participants describe 10 second clips of songs and score points when their descriptions match those of other participants. The rules were designed to encourage players to be thorough and the clip length was chosen to make judgments objective and specific. To analyse the data, we measured the degree to which binary classifiers could be trained to spot popular tags. We also compared the performance of clip classifiers trained with MajorMiner's tag data to those trained with social tag data from a popular website. On the top 25 tags from each source, MajorMiner's tags were classified correctly 67.2% of the time, while the social tags were classified correctly 62.6% of the time.
[11] Thomas S. Huang, Charlie K. Dagli, Shyamsundar Rajaram, Edward Y. Chang, Michael I. Mandel, Graham E. Poliner, and Daniel P. W. Ellis. Active learning for interactive multimedia retrieval. Proceedings of the IEEE, 96(4):648--667, 2008. [ bib | DOI ]
As the first decade of the 21st century comes to a close, growth in multimedia delivery infrastructure and public demand for applications built on this backbone are converging like never before. The push towards reaching truly interactive multimedia technologies becomes stronger as our media consumption paradigms continue to change. In this paper, we profile a technology leading the way in this revolution: active learning. Active learning is a strategy that helps alleviate challenges inherent in multimedia information retrieval through user interaction. We show how active learning is ideally suited for the multimedia information retrieval problem by giving an overview of the paradigm and component technologies used with special attention given to the application scenarios in which these technologies are useful. Finally, we give insight into the future of this growing field and how it fits into the larger context of multimedia information retrieval.
[12] Michael I. Mandel, Graham E. Poliner, and Daniel P. W. Ellis. Support vector machine active learning for music retrieval. Multimedia systems, 12(1):1--11, August 2006. [ bib | DOI | .pdf ]
Searching and organizing growing digital music collections requires a computational model of music similarity. This paper describes a system for performing flexible music similarity queries using SVM active learning. We evaluated the success of our system by classifying 1210 pop songs according to mood and style (from an online music guide) and by the performing artist. In comparing a number of representations for songs, we found the statistics of mel-frequency cepstral coefficients to perform best in precision-at-20 comparisons. We also show that by choosing training examples intelligently, active learning requires half as many labeled examples to achieve the same accuracy as a standard scheme.
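The active-learning loop the abstract refers to can be pictured as repeatedly querying labels for the unlabeled songs closest to the current SVM decision boundary; a minimal sketch for a single binary tag (e.g., one style label) follows, with song-level feature extraction assumed to have happened already.

```python
# Sketch of margin-based SVM active learning as described above: at each
# round, query labels for the unlabeled examples closest to the current
# decision boundary and retrain. Features (e.g., MFCC statistics per
# song) are assumed precomputed; this treats one tag as a binary task.
import numpy as np
from sklearn.svm import SVC

def active_learning_round(X_labeled, y_labeled, X_pool, n_query=5):
    clf = SVC(kernel="rbf", gamma="scale").fit(X_labeled, y_labeled)
    # Smallest |decision value| = least certain = most informative.
    margins = np.abs(clf.decision_function(X_pool))
    query_idx = np.argsort(margins)[:n_query]
    return clf, query_idx   # ask an annotator to label X_pool[query_idx]
```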

Conference

[1] Enis Berk Çoban, Megan Perra, and Michael I Mandel. Towards high resolution weather monitoring with sound data. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 2024. To appear. [ bib ]
[2] Ali Raza Syed and Michael I Mandel. Estimating shapley values of training utterances for automatic speech recognition models. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 2023. [ bib ]
[3] Viet Anh Trinh, Hassan Salami Kavaki, and Michael I Mandel. ImportantAug: a data augmentation agent for speech. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 2022. [ bib ]
[4] Enis Berk Çoban, Megan Perra, Dara Pir, and Michael I Mandel. EDANSA-2019: the ecoacoustic dataset from Arctic North Slope Alaska. In Workshop on the Detection and Classification of Acoustic Scenes and Events, 2022. [ bib ]
[5] Enis Berk Çoban, Ali R Syed, Dara Pir, and Michael I Mandel. Towards large scale ecoacoustic monitoring with small amounts of labeled data. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2021. [ bib ]
[6] Zhaoheng Ni, Yong Xu, Meng Yu, Bo Wu, Shixiong Zhang, Dong Yu, and Michael I Mandel. WPD++: an improved neural beamformer for simultaneous speech separation and dereverberation. In IEEE Workshop on Spoken Language Technologies, 2020. [ bib ]
[7] Hassan Salami Kavaki and Michael I Mandel. Identifying important time-frequency locations in continuous speech utterances. In Proceedings of Interspeech, pages 1639--1643, 2020. [ bib | DOI | .pdf ]
Human listeners use specific cues to recognize speech and recent experiments have shown that certain time-frequency regions of individual utterances are more important to their correct identification than others. A model that could identify such cues or regions from clean speech would facilitate speech recognition and speech enhancement by focusing on those important regions. Thus, in this paper we present a model that can predict the regions of individual utterances that are important to an automatic speech recognition (ASR) "listener" by learning to add as much noise as possible to these utterances while still permitting the ASR to correctly identify them. This work utilizes a continuous speech recognizer to recognize multi-word utterances and builds upon our previous work that performed the same process for an isolated word recognizer. Our experimental results indicate that our model can apply noise to obscure 90.5% of the spectrogram while leaving recognition performance nearly unchanged.
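Schematically, the model is trained to add as much noise as possible without hurting recognition, which can be written as a two-term loss; the sketch below is my own simplification with placeholder `mask_net` and `asr_loss` modules, not the paper's exact objective.

```python
# Schematic of the training objective described above (a simplification,
# not the paper's loss): a mask network decides how much noise can be
# added at each time-frequency point, and the loss rewards obscuring the
# spectrogram while penalizing any increase in the ASR loss on the
# noisified input. `mask_net` and `asr_loss` are stand-ins.
import torch

def importance_masking_loss(mask_net, asr_loss, spec, noise, targets, lam=1.0):
    """spec, noise: (batch, freq, time) magnitude spectrograms."""
    mask = torch.sigmoid(mask_net(spec))        # 1 = add noise here
    noisy = spec + mask * noise                 # obscure "unimportant" points
    recog = asr_loss(noisy, targets)            # e.g., CTC / cross-entropy
    coverage = mask.mean()                      # fraction of spectrogram obscured
    return recog - lam * coverage               # keep ASR right, add noise
```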
[8] Viet Anh Trinh and Michael I. Mandel. Large scale evaluation of importance maps in automatic speech recognition. In Proceedings of Interspeech, pages 1166--1170, 2020. [ bib | DOI | .pdf ]
This paper proposes a metric that we call the structured saliency benchmark (SSBM) to evaluate importance maps computed for automatic speech recognizers on individual utterances. These maps indicate time-frequency points of the utterance that are most important for correct recognition of a target word. Our evaluation technique is not only suitable for standard classification tasks, but is also appropriate for structured prediction tasks like sequence-to-sequence models. Additionally, we use this approach to perform a comparison of the importance maps created by our previously introduced technique using “bubble noise” to identify important points through correlation with a baseline approach based on smoothed speech energy and forced alignment. Our results show that the bubble analysis approach is better at identifying important speech regions than this baseline on 100 sentences from the AMI corpus.
[9] Hussein Ghaly and Michael I Mandel. Using prosody to improve dependency parsing. In Speech Prosody, 2020. [ bib ]
[10] Enis Berk Çoban, Dara Pir, Richard So, and Michael I Mandel. Transfer learning from YouTube soundtracks to tag Arctic ecoacoustic recordings. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 726--730, 2020. [ bib | DOI | .pdf ]
Sound provides a valuable tool for long-term monitoring of sensitive animal habitats at a spatial scale larger than camera traps or field observations, while also providing more details than satellite imagery. Currently, the ability to collect such recordings outstrips the ability to analyze them manually, necessitating the development of automatic analysis methods. While several datasets and models of large corpora of video soundtracks have recently been released, it is not clear to what extent these models will generalize to environmental recordings and the scientific questions of interest in analyzing them. This paper investigates this generalization in several ways and finds that models themselves display limited performance, however, their intermediate representations can be used to train successful models on small sets of labeled data.
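The transfer-learning recipe suggested by this finding is the usual one: freeze the pretrained audio network, use one of its intermediate layers as a clip embedding, and fit a light classifier on the small labeled set. A minimal sketch follows, with `embed` standing in for whatever pretrained model supplies the representation.

```python
# Sketch of the transfer-learning recipe described above: use a
# pretrained audio model only as a feature extractor and train a small
# classifier on the limited labeled ecoacoustic data. `embed` is a
# placeholder that returns a (frames, dims) intermediate representation.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_on_embeddings(embed, clips, labels):
    X = np.stack([embed(c).mean(axis=0) for c in clips])  # clip-level features
    clf = LogisticRegression(max_iter=1000).fit(X, labels)
    return clf
```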
[11] Soumi Maiti and Michael I Mandel. Speaker independence of neural vocoders and their effect on parametric resynthesis speech enhancement. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 206--210, 2020. [ bib | DOI | arXiv | Demo | Slides | .pdf ]
Traditional speech enhancement systems produce speech with compromised quality. Here we propose to use the high quality speech generation capability of neural vocoders for better quality speech enhancement. We term this parametric resynthesis (PR). In previous work, we showed that PR systems generate high quality speech for a single speaker using two neural vocoders, WaveNet and WaveGlow. Both these vocoders are traditionally speaker dependent. Here we first show that when trained on data from enough speakers, these vocoders can generate speech from unseen speakers, both male and female, with similar quality as seen speakers in training. Next using these two vocoders and a new vocoder LPCNet, we evaluate the noise reduction quality of PR on unseen speakers and show that objective signal and overall quality is higher than the state-of-the-art speech enhancement systems Wave-U-Net, Wavenet-denoise, and SEGAN. Moreover, in subjective quality, multiple-speaker PR out-performs the oracle Wiener mask.
[12] Zhaoheng Ni and Michael I Mandel. Mask-dependent phase estimation for monaural speaker separation. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 2020. [ bib | arXiv | .pdf ]
Speaker separation refers to isolating speech of interest in a multi-talker environment. Most methods apply real-valued Time-Frequency (T-F) masks to the mixture Short-Time Fourier Transform (STFT) to reconstruct the clean speech. Hence there is an unavoidable mismatch between the phase of the reconstruction and the original phase of the clean speech. In this paper, we propose a simple yet effective phase estimation network that predicts the phase of the clean speech based on a T-F mask predicted by a chimera++ network. To overcome the label-permutation problem for both the T-F mask and the phase, we propose a mask-dependent permutation invariant training (PIT) criterion to select the phase signal based on the loss from the T-F mask prediction. We also propose an Inverse Mask Weighted Loss Function for phase prediction to focus the model on the T-F regions in which the phase is more difficult to predict. Results on the WSJ0-2mix dataset show that the phase estimation network achieves comparable performance to models that use iterative phase reconstruction or end-to-end time-domain loss functions, but in a more straightforward manner.
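The mask-dependent PIT criterion can be sketched for two speakers as follows: the speaker permutation is chosen from the mask loss alone, and the phase loss is then computed under that same permutation with an inverse-mask weighting. This is a schematic simplification, not the paper's exact loss.

```python
# Schematic of mask-dependent permutation invariant training for two
# speakers (simplified): the permutation minimizing the T-F mask loss is
# selected, and the phase loss is computed under that permutation,
# weighted more heavily where the mask is small.
import torch

def mask_dependent_pit_loss(mask_pred, mask_true, phase_pred, phase_true, eps=1e-3):
    """All tensors: (batch, 2, freq, time); speaker axis is dim 1."""
    perms = [(0, 1), (1, 0)]
    mask_losses = [sum(torch.mean(torch.abs(mask_pred[:, i] - mask_true[:, p[i]]))
                       for i in range(2)) for p in perms]
    best = int(torch.argmin(torch.stack(mask_losses)))
    p = perms[best]
    phase_loss = 0.0
    for i in range(2):
        w = 1.0 / (mask_true[:, p[i]] + eps)          # inverse-mask weighting
        diff = 1.0 - torch.cos(phase_pred[:, i] - phase_true[:, p[i]])
        phase_loss = phase_loss + torch.mean(w * diff)
    return mask_losses[best] + phase_loss, p
```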
[13] Soumi Maiti and Michael I Mandel. Parametric resynthesis with neural vocoders. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, pages 303--307, 2019. [ bib | DOI | arXiv | Demo | .pdf ]
[14] Soumi Maiti and Michael I Mandel. Speech denoising by parametric resynthesis. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 6995--6999, 2019. [ bib | DOI | Demo | Poster | .pdf ]
This work proposes the use of clean speech vocoder parameters as the target for a neural network performing speech enhancement. These parameters have been designed for text-to-speech synthesis so that they both produce high-quality resyntheses and also are straightforward to model with neural networks, but have not been utilized in speech enhancement until now. In comparison to a matched text-to-speech system that is given the ground truth transcripts of the noisy speech, our model is able to produce more natural speech because it has access to the true prosody in the noisy speech. In comparison to two denoising systems, the oracle Wiener mask and a DNN-based mask predictor, our model equals the oracle Wiener mask in subjective quality and intelligibility and surpasses the realistic system. A vocoder-based upper bound shows that there is still room for improvement with this approach beyond the oracle Wiener mask. We test speaker-dependence with two speakers and show that a single model can be used for multiple speakers.
[15] Viet Anh Trinh, Brian McFee, and Michael I Mandel. Bubble cooperative networks for identifying important speech cues. In Proceedings of Interspeech, pages 1616--1620, 2018. [ bib | DOI | Poster | .pdf ]
Predicting the intelligibility of noisy recordings is difficult and most current algorithms treat all speech energy as equally important to intelligibility. Our previous work on human perception used a listening test paradigm and correlational analysis to show that some energy is more important to intelligibility than other energy. In this paper, we propose a system called the Bubble Cooperative Network (BCN), which aims to predict important areas of individual utterances directly from clean speech. Given such a prediction, noise is added to the utterance in unimportant regions and then presented to a recognizer. The BCN is trained with a loss that encourages it to add as much noise as possible while preserving recognition performance, encouraging it to identify important regions precisely and place the noise everywhere else. Empirical evaluation shows that the BCN can obscure 97.7% of the spectrogram with noise while maintaining recognition accuracy for a simple speech recognizer that compares a noisy test utterance with a clean reference utterance. The masks predicted by a single BCN on several utterances show patterns that are similar to analyses derived from human listening tests that analyze each utterance separately, while exhibiting better generalization and less context-dependence than previous approaches.
[16] Ali Raza Syed, Viet Anh Trinh, and Michael I. Mandel. Concatenative resynthesis with improved training signals for speech enhancement. In Proceedings of Interspeech, pages 1195--1199, 2018. [ bib | DOI | Poster | .pdf ]
Noise reduction in speech signals remains an important area of research with potential for high impact in speech processing domains such as voice communication and hearing prostheses. We extend and demonstrate significant improvements to our previous work in synthesis-based speech enhancement, which performs concatenative resynthesis of speech signals for the production of noiseless, high quality speech. Concatenative resynthesis methods perform unit selection through learned non-linear similarity functions between short chunks of clean and noisy signals. These mappings are learned using deep neural networks (DNN) trained to predict high similarity for the exact chunk of speech that is contained within a chunk of noisy speech, and low similarity for all other pairings. We find here that more robust mappings can be learned with a more efficient use of the available data by selecting pairings that are not exact matches, but contain similar clean speech that matches the original in terms of acoustic, phonetic, and prosodic content. The resulting output is evaluated on the small vocabulary CHiME2-GRID corpus and outperforms our original baseline system in terms of intelligibility by combining phonetic similarity with similarity of acoustic intensity, fundamental frequency, and periodicity.
[17] Soumi Maiti, Joey Ching, and Michael I. Mandel. Large vocabulary concatenative resynthesis. In Proceedings of Interspeech, pages 1190--1194, 2018. [ bib | DOI | Poster | .pdf ]
Traditional speech enhancement systems reduce noise by modifying the noisy signal, an approach that suffers from two problems: under-suppression of noise and over-suppression of speech. As an alternative, in this paper, we use the recently introduced concatenative resynthesis approach, in which we replace the noisy speech with its clean resynthesis. The output of such a system can be both noise-free and high quality. This paper generalizes our previous small-vocabulary system to large vocabulary. To do so, we employ efficient decoding techniques using fast approximate nearest neighbor (ANN) algorithms. First, we apply ANN techniques to the original small vocabulary task and obtain a 5× speedup. We then apply the techniques to the construction of a large vocabulary concatenative resynthesis system and scale the system up to a 12× larger dictionary. We perform listening tests with five participants to measure the subjective quality and intelligibility of the output speech.
[18] Soumi Maiti and Michael I Mandel. Concatenative resynthesis using twin networks. In Proceedings of Interspeech, pages 3647--3651, 2017. [ bib | DOI | .pdf ]
Traditional noise reduction systems modify a noisy signal to make it more like the original clean signal. For speech, these methods suffer from two main problems: under-suppression of noise and over-suppression of target speech. Instead, synthesizing clean speech based on the noisy signal could produce outputs that are both noise-free and high quality. Our previous work introduced such a system using concatenative synthesis, but it required processing the clean speech at run time, which was slow and not scalable. In order to make such a system scalable, we propose here learning a similarity metric using two separate networks, one network processing the clean segments offline and another processing the noisy segments at run time. This system incorporates a ranking loss to optimize for the retrieval of appropriate clean speech segments. This model is compared against our original on the CHiME2-GRID corpus, measuring ranking performance and subjective listening tests of resyntheses.
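The twin-network idea can be sketched as two encoders trained with a margin ranking loss so that the true clean/noisy chunk pair scores above mismatched pairs; the encoder modules below are placeholders. At run time only the noisy encoder needs to be evaluated, since the clean dictionary embeddings can be precomputed offline.

```python
# Schematic of the twin-network ranking setup described above: one
# encoder embeds clean dictionary chunks offline, another embeds noisy
# chunks at run time, and training pushes the matching clean/noisy pair
# to score higher than a mismatched pair by a margin.
import torch
import torch.nn.functional as F

def ranking_loss(clean_enc, noisy_enc, clean, noisy, clean_negative, margin=1.0):
    """clean/noisy: a matching chunk pair; clean_negative: a non-matching clean chunk."""
    q = noisy_enc(noisy)                       # computed at run time
    pos = (q * clean_enc(clean)).sum(dim=-1)   # similarity to the true match
    neg = (q * clean_enc(clean_negative)).sum(dim=-1)
    return F.relu(margin - pos + neg).mean()
```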
[19] Ali Syed, Andrew Rosenberg, and Michael I Mandel. Active learning for low-resource speech recognition: Impact of selection size and language modeling data. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 2017. [ bib | .pdf ]
Active learning aims to reduce the time and cost of developing speech recognition systems by selecting for transcription highly informative subsets from large pools of audio data. Previous evaluations at OpenKWS and IARPA BABEL have investigated data selection for low-resource languages in very constrained scenarios with 2-hour data selections given a 1-hour seed set. We expand on this to investigate what happens with larger selections and fewer constraints on language modeling data. Our results, on four languages from the final BABEL OP3 period, show that active learning is helpful at larger selections with consistent gains up to 14 hours. We also find that the impact of additional language model data is orthogonal to the impact of the active learning selection criteria.
[20] Johanna Devaney and Michael I Mandel. An evaluation of score-informed methods for estimating fundamental frequency and power from polyphonic audio. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 2017. [ bib | .pdf ]
Robust extraction of performance data from polyphonic musical performances requires precise frame-level estimation of fundamental frequency (f0) and power. This paper evaluates a new score-guided approach to f0 and power estimation in polyphonic audio and compares the use of four different input features: the central bin frequencies of the spectrogram, the instantaneous frequency, and two variants of a high resolution spectral analysis. These four features were evaluated on four-part multi-track ensemble recordings, consisting of either four vocalists or bassoon, clarinet, saxophone, and violin (the Bach10 data set), created from polyphonic mixes of the monophonic tracks both with and without artificial reverberation. Score information was used to identify time-frequency regions of interest in the polyphonic mixes for each note in a corresponding aligned score, from which f0 and power estimates were made. The approach was able to recover ground truth f0 within 20 cents on average in reverberation, and power within 5 dB for anechoic mixtures but only within 10 dB for reverberant mixtures.
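A rough sketch of the score-guided estimation for a single note is given below: the aligned score supplies the note's time span and nominal pitch, which restrict the spectrogram region from which f0 and power are estimated. It uses only spectrogram bin frequencies (the simplest of the four input features compared in the paper), and the bandwidth is an illustrative assumption.

```python
# Rough sketch of score-guided f0/power estimation for one note: the
# aligned score gives the note's time span and nominal pitch, which
# restrict the spectrogram region used for estimation. This simplifies
# the approach described above to bin-frequency features only.
import numpy as np

def note_f0_power(spec, freqs, times, onset, offset, nominal_hz, semitones=1.0):
    """spec: (F, T) magnitude spectrogram; freqs: (F,) Hz; times: (T,) s."""
    t_sel = (times >= onset) & (times < offset)
    lo, hi = nominal_hz * 2 ** (-semitones / 12), nominal_hz * 2 ** (semitones / 12)
    f_sel = (freqs >= lo) & (freqs <= hi)
    energy = spec[np.ix_(f_sel, t_sel)] ** 2
    # Frame-wise energy-weighted mean frequency as a crude f0 estimate.
    f0 = (freqs[f_sel][:, None] * energy).sum(axis=0) / np.maximum(energy.sum(axis=0), 1e-12)
    power_db = 10 * np.log10(np.maximum(energy.sum(axis=0), 1e-12))
    return f0, power_db
```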
[21] Michael I Mandel and Jon P Barker. Multichannel spatial clustering for robust far-field automatic speech recognition in mismatched conditions. In Proceedings of Interspeech, pages 1991--1995, 2016. [ bib | DOI | Slides | .pdf ]
Recent automatic speech recognition (ASR) results are quite good when the training data is matched to the test data, but much worse when they differ in some important regard, like the number and arrangement of microphones or differences in reverberation and noise conditions. This paper proposes an unsupervised spatial clustering approach to microphone array processing that can overcome such train-test mismatches. This approach, known as Model-based EM Source Separation and Localization (MESSL), clusters spectrogram points based on the relative differences in phase and level between pairs of microphones. Here it is used for the first time to drive minimum variance distortionless response (MVDR) beamforming in several ways. We compare it to a standard delay-and-sum beamformer on the CHiME-3 noisy test set (real recordings), using each system as a pre-processor for the same recognizer trained on the AMI meeting corpus. We find that the spatial clustering front end reduces word error rates by between 9.9 and 17.1% relative to the baseline.
[22] Michael I Mandel. Directly comparing the listening strategies of humans and machines. In Proceedings of Interspeech, pages 660--664, 2016. [ bib | DOI | Poster | .pdf ]
In a given noisy environment, human listeners can more accurately identify spoken words than automatic speech recognizers. It is not clear, however, what information the humans are able to utilize in doing so that the machines are not. This paper uses a recently introduced technique to directly characterize the information used by humans and machines on the same task. The task was a forced choice between eight sentences spoken by a single talker from the small-vocabulary GRID corpus that were selected to be maximally confusable with one another. These sentences were mixed with “bubble” noise, which is designed to reveal randomly selected time-frequency glimpses of the sentence. Responses to these noisy mixtures allowed the identification of time-frequency regions that were important for each listener to recognize each sentence, i.e., regions that were frequently audible when a sentence was correctly identified and inaudible when it was not. In comparing these regions across human and machine listeners, we found that dips in noise allowed the humans to recognize words based on informative speech cues. In contrast, the baseline CHiME-2-GRID recognizer correctly identified sentences only when the time-frequency profile of the noisy mixture matched that of the underlying speech.
[23] Hakan Erdogan, John Hershey, Shinji Watanabe, Michael I Mandel, and Jonathan Le Roux. Improved MVDR beamforming using single-channel mask prediction networks. In Proceedings of Interspeech, pages 1981--1985, 2016. [ bib | DOI | .PDF ]
Recent studies on multi-microphone speech databases indicate that it is beneficial to perform beamforming to improve speech recognition accuracies, especially when there is a high level of background noise. Minimum variance distortionless response (MVDR) beamforming is an important beamforming method that performs quite well for speech recognition purposes especially if the steering vector is known. However, steering the beamformer to focus on speech in unknown acoustic conditions remains a challenging problem. In this study, we use single-channel speech enhancement deep networks to form masks that can be used for noise spatial covariance estimation, which steers the MVDR beamforming toward the speech. We analyze how mask prediction affects performance and also discuss various ways to use masks to obtain the speech and noise spatial covariance estimates in a reliable way. We show that using a single mask across microphones for covariance prediction with minima-limited post-masking yields the best result in terms of signal-level quality measures and speech recognition word error rates in a mismatched training condition.
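The mask-to-beamformer step described here amounts to accumulating mask-weighted spatial covariance matrices and plugging them into the standard MVDR solution; the sketch below uses the principal eigenvector of the speech covariance as the steering vector, which is one common choice rather than necessarily the paper's exact recipe, and omits the minima-limited post-masking.

```python
# Sketch of mask-driven MVDR beamforming as described above: a
# single-channel mask weights the spatial covariance estimates for
# speech and noise, and MVDR weights are formed per frequency.
import numpy as np

def mask_mvdr(stft, speech_mask, diag_load=1e-6):
    """stft: (C, F, T) multichannel STFT; speech_mask: (F, T) in [0, 1]."""
    C, F, T = stft.shape
    out = np.zeros((F, T), dtype=complex)
    for f in range(F):
        X = stft[:, f, :]                               # (C, T)
        w_s, w_n = speech_mask[f], 1.0 - speech_mask[f]
        R_s = (X * w_s) @ X.conj().T / max(w_s.sum(), 1e-8)
        R_n = (X * w_n) @ X.conj().T / max(w_n.sum(), 1e-8)
        R_n += diag_load * np.trace(R_n).real / C * np.eye(C)
        d = np.linalg.eigh(R_s)[1][:, -1]               # steering vector estimate
        num = np.linalg.solve(R_n, d)
        w = num / (d.conj() @ num)                      # MVDR weights
        out[f] = w.conj() @ X
    return out                                           # (F, T) enhanced STFT
```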
[24] Xiong Xiao, Shinji Watanabe, Hakan Erdogan, Liang Lu, John Hershey, Michael L Seltzer, Guoguo Chen, Yu Zhang, Michael Mandel, and Dong Yu. Deep beamforming networks for multi-channel speech recognition. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 5745--5749. IEEE, March 2016. [ bib | DOI | .pdf ]
Despite the significant progress in speech recognition enabled by deep neural networks, poor performance persists in some scenarios. In this work, we focus on far-field speech recognition which remains challenging due to high levels of noise and reverberation in the captured speech signals. We propose to represent the stages of acoustic processing including beamforming, feature extraction, and acoustic modeling, as three components of a single unified computational network. The parameters of a frequency-domain beamformer are first estimated by a network based on features derived from the microphone channels. These filter coefficients are then applied to the array signals to form an enhanced signal. Conventional features are then extracted from this signal and passed to a second network that performs acoustic modeling for classification. The parameters of both the beamforming and acoustic modeling networks are trained jointly using back-propagation with a common crossentropy objective function. In experiments on the AMI meeting corpus, we observed improvements by pre-training each sub-network with a network-specific objective function before joint training of both networks. The proposed method obtained a 3.2% absolute word error rate reduction compared to a conventional pipeline of independent processing stages.
[25] Deblin Bagchi, Michael I Mandel, Zhongqiu Wang, Yanzhang He, Andrew Plummer, and Eric Fosler-Lussier. Combining spectral feature mapping and multi-channel model-based source separation for noise-robust automatic speech recognition. In Proceedings of the IEEE Workshop on Automatic Speech Recognition and Understanding, pages 496--503, 2015. [ bib | DOI | .pdf ]
Automatic Speech Recognition systems suffer from severe performance degradation in the presence of myriad complicating factors such as noise, reverberation, multiple speech sources, multiple recording devices, etc. Previous challenges have sparked much innovation when it comes to designing systems capable of handling these complications. In this spirit, the CHiME-3 challenge presents system builders with the task of recognizing speech in a real-world noisy setting wherein speakers talk to an array of 6 microphones in a tablet. In order to address these issues, we explore the effectiveness of first applying a model-based source separation mask to the output of a beamformer that combines the source signals recorded by each microphone, followed by a DNN-based front end spectral mapper that predicts clean filterbank features. The source separation algorithm MESSL (Model-based EM Source Separation and Localization) has been extended from two channels to multiple channels in order to meet the demands of the challenge. We report on interactions between the two systems, cross-cut by the use of a robust beamforming algorithm called BeamformIt. Evaluations of different system settings reveal that combining MESSL and the spectral mapper together on the baseline beamformer algorithm boosts the performance substantially.
[26] Sreyas Srimath Tirumala and Michael I Mandel. Exciting estimated clean spectra for speech resynthesis. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2015. [ bib | Poster | .pdf ]
Spectral masking techniques are prevalent for noise suppression but they damage speech in regions of the spectrum where both noise and speech are present. This paper instead utilizes a recently introduced analysis-by-synthesis technique to estimate the spectral envelope of the speech at all frequencies, and adds to it a model of the speech excitation necessary to fully resynthesize a clean speech signal. Such a resynthesis should have little noise and high quality compared to mask-based approaches. We compare several different excitation signals on the Aurora4 corpus, including those derived from the high quefrency components of the noisy mixture and from the combination of a noise robust pitch tracker and a voiced/unvoiced classifier. Preliminary subjective evaluations suggest that the speech synthesized using our approach has higher voice quality and noise suppression than spectral masking.
[27] Michael I Mandel and Young Suk Cho. Audio super-resolution using concatenative resynthesis. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2015. [ bib | Demo | Slides | .pdf ]
This paper utilizes a recently introduced non-linear dictionary-based denoising system in another voice mapping task, that of transforming low-bandwidth, low-bitrate speech into high-bandwidth, high-quality speech. The system uses a deep neural network as a learned non-linear comparison function to drive unit selection in a concatenative synthesizer based on clean recordings. This neural network is trained to predict whether a given clean audio segment from the dictionary could be transformed into a given segment of the degraded observation. Speaker-dependent experiments on the small-vocabulary CHiME2-GRID corpus show that this model is able to resynthesize high quality clean speech from degraded observations. Preliminary listening tests show that the system is able to improve subjective speech quality evaluations by up to 50 percentage points, while a similar system based on non-negative matrix factorization and trained on the same data produces no significant improvement.
[28] Michael I Mandel and Nicoleta Roman. Enforcing consistency in spectral masks using Markov random fields. In Proceedings of EUSIPCO, pages 2028--2032, 2015. [ bib | .pdf ]
Localization-based multichannel source separation algorithms typically operate by clustering or classifying individual time-frequency points based on their spatial characteristics, treating adjacent points as independent observations. The Model-based EM Source Separation and Localization (MESSL) algorithm is one such approach for binaural signals that achieves additional robustness by enforcing consistency across frequencies in interaural phase differences. This paper incorporates MESSL into a Markov Random Field (MRF) framework in order to enforce consistency in the assignment of neighboring time-frequency units to sources. Approximate inference in the MRF is performed using loopy belief propagation, and the same approach can be used to smooth any probabilistic source separation mask. The proposed MESSL-MRF algorithm is tested on binaural mixtures of three sources in reverberant conditions and shows significant improvements over the original MESSL algorithm as measured by both signal-to-distortion ratios as well as a speech intelligibility predictor.
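As a rough stand-in for the MRF inference described here, the sketch below smooths a probabilistic source mask with a few mean-field-style updates under a Potts-like neighbor potential; the paper itself uses loopy belief propagation, so this is only meant to illustrate the neighborhood-consistency idea.

```python
# Rough stand-in for the MRF mask smoothing described above: a few
# mean-field-style updates combine each point's original source
# posteriors with those of its four spectrogram neighbors under a
# Potts-like smoothness weight. Not the paper's inference procedure.
import numpy as np

def smooth_masks(post, beta=1.0, n_iter=5):
    """post: (S, F, T) per-source posteriors summing to 1 over sources."""
    log_unary = np.log(np.maximum(post, 1e-12))
    q = post.copy()
    for _ in range(n_iter):
        # Sum of neighboring beliefs (up/down/left/right, zero-padded edges).
        nbr = np.zeros_like(q)
        nbr[:, 1:, :] += q[:, :-1, :]
        nbr[:, :-1, :] += q[:, 1:, :]
        nbr[:, :, 1:] += q[:, :, :-1]
        nbr[:, :, :-1] += q[:, :, 1:]
        logits = log_unary + beta * nbr
        logits -= logits.max(axis=0, keepdims=True)
        q = np.exp(logits)
        q /= q.sum(axis=0, keepdims=True)
    return q
```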
[29] Michael I Mandel, Young-Suk Cho, and Yuxuan Wang. Learning a concatenative resynthesis system for noise suppression. In Proceedings of the IEEE GlobalSIP conference, 2014. [ bib | Demo | Poster | .pdf ]
This paper introduces a new approach to dictionary-based source separation employing a learned non-linear metric. In contrast to existing parametric source separation systems, this model is able to utilize a rich dictionary of speech signals. In contrast to previous dictionary-based source separation systems, the system can utilize perceptually relevant non-linear features of the noisy and clean audio. This approach utilizes a deep neural network (DNN) to predict whether a noisy chunk of audio contains a given clean chunk. Speaker-dependent experiments on the CHiME2-GRID corpus show that this model is able to accurately resynthesize clean speech from noisy observations. Preliminary listening tests show that the system's output has much higher audio quality than existing parametric systems trained on the same data, achieving noise suppression levels close to those of the original clean speech.
[30] Michael I Mandel, Sarah E Yoho, and Eric W Healy. Generalizing time-frequency importance functions across noises, talkers, and phonemes. In Proceedings of Interspeech, 2014. [ bib | Poster | .pdf ]
Listeners can reliably identify speech in noisy conditions, although it is generally not known what specific features of speech are used to do this. We utilize a recently introduced data-driven framework to identify these features. By analyzing listening-test results involving the same speech utterance mixed with many different noise instances, the framework is able to compute the importance of each time-frequency point in the utterance to its intelligibility. This paper shows that a trained model resulting from this framework can generalize to new conditions, successfully predicting the intelligibility of novel mixtures. First, it can generalize to novel noise instances after being trained on mixtures involving the same speech utterance but different noises. Second, it can generalize to novel talkers after being trained on mixtures involving the same syllables produced by different talkers in different noises. Finally, it can generalize to novel phonemes, after being trained on mixtures involving different consonants produced by the same or different talkers in different noises. Aligning the clean utterances in time and then propagating this alignment to the features used in the intelligibility prediction improves this generalization performance further.
[31] Michael I Mandel and Arun Narayanan. Analysis-by-synthesis feature estimation for robust automatic speech recognition using spectral masks. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, 2014. [ bib | Poster | .pdf ]
Spectral masking is a promising method for noise suppression in which regions of the spectrogram that are dominated by noise are attenuated while regions dominated by speech are preserved. It is not clear, however, how best to combine spectral masking with the non-linear processing necessary to compute automatic speech recognition features. We propose an analysis-by-synthesis approach to automatic speech recognition, which, given a spectral mask, poses the estimation of mel frequency cepstral coefficients (MFCCs) of the clean speech as an optimization problem. MFCCs are found that minimize a combination of the distance from the resynthesized clean power spectrum to the regions of the noisy spectrum selected by the mask and the negative log likelihood under an unmodified large vocabulary continuous speech recognizer. In evaluations on the Aurora4 noisy speech recognition task with both ideal and estimated masks, analysis-by-synthesis decreases both word error rates and distances to clean speech as compared to traditional approaches.
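The optimization can be pictured as a search over MFCC vectors for the one whose resynthesized spectrum matches the mask-selected portions of the noisy spectrum while remaining likely under the recognizer; the sketch below uses placeholder `mfcc_to_spectrum` and `neg_log_likelihood` functions and a generic quasi-Newton optimizer, so it outlines the objective rather than the paper's implementation.

```python
# Schematic of the analysis-by-synthesis objective described above:
# choose MFCCs whose resynthesized power spectrum matches the noisy
# spectrum where the mask says speech dominates, plus a recognizer
# prior. `mfcc_to_spectrum` and `neg_log_likelihood` are placeholders.
import numpy as np
from scipy.optimize import minimize

def estimate_mfcc(noisy_spec, mask, mfcc_to_spectrum, neg_log_likelihood,
                  mfcc_init, lam=1.0):
    def objective(mfcc):
        synth = mfcc_to_spectrum(mfcc)                  # resynthesized power spectrum
        fit = np.sum(mask * (np.log(synth + 1e-12)
                             - np.log(noisy_spec + 1e-12)) ** 2)
        return fit + lam * neg_log_likelihood(mfcc)     # recognizer acts as a prior
    return minimize(objective, mfcc_init, method="L-BFGS-B").x
```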
[32] Arnab Nandi, Lilong Jiang, and Michael I Mandel. Gestural query specification. In Proceedings of the International Conference on Very Large Data Bases, volume 7, 2014. [ bib | Slides | .pdf ]
Direct, ad-hoc interaction with databases has typically been performed over console-oriented conversational interfaces using query languages such as SQL. With the rise in popularity of gestural user interfaces and computing devices that use gestures as their exclusive modes of interaction, database query interfaces require a fundamental rethinking to work without keyboards. We present a novel query specification system that allows the user to query databases using a series of gestures. We present a novel gesture recognition system that uses both the interaction and the state of the database to classify gestural input into relational database queries. We conduct exhaustive systems performance tests and user studies to demonstrate that our system is not only performant and capable of interactive latencies, but it is also more usable, faster to use and more intuitive than existing systems.
[33] Michael I. Mandel. Learning an intelligibility map of individual utterances. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, 2013. [ bib | .pdf ]
Predicting the intelligibility of noisy recordings is difficult and most current algorithms only aim to be correct on average across many recordings. This paper describes a listening test paradigm and associated analysis technique that can predict the intelligibility of a specific recording of a word in the presence of a specific noise instance. The analysis learns a map of the importance of each point in the recording's spectrogram to the overall intelligibility of the word when glimpsed through “bubbles” in many noise instances. By treating this as a classification problem, a linear classifier can be used to predict intelligibility and can be examined to determine the importance of spectral regions. This approach was tested on recordings of vowels and consonants. The important regions identified by the model in these tests agreed with those identified by a standard, non-predictive statistical test of independence and with the acoustic phonetics literature.
[34] Nicoleta Roman and Michael Mandel. Classification-based binaural dereverberation. In Proceedings of Interspeech, 2013. [ bib ]
Reverberation has a detrimental effect on speech perception both in terms of quality as well as intelligibility, as late reflections smear temporal and spectral cues. The ideal binary mask, which is an established computational approach to sound separation, was recently extended to remove reverberation. Experiments with both normal hearing and hearing impaired listeners have shown significant intelligibility improvements for reverberant speech processed using such a priori binary masks. The dereverberation problem can thus be formulated as a classification problem, where the desired output is the ideal binary mask. The goal in this approach is to produce a mask that selects the time-frequency regions where the direct energy dominates the energy from the late reflections. In this study, a binaural dereverberation algorithm is proposed which utilizes the binaural cues of interaural time and level differences as features. The algorithm is tested in highly reverberant environments using both simulated and recorded room impulse responses. Evaluations show significant improvements over the unprocessed condition as measured by both a speech quality measure and a speech intelligibility predictor.
[35] Johanna Devaney, Michael I. Mandel, and Ichiro Fujinaga. A study of intonation in three-part singing using the automatic music performance analysis and comparison toolkit (AMPACT). In Proceedings of the International Society for Music Information Retrieval conference, 2012. [ bib | .pdf ]
This paper introduces the Automatic Music Performance Analysis and Comparison Toolkit (AMPACT), a MATLAB toolkit for accurately aligning monophonic audio to MIDI scores as well as extracting and analyzing timing-, pitch-, and dynamics-related performance data from the aligned recordings. This paper also presents the results of an analysis performed with AMPACT on an experiment studying intonation in three-part singing. The experiment examines the interval size and drift in four ensembles' performances of a short exercise by Benedetti, which was designed to highlight the conflict between Just Intonation tuning and pitch drift.
[36] Johanna Devaney, Michael I. Mandel, and Ichiro Fujinaga. Characterizing singing voice fundamental frequency trajectories. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, pages 73--76, October 2011. [ bib | Poster | .pdf ]
This paper evaluates the utility of the Discrete Cosine Transform (DCT) for characterizing singing voice fundamental frequency (F0) trajectories. Specifically, it focuses on the use of the 1st and 2nd DCT coefficients as approximations of slope and curvature. It also considers the impact of vocal vibrato on the DCT calculations, including the influence of segmentation on the consistency of the reported DCT coefficient values. These characterizations are useful for describing similarities in the evolution of the fundamental frequency in different notes. Such descriptors can be applied in the areas of performance analysis and singing synthesis.
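A small sketch of the characterization described above, using a synthetic F0 trajectory; the sampling rate, trajectory shape, and use of the first two non-DC DCT-II coefficients as slope and curvature proxies are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dct

# Hypothetical F0 trajectory for one sung note (in Hz), sampled every 10 ms,
# with a slight upward glide plus 6 Hz vibrato.
t = np.linspace(0, 0.5, 50)
f0 = 220 + 8 * (t / 0.5) + 3 * np.sin(2 * np.pi * 6 * t)

# DCT-II of the mean-removed trajectory: the first non-DC coefficient tracks
# the note's overall slope and the next one its curvature. How the note is
# segmented (e.g. trimming partial vibrato cycles) changes these values,
# which is the segmentation sensitivity discussed above.
coeffs = dct(f0 - f0.mean(), type=2, norm="ortho")
slope_like, curvature_like = coeffs[1], coeffs[2]
```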
[37] Michael I. Mandel, Douglas Eck, and Yoshua Bengio. Learning tags that vary within a song. In Proceedings of the International Society for Music Information Retrieval conference, pages 399--404, August 2010. [ bib | Slides | .pdf ]
This paper examines the relationship between human generated tags describing different parts of the same song. These tags were collected using Amazon's Mechanical Turk service. We find that the agreement between different people's tags decreases as the distance between the parts of a song that they heard increases. To model these tags and these relationships, we describe a conditional restricted Boltzmann machine. Using this model to fill in tags that should probably be present given a context of other tags, we train automatic tag classifiers (autotaggers) that outperform those trained on the original data.
[38] James Bergstra, Michael I. Mandel, and Douglas Eck. Scalable genre and tag prediction with spectral covariance. In Proceedings of the International Society for Music Information Retrieval conference, pages 507--512, August 2010. [ bib | .pdf ]
Cepstral analysis is effective in separating source from filter in vocal and monophonic [pitched] recordings, but is it a good general-purpose framework for working with music audio? We evaluate covariance in spectral features as an alternative to means and variances in cepstral features (particularly MFCCs) as summaries of frame-level features. We find that spectral covariance is more effective than mean, variance, and covariance statistics of MFCCs for genre and social tag prediction. Support for our model comes from strong and state-of-the-art performance on the GTZAN genre dataset, MajorMiner, and MagnaTagatune. Our classification strategy based on linear classifiers is easy to implement, exhibits very little sensitivity to hyper-parameters, trains quickly (even for web-scale datasets), is fast to apply, and offers competitive performance in genre and tag prediction.
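A rough sketch of the clip-level summary described above, assuming a log-mel spectrogram as the frame-level spectral feature; the feature choice, mel-band count, and synthetic audio are illustrative, not the paper's exact pipeline.

```python
import numpy as np
import librosa

def spectral_covariance_features(y, sr, n_mels=40):
    """Summarize a clip by the upper triangle of the covariance matrix of its
    frame-level log-spectral features, as an alternative to MFCC statistics."""
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    log_S = librosa.power_to_db(S)          # shape: (n_mels, n_frames)
    C = np.cov(log_S)                       # (n_mels, n_mels) spectral covariance
    return C[np.triu_indices_from(C)]       # flattened per-clip feature vector

# Illustrative use with a synthetic 5-second clip; a linear classifier would
# then be trained on these per-clip vectors for genre or tag prediction.
sr = 22050
clip = np.random.randn(5 * sr).astype(np.float32)
features = spectral_covariance_features(clip, sr)
```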
[39] Michael I. Mandel and Daniel P. W. Ellis. The ideal interaural parameter mask: a bound on binaural separation systems. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, pages 85--88, October 2009. [ bib | DOI | Poster | .pdf ]
We introduce the Ideal Interaural Parameter Mask as an upper bound on the performance of mask-based source separation algorithms that are based on the differences between signals from two microphones or ears. With two additions to our Model-based EM Source Separation and Localization system, its performance approaches that of the IIPM upper bound to within 0.9 dB. These additions battle the effects of reverberation by absorbing reverberant energy and by forcing the ILD estimate to be larger than it might otherwise be. An oracle reliability measure was also added, in the hope that estimating parameters from more reliable regions of the spectrogram would improve separation, but it was not consistently useful.
[40] Johanna Devaney, Michael I. Mandel, and Daniel P. W. Ellis. Improving MIDI-audio alignment with acoustic features. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, pages 45--48, October 2009. [ bib | DOI | .pdf ]
This paper describes a technique to improve the accuracy of dynamic time warping-based MIDI-audio alignment. The technique implements a hidden Markov model that uses aperiodicity and power estimates from the signal as observations and the results of a dynamic time warping alignment as a prior. In addition to improving the overall alignment, this technique also identifies the transient and steady state sections of the note. This information is important for describing various aspects of a musical performance, including both pitch and rhythm.
[41] Edith Law, Kris West, Michael I Mandel, Mert Bay, and J. Stephen Downie. Evaluation of algorithms using games: the case of music annotation. In Proceedings of the International Society for Music Information Retrieval conference, pages 387--392, October 2009. [ bib | .pdf ]
Search by keyword is an extremely popular method for retrieving music. To support this, novel algorithms that automatically tag music are being developed. The conventional way to evaluate audio tagging algorithms is to compute measures of agreement between the output and the ground truth set. In this work, we introduce a new method for evaluating audio tagging algorithms on a large scale by collecting set-level judgments from players of a human computation game called TagATune. We present the design and preliminary results of an experiment comparing five algorithms using this new evaluation metric, and contrast the results with those obtained by applying several conventional agreement-based evaluation metrics.
[42] Ron J. Weiss, Michael I. Mandel, and Daniel P. W. Ellis. Source separation based on binaural cues and source model constraints. In Proceedings of Interspeech, pages 419--422, September 2008. [ bib | Demo | .pdf ]
We describe a system for separating multiple sources from a two-channel recording based on interaural cues and known characteristics of the source signals. We combine a probabilistic model of the observed interaural level and phase differences with a prior model of the source statistics and derive an EM algorithm for finding the maximum likelihood parameters of the joint model. The system is able to separate more sound sources than there are observed channels. In simulated reverberant mixtures of three speakers the proposed algorithm gives a signal-to-noise ratio improvement of 2.1 dB over a baseline algorithm using only interaural cues.
[43] Michael I. Mandel and Daniel P. W. Ellis. Multiple-instance learning for music information retrieval. In Proceedings of the International Society for Music Information Retrieval conference, pages 577--582, September 2008. [ bib | Poster | .pdf ]
Multiple-instance learning algorithms train classifiers from lightly supervised data, i.e. labeled collections of items, rather than labeled items. We compare the multiple-instance learners mi-SVM and MILES on the task of classifying 10-second song clips. These classifiers are trained on tags at the track, album, and artist levels, or granularities, that have been derived from tags at the clip granularity, allowing us to test the effectiveness of the learners at recovering the clip labeling in the training set and predicting the clip labeling for a held-out test set. We find that mi-SVM is better than a control at the recovery task on training clips, with an average classification accuracy as high as 87% over 43 tags; on test clips, it is comparable to the control with an average classification accuracy of up to 68%. MILES performed adequately on the recovery task, but poorly on the test clips.
[44] Daniel P. W. Ellis, Courtenay V. Cotton, and Michael I. Mandel. Cross-correlation of beat-synchronous representations for music similarity. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 57--60, April 2008. [ bib | DOI | .pdf ]
Systems to predict human judgments of music similarity directly from the audio have generally been based on the global statistics of spectral feature vectors i.e. collapsing any large-scale temporal structure in the data. Based on our work in identifying alternative ("cover") versions of pieces, we investigate using direct correlation of beat-synchronous representations of music audio to find segments that are similar not only in feature statistics, but in the relative positioning of those features in tempo-normalized time. Given a large enough search database, good matches by this metric should have very high perceived similarity to query items. We evaluate our system through a listening test in which subjects rated system-generated matches as similar or not similar, and compared results to a more conventional timbral and rhythmic similarity baseline, and to random selections.
[45] Michael I. Mandel and Daniel P. W. Ellis. EM localization and separation using interaural level and phase cues. In IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, pages 275--278, October 2007. [ bib | DOI | Poster | .pdf ]
We describe a system for localizing and separating multiple sound sources from a reverberant two-channel recording. It consists of a probabilistic model of interaural level and phase differences and an EM algorithm for finding the maximum likelihood parameters of this model. By assigning points in the interaural spectrogram probabilistically to sources with the best-fitting parameters and then estimating the parameters of the sources from the points assigned to them, the system is able to separate and localize more sound sources than there are available channels. It is also able to estimate frequency-dependent level differences of sources in a mixture that correspond well to those measured in isolation. In experiments in simulated anechoic and reverberant environments, the proposed system improved the signal-to-noise ratio of target sources by 2.7 and 3.4 dB more than two comparable algorithms on average.
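The alternation described above has the general shape of mixture-model EM; the following is a sketch in my own notation (φ for interaural phase difference, α for interaural level difference, θ_i for the parameters of source i), not the paper's exact formulation.

```latex
\[
  \text{E-step:}\quad
  \gamma_i(\omega,t) =
    \frac{\pi_i\, p\bigl(\phi(\omega,t),\,\alpha(\omega,t) \mid \theta_i\bigr)}
         {\sum_k \pi_k\, p\bigl(\phi(\omega,t),\,\alpha(\omega,t) \mid \theta_k\bigr)},
  \qquad
  \text{M-step:}\quad
  \theta_i \leftarrow \arg\max_{\theta}
    \sum_{\omega,t} \gamma_i(\omega,t)\,
      \log p\bigl(\phi(\omega,t),\,\alpha(\omega,t) \mid \theta\bigr).
\]
```

The posteriors γ_i(ω, t) double as soft time-frequency masks for reconstructing source i from the mixture.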
[46] Michael I. Mandel and Daniel P. W. Ellis. A web-based game for collecting music metadata. In Simon Dixon, David Bainbridge, and Rainer Typke, editors, Proceedings of the International Society for Music Information Retrieval conference, pages 365--366, September 2007. [ bib | Poster | .pdf ]
We have designed a web-based game to make collecting descriptions of musical excerpts fun, easy, useful, and objective. Participants describe 10 second clips of songs and score points when their descriptions match those of other participants. The rules were designed to encourage users to be thorough and the clip length was chosen to make judgments more objective and specific. Analysis of preliminary data shows that we are able to collect objective and specific descriptions of clips and that players tend to agree with one another.
[47] Michael I. Mandel, Daniel P. W. Ellis, and Tony Jebara. An EM algorithm for localizing multiple sound sources in reverberant environments. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems, pages 953--960. MIT Press, Cambridge, MA, 2007. [ bib | Poster | .pdf ]
We present a method for localizing and separating sound sources in stereo recordings that is robust to reverberation and does not make any assumptions about the source statistics. The method consists of a probabilistic model of binaural multi-source recordings and an expectation maximization algorithm for finding the maximum likelihood parameters of that model. These parameters include distributions over delays and assignments of time-frequency regions to sources. We evaluate this method against two comparable algorithms on simulations of simultaneous speech from two or three sources. Our method outperforms the others in anechoic conditions and performs as well as the better of the two in the presence of reverberation.
[48] Michael I. Mandel and Daniel P. W. Ellis. Song-level features and support vector machines for music classification. In Joshua D. Reiss and Geraint A. Wiggins, editors, Proceedings of the International Society for Music Information Retrieval conference, pages 594--599, September 2005. [ bib | Poster | .pdf ]
Searching and organizing growing digital music collections requires automatic classification of music. This paper describes a new system, tested on the task of artist identification, that uses support vector machines to classify songs based on features calculated over their entire lengths. Since support vector machines are exemplar-based classifiers, training on and classifying entire songs instead of short-time features makes intuitive sense. On a dataset of 1200 pop songs performed by 18 artists, we show that this classifier outperforms similar classifiers that use only SVMs or song-level features. We also show that the KL divergence between single Gaussians and Mahalanobis distance between MFCC statistics vectors perform comparably when classifiers are trained and tested on separate albums, but KL divergence outperforms Mahalanobis distance when trained and tested on songs from the same albums.
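For reference, the closed-form KL divergence between the two single Gaussians N(μ_0, Σ_0) and N(μ_1, Σ_1) fit to each song's d-dimensional MFCC frames is:

```latex
\[
  D_{\mathrm{KL}}\bigl(\mathcal{N}_0 \,\|\, \mathcal{N}_1\bigr)
    = \tfrac{1}{2}\Bigl[
        \operatorname{tr}\bigl(\Sigma_1^{-1}\Sigma_0\bigr)
        + (\mu_1-\mu_0)^{\top}\Sigma_1^{-1}(\mu_1-\mu_0)
        - d
        + \ln\frac{\det\Sigma_1}{\det\Sigma_0}
      \Bigr].
\]
```

Since this divergence is asymmetric, a symmetrized version (e.g. the sum of the two directions) is typically exponentiated into a kernel such as K = exp(−γ D) before being handed to the SVM; the exact kernel used in the paper may differ from this sketch.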
[49] Erik B. Sudderth, Michael I. Mandel, William T. Freeman, and Alan S. Willsky. Distributed occlusion reasoning for tracking with nonparametric belief propagation. In Lawrence K. Saul, Yair Weiss, and Léon Bottou, editors, Advances in Neural Information Processing Systems, pages 1369--1376. MIT Press, Cambridge, MA, 2005. [ bib | Demo | .pdf ]
We describe a three-dimensional geometric hand model suitable for visual tracking applications. The kinematic constraints implied by the model's joints have a probabilistic structure which is well described by a graphical model. Inference in this model is complicated by the hand's many degrees of freedom, as well as multimodal likelihoods caused by ambiguous image measurements. We use nonparametric belief propagation (NBP) to develop a tracking algorithm which exploits the graph's structure to control complexity, while avoiding costly discretization. While kinematic constraints naturally have a local structure, self-occlusions created by the imaging process lead to complex interdependencies in color and edge-based likelihood functions. However, we show that local structure may be recovered by introducing binary hidden variables describing the occlusion state of each pixel. We augment the NBP algorithm to infer these occlusion variables in a distributed fashion, and then analytically marginalize over them to produce hand position estimates which properly account for occlusion events. We provide simulations showing that NBP may be used to refine inaccurate model initializations, as well as track hand motion through extended image sequences.

Other

[1] Eleanor Davol, Natalie Boelman, Todd Brinkman, Carissa Brown, Glen Liston, Michael Mandel, Enis Coban, Megan Perra, Kirsten Reid, Scott Leorna, et al. Automated soundscape analysis reveals strong influence of time since wildfire on boreal breeding birds. In AGU Fall Meeting Abstracts, volume 2021, pages B23C--03, 2021. [ bib ]
[2] Zhaoheng Ni, Felix Grezes, Viet Anh Trinh, and Michael I Mandel. Improved MVDR beamforming using LSTM speech models to clean spatial clustering masks, 2020. [ bib | arXiv | .pdf ]
[3] Felix Grezes, Zhaoheng Ni, Viet Anh Trinh, and Michael Mandel. Enhancement of spatial clustering-based time-frequency masks using LSTM neural networks, 2020. [ bib | arXiv | .pdf ]
[4] Felix Grezes, Zhaoheng Ni, Viet Anh Trinh, and Michael Mandel. Combining spatial clustering with LSTM speech models for multichannel speech enhancement, 2020. [ bib | arXiv | .pdf ]
[5] Tian Cai, Michael I Mandel, and Di He. Music autotagging as captioning. In First Workshop on NLP for Music and Audio, 2020. [ bib | Poster | http ]
Music autotagging has typically been formulated as a multilabel classification problem. This approach assumes that the tags associated with a clip of music are an unordered set. With the recent success of image and video captioning as well as environmental audio captioning, we propose formulating music autotagging as a captioning task, which automatically associates tags with a clip of music in the order a human would apply them. Under the formulation of captioning as a sequence-to-sequence problem, previous music autotagging systems can be used as the encoder, extracting a representation of the musical audio. An attention-based decoder is added to learn to predict a sequence of tags describing the given clip. Experiments are conducted on data collected from the MajorMiner game, which includes the order and timing with which tags were applied to clips by individual users, and contains 3.95 captions per clip on average.
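A minimal, illustrative sketch of the captioning formulation described above: frame-level audio features stand in for the output of a pretrained autotagging encoder, and a Transformer decoder stands in for the attention-based decoder (the paper's architecture and dimensions may differ; all names and sizes here are made up).

```python
import torch
import torch.nn as nn

# Made-up sizes for illustration only.
N_TAGS, D_MODEL = 100, 128          # tag vocabulary size, model width

class TagCaptioner(nn.Module):
    """Toy encoder-decoder that emits an ordered sequence of tag tokens."""
    def __init__(self, n_audio_feats=64):
        super().__init__()
        # Stand-in for a pretrained autotagging front end producing a
        # sequence of frame-level embeddings (the decoder's "memory").
        self.encoder = nn.Linear(n_audio_feats, D_MODEL)
        self.tag_embed = nn.Embedding(N_TAGS, D_MODEL)
        layer = nn.TransformerDecoderLayer(D_MODEL, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.out = nn.Linear(D_MODEL, N_TAGS)

    def forward(self, audio_feats, tag_tokens):
        # audio_feats: (batch, n_frames, n_audio_feats); tag_tokens: (batch, seq_len)
        memory = self.encoder(audio_feats)
        tgt = self.tag_embed(tag_tokens)
        causal = nn.Transformer.generate_square_subsequent_mask(tag_tokens.size(1))
        hidden = self.decoder(tgt, memory, tgt_mask=causal)
        return self.out(hidden)                 # next-tag logits per position

model = TagCaptioner()
logits = model(torch.randn(2, 50, 64), torch.randint(0, N_TAGS, (2, 7)))
```

Training would then minimize cross-entropy between these next-tag logits and listeners' tag sequences, with start/end tokens delimiting each caption.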
[6] Shinji Watanabe, Michael I Mandel, Jon Barker, and Emmanuel Vincent. CHiME-6 challenge: Tackling multispeaker speech recognition for unsegmented recordings, 2020. [ bib | arXiv ]
Following the success of the 1st, 2nd, 3rd, 4th and 5th CHiME challenges, we organize the 6th CHiME Speech Separation and Recognition Challenge (CHiME-6). The new challenge revisits the previous CHiME-5 challenge and further considers the problem of distant multi-microphone conversational speech diarization and recognition in everyday home environments. Speech material is the same as the previous CHiME-5 recordings except for accurate array synchronization. The material was elicited using a dinner party scenario with efforts taken to capture data that is representative of natural conversational speech. This paper provides a baseline description of the CHiME-6 challenge for both segmented multispeaker speech recognition (Track 1) and unsegmented multispeaker speech recognition (Track 2). Of note, Track 2 is the first challenge activity in the community to tackle an unsegmented multispeaker speech recognition scenario with a complete set of reproducible open source baselines providing speech enhancement, speaker diarization, and speech recognition modules.
[7] Lauren Mandel, Michael I. Mandel, and Chris Streb. Soundscape ecology: How listening to the environment can shape design and planning. In American Society of Landscape Architects Conference on Landscape Architecture, San Diego, CA, 2019. [ bib ]
[8] Zhaoheng Ni and Michael I Mandel. Onssen: an open-source speech separation and enhancement library. pages 7269--7273, 2020. [ bib | DOI | arXiv ]
Speech separation is an essential task for multi-talker speech recognition. Many deep learning approaches have recently been proposed and have steadily improved the state of the art. However, the lack of publicly available implementations makes it difficult for researchers to compare algorithms on the same datasets. Building a generic platform lets researchers easily implement novel separation algorithms and compare them with existing ones on customized datasets. We introduce "onssen", an open-source speech separation and enhancement library. onssen focuses on deep-learning-based separation and enhancement algorithms. It uses the LibRosa and NumPy libraries for feature extraction and PyTorch as the back end for model training. onssen supports most time-frequency mask-based separation algorithms (e.g. deep clustering, chimera net, chimera++) and also supports customized datasets. In this paper, we describe the functionality of the modules in onssen and show that the algorithms implemented in onssen achieve the same performance as reported in the original papers.
[9] Vikas Grover, Michael I Mandel, Valerie Shafer, Yusra Syed, and Austin Twine. Understanding acoustic cues non-native speakers use for identifying English /v/-/w/ using the bubble noise method. In ASHA Convention, 2018. [ bib | http ]
Hindi speakers of English perceive the English /v/-/w/ contrast less accurately than English speakers (Grover et al., 2016). The specific acoustic information misperceived in /v/-/w/ contrast remains unclear. This study, using a novel method of “bubble” noise (Mandel et al., 2016), identifies the acoustic cues for perception of /v/-/w/ contrast in English and Hindi speakers of English.

Learner Outcome(s): (1) Describe the effects of first language phonology on second language phonology; (2) Discuss a novel method (Bubble Noise) to identify specific acoustic cues; (3) Explain the importance of targeted training studies for difficult non-native contrasts.

Keywords: Speech perception, Bubble Noise, Non-native speakers, Hindi, Acoustic cues

[10] Hussein Ghaly and Michael I Mandel. Analyzing human and machine performance in resolving ambiguous spoken sentences. In 1st Workshop on Speech-Centric Natural Language Processing (SCNLP), pages 18--26, 2017. [ bib | .pdf ]
[11] Jiyoung Choi and Michael I Mandel. Perception of Korean fricatives and affricates in 'bubble' noise by native and nonnative speakers. In International Circle of Korean Linguistics, 2017. [ bib ]
[12] Michael I Mandel and Nicoleta Roman. Integrating Markov random fields and model-based expectation maximization source separation and localization. In Acoustical Society of America Spring Meeting, 2015. [ bib | Slides ]
[13] Michael I Mandel, Sarah E Yoho, and Eric W Healy. Listener consistency in identifying speech mixed with particular “bubble” noise instances. In Acoustical Society of America Spring Meeting, 2015. [ bib | Poster ]
[14] Michael I Mandel and Song Hui Chon. Using auditory bubbles to determine spectro-temporal cues of timbre. In Cognitively Based Music Informatics Research (CogMIR), 2014. [ bib | Slides ]
Listeners can reliably identify speech in noisy conditions, but it is not well understood which specific features of speech they use to do this. This talk presents a data-driven framework for identifying these features. By analyzing listening-test results involving the same speech utterance mixed with many different "bubble" noise instances, the framework is able to compute the importance of each time-frequency point in the utterance to its intelligibility, which we call the time-frequency importance function. We show that listeners are self-consistent in their ability to identify the word in individual mixtures and are also fairly consistent with other listeners, and that different listeners' time-frequency importance functions are similar for the same utterance. In addition, a predictive model trained under this framework is able to generalize to new conditions, successfully predicting the intelligibility of mixtures involving novel noise instances, novel utterances of the same word from the same and different talkers, and even to some extent novel consonants. If there is time, I will also discuss a preliminary experiment applying this framework to the determination of the time-frequency points in a musical note that are most important to listeners for recognizing its timbre.
[15] Arnab Nandi and Michael I Mandel. The interactive join: Recognizing gestures for database queries. In CHI Works-In-Progress, 2013. [ bib | Poster | .pdf ]
Direct, ad-hoc interaction with databases has typically been performed over console-oriented conversational interfaces using query languages such as SQL. With the rise in popularity of gestural user interfaces and computing devices that use gestures as their exclusive mode of interaction, database query interfaces require a fundamental rethinking to work without keyboards. Unlike in domain-specific applications, the scope of possible actions is significantly larger, if not infinite. Thus, recognizing gestures and the queries they imply is a challenge. We present a novel gesture recognition system that uses both the interaction and the state of the database to classify gestural input into relational database queries. Preliminary results show that this approach allows for fast, efficient, and interactive gesture-based querying over relational databases.
[16] Michael Mandel, Razvan Pascanu, Hugo Larochelle, and Yoshua Bengio. Autotagging music with conditional restricted Boltzmann machines, March 2011. [ bib | arXiv | http ]
This paper describes two applications of conditional restricted Boltzmann machines (CRBMs) to the task of autotagging music. The first consists of training a CRBM to predict tags that a user would apply to a clip of a song based on tags already applied by other users. By learning the relationships between tags, this model is able to pre-process training data to significantly improve the performance of support vector machine (SVM) autotagging. The second is the use of a discriminative RBM, a type of CRBM, to autotag music. By simultaneously exploiting the relationships among tags and between tags and audio-based features, this model is able to significantly outperform SVMs, logistic regression, and multi-layer perceptrons. In order to be applied to this problem, the discriminative RBM was generalized to the multi-label setting and four different learning algorithms for it were evaluated, the first such in-depth analysis of which we are aware.
[17] Michael I. Mandel and Daniel P. W. Ellis. A probability model for interaural phase difference. In ISCA Workshop on Statistical and Perceptual Audio Processing (SAPA), pages 1--6, 2006. [ bib | Demo | Slides | .pdf ]
In this paper, we derive a probability model for interaural phase differences at individual spectrogram points. Such a model can combine observations across arbitrary time and frequency regions in a structured way and does not make any assumptions about the characteristics of the sound sources. In experiments with speech from twenty speakers in simulated reverberant environments, this probabilistic method predicted the correct interaural delay of a signal more accurately than generalized cross-correlation methods.
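A sketch of the flavor of model described above, in my own notation (not necessarily the paper's exact parameterization): the observed interaural phase difference φ(ω, t) at each spectrogram point is compared with the phase ωτ predicted by a candidate interaural delay τ, and the wrapped residual is treated as zero-mean Gaussian noise:

```latex
\[
  \hat{\phi}(\omega,t;\tau)
    = \arg\!\bigl( e^{\,j\,(\phi(\omega,t) - \omega\tau)} \bigr),
  \qquad
  p\bigl(\phi(\omega,t) \mid \tau\bigr)
    \propto \exp\!\Bigl( -\tfrac{\hat{\phi}(\omega,t;\tau)^{2}}{2\sigma^{2}} \Bigr).
\]
```

Because the model factors over individual time-frequency points, the log-likelihood of any region of the spectrogram is simply the sum of the point-wise log-likelihoods, which is what allows observations to be combined across arbitrary time and frequency regions.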
[18] Erik B. Sudderth, Michael I. Mandel, William T. Freeman, and Alan S. Willsky. Visual hand tracking using nonparametric belief propagation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 189--197, 2004. [ bib | DOI | Demo | .pdf ]
This paper develops probabilistic methods for visual tracking of a three-dimensional geometric hand model from monocular image sequences. We consider a redundant representation in which each model component is described by its position and orientation in the world coordinate frame. A prior model is then defined which enforces the kinematic constraints implied by the model's joints. We show that this prior has a local structure, and is in fact a pairwise Markov random field. Furthermore, our redundant representation allows color and edge-based likelihood measures, such as the Chamfer distance, to be similarly decomposed in cases where there is no self-occlusion. Given this graphical model of hand kinematics, we may track the hand's motion using the recently proposed nonparametric belief propagation (NBP) algorithm. Like particle filters, NBP approximates the posterior distribution over hand configurations as a collection of samples. However, NBP uses the graphical structure to greatly reduce the dimensionality of these distributions, providing improved robustness. Several methods are used to improve NBP's computational efficiency, including a novel KD-tree based method for fast Chamfer distance evaluation. We provide simulations showing that NBP may be used to refine inaccurate model initializations, as well as track hand motion through extended image sequences.