Xu, Y., Huang, Q., Wang, W., Foster, P., Sigtia, S., Jackson, P. J. B. and Plumbley, M. D. (2017) Unsupervised Feature Learning Based on Deep Models for Environmental Audio Tagging. IEEE/ACM Transactions on Audio, Speech, and Language Processing.

Abstract

Environmental audio tagging aims to predict only the presence or absence of certain acoustic events in an acoustic scene of interest. In this paper we make contributions to audio tagging in two parts: acoustic modeling and feature learning. We propose a shrinking deep neural network (DNN) framework incorporating unsupervised feature learning to handle the multi-label classification task. For the acoustic modeling, a large set of contextual frames of the chunk are fed into the DNN to perform a multi-label classification for the expected tags, considering that only chunk-level (or utterance-level) rather than frame-level labels are available. Dropout and background-noise-aware training are also adopted to improve the generalization capability of the DNNs. For the unsupervised feature learning, we propose to use a symmetric or asymmetric deep de-noising auto-encoder (syDAE or asyDAE) to generate new data-driven features from the logarithmic Mel-filter bank (MFB) features. The new features, which are smoothed against background noise and more compact with contextual information, can further improve the performance of the DNN baseline. Compared with the standard Gaussian Mixture Model (GMM) baseline of the DCASE 2016 audio tagging challenge, our proposed method obtains a significant equal error rate (EER) reduction from 0.21 to 0.13 on the development set. The proposed asyDAE system achieves a relative 6.7% EER reduction compared with the strong DNN baseline on the development set. Finally, the results also show that our approach obtains state-of-the-art performance with 0.15 EER on the evaluation set of the DCASE 2016 audio tagging task, while the EER of the first prize of this challenge is 0.17.
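The unsupervised feature-learning idea described in the abstract can be illustrated with a minimal sketch: corrupt the input features, train an auto-encoder to reconstruct the clean input, and keep the bottleneck activations as the new noise-smoothed features. This is only a toy illustration of the general de-noising auto-encoder technique, not the paper's syDAE/asyDAE implementation; the layer sizes, noise level, learning rate, and the random "log Mel filter-bank" data are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy stand-in for log Mel filter-bank features: 200 frames x 40 bands.
X = rng.random((200, 40))

n_in, n_hidden = X.shape[1], 16  # bottleneck yields compact features
W1 = rng.normal(0, 0.1, (n_in, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, n_in))
b2 = np.zeros(n_in)

lr = 0.1
for epoch in range(200):
    # Corrupt the input; the auto-encoder is trained to reconstruct the
    # clean data, which smooths the learned features against noise.
    X_noisy = X + rng.normal(0, 0.1, X.shape)
    H = sigmoid(X_noisy @ W1 + b1)   # encoder: data-driven features
    X_hat = sigmoid(H @ W2 + b2)     # decoder: reconstruction

    # Gradients of the mean-squared reconstruction error (backprop).
    err = X_hat - X
    d_out = err * X_hat * (1 - X_hat)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d_out / len(X)
    b2 -= lr * d_out.mean(axis=0)
    W1 -= lr * X_noisy.T @ d_hid / len(X)
    b1 -= lr * d_hid.mean(axis=0)

# After training, the bottleneck activations serve as the learned
# features that a downstream DNN tagger could consume.
features = sigmoid(X @ W1 + b1)
print(features.shape)  # (200, 16)
```

In the paper's asymmetric variant, the encoder and decoder differ in depth or width; the sketch above uses a single symmetric hidden layer purely for brevity.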