Kong, Qiuqiang, Xu, Yong, Wang, Wenwu, and Plumbley, Mark D. “A Joint Detection-Classification Model for Audio Tagging of Weakly Labelled Data.” arXiv preprint arXiv:1610.01797 (2016).

Abstract

Audio tagging aims to assign one or several tags to an audio clip. Most datasets are weakly labelled: only the clip-level tags are known, not the times at which the tagged events occur. The labelling of an audio clip is often based on the audio events in the clip, and no event-level labels are provided to the user. Previous works using the bag-of-frames model assume the tags are active all the time, which is not the case in practice. We propose a joint detection-classification (JDC) model to detect and classify the audio clip simultaneously. The JDC model has the ability to attend to informative sounds and ignore uninformative ones, so that only informative regions are used for classification. Experimental results on the “CHiME Home” dataset show that the JDC model reduces the equal error rate (EER) from 19.0% to 16.9%. More interestingly, the audio event detector is trained successfully without needing event-level labels.
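The core idea is that a detector branch weights the per-frame outputs of a classifier branch, so frames the detector deems uninformative contribute little to the clip-level tag probability. Below is a minimal NumPy sketch of that detection-weighted aggregation, assuming the usual JDC-style formulation; the function name, shapes, and toy numbers are illustrative, not the paper's exact implementation.

```python
import numpy as np

def jdc_clip_probability(cls_probs, det_probs, eps=1e-8):
    """Combine frame-level classification and detection outputs into a
    clip-level tag probability, in the spirit of the JDC model.

    cls_probs: (T, K) per-frame tag probabilities (classifier branch)
    det_probs: (T, K) per-frame informativeness weights in [0, 1]
               (detector branch)

    Each frame's classification is weighted by how informative the
    detector thinks that frame is, then averaged over the clip.
    """
    # Normalise detector scores over time so the weights per tag sum to 1.
    weights = det_probs / (det_probs.sum(axis=0, keepdims=True) + eps)
    return (cls_probs * weights).sum(axis=0)

# Toy example: 4 frames, 2 tags. The detector marks frames 1-2 as
# informative for tag 0 and frame 4 as informative for tag 1.
cls = np.array([[0.9, 0.1],
                [0.8, 0.2],
                [0.1, 0.1],
                [0.2, 0.9]])
det = np.array([[1.0, 0.0],
                [1.0, 0.0],
                [0.0, 0.0],
                [0.0, 1.0]])
print(jdc_clip_probability(cls, det))  # approx [0.85, 0.9]
```

Because only the clip-level tags supervise training, the detector weights are learned implicitly, which is why the abstract notes that the event detector emerges without event-level labels.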

Link to full paper