Conference Papers, Year: 2016

Multichannel Music Separation with Deep Neural Networks

Abstract

This article addresses the problem of multichannel music separation. We propose a framework where the source spectra are estimated using deep neural networks and combined with spatial covariance matrices to encode the source spatial characteristics. The parameters are estimated in an iterative expectation-maximization fashion and used to derive a multichannel Wiener filter. We evaluate the proposed framework for the task of music separation on a large dataset. Experimental results show that the method we describe performs consistently well in separating singing voice and other instruments from realistic musical mixtures.
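The abstract outlines the processing chain: source power spectra estimated by deep neural networks are combined with spatial covariance matrices, and the resulting statistics drive a multichannel Wiener filter. The sketch below illustrates only that final filtering step in NumPy, under the usual local Gaussian model where each source image has covariance v_j(f,n) R_j(f). The function name, array shapes, and the regularization constant are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def multichannel_wiener_filter(X, V, R, eps=1e-10):
    """Hypothetical sketch of multichannel Wiener filtering.

    X : (F, N, I) complex  mixture STFT (F freq bins, N frames, I channels)
    V : (J, F, N) float    source power spectra, e.g. produced by a DNN
    R : (J, F, I, I) complex  spatial covariance matrix per source and frequency
    Returns Y : (J, F, N, I) complex  estimated source image STFTs
    """
    J, F, N = V.shape
    I = X.shape[-1]
    Y = np.zeros((J, F, N, I), dtype=complex)
    for f in range(F):
        for n in range(N):
            # Mixture covariance: sum over sources of v_j(f, n) * R_j(f)
            Sigma_x = sum(V[j, f, n] * R[j, f] for j in range(J))
            # Small diagonal loading (assumed here) to keep the inverse stable
            Sigma_x = Sigma_x + eps * np.eye(I)
            inv_Sigma_x = np.linalg.inv(Sigma_x)
            for j in range(J):
                # Wiener gain for source j: v_j(f, n) R_j(f) Sigma_x^{-1}
                W = V[j, f, n] * R[j, f] @ inv_Sigma_x
                Y[j, f, n] = W @ X[f, n]
    return Y
```

In the framework described above, the spectra V and spatial matrices R would themselves be refined in an expectation-maximization loop before this filter is applied; the snippet only shows how the final separation is obtained once those parameters are fixed.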
Main file
eusipco_w_ack.pdf (385.77 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-01334614, version 1 (21-06-2016)
hal-01334614, version 2 (14-06-2017)

Identifiers

  • HAL Id: hal-01334614, version 2

Cite

Aditya Arie Nugraha, Antoine Liutkus, Emmanuel Vincent. Multichannel Music Separation with Deep Neural Networks. European Signal Processing Conference (EUSIPCO), Aug 2016, Budapest, Hungary. pp.1748-1752. ⟨hal-01334614v2⟩
547 views
1171 downloads
