Master Thesis, Year: 2011

Convolutional Neural Networks for Speaker-Independent Speech Recognition

Abstract

In this work we analyze a neural network structure capable of achieving a degree of invariance to speaker vocal tract differences for speech recognition applications. We show that invariance to a speaker's pitch can be built into the classification stage of the speech recognition process using convolutional neural networks, whereas past attempts have sought to achieve this invariance in the feature set fed to the classification stage. We conduct experiments on the segment-level phoneme classification task using convolutional neural networks and compare them to neural network structures previously used in speech recognition, primarily the time-delay neural network and the standard multilayer perceptron. The results show that convolutional neural networks can in many cases achieve superior performance to these classical structures.
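
To make the idea concrete, below is a minimal, illustrative sketch (in PyTorch, not taken from the thesis) of a convolutional classifier over a spectrogram-like phoneme segment. Convolving and max-pooling along the frequency axis is what provides a degree of tolerance to frequency shifts, such as those caused by differing speaker pitches. All dimensions (40 filterbank bins, 11 frames, 48 phoneme classes) and the class name ConvPhonemeClassifier are assumptions made for illustration, not the architecture used in the thesis.

    # Illustrative sketch only: a small convolutional phoneme classifier
    # over a (frequency bins x time frames) segment. Pooling along the
    # frequency axis gives partial invariance to frequency shifts
    # (e.g. different speaker pitches). Dimensions are assumed, not
    # taken from the thesis.
    import torch
    import torch.nn as nn

    class ConvPhonemeClassifier(nn.Module):
        def __init__(self, n_freq_bins=40, n_frames=11, n_phonemes=48):
            super().__init__()
            # Treat the segment as a 1-channel "image": freq bins x time frames.
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=(8, 3), padding=(0, 1)),  # local spectro-temporal filters
                nn.ReLU(),
                nn.MaxPool2d(kernel_size=(4, 1)),  # pool along frequency -> shift tolerance
            )
            # Infer the flattened feature size with a dummy forward pass.
            with torch.no_grad():
                flat = self.features(torch.zeros(1, 1, n_freq_bins, n_frames)).numel()
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(flat, 128),
                nn.ReLU(),
                nn.Linear(128, n_phonemes),
            )

        def forward(self, x):
            # x: (batch, 1, n_freq_bins, n_frames)
            return self.classifier(self.features(x))

    if __name__ == "__main__":
        model = ConvPhonemeClassifier()
        segment = torch.randn(2, 1, 40, 11)   # two random spectrogram segments
        print(model(segment).shape)           # torch.Size([2, 48])
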
Main file: Convolutional Neural Networks for Speaker Independent Speech Recognition.pdf (1.12 MB)

Dates and versions

hal-01142043, version 1 (14-04-2015)

Identifiers

  • HAL Id: hal-01142043, version 1

Cite

Eugene Belilovsky. Convolutional Neural Networks for Speaker-Independent Speech Recognition. Machine Learning [stat.ML]. 2011. ⟨hal-01142043⟩