Conference paper, Year: 2015

Fast DNN training based on auxiliary function technique

Abstract

Deep neural networks (DNN) are typically optimized with stochastic gradient descent (SGD) using either a fixed learning rate or an adaptive learning rate approach (ADAGRAD). In this paper, we introduce a new learning rule for neural networks that is based on an auxiliary function technique and requires no parameter tuning. Instead of minimizing the objective function directly, a quadratic auxiliary function with a closed-form optimum is introduced recursively, layer by layer. We prove the monotonic decrease of the objective under the new learning rule. Our experiments show that the proposed algorithm converges faster, and to a better local minimum, than SGD. In addition, we propose a combination of the proposed learning rule and ADAGRAD that further accelerates convergence. Experimental evaluation on the MNIST database shows the benefit of the proposed approach in terms of digit recognition accuracy.
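The layer-by-layer derivation used in the paper is not reproduced on this page, so the following is only a minimal sketch of the underlying auxiliary-function (majorization-minimization) principle on a hypothetical single linear layer with a squared loss: the loss is upper-bounded by a quadratic surrogate whose minimizer has a closed form, so no step size needs to be tuned. The toy data, the curvature bound L, and the loop length are illustrative assumptions, not the paper's actual update.

```python
# Sketch of a quadratic auxiliary-function update (majorization-minimization)
# for one linear layer with squared loss; NOT the paper's layer-wise DNN rule.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))   # hypothetical inputs
Y = rng.standard_normal((200, 10))   # hypothetical targets
W = np.zeros((50, 10))               # single-layer weights

def loss(W):
    return 0.5 * np.sum((X @ W - Y) ** 2)

# Curvature bound: the largest eigenvalue of X^T X upper-bounds the Hessian,
# so Q(W | W_t) = f(W_t) + <grad, W - W_t> + (L/2) ||W - W_t||_F^2 >= f(W),
# with equality at W = W_t.
L = np.linalg.eigvalsh(X.T @ X).max()

for it in range(20):
    grad = X.T @ (X @ W - Y)
    W = W - grad / L                 # closed-form minimizer of the surrogate
    # loss(W) cannot increase from one iteration to the next
```

Because the surrogate touches the loss at the current iterate and upper-bounds it everywhere else, each closed-form update can only decrease the objective; this monotonic-decrease property is what the paper proves for its recursive, layer-wise auxiliary functions.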
Main file: Dung2015ICASSP_v6_final.pdf (145.61 KB). Origin: files produced by the author(s).

Dates and versions

hal-01107809, version 1 (21-01-2015)
hal-01107809, version 2 (29-01-2015)
hal-01107809, version 3 (30-01-2015)
hal-01107809, version 4 (11-02-2015)

Identifiers

  • HAL Id: hal-01107809, version 4

Cite

Dung T. Tran, Nobutaka Ono, Emmanuel Vincent. Fast DNN training based on auxiliary function technique. ICASSP 2015 - 40th IEEE International Conference on Acoustics, Speech and Signal Processing, Apr 2015, Brisbane, Queensland, Australia. ⟨hal-01107809v4⟩
343 views
697 downloads
