Fast DNN training based on auxiliary function technique
Abstract
Deep neural networks (DNN) are typically optimized with stochastic gradient descent (SGD) using either a fixed learning rate or an adaptive learning rate approach such as ADAGRAD. In this paper, we introduce a new learning rule for neural networks that is based on an auxiliary function technique and requires no parameter tuning. Instead of minimizing the objective function directly, a quadratic auxiliary function with a closed-form optimum is recursively introduced layer by layer. We prove that the new learning rule monotonically decreases the objective. Our experiments show that the proposed algorithm converges faster and to a better local minimum than SGD. In addition, we propose a combination of the proposed learning rule and ADAGRAD which further accelerates convergence. Experimental evaluation on the MNIST database shows the benefit of the proposed approach in terms of digit recognition accuracy.