Recurrent Kernel Networks
Abstract
Substring kernels are classical tools for representing biological sequences or text.
However, when large amounts of annotated data are available, models that allow
end-to-end training, such as neural networks, are often preferred. Links between
recurrent neural networks (RNNs) and substring kernels have recently been drawn
by formally showing that RNNs with specific activation functions are points
in a reproducing kernel Hilbert space (RKHS). In this paper, we revisit this link
by generalizing convolutional kernel networks (originally related to a relaxation
of the mismatch kernel) to model gaps in sequences. This results in a new type of
recurrent neural network that can be trained end-to-end with backpropagation, or
without supervision using kernel approximation techniques. We experimentally
show that our approach is well suited to biological sequences, where it outperforms
existing methods for protein classification tasks.
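The gaps mentioned above are the key ingredient: classical substring kernels such as the gap-weighted subsequence kernel of Lodhi et al. (2002) count the subsequences two strings share while penalizing the gaps each occurrence spans, and they admit a recurrent, dynamic-programming computation. The following is a minimal sketch of that classical kernel, not of the paper's actual model; the function name `gap_weighted_subsequence_kernel` and the penalty parameter `lam` are our own illustrative choices.

```python
from functools import lru_cache

def gap_weighted_subsequence_kernel(s: str, t: str, k: int, lam: float) -> float:
    """Gap-weighted subsequence kernel (Lodhi et al., 2002): a sketch.

    Counts all length-k subsequences shared by s and t; each matched
    occurrence is down-weighted by lam raised to the total number of
    positions it spans, so gappy matches contribute less.
    """

    @lru_cache(maxsize=None)
    def k_prime(i: int, m: int, n: int) -> float:
        # K'_i(s[:m], t[:n]): auxiliary term that keeps the match "open"
        # on the right, so trailing gaps are still penalized.
        if i == 0:
            return 1.0
        if m < i or n < i:
            return 0.0
        x = s[m - 1]
        value = lam * k_prime(i, m - 1, n)  # skip last char of s (a gap)
        for j in range(1, n + 1):
            if t[j - 1] == x:  # match x against each occurrence in t
                value += k_prime(i - 1, m - 1, j - 1) * lam ** (n - j + 2)
        return value

    @lru_cache(maxsize=None)
    def k_full(i: int, m: int, n: int) -> float:
        # K_i(s[:m], t[:n]): the kernel itself, recursing on prefixes of s.
        if m < i or n < i:
            return 0.0
        x = s[m - 1]
        value = k_full(i, m - 1, n)  # subsequences not ending at s[m-1]
        for j in range(1, n + 1):
            if t[j - 1] == x:
                value += k_prime(i - 1, m - 1, j - 1) * lam ** 2
        return value

    return k_full(k, len(s), len(t))


if __name__ == "__main__":
    # "ab" vs "axb": the only common length-2 subsequence is "ab",
    # spanning 2 characters in s and 3 in t, hence weight lam**5.
    print(gap_weighted_subsequence_kernel("ab", "axb", k=2, lam=0.5))  # 0.03125
```

The nested recursion is the standard two-table dynamic program: `k_prime` carries the gap penalty for unfinished matches, while `k_full` closes a match whenever the last character of a prefix of `s` aligns with a character of `t`. Relaxing this kind of exact, penalized matching into a differentiable form is what allows the kernel computation to be unrolled as a trainable recurrent network.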