Conference paper, Year: 2002

A pruned higher-order network for knowledge extraction

Laurent Bougrain

Abstract

Usually, the learning stage of a neural network leads to a single model. But a complex problem cannot always be solved adequately by a global system. On the other hand, several systems each specialized on a subspace have difficulty dealing with situations located at the boundary between two classes. This article presents a new adaptive architecture based upon higher-order computation, which adjusts a general model to each pattern and uses a pruning algorithm to improve generalization and extract knowledge. We use one small multi-layer perceptron to predict each weight of the model from the current pattern (one estimator per weight). This architecture introduces a biologically inspired higher-order computation, similar to the modulation of a synapse between two neurons by a third neuron. The general model can then be smaller, more adaptive and more informative.
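The abstract gives no implementation details, so the following is only a minimal sketch of the idea of one small estimator network per weight of the general model. It is written in plain NumPy; the layer sizes, the linear "general model", the tanh hidden layer, and the `pruned` set marking removed estimators are all illustrative assumptions, not the paper's actual architecture or pruning algorithm.

```python
# Sketch (not the paper's code): each weight of a small general model is
# produced by its own tiny MLP from the current input pattern, so the
# model's weights are modulated pattern by pattern (higher-order computation).
import numpy as np

rng = np.random.default_rng(0)

def tiny_mlp_params(n_in, n_hidden):
    """Parameters of one small MLP that estimates a single weight (illustrative sizes)."""
    return {
        "W1": rng.normal(scale=0.1, size=(n_hidden, n_in)),
        "b1": np.zeros(n_hidden),
        "w2": rng.normal(scale=0.1, size=n_hidden),
        "b2": 0.0,
    }

def tiny_mlp_forward(p, x):
    """One estimated weight for pattern x (tanh hidden layer, linear output)."""
    h = np.tanh(p["W1"] @ x + p["b1"])
    return p["w2"] @ h + p["b2"]

n_in, n_hidden = 4, 3
# One estimator per weight of the general (here: linear) model, plus one for the bias.
estimators = [tiny_mlp_params(n_in, n_hidden) for _ in range(n_in + 1)]

def higher_order_predict(x, pruned=None):
    """Predict with pattern-dependent weights; `pruned` marks estimators removed by pruning."""
    pruned = pruned or set()
    w = np.array([0.0 if i in pruned else tiny_mlp_forward(p, x)
                  for i, p in enumerate(estimators)])
    return w[:n_in] @ x + w[n_in]

x = rng.normal(size=n_in)
print(higher_order_predict(x))              # full pattern-dependent model
print(higher_order_predict(x, pruned={2}))  # same pattern with one estimator pruned
```

In this sketch, pruning an estimator simply zeroes the corresponding weight; the paper's pruning algorithm and how it supports knowledge extraction are not specified in the abstract.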
No file deposited

Dates and versions

inria-00099439 , version 1 (26-09-2006)

Identifiers

  • HAL Id : inria-00099439 , version 1

Cite

Laurent Bougrain. A pruned higher-order network for knowledge extraction. International Joint Conference on Neural Networks - IJCNN'02, May 2002, Honolulu, Hawaii, USA, 4 p. ⟨inria-00099439⟩