Simplified neural architectures for symmetric Boolean functions
Abstract
The theoretical and practical framework of Field Programmable Neural Arrays (FPNAs) was defined to reconcile simple hardware topologies with complex neural architectures: FPNAs lead to powerful neural models whose novel data exchange scheme makes it possible to use hardware-friendly neural topologies. This paper presents preliminary results on the computational power of FPNAs, taking the computation of symmetric Boolean functions as a textbook example. The FPNA concept allows successive topology simplifications of standard neural models for such functions, so that the number of weights is greatly reduced compared with previous work.
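Note (an illustrative definition added for reference, not part of the original abstract): a Boolean function $f:\{0,1\}^n \to \{0,1\}$ is symmetric when its value depends only on the number of 1s among its inputs, that is,
\[
f(x_1,\dots,x_n) \;=\; g\!\left(\sum_{i=1}^{n} x_i\right) \quad \text{for some } g:\{0,\dots,n\}\to\{0,1\},
\]
with parity, $f(x) = \bigl(\sum_{i} x_i\bigr) \bmod 2$, and majority as standard examples.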