Trade-Off Analysis of Pruning Methods for Compact Neural Networks on Embedded Devices
Abstract
Pruning is a technique commonly used to reduce the size of a neural network model, as well as the computational cost of model inference. This research analyzes four current pruning techniques that, in theory, reduce model size efficiently, where efficiency is defined by the relation between the compression of the model and its accuracy. It further assesses how each of these four techniques affects the total energy consumption of model inference on a Raspberry Pi 4B prototyping board when applied to MobileNetV2, a neural network architecture optimized for image classification on embedded devices. Lastly, it analyzes the trade-offs between energy consumption, model size, and model accuracy for each of the assessed pruning algorithms. The results are expected to provide engineers with a reference for deciding which pruning technique to use when deploying a machine learning model on an embedded device.