The Internet of Things (IoT) has rapidly emerged as a crucial driver of the digital economy, generating massive amounts of data. Machine learning (ML) is an important technology for extracting insights from the data generated by IoT devices. Deploying ML on low-power devices such as microcontroller units (MCUs) improves data protection, reduces bandwidth, and enables on-device data processing. However, the requirements of ML algorithms exceed the processing power, memory, and energy resources of these devices. One solution to adapt ML networks to the limited capacities of MCUs is network pruning, the process of removing unnecessary connections or neurons from a neural network. In this work, we investigate the effect of unstructured and structured pruning methods on energy consumption. A series of experiments is conducted using a Raspberry Pi Pico to classify the FashionMNIST dataset with a LeNet-5-like convolutional neural network, applying unstructured magnitude and structured APoZ pruning approaches with model compression rates from 2 to 64. We find that unstructured pruning out of the box has no effect on energy consumption, while structured pruning reduces energy consumption with increasing model compression. When structured pruning removes 75% of the model parameters, inference consumes 59.06% less energy, while the accuracy declines by 3.01%. We further develop an adaptation of the TensorFlow Lite framework that realizes the theoretical improvements for unstructured pruning, reducing energy consumption by 37.59% with a decrease of only 1.58% in accuracy when 75% of the parameters are removed. Our results show that both approaches can significantly reduce the energy consumption of MCUs, leading to various possible sweet spots within the trade-off between accuracy and energy consumption.
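For illustration, the snippet below is a minimal sketch of how unstructured magnitude pruning at a given compression rate could be applied to a Keras model before conversion to TensorFlow Lite; the `magnitude_prune` helper and the simple post-hoc weight-masking strategy are assumptions for this example, not the exact pipeline used in the paper.

```python
import numpy as np
import tensorflow as tf


def magnitude_prune(model: tf.keras.Model, compression_rate: float) -> tf.keras.Model:
    """Zero out the smallest-magnitude weights in each Conv2D/Dense layer.

    A compression rate of 4 corresponds to a sparsity of 0.75, i.e. 75% of
    the kernel parameters are set to zero.
    """
    sparsity = 1.0 - 1.0 / compression_rate
    for layer in model.layers:
        if isinstance(layer, (tf.keras.layers.Conv2D, tf.keras.layers.Dense)):
            weights = layer.get_weights()              # [kernel, bias]
            kernel = weights[0]
            # Threshold chosen so that `sparsity` of the weights fall below it.
            threshold = np.quantile(np.abs(kernel), sparsity)
            kernel[np.abs(kernel) < threshold] = 0.0
            weights[0] = kernel
            layer.set_weights(weights)
    return model


# Example usage (hypothetical model): prune 75% of the parameters.
# pruned_model = magnitude_prune(lenet5_like_model, compression_rate=4)
```

Because the zeroed weights are still stored and multiplied at inference time, such a model only saves energy once the runtime exploits the sparsity, which motivates the adapted TensorFlow Lite framework described in the abstract.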