Abstract
The development of Hebbian-based architectures is motivated by the need for online learning algorithms in Artificial Neural Networks (ANNs) and other biologically inspired networks. However, a performance gap persists between more biologically inspired models and mainstream ANNs, particularly in Deep Learning. Recent research has demonstrated that Hebbian Learning can also be leveraged for Interpretability, specifically in the context of Associative Interpretability, where Hebbian Learning establishes associations between hidden representations and labels. To enhance this process, we introduce Contrastiveness Operators that identify the most contrastively associated neurons produced by Hebbian Learning: units that are strongly related to a particular class while being only weakly related to the other classes. By summing the outputs of only these most contrastively associated units, we achieve better comparative results; in practice, this corresponds to a pruning operation. Applying this methodology to face classification with Convolutional Neural Networks, we achieve metrics almost on par with the baseline Adam optimizer while using fewer neurons. This direction not only combines Hebbian Learning with Interpretability but also demonstrates how Interpretability can enhance Deep Learning classifiers. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
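The selection-and-summing step described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the association matrix `assoc` (one Hebbian association weight per unit–class pair), the margin-style contrastiveness score, and the top-`k` selection are all assumptions introduced here for clarity.

```python
import numpy as np

def contrastiveness_scores(assoc):
    """Score each (unit, class) pair by its association with that class
    minus its strongest association with any *other* class.
    A unit scores high only if it is specific to one class
    (hypothetical operator; the paper's exact definition may differ)."""
    n_units, n_classes = assoc.shape
    scores = np.empty_like(assoc, dtype=float)
    for c in range(n_classes):
        others = np.delete(assoc, c, axis=1)      # associations with all other classes
        scores[:, c] = assoc[:, c] - others.max(axis=1)
    return scores

def select_units(assoc, k):
    """Keep the k most contrastively associated units per class
    (this selection acts as the pruning operation)."""
    scores = contrastiveness_scores(assoc)
    return np.argsort(-scores, axis=0)[:k]        # shape (k, n_classes): unit indices

def classify(activations, selected):
    """Score each class by summing the activations of its selected units,
    then predict the class with the largest sum."""
    class_scores = activations[selected].sum(axis=0)
    return int(np.argmax(class_scores))

# Toy example: 3 hidden units, 2 classes.
# Unit 0 is specific to class 0, unit 1 to class 1,
# unit 2 is equally associated with both (not contrastive).
assoc = np.array([[5.0, 0.1],
                  [0.2, 4.0],
                  [3.0, 3.0]])
selected = select_units(assoc, k=1)               # unit 0 for class 0, unit 1 for class 1
activations = np.array([2.0, 0.5, 9.0])
prediction = classify(activations, selected)      # unit 2's large activation is ignored
```

Note that the non-contrastive unit 2 is pruned even though it has strong associations and a large activation, which is the point of the operator: only class-specific evidence contributes to the prediction.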