Abstract
Deep Convolutional Neural Networks (ConvNets) have been successfully applied to a variety of vision tasks, including image classification, segmentation, and image captioning. Despite these achievements, concerns persist about the explainability of such models, which are often described as black-box classifiers. While some interpretability studies suggest that object detectors emerge inside ConvNets, others refute this notion. In this paper, we address the challenge of identifying such neurons by using Hebbian learning to discover the neurons most associated with a given stimulus. We apply our method to the VGG19 and ResNet50 networks on the Dogs-vs-Cats dataset. In our experiments, we found that the hidden neurons most associated with the labels are not object detectors; instead, they appear to encode relevant aspects of the category. By shedding light on these findings, we aim to improve the understanding and interpretability of deep ConvNets and to support future advances in computer vision. © 2023 IEEE.
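As a rough illustration of the kind of analysis the abstract describes (the exact update rule of the paper is not given here), the sketch below accumulates classic Hebbian co-activation scores, w_ij += x_i * y_j, between the pooled channel activations of VGG19's last convolutional layer and one-hot class labels, then ranks channels by association with each label. The choice of layer (features[34]), the global-average pooling, the two-class setup, and the random stand-in batch are all assumptions for illustration, not the authors' method.

```python
# Hypothetical sketch: Hebbian-style association scores between hidden
# neurons (feature-map channels) and class labels. Assumes PyTorch and
# torchvision; the real experiment would use Dogs-vs-Cats batches.
import torch
import torchvision.models as models

model = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).eval()

activations = {}

def hook(module, inputs, output):
    # Global-average-pool the feature map so each channel
    # yields one scalar activation per image.
    activations["feats"] = output.mean(dim=(2, 3)).detach()

# Observe the last convolutional layer of VGG19 (512 channels);
# the layer index is an assumption for this sketch.
model.features[34].register_forward_hook(hook)

num_classes = 2   # cat vs. dog
num_channels = 512
assoc = torch.zeros(num_channels, num_classes)  # Hebbian weights w_ij

# Stand-in data: replace with real Dogs-vs-Cats batches.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))

with torch.no_grad():
    model(images)
    x = activations["feats"]                      # (batch, channels)
    y = torch.nn.functional.one_hot(labels, num_classes).float()
    assoc += x.t() @ y                            # w_ij += sum_b x_bi * y_bj

# Rank hidden neurons by their association with each label.
for c in range(num_classes):
    top = assoc[:, c].topk(5).indices.tolist()
    print(f"class {c}: most associated channels {top}")
```

Inspecting what the top-ranked channels respond to (rather than assuming they are object detectors) is the kind of follow-up analysis the paper's finding motivates.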