Abstract
In the modern world, our dependence on Artificial Intelligence (AI) and Machine Learning (ML) grows with each passing day, especially in so-called smart cities, where applications range from healthcare and education to transportation, law enforcement, and agriculture. Despite this widespread adoption, several challenges and risks emerge. One is the lack of interpretability: humans cannot understand the causes behind an AI model's predictions, which engenders mistrust. In most applications, AI and ML are used as black boxes, without insight into the working mechanism of the algorithms. In smart city applications, the causes behind predictions are desired alongside the predictions themselves; such explanations help ensure that AI and ML predictions are not biased. However, there is a trade-off between accuracy and explainability. Classical ML algorithms are more explainable than Deep Learning (DL) algorithms, but DL algorithms offer superior prediction and decision-making capabilities. The rapid increase in the computational power of modern machines and the availability of large amounts of data further accelerate the shift toward DL methods. Yet despite displaying near-human performance on many complex tasks, such as speech and text recognition, image classification, and computer vision, DL models are used as black boxes: the exact working mechanisms of these algorithms, and the causes behind their predictions, remain unclear. In this review paper, we examine how explanations of DL algorithms can be provided for smart city applications. We also analyze the trade-off between accuracy and explainability and propose solutions that maintain a balance between performance and explanations. Moreover, we explore how the explanations provided by Explainable AI (XAI) techniques can help guard against potential adversarial attacks.

© The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.