Description

Deep learning has demonstrated the ability to achieve highly accurate results across many application domains. Moreover, it is highly effective at feature extraction, greatly reducing the need for manual preprocessing. However, deep learning models are often criticized for their ‘black-box’ nature, and it remains difficult to understand what happens inside a deep neural network. There are many applications and environments (such as autonomous driving, medical diagnosis, or court decisions) in which interpretability and explainability are extremely important. For example, it is essential to know why an autonomous car took a particular action, especially when that action appears to be incorrect.
The aim of this session is to provide a forum to disseminate and discuss methods for explainable deep neural networks. Submitted works should present clearly explainable architectures that address the ‘black-box’ nature of deep neural networks.