Kaplanoglou Pantelis (PhD Candidate)

Thesis title: Explainable Machine Learning for Intelligent Systems
Supervisor: Diamantaras Konstantinos
Advisory Committee Members:
George A. Papakostas, Professor IHU
Ignatios Deligiannis, Professor IHU

A crucial issue for the widespread application of Machine Learning models is the ability to explain their functionality and the causes that drive their decisions. The emerging research area of Explainable Machine Learning offers methods that produce evidence of a system’s behavior in a form understandable to humans. By supplying visualizations, metrics and mathematical tools, the understanding of a model’s general functionality, which is based on the formal definition of the method, is extended to an analytic explanation of its internal characteristics. Non-explainable Deep Neural Networks achieve higher accuracy than explainable models, but with the downside of functioning as black boxes, to which we provide an input in order to obtain an output. Prospects of widespread use are accompanied by questions regarding reliability and bias, as well as concerns about ethics and physical safety. Additionally, there is a lack of trust in such models, especially in the healthcare, pharmaceutical and biomedical sectors. Potential social ramifications have led legislators to establish the “right to explanation”, a provision that affects the applicability of state-of-the-art models in products. In parallel, to implement innovative intelligent systems, the experimental implementations of Machine Learning methods need to evolve into software architectures, taking into account additional aspects that concern the new field of Machine Learning Engineering. Research conducted as part of this doctoral thesis focuses on explanatory methods for existing models and attempts to introduce new explainable models that will be capable of producing consistent predictive results that humans can both anticipate and understand. A secondary goal is to produce new standards for machine learning software that provides explanations.