Abstract

Remaining Useful Life (RUL) estimation is directly related to the application of predictive maintenance. When RUL estimation is performed via data-driven methods and Artificial Intelligence algorithms, explainability and interpretability of the model are necessary for trusted predictions. This is especially important when predictive maintenance is applied to gas turbines or aeroengines, as they have high operational and maintenance costs, while their safety standards are strict and highly regulated. The objective of this work is to study the explainability of a Deep Neural Network (DNN) RUL prediction model. An open-source database is used, which comprises measurements computed with a thermodynamic model of a given turbofan engine, considering non-linear degradation and data points for every second of a full flight cycle. First, the necessary data pre-processing is performed, and a DNN is used for the regression model. The selection of its hyper-parameters is done using random search and Bayesian optimisation. Tests considering the feature selection and the requirements of additional virtual sensors are discussed. The generalisability of the model is assessed, showing that the type of faults as well as the dominant degradation has an important effect on the overall accuracy of the model. The explainability and interpretability aspects are studied, following the Local Interpretable Model-agnostic Explanations (LIME) method. The outcomes show that for simple data sets, the model can better capture the underlying physics, and LIME can give a good explanation. However, as the complexity of the data increases, the accuracy of the model drops and LIME has difficulty giving satisfactory explanations.
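To illustrate the idea behind the LIME method referenced above, the following is a minimal, self-contained sketch (not the authors' implementation, and not the `lime` library itself): a black-box predictor is queried on random perturbations around one instance, the perturbations are weighted by proximity, and a weighted linear model is fitted locally. The `black_box` function is a hypothetical stand-in for a trained RUL regressor; all names and parameter values are illustrative assumptions.

```python
import numpy as np

def black_box(X):
    # Hypothetical stand-in for a trained RUL regressor:
    # output depends strongly on feature 0, weakly on feature 1.
    return 5.0 * X[:, 0] + 0.1 * np.sin(X[:, 1])

def lime_style_explanation(predict, x, n_samples=500, width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around instance x.

    Returns one coefficient per feature: the local importance weights.
    """
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    y = predict(Z)
    # Proximity kernel: closer perturbations count more.
    d = np.linalg.norm(Z - x, axis=1)
    w = np.sqrt(np.exp(-(d ** 2) / width ** 2))
    # Weighted least squares: scale design matrix and targets by sqrt weights.
    A = np.hstack([Z, np.ones((n_samples, 1))]) * w[:, None]
    coef, *_ = np.linalg.lstsq(A, y * w, rcond=None)
    return coef[:-1]  # drop the intercept, keep per-feature weights

x0 = np.array([1.0, 2.0])
weights = lime_style_explanation(black_box, x0)
# Feature 0 should dominate the local explanation.
```

In this toy setting the surrogate correctly attributes most of the local behaviour to feature 0, which mirrors the paper's observation that such explanations are informative when the underlying relationship is simple.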
