Abstract

Understanding how a machine learning model interprets data is a crucial step toward verifying its reliability and avoiding overfitting. While the scientific community currently focuses on deep learning approaches, which are widely regarded as black boxes, this work presents a toolbox based on complementary methods of feature extraction and selection, in which the classification decisions of the model are transparent and can be physically interpreted. Using guided wave benchmark data from the Open Guided Waves platform, where delamination defects were simulated at multiple positions on a carbon fiber reinforced plastic plate under varying temperature conditions, the authors identified suitable frequencies for further investigations and experiments. Furthermore, the authors present a realistic validation scenario which ensures that the machine learning model learns global damage characteristics rather than position-specific characteristics.
