Fuelled by recent technological advances, Machine Learning (ML) is being introduced into safety- and security-critical applications such as defence systems, financial systems, and autonomous machines. ML components can be used for processing input data, for decision making, or both. Demands on response time and success rate are high, which means that the deployed training algorithms often produce complex models, such as multi-layer neural networks, that humans cannot read or verify. Due to the complexity of these models, achieving complete testing coverage is in most cases not realistically possible. This raises the security threat that ML components may exhibit unpredictable behavior caused by malicious manipulation, such as backdoor attacks. This paper proposes a methodology based on established security principles, such as Zero-Trust and defence-in-depth, to help prevent and mitigate the consequences of security threats, including those emerging from ML-based components. The methodology is demonstrated on a case study of an Unmanned Aerial Vehicle (UAV) with a sophisticated Intelligence, Surveillance, and Reconnaissance (ISR) module.
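As a minimal sketch of the general idea (not the paper's specific methodology), the snippet below shows one way Zero-Trust and defence-in-depth might be applied to an ML-based decision component: the model's output is never acted on alone, but is gated behind an independent, human-auditable rule-based monitor, with the system defaulting to a safe state whenever the two disagree. All names, the confidence threshold, and the altitude rule are hypothetical.

```python
"""Illustrative sketch only: a defence-in-depth wrapper around an
ML decision. Names, thresholds, and rules are hypothetical."""

from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    PROCEED = auto()
    HOLD = auto()  # safe fallback state


@dataclass
class MLDecision:
    action: Action
    confidence: float  # model-reported confidence in [0, 1]


def rule_based_monitor(sensor_altitude_m: float) -> Action:
    """Independent, human-auditable check (hypothetical rule)."""
    return Action.PROCEED if sensor_altitude_m > 50.0 else Action.HOLD


def defended_decision(ml: MLDecision,
                      sensor_altitude_m: float,
                      min_confidence: float = 0.9) -> Action:
    """Zero-Trust gate: act on the ML output only if (a) the model is
    confident and (b) the independent monitor agrees; otherwise fall
    back to the safe action (defence-in-depth)."""
    independent = rule_based_monitor(sensor_altitude_m)
    if ml.confidence >= min_confidence and ml.action == independent:
        return ml.action
    return Action.HOLD  # default to the safe state on any disagreement


if __name__ == "__main__":
    # A high-confidence but implausible ML output is overridden.
    suspect = MLDecision(action=Action.PROCEED, confidence=0.99)
    print(defended_decision(suspect, sensor_altitude_m=10.0))  # Action.HOLD
```

The design point is that a backdoored model can report arbitrarily high confidence, so confidence alone is not trusted; the independent monitor provides a second, verifiable layer of defence.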
