Abstract
In an age of a worsening global threat landscape and accelerating uncertainty, the design and manufacture of systems must increase resilience and robustness across both the system itself and the entire systems design process. We generally trust our colleagues after initial clearance or background checks, and we trust systems to function as intended and within operating parameters after safety engineering review, verification, validation, and/or system qualification testing. This approach has led to increased insider-threat impacts; thus, we suggest moving to the “trust, but verify” approach embodied by the Zero-Trust paradigm. Zero-Trust is increasingly adopted for network security but has not seen wide adoption in systems design and operation. Applying Zero-Trust throughout the system lifecycle helps ensure that no single bad actor, whether human or machine learning/artificial intelligence (ML/AI), can induce failure at any stage of that lifecycle. Additionally, while ML/AI and its associated risks are already entrenched in the operations phase of many systems’ lifecycles, ML/AI is also gaining traction during the design phase. For example, generative design algorithms are increasingly popular, yet their potential risks are less well understood. Adopting the Zero-Trust philosophy helps ensure robust and resilient design, manufacture, operation, maintenance, upgrade, and disposal of systems. We outline the rewards and challenges of implementing Zero-Trust and propose a framework for Zero-Trust across the system design lifecycle. This article highlights several areas of ongoing research, emphasizing the high-priority areas where the community should concentrate its efforts.