Abstract
The rapid advance of sensing technology has expedited data-driven innovation in manufacturing by enabling the collection of large amounts of data from factories. Big data provides an unprecedented opportunity for smart decision-making in the manufacturing process. However, big data also attracts cyberattacks and makes manufacturing systems vulnerable because of the inherent value of the sensitive information it contains. The increasing integration of artificial intelligence (AI) within smart factories further exposes manufacturing equipment to cyber threats, posing a critical risk to the integrity of smart manufacturing systems. Cyberattacks targeting manufacturing data can result in considerable financial losses and severe business disruption. There is therefore an urgent need for AI models that incorporate privacy-preserving methods to protect the sensitive information embedded in the models against model inversion attacks. This paper presents a new approach, called mosaic neuron perturbation (MNP), that preserves the latent information in an AI model, satisfying differential privacy requirements while mitigating the risk of model inversion attacks. MNP is flexible to integrate into AI models, balances the trade-off between model performance and robustness against cyberattacks, and scales well to large-scale computing. Experimental results based on real-world manufacturing data collected from the computer numerical control (CNC) turning process demonstrate that the proposed method substantially improves resistance to model inversion attacks while maintaining high prediction performance. MNP shows strong potential for making manufacturing systems both smart and secure by addressing the risk of data breaches while preserving the quality of AI models.
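To make the general idea concrete, the following is a minimal sketch of neuron-level perturbation for differential privacy, assuming an MNP-style mechanism that adds calibrated Gaussian noise to a randomly selected subset (a "mosaic") of neuron activations. The class name, noise calibration, and parameters are illustrative assumptions for exposition, not the paper's actual MNP implementation.

```python
import numpy as np

class PerturbedLayer:
    """Illustrative neuron-perturbation layer (not the paper's exact MNP algorithm)."""

    def __init__(self, sensitivity, epsilon, delta, perturb_fraction=0.5, seed=0):
        # Gaussian-mechanism noise scale for (epsilon, delta)-differential privacy.
        self.sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
        self.perturb_fraction = perturb_fraction  # fraction of neurons to perturb
        self.rng = np.random.default_rng(seed)

    def forward(self, activations):
        # Randomly select a subset ("mosaic") of neurons and add calibrated noise
        # to their activations, leaving the remaining neurons unchanged.
        mask = self.rng.random(activations.shape) < self.perturb_fraction
        noise = self.rng.normal(0.0, self.sigma, size=activations.shape)
        return activations + mask * noise

# Usage: perturb hidden activations before they are exposed to an untrusted party,
# e.g., activations from a model trained on CNC turning-process sensor data.
layer = PerturbedLayer(sensitivity=1.0, epsilon=2.0, delta=1e-5)
hidden = np.random.rand(4, 16)
protected = layer.forward(hidden)
```

The sketch highlights the trade-off discussed in the abstract: a smaller epsilon (or larger perturbed fraction) increases robustness against inversion attacks at the cost of prediction performance.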