Abstract

The rapid advance of sensing technology has expedited data-driven innovation in manufacturing by enabling the collection of large amounts of data from factories. Big data provides an unprecedented opportunity for smart decision-making in the manufacturing process. However, big data also attracts cyberattacks and makes manufacturing systems vulnerable because of the inherent value of sensitive information. The increasing integration of artificial intelligence (AI) within smart factories further leaves manufacturing equipment susceptible to cyber threats, posing a critical risk to the integrity of smart manufacturing systems. Cyberattacks targeting manufacturing data can result in considerable financial losses and severe business disruption. There is therefore an urgent need to develop AI models that incorporate privacy-preserving methods to protect the sensitive information implicit in the models against model inversion attacks. This paper presents a new approach, called mosaic neuron perturbation (MNP), that protects latent information within the AI model, satisfying differential privacy requirements while mitigating the risk of model inversion attacks. MNP is flexible to integrate into AI models, balances the trade-off between model performance and robustness against cyberattacks, and scales well to large-scale computing. Experimental results, based on real-world manufacturing data collected from a computer numerical control (CNC) turning process, demonstrate that the proposed method significantly improves resistance to inversion attacks while maintaining high prediction performance. The MNP method shows strong potential for making manufacturing systems both smart and secure by addressing the risk of data breaches while preserving the quality of AI models.
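The abstract does not specify the MNP mechanism itself. As a minimal illustrative sketch of the general idea of neuron-level differentially private perturbation (not the authors' actual method), one can inject Laplace noise, calibrated by a privacy budget `epsilon` and a sensitivity bound, into a randomly selected "mosaic" of hidden-neuron activations; the `frac` mask fraction and the placement of the noise here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_hidden(h, epsilon=1.0, sensitivity=1.0, frac=0.5):
    """Add Laplace noise (scale = sensitivity / epsilon) to a random
    subset of hidden-neuron activations. Illustrative stand-in for
    neuron-level differentially private perturbation; frac and the
    noise placement are assumptions, not the paper's mechanism."""
    mask = rng.random(h.shape) < frac                    # neurons to perturb
    noise = rng.laplace(0.0, sensitivity / epsilon, size=h.shape)
    return h + mask * noise

# Toy forward pass: one ReLU hidden layer with perturbed activations.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))
x = rng.normal(size=(1, 4))                              # one input sample
h = np.maximum(x @ W1, 0.0)                              # hidden activations
y = perturb_hidden(h, epsilon=0.5) @ W2                  # private prediction
print(y.shape)
```

A smaller `epsilon` yields larger noise and stronger privacy at the cost of prediction quality, which mirrors the performance-robustness trade-off the abstract describes.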
