Privacy-preserving AI for cybersecurity: Balancing threat intelligence collection with user data protection

Sridevi Kakolu 1, 2, *, Muhammad Ashraf Faheem 3, 4 and Muhammad Aslam 3, 5

1 Boardwalk Pipelines, Houston, Texas, USA.
2 Jawaharlal Nehru Technological University, Hyderabad, India.
3 Speridian Technologies, Lahore, Pakistan.
4 Lahore Leads University, Lahore, Pakistan.
5 University of Punjab, Lahore, Pakistan.
 
Review
International Journal of Science and Research Archive, 2021, 02(02), 280–292.
Article DOI: 10.30574/ijsra.2021.2.2.0071
Publication history: 
Received on 13 April 2021; revised on 16 June 2021; accepted on 20 June 2021
 
Abstract: 
This paper examines the case for privacy-preserving artificial intelligence in cybersecurity, analyzing the tension between effective threat intelligence for countering potential intrusions and high standards of user data protection. As cyber threats grow more sophisticated, AI plays a crucial role in strengthening detection, response, and prevention measures within cybersecurity frameworks (CSFs). However, the large-scale data that feeds these systems raises significant privacy concerns, so strong privacy-preservation mechanisms are needed to ensure user anonymity and protect information from misuse. This study shows how AI threat-detection accuracy can be maintained while protecting users' privacy through data obfuscation, differential privacy, and federated learning. Furthermore, the article highlights the need to embed privacy-enhancing patterns, such as Privacy by Design, throughout the cybersecurity lifecycle. The recommendations derived here are intended to help researchers and practitioners balance data protection with threat-intelligence effectiveness when employing AI models, promoting a secure and privacy-sensitive environment for AI-assisted cybersecurity innovation.
 
 
Keywords: 
Cybersecurity; Threat intelligence; Federated learning; Bias in AI; Data minimization