1 College of Science, Engineering and Technology, Texas Southern University, Texas, USA.
2 Department of Management Information Systems, Baylor University, Texas, USA.
International Journal of Science and Research Archive, 2024, 13(01), 3647-3656
Article DOI: 10.30574/ijsra.2024.13.1.2101
Received on 18 September 2024; revised on 25 October 2024; accepted on 29 October 2024
Security operations centres (SOCs) integrate continuous monitoring, triage, and coordinated incident response activities that must withstand high uncertainty and time pressure. Modern security analytics increasingly apply machine learning for intrusion detection and anomaly detection, yet operational settings expose persistent gaps between laboratory performance and deployable, trustworthy behaviour. A major contributor is opaque model behaviour, which limits an analyst’s ability to validate alerts, understand failure modes, and justify actions, especially when models act on non-stationary, adversarial data streams. This paper proposes an XAI-for-SOC framework that combines interpretable decision models with an explanation layer that turns model outputs into cyber risk intelligence aligned with SOC workflows and human decision needs. The framework operationalises post-hoc explanation techniques such as local surrogate explanations and feature attribution to support incident response, vulnerability prioritisation, and evidence-oriented investigation. A proof-of-concept experimental study on the NSL-KDD benchmark evaluates the feasibility of producing actionable explanations alongside competitive detection performance using lightweight, interpretable scoring functions. Results illustrate that an interpretable linear discriminant baseline can provide strong separability while exposing a concise feature-level rationale that can be translated into SOC-relevant risk narratives.
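To make the approach concrete, the sketch below illustrates (under stated assumptions, not as the authors' implementation) how a linear discriminant baseline of the kind described in the abstract can expose a per-alert, feature-level rationale: the decision score of a fitted model is decomposed into signed feature contributions that an analyst can rank and read as a risk narrative. It assumes scikit-learn; the feature names and synthetic data stand in for preprocessed NSL-KDD-style records.

```python
# Minimal sketch (not the paper's code): an interpretable linear discriminant
# baseline whose per-alert feature contributions serve as the explanation layer.
# Feature names and data below are illustrative placeholders for standardised
# NSL-KDD-style records (0 = benign, 1 = attack).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

feature_names = ["duration", "src_bytes", "dst_bytes", "count", "serror_rate"]

rng = np.random.default_rng(0)
X_benign = rng.normal(0.0, 1.0, size=(200, len(feature_names)))
X_attack = rng.normal(1.5, 1.0, size=(200, len(feature_names)))
X = np.vstack([X_benign, X_attack])
y = np.array([0] * 200 + [1] * 200)

clf = LinearDiscriminantAnalysis()
clf.fit(X, y)

def explain_alert(x):
    """Decompose the linear decision score into signed per-feature contributions."""
    contributions = clf.coef_[0] * x                  # contribution of each feature
    score = contributions.sum() + clf.intercept_[0]   # full decision score
    ranked = sorted(zip(feature_names, contributions),
                    key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

score, ranked = explain_alert(X[-1])
print(f"decision score: {score:+.2f}")
for name, contribution in ranked:
    print(f"  {name:<12} {contribution:+.2f}")
```

In a SOC setting, the ranked contributions would be mapped onto analyst-facing language (for example, "elevated serror_rate and connection count drove this alert"), which is the kind of concise feature-level rationale the abstract refers to.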
Explainable Artificial Intelligence; Security Operations Center; Cybersecurity Analytics; Interpretable Machine Learning; Digital Forensics; Incident Response; Vulnerability Management; Trustworthy AI
Paul Clement Uwamotobon Akpabio and Rosemary Chisom Dimakunne. Explainable Artificial Intelligence (XAI) for Trustworthy Security Operations: Enhancing SOC Analysts’ Decision-Making Through Interpretable Cyber Risk Intelligence. International Journal of Science and Research Archive, 2024, 13(01), 3647-3656. Article DOI: https://doi.org/10.30574/ijsra.2024.13.1.2101.






