International Journal of Science and Research Archive
International, Peer-Reviewed, Open-Access Journal || ISSN Approved Journal No. 2582-8185

ISSN Approved Journal || eISSN: 2582-8185 || CODEN: IJSRO2 || Impact Factor 8.2 || Google Scholar and CrossRef Indexed


Explainable Artificial Intelligence (XAI) for Trustworthy Security Operations: Enhancing SOC Analysts’ Decision-Making Through Interpretable Cyber Risk Intelligence


Paul Clement Uwamotobon Akpabio 1, * and Rosemary Chisom Dimakunne 2

1 College of Science, Engineering and Technology, Texas Southern University, Texas, USA.
2 Department of Management Information Systems, Baylor University, Texas, USA.

Research Article

International Journal of Science and Research Archive, 2024, 13(01), 3647-3656

Article DOI: 10.30574/ijsra.2024.13.1.2101

DOI url: https://doi.org/10.30574/ijsra.2024.13.1.2101

Received on 18 September 2024; revised on 25 October 2024; accepted on 29 October 2024

Security operations centres (SOCs) integrate continuous monitoring, triage, and coordinated incident response activities that must withstand high uncertainty and time pressure. Modern security analytics increasingly apply machine learning for intrusion detection and anomaly detection, yet operational settings expose persistent gaps between laboratory performance and deployable, trustworthy behaviour. A major contributor is opaque model behaviour, which limits an analyst’s ability to validate alerts, understand failure modes, and justify actions, especially when models act on non-stationary, adversarial data streams. This paper proposes an XAI framework for the SOC that combines interpretable decision models with an explanation layer that turns model outputs into cyber risk intelligence aligned with SOC workflows and human decision needs. The framework operationalises post-hoc explanation techniques such as local surrogate explanations and feature attribution to support incident response, vulnerability prioritisation, and evidence-oriented investigation. A proof-of-concept experimental study on the NSL-KDD benchmark evaluates the feasibility of producing actionable explanations alongside competitive detection performance using lightweight, interpretable scoring functions. Results illustrate that an interpretable linear discriminant baseline can provide strong separability while exposing a concise feature-level rationale that can be translated into SOC-relevant risk narratives.
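The interpretable scoring approach the abstract describes can be illustrated with a minimal sketch (not the paper's actual code): a linear discriminant assigns each connection record a score that decomposes exactly into per-feature contributions, which can be ranked and presented to an analyst as an alert rationale. The feature names and weights below are hypothetical stand-ins for NSL-KDD-style attributes, not values from the study.

```python
# Illustrative sketch: a linear alert score decomposed into per-feature
# contributions that read as a feature-level rationale for the alert.
# Weights and feature names are hypothetical, not from the paper.

def score_and_explain(weights, bias, features):
    """Return the alert score and per-feature contributions,
    ranked by absolute magnitude (most influential first)."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

# Hypothetical learned weights for three NSL-KDD-style features.
weights = {"src_bytes": 0.8, "failed_logins": 1.5, "duration": -0.2}
bias = -1.0

# One connection record (values assumed already normalised).
record = {"src_bytes": 0.9, "failed_logins": 2.0, "duration": 0.1}

score, rationale = score_and_explain(weights, bias, record)
print(f"alert score: {score:.2f}")       # -> alert score: 2.70
for name, contrib in rationale:
    print(f"  {name}: {contrib:+.2f}")   # failed_logins dominates
```

Because the score is a sum, the decomposition is exact rather than an approximation; this is the property that lets a linear baseline expose a concise rationale without a separate post-hoc explainer.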

Explainable Artificial Intelligence; Security Operations Center; Cybersecurity Analytics; Interpretable Machine Learning; Digital Forensics; Incident Response; Vulnerability Management; Trustworthy AI 

https://ijsra.net/sites/default/files/fulltext_pdf/IJSRA-2024-2101.pdf

Preview Article PDF

Paul Clement Uwamotobon Akpabio and Rosemary Chisom Dimakunne. Explainable Artificial Intelligence (XAI) for Trustworthy Security Operations: Enhancing SOC Analysts’ Decision-Making Through Interpretable Cyber Risk Intelligence. International Journal of Science and Research Archive, 2024, 13(01), 3647-3656. Article DOI: https://doi.org/10.30574/ijsra.2024.13.1.2101.

Copyright © Author(s). All rights reserved. This article is published under the terms of the Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, as long as appropriate credit is given to the original author(s) and source, a link to the license is provided, and any changes made are indicated.


All statements, opinions, and data contained in this publication are solely those of the individual author(s) and contributor(s). The journal, editors, reviewers, and publisher disclaim any responsibility or liability for the content, including accuracy, completeness, or any consequences arising from its use.
