Ethical considerations and accountability frameworks in deploying fully autonomous AI for real-time cybersecurity response in the USA

Oluwabiyi Oluwawapelumi Ajakaye 1, *, Ifeoma Eleweke 2, Ikenna Patrick Nwobu 3, Isaac Yusuf 4, Abdul-Waliyyu Bello 5 and Idris Wonuola 5

1 Department of Telecommunications Engineering, University of Sunderland, Tyne and Wear, United Kingdom.
2 Department of Technology and Engineering, Westcliff University, Florida, USA.
3 Department of Business Administration, Liberty University, Virginia, USA.
4 Department of Mathematics, University of Ibadan, Ibadan, Nigeria.
5 Department of Computer Science, Austin Peay State University, Tennessee, USA.
 
Review
International Journal of Science and Research Archive, 2024, 13(02), 1513-1540.
Article DOI: 10.30574/ijsra.2024.13.2.2646
Publication history: 
Received on 13 November 2024; revised on 24 December 2024; accepted on 29 December 2024
 
Abstract: 
Fully autonomous artificial intelligence (AI) systems are increasingly being investigated for real-time response to cybersecurity threats, particularly in high-stakes defense settings. These systems promise unmatched speed and scale in countering cyber-attacks, yet they raise significant ethical and legal questions concerning accountability, transparency, and governance. This paper provides a detailed examination of the ethical considerations and accountability mechanisms relevant to deploying fully autonomous AI in U.S. defense cybersecurity. We review current theoretical frameworks, including the U.S. Department of Defense (DoD) ethical AI principles and federal guidelines, and then conduct a legal-ethical analysis to identify gaps in existing laws and policies. Practical challenges are examined through case studies of autonomous cyber defense systems, including DARPA's "Mayhem" system within the Department of Defense and industry solutions such as Darktrace and CrowdStrike in defense contexts. Key ethical concerns identified include the potential for biased or opaque AI decision-making, risks to individual privacy, and ambiguity over who bears responsibility for actions taken by AI. In response, we propose a structured accountability framework that clearly delineates the responsibilities of AI developers, integrators, and deployers, supported by oversight mechanisms such as human-in-the-loop controls, audit trails, and adherence to emerging AI risk management standards. The findings indicate that robust accountability frameworks and proactive ethical guidelines are essential to safely realize the benefits of autonomous AI cyber defense in the U.S. while maintaining legal compliance and public trust.
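To make the oversight mechanisms named above concrete, the following is a minimal illustrative sketch, in Python, of the human-in-the-loop control and audit-trail pattern the abstract describes. All names here (HumanInTheLoopGate, ResponseAction, AuditRecord, Severity) are hypothetical assumptions for illustration only; they are not drawn from the paper's framework, DoD guidance, or any vendor product such as Darktrace or CrowdStrike.

```python
"""Illustrative sketch: a human-in-the-loop (HITL) gate with an audit trail
for an autonomous cyber-response agent. Hypothetical design, not the
paper's implementation."""

from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from typing import Callable, List


class Severity(Enum):
    LOW = 1     # e.g., quarantine a single file
    MEDIUM = 2  # e.g., isolate one host from the network
    HIGH = 3    # e.g., block traffic across an entire subnet


@dataclass(frozen=True)
class ResponseAction:
    """A proposed automated response to a detected threat."""
    description: str
    severity: Severity
    target: str


@dataclass
class AuditRecord:
    """One immutable log entry: who decided what, and when."""
    timestamp: str
    action: ResponseAction
    decided_by: str  # "autonomous-agent" or a human operator identifier
    approved: bool


class HumanInTheLoopGate:
    """Routes high-severity actions to a human before execution and logs
    every decision, supporting the accountability goals the paper argues
    for (a traceable record of who authorized each AI-initiated action)."""

    def __init__(self, approver: Callable[[ResponseAction], bool]):
        self.approver = approver  # human approval callback (assumed interface)
        self.audit_trail: List[AuditRecord] = []

    def submit(self, action: ResponseAction) -> bool:
        if action.severity is Severity.HIGH:
            approved = self.approver(action)  # block until a human decides
            decided_by = "human-operator"
        else:
            approved = True                   # lower-severity actions auto-execute
            decided_by = "autonomous-agent"
        self.audit_trail.append(AuditRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            action=action,
            decided_by=decided_by,
            approved=approved,
        ))
        return approved


if __name__ == "__main__":
    # Demo: a stand-in "human" rejects every high-severity action.
    gate = HumanInTheLoopGate(approver=lambda a: False)
    gate.submit(ResponseAction("Quarantine malware sample", Severity.LOW, "host-17"))
    gate.submit(ResponseAction("Block subnet 10.0.0.0/16", Severity.HIGH, "edge-fw-01"))
    for record in gate.audit_trail:
        print(record.timestamp, record.decided_by, record.approved,
              record.action.description)
```

In this sketch the severity threshold stands in for whatever escalation policy a deployer adopts; the essential design choice is that every decision, human or autonomous, produces an audit record, so responsibility can be assigned after the fact.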
 
Keywords: 
Autonomous AI; Cybersecurity; Ethical AI; Accountability; Defense; Legal Frameworks; Real-Time Response; AI Governance
 