Official Journal of AlNoor University

Towards Robust and Explainable AI-Based Malware Detection: A Survey of Adversarial Attacks, Defenses, and Open Challenges

Document Type : Research paper

Authors

1 University of Mosul, College of Computer Science and Mathematics, Department of Computer Science, Iraq

2 University of Mosul, College of Computer Science and Mathematics, Department of Computer Science, Iraq

Abstract
Machine learning (ML) and deep learning (DL) now underpin automated feature extraction in modern malware detection, achieving high classification accuracy across a wide range of platforms. These models are, however, highly susceptible to adversarial manipulation: slight, benign-looking modifications to a sample can fool ML/DL classifiers while preserving its malicious functionality. This survey first reviews static, dynamic, and hybrid analysis techniques; feature representation methods; and the major artificial intelligence models currently used for malware detection. It then presents a taxonomy of adversarial attacks, ranging from gradient-based perturbations at the binary level to behavioral, reinforcement learning-driven evasion, and evaluates them against current proactive and reactive defenses, identifying remaining weaknesses in robustness and scalability as well as shortcomings of public benchmark datasets in testing coverage, labeling consistency, and real-world representativeness. Most detection tools perform well on standard datasets, but their performance degrades drastically against an intelligent adversary or adaptive attack. This paper therefore highlights the need for future research on explainable, robust, cross-domain, generalizable, and adaptive defense strategies for securing practical AI-based malware detection.
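As a loose illustration of the gradient-based evasion idea discussed in the abstract (not any specific attack from the survey), the sketch below applies a single FGSM-style step to a toy logistic-regression "malware classifier" with made-up weights and a continuous feature vector; real binary-level attacks must additionally preserve executable validity and functionality, which this toy model ignores.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical fixed weights of a toy linear "malware classifier".
w = np.array([2.0, -1.0, 0.5])
b = 0.0

def predict(x):
    # Probability that x is malicious.
    return sigmoid(w @ x + b)

def fgsm(x, eps):
    """One FGSM step: nudge each feature in the direction that
    increases the loss for the true label (malicious, y = 1)."""
    p = predict(x)
    grad_x = (p - 1.0) * w          # d(-log p)/dx for logistic regression
    return x + eps * np.sign(grad_x)

x = np.array([0.4, -0.2, 0.6])      # hypothetical feature vector
x_adv = fgsm(x, eps=0.6)
print(predict(x) > 0.5)             # True  -> flagged as malicious
print(predict(x_adv) > 0.5)         # False -> small perturbation evades detection
```

The perturbation budget `eps` controls the robustness/evasion trade-off: a classifier is only as trustworthy as its behavior within the perturbation radius an attacker can realistically exploit.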


Articles in Press, Accepted Manuscript
Available Online from 19 January 2026