JAIT 2025 Vol.16(11): 1638-1643
doi: 10.12720/jait.16.11.1638-1643

Enhancing IoT Attack Detection with Explainable AI: A Robust Evaluation of LIME and SHAP Interpretability

Yousef Qawqzeh 1,2
1. College of Engineering and Technology, Fujairah University, Fujairah, UAE
2. Cyber Security Department, Khawarizmi University Technical College, Amman, Jordan
Email: y.qawqzeh@fu.ac.ae

Manuscript received June 3, 2025; revised August 13, 2025; accepted August 20, 2025; published November 21, 2025.

Abstract—As Internet of Things (IoT) sensors have become more common in weather monitoring systems, the need for classification models that are both accurate and interpretable has grown, particularly for applications such as smart city infrastructure, natural disaster prediction, and cyber threat detection. In IoT weather attack classification, this need is even more critical, as detecting malicious activity amid normal environmental variations requires both precision and transparency. Although Machine Learning (ML) models such as Random Forest (RF) and Extreme Gradient Boosting (XGBoost) have demonstrated excellent prediction accuracy, their ‘black box’ nature makes it difficult to understand how they reach decisions, which can undermine trust in their results. This research addresses the transparency challenge by developing a dual explanation framework that combines SHapley Additive exPlanations (SHAP) for understanding a model’s overall behavior and Local Interpretable Model-agnostic Explanations (LIME) for explaining individual predictions, providing both global and local insights. Using a pre-processed IoT-sensor weather dataset, six classifiers were trained and evaluated across four performance metrics: RF, Gradient Boosting, AdaBoost, XGBoost, Logistic Regression, and Support Vector Machine (SVM). RF and XGBoost achieved the highest performance, with RF obtaining 99.67% accuracy, precision, recall, and F1-Score, and an Area Under the Curve (AUC) of 1.00 for nearly all classes. Across all models, the “label” feature consistently emerged as the most influential predictor. SHAP identified it as the dominant driver of overall decisions, while LIME confirmed its significance in specific cases, strengthening confidence in the model’s reasoning. The framework also introduced a novel cross-verification step, comparing SHAP and LIME outputs to ensure alignment, akin to having two independent experts validate the model. Beyond delivering high predictive accuracy, this approach provided clear, actionable insights that meteorologists and security teams can use for forecasting storms and detecting anomalous or malicious sensor activity. While SHAP and LIME offer complementary interpretability, their computational trade-offs must be considered for real-time IoT deployments.
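
The paper's implementation is not published on this page; the snippet below is a minimal sketch, assuming scikit-learn, shap, and lime, of how the dual SHAP/LIME explanation and the cross-verification step described in the abstract might look. The dataset file, the "attack_type" target column, and the top-5 overlap check are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' code): global SHAP ranking, a local LIME
# explanation, and a simple cross-check that the two explainers agree.
import numpy as np
import pandas as pd
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical pre-processed IoT weather dataset with an "attack_type" target.
df = pd.read_csv("iot_weather.csv")
X, y = df.drop(columns=["attack_type"]), df["attack_type"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_train, y_train)

# Global view: mean |SHAP value| per feature over the test set.
sv = shap.TreeExplainer(model).shap_values(X_test)
# Older shap versions return a list of per-class arrays, newer ones a single
# (samples, features, classes) array; normalise to 3D before averaging.
sv = np.stack(sv, axis=-1) if isinstance(sv, list) else np.asarray(sv)
global_importance = np.abs(sv).mean(axis=(0, 2))
shap_top = list(X.columns[np.argsort(global_importance)[::-1][:5]])

# Local view: LIME explanation for one test instance's predicted class.
lime_explainer = LimeTabularExplainer(
    X_train.values, feature_names=list(X.columns),
    class_names=[str(c) for c in model.classes_], mode="classification")
pred_class = int(np.argmax(model.predict_proba(X_test.iloc[[0]].values)))
lime_exp = lime_explainer.explain_instance(
    X_test.iloc[0].values, model.predict_proba,
    labels=[pred_class], num_features=5)
lime_top = [X.columns[i] for i, _ in lime_exp.as_map()[pred_class]]

# Cross-verification (illustrative): do the two explainers name the same drivers?
print("SHAP global top-5:", shap_top)
print("LIME local top-5:", list(lime_top))
print("Agreement:", set(shap_top) & set(lime_top))
```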
 
Keywords—Explainable Artificial Intelligence (XAI), SHapley Additive exPlanations (SHAP), Local Interpretable Model-agnostic Explanations (LIME), IoT sensor data, adversarial attack, ensemble learning

Cite: Yousef Qawqzeh, "Enhancing IoT Attack Detection with Explainable AI: A Robust Evaluation of LIME and SHAP Interpretability," Journal of Advances in Information Technology, Vol. 16, No. 11, pp. 1638-1643, 2025. doi: 10.12720/jait.16.11.1638-1643

Copyright © 2025 by the authors. This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).
