JAIT 2026 Vol.17(4): 678-695
doi: 10.12720/jait.17.4.678-695

Enhancing Aggregation Efficiency and Training Stability in Heterogeneous Federated Learning Using the Adaptive LoRA FL Framework

Lin-Huang Chang, Yu-Sheng Hsieh, and Tsung-Han Lee *
Department of Computer Science, National Taichung University of Education, Taiwan
Email: lchang@mail.ntcu.edu.tw (L.H.C.); bcs113101@gm.ntcu.edu.tw (Y.S.H.); thlee@mail.ntcu.edu.tw (T.H.L.)
*Corresponding author

Manuscript received November 3, 2025; revised November 21, 2025; accepted January 13, 2026; published April 16, 2026.

Abstract—In recent years, Low-Rank Adaptation (LoRA) has emerged as a highly effective parameter-efficient fine-tuning technique for Transformer-based models. This paper extends the applicability of LoRA to traditional Deep Neural Networks (DNNs) by integrating it into a Federated Learning (FL) framework. The principal aim of this study is to facilitate data privacy protection while minimizing weight transfer overhead in heterogeneous environments. To address the challenges of Non-Independent and Identically Distributed (Non-IID) data, this study proposes the Adaptive LoRA FL framework, which combines LoRA fine-tuning with feature mapping and category unification mechanisms to harmonize highly heterogeneous datasets (e.g., the CIC-IDS2017, Edge-IIoTset, and TON-IoT datasets). Furthermore, the performance of the Adaptive LoRA FL framework is evaluated against state-of-the-art FL methods, demonstrating its generalizability across diverse application domains. Experimental results show that, compared to full-weight aggregation, the Adaptive LoRA FL framework significantly reduces client communication costs while achieving improved global accuracy. By integrating an early stopping mechanism, the training process terminates once convergence is reached, effectively minimizing redundant computation and communication overhead.
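The communication saving described in the abstract comes from transmitting and aggregating only the small LoRA factors rather than full weight matrices. The following is a minimal illustrative sketch in NumPy of LoRA-style factor averaging under FedAvg-like weighting; it is not the authors' implementation, and the layer dimensions, rank, and client weights are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def lora_delta(A, B):
    # LoRA reparameterizes a weight update as a low-rank product:
    # delta_W = B @ A, with A (r x d_in) and B (d_out x r), r << min(d_in, d_out).
    return B @ A

def aggregate_lora_factors(client_factors, client_weights):
    # Server-side aggregation: average only the small LoRA factors,
    # weighted per client (e.g., by local dataset size), instead of
    # averaging full d_out x d_in weight matrices.
    A_avg = sum(w * A for (A, _), w in zip(client_factors, client_weights))
    B_avg = sum(w * B for (_, B), w in zip(client_factors, client_weights))
    return A_avg, B_avg

d_in, d_out, r = 64, 32, 4  # assumed layer shape and LoRA rank
clients = []
for _ in range(3):
    A = rng.normal(size=(r, d_in)) * 0.01
    B = np.zeros((d_out, r))  # B starts at zero so delta_W is initially zero
    clients.append((A, B))

A_g, B_g = aggregate_lora_factors(clients, [1 / 3, 1 / 3, 1 / 3])

# Parameters sent per layer: full matrix vs. LoRA factors
full_params = d_in * d_out          # 2048
lora_params = r * (d_in + d_out)    # 384
print(full_params, lora_params)
```

In this toy setting each client uploads 384 values per layer instead of 2048, which is the kind of per-round transfer reduction the framework targets; the actual ratio depends on the chosen rank and layer sizes.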
 
Keywords—Low-Rank Adaptation (LoRA), heterogeneous federated learning, feature mapping

Cite: Lin-Huang Chang, Yu-Sheng Hsieh, and Tsung-Han Lee, "Enhancing Aggregation Efficiency and Training Stability in Heterogeneous Federated Learning Using the Adaptive LoRA FL Framework," Journal of Advances in Information Technology, Vol. 17, No. 4, pp. 678-695, 2026. doi: 10.12720/jait.17.4.678-695

Copyright © 2026 by the authors. This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).
