JAIT 2022 Vol.13(3): 295-300
doi: 10.12720/jait.13.3.295-300
Structured Pruning for Deep Neural Networks with Adaptive Pruning Rate Derivation Based on Connection Sensitivity and Loss Function
Yasufumi Sakai¹, Yu Eto², and Yuta Teranishi²
1. Fujitsu Research, Fujitsu Limited, Kawasaki, Japan
2. QNET Group, Fujitsu Limited, Fukuoka, Japan
Abstract
—Structured pruning has been proposed for compressing deep neural network models. Because earlier structured pruning methods assign pruning rates manually, it is difficult to find pruning rates that suppress the accuracy degradation of the pruned model. In this paper, we propose a structured pruning method that adaptively derives the pruning rate for each layer based on the gradient and the loss function. The proposed method first calculates, for each layer, a threshold on the L1-norm of the pruned weights using the layer's loss and gradient. When the L1-norm of the pruned weights is less than this threshold, pruning is guaranteed not to degrade the loss function. Then, by comparing the L1-norm of the pruned weights against the calculated threshold while varying the pruning rate, we derive a pruning rate for each layer that does not degrade the loss function. Applying the derived per-layer pruning rates suppresses the accuracy degradation of the pruned model. We evaluate the proposed method on the CIFAR-10 task with VGG-16 and ResNet under iterative pruning. For example, our proposed method reduces the model parameters of ResNet-56 by 66.3% while retaining 93.71% accuracy.
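The per-layer derivation described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' exact formulation: the threshold `loss / max|grad|` is an assumed first-order stand-in for the paper's loss- and gradient-based bound, and channels are ranked by the L1-norm of their weights, growing the pruned set until the cumulative pruned L1-norm would reach the threshold.

```python
import numpy as np

def derive_pruning_rate(weights, grads, loss):
    """Hypothetical sketch: choose the largest per-layer pruning rate whose
    pruned-weight L1-norm stays below a loss/gradient-based threshold.

    weights, grads: arrays of shape (out_channels, ...) for one layer.
    loss: scalar loss value of the current model.
    """
    # Assumed first-order threshold: pruning is treated as safe while
    # ||w_pruned||_1 < loss / max|grad|.
    threshold = loss / (np.abs(grads).max() + 1e-12)
    # Importance of each output channel = L1-norm of its weights.
    channel_l1 = np.abs(weights.reshape(weights.shape[0], -1)).sum(axis=1)
    order = np.argsort(channel_l1)   # prune smallest-norm channels first
    pruned_l1 = 0.0
    n_safe = 0
    for idx in order:                # grow the pruned set channel by channel
        if pruned_l1 + channel_l1[idx] >= threshold:
            break
        pruned_l1 += channel_l1[idx]
        n_safe += 1
    return n_safe / len(channel_l1)  # pruning rate for this layer
```

Running this per layer yields a layer-wise rate schedule; in an iterative scheme the model would be pruned at these rates, fine-tuned, and the rates re-derived from the updated loss and gradients.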
Index Terms
—neural networks, structured pruning, automatic pruning rate search
Cite: Yasufumi Sakai, Yu Eto, and Yuta Teranishi, "Structured Pruning for Deep Neural Networks with Adaptive Pruning Rate Derivation Based on Connection Sensitivity and Loss Function," Journal of Advances in Information Technology, Vol. 13, No. 3, pp. 295-300, June 2022.
Copyright © 2022 by the authors. This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives License (
CC BY-NC-ND 4.0
), which permits use, distribution, and reproduction in any medium, provided that the article is properly cited, the use is non-commercial, and no modifications or adaptations are made.