JAIT 2023 Vol.14(4): 656-667
doi: 10.12720/jait.14.4.656-667

HASumRuNNer: An Extractive Text Summarization Optimization Model Based on a Gradient-Based Algorithm

Muljono 1,*, Mangatur Rudolf Nababan 2, Raden Arief Nugroho 3, and Kevin Djajadinata 1
1. Department of Informatics Engineering, Universitas Dian Nuswantoro, Semarang, Indonesia;
Email: p31201902233@dinus.ac.id (K.D.)
2. English Department, Universitas Sebelas Maret, Surakarta, Indonesia;
Email: amantaradja.nababan_2017@staff.uns.ac.id (M.R.N.)
3. English Department, Universitas Dian Nuswantoro, Semarang, Indonesia;
Email: arief.nugroho@dsn.dinus.ac.id (R.A.N.)
*Correspondence: muljono@dsn.dinus.ac.id (M.)

Manuscript received November 30, 2022; revised February 2, 2023; accepted March 23, 2023; published July 11, 2023.

Abstract—This article presents a text summarization research model. Text summarization is the act of condensing material in a way that directly communicates the intent or message of a document. The model proposed in this study is Hierarchical Attention SumRuNNer (HASumRuNNer), an extractive text summarization model for the Indonesian language. It is a novel contribution to Indonesian extractive text summarization, for which related research remains scarce in terms of both approach and dataset. Three primary methods were used to build the model: BiGRU, CharCNN, and hierarchical attention mechanisms. The proposed model is optimized with a variety of gradient-based methods, and the summarization results are evaluated with the ROUGE-N approach. The test results demonstrate that the Adam gradient-based optimizer is the most effective for extractive text summarization with the HASumRuNNer model: its ROUGE-1 (70.7), ROUGE-2 (64.33), and ROUGE-L (68.14) scores are higher than those of the other methods used as references. The combination of BiGRU and CharCNN in the proposed HASumRuNNer model yields more accurate word- and sentence-level representations. Additionally, the word- and sentence-level hierarchical attention mechanisms help prevent the loss of information on individual words in a document, a loss typically caused by the length of the model's input words or sentences.
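The ROUGE-N evaluation mentioned in the abstract scores a candidate summary by its n-gram overlap with a reference summary. As a minimal illustrative sketch (recall-oriented variant with whitespace tokenization; the paper's exact ROUGE configuration and preprocessing may differ):

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=1):
    """ROUGE-N recall: overlapping n-grams / total n-grams in the reference."""
    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    overlap = sum((cand & ref).values())  # clipped overlap via Counter intersection
    total = sum(ref.values())
    return overlap / total if total else 0.0

# Example: 5 of the 6 reference unigrams also appear in the candidate.
score = rouge_n("the cat sat on the mat", "the cat is on the mat", n=1)
print(round(score, 3))  # 0.833
```

Reported ROUGE scores (as in the abstract) are usually this ratio multiplied by 100 and averaged over a test set; ROUGE-L, by contrast, is based on the longest common subsequence rather than fixed-length n-grams.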
 
Keywords—extractive text summarization, hierarchical attention mechanism, deep learning, BiGRU, CharCNN

Cite: Muljono, Mangatur Rudolf Nababan, Raden Arief Nugroho, and Kevin Djajadinata, "HASumRuNNer: An Extractive Text Summarization Optimization Model Based on a Gradient-Based Algorithm," Journal of Advances in Information Technology, Vol. 14, No. 4, pp. 656-667, 2023.

Copyright © 2023 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY-NC-ND 4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.