
Comparison of Korean Preprocessing Performance according to Tokenizer in NMT Transformer Model

Geumcheol Kim and Sang-Hong Lee
Department of Computer Science & Engineering, Anyang University, Anyang-si, Republic of Korea

Abstract—Machine translation using neural networks is making rapid progress in natural language processing. With the development of natural language processing models and tokenizers, accurate translation is becoming possible. In this paper, we build a transformer model, which has recently shown high performance, and compare English-Korean translation performance according to the tokenizer. We constructed a neural network-based Neural Machine Translation (NMT) model using the transformer and compared the Korean translation results produced with different tokenizers. The Byte Pair Encoding (BPE)-based tokenizer yielded a small vocabulary and fast training but, due to the characteristics of Korean, its translation results were poor. The morphological analysis-based tokenizer showed that when the parallel corpus and the vocabulary are large, performance is higher regardless of the characteristics of the language.
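The abstract contrasts two preprocessing approaches for Korean: subword BPE tokenization and morphological-analysis tokenization. The following is a minimal sketch of that contrast, assuming SentencePiece for BPE and the KoNLPy Okt morphological analyzer; these tool choices, and the corpus path, are illustrative assumptions and not the authors' confirmed setup.

# Minimal sketch of the two Korean preprocessing approaches compared in the paper:
# (1) subword BPE tokenization and (2) morphological-analysis tokenization.
# SentencePiece, KoNLPy Okt, and the corpus file are illustrative assumptions.
import sentencepiece as spm
from konlpy.tag import Okt

sentence = "자연어 처리에서 신경망을 이용한 기계 번역이 빠르게 발전하고 있다."

# (1) BPE tokenizer: small, data-driven subword vocabulary learned from raw text.
spm.SentencePieceTrainer.train(
    input="korean_corpus.txt",      # hypothetical Korean side of the parallel corpus
    model_prefix="bpe_ko",
    vocab_size=8000,
    model_type="bpe",
)
bpe = spm.SentencePieceProcessor(model_file="bpe_ko.model")
print(bpe.encode(sentence, out_type=str))   # subword pieces, e.g. ['▁자연', '어', ...]

# (2) Morphological-analysis tokenizer: splits along Korean morpheme boundaries.
okt = Okt()
print(okt.morphs(sentence))                 # morphemes, e.g. ['자연어', '처리', '에서', ...]

The resulting token sequences would then feed the same transformer NMT model, so that any difference in translation quality can be attributed to the tokenizer.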
 
Index Terms—translation, tokenizer, neural machine translation, natural language processing, deep learning

Cite: Geumcheol Kim and Sang-Hong Lee, "Comparison of Korean Preprocessing Performance according to Tokenizer in NMT Transformer Model," Journal of Advances in Information Technology, Vol. 11, No. 4, pp. 228-232, November 2020. doi: 10.12720/jait.11.4.228-232

Copyright © 2020 by the authors. This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives License (CC BY-NC-ND 4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.