JAIT 2023 Vol.14(5): 1082-1087
doi: 10.12720/jait.14.5.1082-1087

Image Recognition Based on High Accuracy 3D Depth Map Information

Yu-Cheng Fan 1,*, Chun Ju Huang 2, and Chitra Meghala Yelamandala 3
1. Department of Electronic Engineering, National Taipei University of Technology, Taipei, Taiwan
2. Siemens EDA, Wilsonville, Oregon, USA
3. Iout Private Limited Company, Guntur, India
*Correspondence: skystar@ntut.edu.tw (Y.-C.F.)

Manuscript received April 3, 2023; revised June 15, 2023; accepted June 30, 2023; published October 13, 2023.

Abstract—In recent years, self-driving cars have developed rapidly, and many academic research institutes have begun to develop them. However, the object recognition rate of current self-driving cars is still not high, especially for pedestrians and small objects. To solve this problem, we present an image recognition system that adopts high-accuracy 3D depth map information. The algorithm combines data from two kinds of sensors: it uses the depth signal of the 3D point cloud map to segment the scene and locate small objects, and then uses the color image information to recognize them. A series of experiments proves that the proposed scheme can generate annotated color images and point cloud images and improve the efficiency of the algorithm. Our algorithm overcomes the weaknesses of the traditional YOLO neural network in pedestrian recognition, narrows the input image range through point cloud segmentation, and improves the recognition rate of small objects in the YOLO network. Our method recognizes randomly occurring small objects with over 75% accuracy, outperforming other methods in the literature.
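The abstract outlines a two-sensor pipeline: depth-based segmentation of the point cloud proposes regions containing small objects, and the color image in those regions is passed to a detector. The following is a minimal sketch of that idea, not the authors' code; the depth-binning strategy, camera intrinsics `K`, and the `yolo.detect` helper are all assumptions introduced for illustration.

```python
# Hypothetical sketch of the pipeline described in the abstract:
# 1) segment the point cloud by depth to find candidate small objects,
# 2) project each cluster into the color image,
# 3) run a detector (e.g., YOLO) on the cropped region only.
import numpy as np

def depth_segment(points, depth_axis=2, bin_size=0.5):
    """Group points (N, 3) into coarse depth bins (assumed strategy)."""
    bins = np.floor(points[:, depth_axis] / bin_size).astype(int)
    return {b: points[bins == b] for b in np.unique(bins)}

def project_to_image(cluster, K):
    """Project 3D points in the camera frame to pixels via intrinsics K."""
    uvw = (K @ cluster.T).T          # (N, 3): [u*z, v*z, z]
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide -> (N, 2)

def candidate_rois(points, K, bin_size=0.5):
    """Yield an image-space bounding box for each depth cluster."""
    for cluster in depth_segment(points, bin_size=bin_size).values():
        uv = project_to_image(cluster, K)
        u_min, v_min = uv.min(axis=0)
        u_max, v_max = uv.max(axis=0)
        yield int(u_min), int(v_min), int(u_max), int(v_max)

# Usage (hypothetical detector): cropping to ROIs lets small, distant
# objects occupy a larger fraction of the detector's input.
# for (u0, v0, u1, v1) in candidate_rois(lidar_points, K):
#     detections = yolo.detect(color_image[v0:v1, u0:u1])
```

Restricting detection to depth-derived regions of interest is one plausible reading of "reduces the input image range through point cloud image segmentation"; the exact segmentation and fusion details are given in the full paper.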
 
Keywords—3D image, depth map, high accuracy, image recognition, smart cities

Cite: Yu-Cheng Fan, Chun Ju Huang, and Chitra Meghala Yelamandala, "Image Recognition Based on High Accuracy 3D Depth Map Information," Journal of Advances in Information Technology, Vol. 14, No. 5, pp. 1082-1087, 2023.

Copyright © 2023 by the authors. This is an open access article distributed under the Creative Commons Attribution-NonCommercial-NoDerivatives License (CC BY-NC-ND 4.0), which permits use, distribution and reproduction in any medium, provided that the article is properly cited, the use is non-commercial and no modifications or adaptations are made.