
Jorge Montenegro Navarro, Universidad de Málaga, Spain (https://orcid.org/0009-0005-8480-1140)
Alberto García Guillén, Universidad de Málaga, Spain
Francisco Manuel Castro Payán, Universidad de Málaga, Spain (https://orcid.org/0000-0002-7340-4976)
Jorge Luis Martínez Rodríguez, Universidad de Málaga, Spain (https://orcid.org/0000-0002-8940-2465)
Jesús Morales Rodríguez, Universidad de Málaga, Spain (https://orcid.org/0000-0003-1095-4775)
No. 45 (2024), Robotics
DOI: https://doi.org/10.17979/ja-cea.2024.45.10870
Received: Jun. 5, 2024; Accepted: Jul. 3, 2024; Published: Jul. 12, 2024

Abstract

This article presents the development of a testbed for detecting traffic participants in urban environments using neural networks that process the data coming from the vehicle's sensors: an RGB camera and a 3D LiDAR. To this end, it describes the integration of the realistic simulator CARLA (Car Learning to Act), which makes it possible to recreate complex urban scenarios, with ROS2 (Robot Operating System), a framework for building robotic applications. Specifically, the performance of the CNN (Convolutional Neural Network) YOLOv8 and of DETR (Detection Transformer), a transformer network specialized in detection, is evaluated qualitatively on RGB images. Analogously, for the detection of traffic participants in point clouds, the PV-RCNN (Point-Voxel Region-based Convolutional Neural Network) network and its evolution, Part-A2-Net, are analyzed.
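For readers assembling a similar testbed, a standard way to compare a detector's output against the ground-truth boxes that CARLA can export is intersection-over-union (IoU). The following self-contained sketch is illustrative and not taken from the article; the function name and the (x1, y1, x2, y2) box convention are assumptions.

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


# Example: a detection shifted one unit from a 2x2 ground-truth box.
print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7 ≈ 0.1429
```

A fixed IoU threshold (0.5 is a common choice) then decides whether a predicted box counts as a match for a ground-truth object.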


References

Balasubramaniam, A., Pasricha, S., 2022. Object detection in autonomous vehicles: Status and open challenges. arXiv preprint arXiv:2201.07706. DOI: 10.48550/arXiv.2201.07706

Biswas, A., Wang, H.-C., 2023. Autonomous vehicles enabled by the integration of IoT, edge intelligence, 5G, and blockchain. Sensors 23 (4). DOI: 10.3390/s23041963

Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., Zagoruyko, S., 2020. End-to-end object detection with transformers. Springer International Publishing, pp. 213–229. DOI: 10.1007/978-3-030-58452-8_13

Dosovitskiy, A., Ros, G., Codevilla, F., Lopez, A., Koltun, V., 2017. CARLA: An open urban driving simulator. In: Proceedings of the 1st Annual Conference on Robot Learning. Proceedings of Machine Learning Research, pp. 1–16. DOI: 10.48550/arXiv.1711.03938

Fischer, T., Vollprecht, W., Traversaro, S., Yen, S., Herrero, C., Milford, M., 2021. A RoboStack tutorial: Using the Robot Operating System alongside the Conda and Jupyter data science ecosystems. IEEE Robotics and Automation Magazine. DOI: 10.1109/MRA.2021.3128367

Gannamaneni, S., Houben, S., Akila, M., 2021. Semantic concept testing in autonomous driving by extraction of object-level annotations from CARLA. In: 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). pp. 1006–1014. DOI: 10.1109/ICCVW54120.2021.00117

Geiger, A., Lenz, P., Stiller, C., Urtasun, R., 2013. Vision meets robotics: The KITTI dataset. The International Journal of Robotics Research 32 (11), 1231–1237. DOI: 10.1177/0278364913491297

Liu, H., Gu, Z., Wang, C., Wang, P., Vukobratovic, D., 2023a. A LiDAR semantic segmentation framework for the cooperative vehicle-infrastructure system. In: 2023 IEEE 98th Vehicular Technology Conference (VTC2023-Fall). pp. 1–5. DOI: 10.1109/VTC2023-Fall60731.2023.10333790

Liu, H., Wu, C., Wang, H., 2023b. Real-time object detection using LiDAR and camera fusion for autonomous driving. Scientific Reports 13. DOI: 10.1038/s41598-023-35170-z

Moreau, J., Ibanez-Guzman, J., 2023. Emergent visual sensors for autonomous vehicles. IEEE Transactions on Intelligent Transportation Systems 24 (5), 4716–4737. DOI: 10.1109/TITS.2023.3248483

Nikolenko, S. I., 2021. Synthetic data for deep learning. Vol. 174. Springer. DOI: 10.1007/978-3-030-75178-4

OpenPCDet, 2020. OpenPCDet: An open-source toolbox for 3D object detection from point clouds. https://github.com/open-mmlab/OpenPCDet.

O’Shea, K., Nash, R., 2015. An introduction to convolutional neural networks. arXiv e-prints. DOI: 10.48550/arXiv.1511.08458

Pradhan, S., 2023. ROS 2 wrapper for OpenPCDet. https://github.com/pradhanshrijal/pcdet_ros2.

Redmon, J., Divvala, S., Girshick, R., Farhadi, A., 2016. You only look once: Unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 779–788. DOI: 10.1109/CVPR.2016.91

SAE, 2024. Society of Automotive Engineers. URL: https://www.sae.org/

Shi, S., Guo, C., Jiang, L., Wang, Z., Shi, J., Wang, X., Li, H., 2020a. PV-RCNN: Point-voxel feature set abstraction for 3D object detection. In: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 10526–10535. DOI: 10.1109/CVPR42600.2020.01054

Shi, S., Wang, Z., Shi, J., Wang, X., Li, H., 2020b. From points to parts: 3D object detection from point cloud with part-aware and part-aggregation network. IEEE Transactions on Pattern Analysis and Machine Intelligence 43 (8), 2647–2664. DOI: 10.1109/TPAMI.2020.2977026

Urmila, O., Megalingam, R. K., 2020. Processing of LiDAR for traffic scene perception of autonomous vehicles. In: 2020 International Conference on Communication and Signal Processing (ICCSP). pp. 298–301. DOI: 10.1109/ICCSP48568.2020.9182175