Driver-Assistant System Using Computer Vision and Machine Learning


Authors

Cristian Valencia-Payan, M.Sc. https://orcid.org/0000-0002-7270-4870
Julián Muñoz-Ordóñez, M.Sc. https://orcid.org/0000-0001-9393-6139
Leonairo Pencue-Fierro https://orcid.org/0000-0002-1662-7495

Abstract

Safety has long been a key concern in vehicle design, and one of its main objectives is to implement warning systems that notify the driver of inappropriate or atypical driving behavior, helping to avoid accidents that harm vehicle passengers or inflict damage on third parties. Every day, more systems are created to monitor the environment around the vehicle in order to ensure safe driving at all times. According to the World Health Organization, in 2016 there were 1.35 million deaths related to traffic accidents. This research presents the first driving-assistance system developed for Colombia. The system detects and recognizes preventive and regulatory traffic signs, and because it is based on Haar cascade classifiers, its precision is not affected by the rotation or scale of the signs found on an actual route. Using computer vision algorithms with a low computational cost, the system also recognizes lane deviations, estimates the direction of upcoming curves, and detects obstacles along the way. Furthermore, this research provides the first trained cascades for the detection of Colombian regulatory and preventive traffic signs. The system was tested in real environments on Colombian roads, obtaining an accuracy of over 90%. These results show that computer vision-based methods remain competitive with current approaches such as deep neural networks.
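The low computational cost the abstract attributes to Haar cascade classifiers comes from the integral-image trick of Viola and Jones: once an integral image is built, the sum of any rectangular region, and therefore any Haar-like feature, is computed in constant time. The sketch below illustrates that core idea in plain Python; the function names and the two-rectangle edge feature are illustrative assumptions, not the paper's actual code.

```python
# Sketch of the integral-image trick behind Haar cascade detectors
# (Viola-Jones). Rectangle sums become O(1), which is what makes
# cascades cheap enough for real-time traffic-sign detection.

def integral_image(img):
    """Return ii where ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of the w*h rectangle with top-left corner (x, y),
    using four lookups in the integral image."""
    a = ii[y + h - 1][x + w - 1]
    b = ii[y - 1][x + w - 1] if y > 0 else 0
    c = ii[y + h - 1][x - 1] if x > 0 else 0
    d = ii[y - 1][x - 1] if x > 0 and y > 0 else 0
    return a - b - c + d

def haar_edge_feature(ii, x, y, w, h):
    """Two-rectangle vertical edge feature: left half minus right half.
    A strong response indicates a vertical intensity edge."""
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)

# Toy 4x8 image: bright left half, dark right half -> strong edge response.
img = [[1] * 4 + [0] * 4 for _ in range(4)]
ii = integral_image(img)
print(haar_edge_feature(ii, 0, 0, 8, 4))  # 16
```

A full cascade, as in the paper, chains thousands of such features selected by boosting, rejecting most image windows after evaluating only the first few stages.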

Article Details

Licence


This work is licensed under a Creative Commons Attribution 4.0 International License.

All articles included in the Revista Facultad de Ingeniería are published under the Creative Commons (BY) license.

Authors must complete, sign, and submit the Review and Publication Authorization Form of the manuscript provided by the Journal; this form should contain all the originality and copyright information of the manuscript.

The authors who publish in this Journal accept the following conditions:

a. The authors retain the copyright and transfer the right of the first publication to the journal, with the work registered under the Creative Commons attribution license, which allows third parties to use what is published as long as they mention the authorship of the work and the first publication in this Journal.

b. Authors can make other independent and additional contractual agreements for the non-exclusive distribution of the version of the article published in this journal (e.g., include it in an institutional repository or publish it in a book), provided they clearly indicate that the work was first published in this Journal.

c. Authors are allowed and encouraged to publish their work on the Internet (for example, on institutional or personal pages) before and during the review and publication process, as this can lead to productive exchanges and a greater and faster dissemination of the published work.

d. The Journal authorizes the total or partial reproduction of the content of the publication, as long as the source is cited, that is, the name of the Journal, name of the author(s), year, volume, issue number, and pages of the article.

e. The ideas and statements issued by the authors are their responsibility and in no case bind the Journal.

References

[1] WHO, Global Status Report on Road Safety, 2018.

[2] Y. Amichai-Hamburger, Y. Mor, T. Wellingstein, T. Landesman, and Y. Ophir, “The Personal Autonomous Car: Personality and the Driverless Car,” Cyberpsychology, Behavior, and Social Networking, vol. 23 (4), pp. 242-245, Apr. 2020. https://doi.org/10.1089/cyber.2019.0544

[3] S. Gu, Y. Zhang, X. Yuan, J. Yang, T. Wu, and H. Kong, “Histograms of the Normalized Inverse Depth and Line Scanning for Urban Road Detection,” IEEE Transactions on Intelligent Transportation Systems, vol. 20 (8), pp. 3070-3080, Aug. 2019. https://doi.org/10.1109/TITS.2018.2871945

[4] H. Liu, X. Han, X. Li, Y. Yao, P. Huang, and Z. Tang, “Deep representation learning for road detection using Siamese network,” Multimedia Tools and Applications, vol. 78, pp. 24269-24283, May 2019. https://doi.org/10.1007/s11042-018-6986-1

[5] K. Wang, F. Yan, B. Zou, L. Tang, Q. Yuan, and C. Lv, “Occlusion-free road segmentation leveraging semantics for autonomous vehicles,” Sensors (Switzerland), vol. 19 (21), e4711, Nov. 2019. https://doi.org/10.3390/s19214711

[6] X. Lu et al., “Multi-Scale and Multi-Task Deep Learning Framework for Automatic Road Extraction,” IEEE Transactions on Geoscience and Remote Sensing, vol. 57 (11), pp. 9362-9377, Nov. 2019. https://doi.org/10.1109/TGRS.2019.2926397

[7] M. Dong, X. Zhao, X. Fan, C. Shen, and Z. Liu, “Combination of modified U-Net and domain adaptation for road detection,” IET Image Processing, vol. 13 (14), pp. 2735-2743, Dec. 2019. https://doi.org/10.1049/iet-ipr.2018.6696

[8] S. Gu, Y. Zhang, J. Tang, J. Yang, J. M. Alvarez, and H. Kong, “Integrating Dense LiDAR-Camera Road Detection Maps by a Multi-Modal CRF Model,” IEEE Transactions on Vehicular Technology, vol. 68 (12), pp. 11635-11645, Dec. 2019. https://doi.org/10.1109/TVT.2019.2946100

[9] J. Pérez, V. Milanés, J. Alonso, E. Onieva, and T. de Pedro, “Adelantamiento con vehículos autónomos en carreteras de doble sentido,” Revista Iberoamericana de Automática e Informática industrial, vol. 7 (3), pp. 25-33, Jul. 2010. https://doi.org/10.1016/s1697-7912(10)70039-x

[10] D. Tabernik, and D. Skocaj, “Deep Learning for Large-Scale Traffic-Sign Detection and Recognition,” IEEE Transactions on Intelligent Transportation Systems, vol. 21 (4), pp. 1427-1440, Apr. 2020. https://doi.org/10.1109/TITS.2019.2913588

[11] F. Rundo, “Deep LSTM with dynamic time warping processing framework: A novel advanced algorithm with biosensor system for an efficient car-driver recognition,” Electronics, vol. 9 (4), e616, Apr. 2020. https://doi.org/10.3390/electronics9040616

[12] A. Martín, V. M. Vargas, P. A. Gutiérrez, D. Camacho, and C. Hervás-Martínez, “Optimising Convolutional Neural Networks using a Hybrid Statistically-driven Coral Reef Optimisation algorithm,” Applied Soft Computing, vol. 90, e106144, May 2020. https://doi.org/10.1016/j.asoc.2020.106144

[13] J. Muñoz-Ordóñez, C. Cobos, M. Mendoza, E. Herrera-Viedma, F. Herrera, and S. Tabik, “Framework for the Training of Deep Neural Networks in TensorFlow Using Metaheuristics,” in International Conference on Intelligent Data Engineering and Automated Learning, 2018, pp. 801-811. https://doi.org/10.1007/978-3-030-03493-1_83

[14] D. P. Kingma, and J. L. Ba, “Adam: A method for stochastic optimization,” arXiv:1412.6980, Dec. 2015.

[15] F. D. Turek, Machine Vision Fundamentals: How to Make Robots ‘See’, 2011. http://www.techbriefs.com/component/content/article/10531?start=2

[16] D. Yufeng, and Z. Bo, “Intelligent Identification Method of Bicycle Logo Based on Haar Classifier,” in 5th International Conference on Systems and Informatics, Jan. 2019, pp. 973-977. https://doi.org/10.1109/ICSAI.2018.8599499

[17] P. Viola, and M. Jones, “Rapid object detection using a boosted cascade of simple features,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2001. https://doi.org/10.1109/cvpr.2001.990517

[18] H. Kawabe, S. Seto, H. Nambo, and Y. Shimomura, “Experimental study on scanning of degraded braille books for recognition of dots by machine learning,” in Advances in Intelligent Systems and Computing, 2020, pp. 322-334. https://doi.org/10.1007/978-3-030-21248-3_24

[19] R. Lienhart, and J. Maydt, “An extended set of Haar-like features for rapid object detection,” in IEEE International Conference on Image Processing, 2002. https://doi.org/10.1109/icip.2002.1038171

[20] G. Farías, M. Santos, and F. J. L. Marron, “Determinación de parámetros de la Transformada Wavelet para la clasificación de señales del diagnóstico scattering Thomson,” in XXV Jornadas de Automática, 2004.

[21] W. X. Kang, Q. Q. Yang, and R. P. Liang, “The comparative research on image segmentation algorithms,” in Proceedings of the 1st International Workshop on Education Technology and Computer Science, 2009, pp. 703-707. https://doi.org/10.1109/ETCS.2009.417

[22] OpenCV Team, OpenCV, 2020. https://opencv.org/

[23] R. C. Gonzalez, and R. E. Woods, Digital Image Processing (3rd Edition), Pearson, 2007.

[24] H. Ling, and K. Okada, “An efficient earth mover’s distance algorithm for robust histogram comparison,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29 (5), pp. 840-853, May 2007. https://doi.org/10.1109/TPAMI.2007.1058

[25] A. Khashman, “A modified backpropagation learning algorithm with added emotional coefficients,” IEEE Transactions on Neural Networks, vol. 19 (11), pp. 1896-1909, 2008. https://doi.org/10.1109/TNN.2008.2002913

[26] F. Jurie, and M. Dhome, “A simple and efficient template matching algorithm,” in Proceedings of the IEEE International Conference on Computer Vision, 2001, pp. 544-549. https://doi.org/10.1109/iccv.2001.937673

[27] W. Rong, Z. Li, W. Zhang, and L. Sun, “An improved Canny edge detection algorithm,” in IEEE International Conference on Mechatronics and Automation, 2014, pp. 577-582. https://doi.org/10.1109/ICMA.2014.6885761
