Use of Convolutional Neural Networks in Smartphones for the Identification of Oral Diseases Using a Small Dataset



Jormany Quintero-Rojas, M.Sc.
Jesús David González


Image recognition and processing are suitable tools for systems that use machine learning methods. Smartphones are now established as complementary diagnostic tools in healthcare because of the advantages they offer. Following this trend, this research aimed to develop a prototype mobile application, based on convolutional neural networks, for the identification of oral lesions, including potentially malignant ones, as a means of early detection of possible cancers of the oral cavity. A mobile application was developed for the Android operating system using the TensorFlow library and the MobileNet V2 convolutional neural network model. The model was trained by transfer learning on a database of 500 images distributed in five recognition classes (leukoplakia, herpes simplex virus type 1, aphthous stomatitis, nicotinic stomatitis, and no lesion); 80% of the images were used for training and 20% for validation. The application achieved at least 80% precision in the recognition of four classes. The f1-score and area-under-the-curve metrics were used to evaluate performance. The developed mobile application performed acceptably, with metrics above 75% for the recognition of three lesions; however, it performed unfavorably, below 70%, in identifying nicotinic stomatitis cases with the chosen dataset.
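The transfer-learning setup described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual implementation: it assumes MobileNetV2's default 224×224 input size and a frozen ImageNet-pretrained backbone with a new five-class softmax head; the class list is taken from the abstract.

```python
import tensorflow as tf

# Five recognition classes from the abstract: leukoplakia, HSV-1,
# aphthous stomatitis, nicotinic stomatitis, and no lesion.
NUM_CLASSES = 5
IMG_SIZE = (224, 224)  # MobileNetV2's default input resolution

# Pretrained backbone without the 1000-class ImageNet head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,),
    include_top=False,
    weights="imagenet",
)
base.trainable = False  # transfer learning: freeze convolutional features

model = tf.keras.Sequential([
    # MobileNetV2 expects inputs scaled to [-1, 1].
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
```

Training would then proceed on the 80% split with the remaining 20% passed as validation data, after which the model can be converted with TensorFlow Lite for deployment in the Android application.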


Article Details


Creative Commons License

This work is licensed under a Creative Commons Attribution 4.0 International License.

All articles included in the Revista Facultad de Ingeniería are published under the Creative Commons (BY) license.

Authors must complete, sign, and submit the Review and Publication Authorization Form of the manuscript provided by the Journal; this form should contain all the originality and copyright information of the manuscript.

The authors who publish in this Journal accept the following conditions:

a. The authors retain copyright and grant the journal the right of first publication, with the work registered under the Creative Commons Attribution license, which allows third parties to use the published material as long as they credit the authorship of the work and its first publication in this Journal.

b. Authors can make other independent and additional contractual agreements for the non-exclusive distribution of the version of the article published in this journal (e.g., including it in an institutional repository or publishing it in a book), provided they clearly indicate that the work was first published in this Journal.

c. Authors are allowed and encouraged to publish their work on the Internet (for example, on institutional or personal pages) before and during the review and publication process, as this can lead to productive exchanges and a greater and faster dissemination of the published work.

d. The Journal authorizes the total or partial reproduction of the content of the publication, as long as the source is cited, that is, the name of the Journal, the name of the author(s), the year, the volume, the publication number, and the pages of the article.

e. The ideas and statements issued by the authors are their responsibility and in no case bind the Journal.



