Journal of Stomatology
eISSN: 2299-551X
ISSN: 0011-4553
1/2025
vol. 78
 
Original paper

Deep learning model using YOLO-v5 for detecting and numbering of teeth in dental bitewing images

Cansu Görürgöz 1, Akhilanand Chaurasia 2, Mohmed Isaqali Karobari 3, Ibrahim Sevki Bayrakdar 4,5, Özer Çelik 5,6, Kaan Orhan 7,8

  1. Department of Dentomaxillofacial Radiology, Bolu İzzet Baysal Oral Health Hospital, Turkey
  2. Department of Oral Medicine and Radiology, King George’s Medical University, India
  3. Dental Research Unit, Center for Global Health Research, Saveetha Medical College and Hospital, Saveetha Institute of Medical and Technical Sciences, India
  4. Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Turkey
  5. Center of Research and Application for Computer Aided Diagnosis and Treatment in Health, Eskisehir Osmangazi University, Turkey
  6. Department of Mathematics and Computer Science, Faculty of Science, Eskisehir Osmangazi University, Turkey
  7. Department of Dentomaxillofacial Radiology, Faculty of Dentistry, Ankara University, Turkey
  8. Department of Oral Diagnostics, Faculty of Dentistry, Semmelweis University, Hungary
J Stoma 2025; 78, 1: 42-51
Online publish date: 2025/03/19

INTRODUCTION

Making an accurate diagnosis is one of the most critical tasks in dental practice. In addition to the dentist's experience, complementary technologies can provide essential data for evaluating the clinical situation [1]. Radiographic examination plays a cornerstone role in a patient's dental journey [2, 3].
In the field of dentistry, different scanning techniques have been applied to the same anatomical region of the same person [4]. Intra-oral radiography methods, such as bitewing and periapical radiographs, are frequently used. Compared with periapical radiographs, bitewing images offer views of the crown, root, and part of the bone surrounding the teeth in both the upper and lower jaw, and more teeth can be analyzed on a single image. Bitewing radiographs can identify interproximal caries, alveolar bone loss, and calculus [5]. Although they are undoubtedly helpful, their interpretation will always depend on the expertise of the person analyzing them: more experienced examiners have been found to have better diagnostic accuracy than less experienced ones [6]. Even a skilled clinician, however, may suffer from mental and visual fatigue, which can lead to missed radiographic findings, influence the final diagnostic conclusions, and result in misdiagnosis [7].
Convolutional neural networks (CNNs), an advancement in artificial intelligence (AI) and deep learning (DL) that promises to enhance and expedite diagnosis, have grown in popularity in recent years. The aforementioned issues may therefore be resolved by automated dental radiography diagnostic systems, which can eliminate the variation caused by the subjectivity of different practitioners [4].
In general, dental imaging (both 2D and 3D) as an innovative technology has demonstrated exceptional performance [8-10]. The majority of studies assess how well CNNs perform in identifying anatomical features, medical procedures, and disorders [11-13]. When reporting CNN results, the detection and classification tasks, which refer to object identification and categorization, respectively, are the most frequently investigated [14]. One of these, tooth detection and numbering, is a fundamental subject that has been explored in research projects applying AI to the evaluation of dental radiographs [15]. This task is important for medico-legal experts in the identification of an individual, but can also be used to create statistical data and dental charts [16, 17]. To our knowledge, few studies have used DL techniques to detect and number teeth in bitewing images [18-20].

OBJECTIVES

The current study aimed to develop a DL model for detecting and numbering teeth in bitewing images, and to validate the diagnostic performance of the developed model.

MATERIAL AND METHODS

This study was approved by the Institutional Review Board at Saveetha Dental College (decision date and approval number: SRB/SDC/Faculty/22/Endo/060), and complied with the Declaration of Helsinki and ethical guidelines of the responsible committee.
IMAGE DATASET PROTECTION AND SAMPLING
The study sample consisted of 3,491 anonymized bitewing radiographs, exported in JPEG format and acquired using various intra-oral phosphor plates of different sizes and resolutions. Radiographs were taken with three different X-ray units, according to the manufacturers' instructions. Their exposure conditions are summarized in Table 1. Radiographs of patients with permanent dentition (without missing teeth or dental implants) and only bitewing images of good diagnostic quality were included in the study, while radiographs with mixed or deciduous teeth, motion artifacts, or root fragments were excluded.
PROCESSING IMAGE DATASETS AND REFERENCE TESTING
Ground truth labeling was performed with CranioCatch annotation software (CranioCatch, Eskisehir, Turkey) by two dentomaxillofacial radiologists (CG, ISB) in agreement on all images. The experts were instructed to draw minimum-size boxes around every tooth (including the whole crown and root), and to number each box with a two-digit FDI code (13-18 and 43-48 for right-side bitewing radiographs, 23-28 and 33-38 for left-side bitewing radiographs).
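The annotation export format is not specified in the paper; the sketch below assumes the standard YOLO text-label convention, in which each annotated box becomes one line of class index plus normalized center coordinates. The FDI-to-class mapping and all values are illustrative, not the authors' actual configuration.

```python
# Illustrative FDI-to-class mapping for right-side bitewings (13-18, 43-48);
# the actual class ordering used by the authors is not reported.
FDI_RIGHT = [13, 14, 15, 16, 17, 18, 43, 44, 45, 46, 47, 48]
CLASS_OF = {fdi: i for i, fdi in enumerate(FDI_RIGHT)}

def to_yolo_line(fdi, box, img_w, img_h):
    """Convert a pixel box (x_min, y_min, x_max, y_max) into a YOLO label
    line: 'class x_center y_center width height', normalized to [0, 1]."""
    x_min, y_min, x_max, y_max = box
    xc = (x_min + x_max) / 2 / img_w
    yc = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w
    h = (y_max - y_min) / img_h
    return f"{CLASS_OF[fdi]} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

# e.g. a minimum-size box drawn around tooth 16 in a 1500 x 1000 pixel image:
print(to_yolo_line(16, (420, 130, 690, 520), 1500, 1000))
```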
ARCHITECTURE OF CNN
Methods for detecting objects involve one-stage and two-stage techniques. Single-stage approaches are popular for their speed and accuracy, as they perform both classification and position detection simultaneously [21]. You only look once (YOLO) is the most common one-stage object detection method, characterized by a simple data-processing network that performs object detection and classification at the same time [22].
Several versions of the YOLO architecture exist, but the tests in the present study were carried out with YOLO-v5, because YOLO-v5 is lighter and faster than the previous version, YOLO-v4 [23]. It has also been proven to be very fast while achieving good results. The YOLO-v5 model consists of three parts: a CSP-DarkNet53 backbone, a neck combining SPP and PANet, and the detection head used in YOLO-v4 [24].
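To illustrate this single-pass behavior, the following sketch runs the publicly available YOLO-v5 hub model (generic COCO weights, not the dental model trained in this study) on a placeholder image; each detection carries a box, a confidence, and a class from one forward pass.

```python
# A minimal one-stage detection sketch; 'bitewing.jpg' is a placeholder path
# and the pretrained COCO weights stand in for the study's trained model.
import torch

model = torch.hub.load('ultralytics/yolov5', 'yolov5s', pretrained=True)
results = model('bitewing.jpg')
det = results.xyxy[0]          # one row per detected object:
boxes = det[:, :4]             # x1, y1, x2, y2 (localization)
scores = det[:, 4]             # objectness/confidence
classes = det[:, 5]            # class index (classification)
print(boxes, scores, classes)
```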
To describe these models, it is necessary to clarify some technical terms. Several hyper-parameters need to be set before the learning process starts. One of these is the learning rate, which controls the speed at which the network updates its parameters. An epoch is one complete pass of the entire training dataset through the network. The batch size is the number of training samples processed together in a single training step.
ESTABLISHMENT OF DATASETS
CLASSIFICATION MODEL
All images were re-sized to 224 × 224 pixels. The 3,491 anonymized bitewing images were randomized into training, validation, and test sets of 2,792, 367, and 332 radiographs, respectively. The proposed AI model for classifying bitewing radiographs as left or right was trained with YOLO-v5, using a batch size of 32, a learning rate of 0.01, and 100 epochs.
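The splitting procedure is not described beyond being random; a minimal sketch under that assumption (the directory name and seed are hypothetical) reproduces the reported subset sizes:

```python
# Randomly partition the 3,491 images into the reported 2,792/367/332 split.
import random
from pathlib import Path

paths = sorted(Path('bitewings').glob('*.jpg'))   # assumed image directory
random.seed(42)                                   # any fixed seed; an assumption
random.shuffle(paths)
train = paths[:2792]
val = paths[2792:2792 + 367]
test = paths[2792 + 367:]                         # remaining 332 images
```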
TOOTH DETECTION AND NUMBERING MODEL OF THE RIGHT BITEWING RADIOGRAPHS
The 1,887 anonymized right-side bitewing images were randomly assigned as follows – train: 1,496 images, 12,950 labels; validation: 195 images, 1,598 labels; test: 196 images, 1,584 labels.
The proposed AI model to detect and number teeth in the right bitewing images, using labels 13-18 and 43-48, was based on a deep CNN. The hyper-parameters applied to train YOLO-v5 were a batch size of 32, a learning rate of 0.01, and 300 epochs.
Contrast limited adaptive histogram equalization (CLAHE) and data augmentation were applied to the training and validation sets. The total number of images was 5,269. Train: 1,496 × 3 = 4,488 images, 38,850 labels; validation: 195 × 3 = 585 images, 4,794 labels; test: 196 images, 1,584 labels.
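The paper does not report the CLAHE parameters; a minimal OpenCV sketch with commonly used values (the clip limit and tile size are assumptions) looks as follows:

```python
# CLAHE on a grayscale bitewing; clipLimit and tileGridSize are assumed values.
import cv2

img = cv2.imread('bitewing.jpg', cv2.IMREAD_GRAYSCALE)  # placeholder path
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)          # equalizes contrast within local tiles
cv2.imwrite('bitewing_clahe.jpg', enhanced)
```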
TOOTH DETECTION AND NUMBERING MODEL OF THE LEFT BITEWING RADIOGRAPHS
The 1,604 anonymized left-side bitewing images were randomly assigned as follows – train: 1,273 images, 10,907 labels; validation: 165 images, 1,316 labels; test: 166 images, 1,362 labels.
The proposed AI model to detect and number teeth in the left bitewing radiographs, using labels 23-28 and 33-38, was based on a deep CNN. The hyper-parameters applied to train YOLO-v5 were a batch size of 32, a learning rate of 0.01, and 300 epochs.
For the training and validation sets, CLAHE and data augmentation were applied. The total number of images was 4,480. Train: 1,273 × 3 = 3,819 images, 32,721 labels; validation: 165 × 3 = 495 images, 3,948 labels; test: 166 images, 1,362 labels.
METRICS FOR VALIDATION AND STATISTICAL ANALYSIS
The most common metrics used in studies on tooth detection and numbering are precision, sensitivity, and F1 score. The true positive (TP), false positive (FP), and false negative (FN) numbers were calculated. After the calculation of TP, FP, and FN, the following metrics were derived: sensitivity = TP/(TP + FN), precision = TP/(TP + FP), and F1 score = 2TP/(2TP + FP + FN).
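These formulas translate directly into code; a minimal sketch (the example counts are made up) is:

```python
def detection_metrics(tp: int, fp: int, fn: int):
    """Sensitivity, precision, and F1 score from raw detection counts,
    exactly as defined above."""
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, precision, f1

# e.g. 990 correctly detected teeth, 10 spurious boxes, 6 missed teeth:
print(detection_metrics(990, 10, 6))   # approx. (0.9940, 0.9900, 0.9920)
```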

RESULTS

The experiments were performed on the computer equipment of the Eskisehir Osmangazi University Faculty of Dentistry Dental-AI Laboratory, with a Dell PowerEdge T640 GPU calculation server and an NVIDIA Tesla V100 16 GB passive GPU (Dell Inc., Texas, USA). For the classification of bitewing radiographs, promising outcomes were obtained with the deep CNN architecture-based AI model. The classification performance of YOLO-v5, including accuracy, sensitivity, specificity, and NPV on the test set, is presented in Table 2. The sensitivity and precision for the classification task were 0.9940 and 1, respectively. The estimated F1 score was 0.9970, indicating a good recall/precision balance. The proposed DL model performed well in detecting and numbering permanent teeth in bitewing images. The results are presented in Table 3.
Figures 1 and 2 demonstrate the confusion matrices, showing the diagnostic results. Higher diagonals and darker blue in the confusion matrices indicate higher accuracy. On the right side, the diagnostic accuracy was the highest for upper premolars (97.0%) and the lowest for upper third molars (76%). On the left side, the diagnostic accuracy was the highest for maxillary second premolars (98.0%) and the lowest for mandibular canines (71%). The graphs of the F1-confidence curve, precision-confidence curve, precision-recall curve, and recall-confidence curve are shown in Figures 3 and 4.
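Such a confusion matrix can be computed from paired ground-truth and predicted FDI numbers; a sketch with made-up labels:

```python
# Rows are true tooth numbers, columns are predictions; diagonal = correct.
from sklearn.metrics import confusion_matrix

y_true = [13, 14, 15, 16, 17, 43, 44, 45]   # ground-truth FDI codes (example)
y_pred = [13, 14, 15, 16, 18, 43, 44, 45]   # tooth 17 mistaken for 18
print(confusion_matrix(y_true, y_pred))
```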
Figures 5 and 6 describe the training and validation loss curves for the right and left sides, respectively.
The goal of training an object detection model is to minimize the total loss, which is a combination of box loss, object loss, and class loss. The loss values should tend to decrease as training proceeds, indicating improvement in the model's ability to detect teeth in images. Examples of automatic tooth numbering using the YOLO-v5 model are illustrated in Figures 7 and 8.
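As a sketch, the composite objective just described can be written as a weighted sum of the three components; the gain values below follow common YOLO-v5 defaults and are not the authors' reported settings.

```python
def total_loss(box_loss, obj_loss, cls_loss,
               box_gain=0.05, obj_gain=1.0, cls_gain=0.5):
    """Weighted combination minimized during training: localization (box),
    objectness (obj), and tooth-number classification (cls) terms."""
    return box_gain * box_loss + obj_gain * obj_loss + cls_gain * cls_loss
```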

DISCUSSION

Applying AI to medicine is regarded as one of the most impressive advancements of recent years. Restorative dentistry [25-27], oral and maxillofacial surgery [28, 29], and forensic dentistry [30, 31] are among the dental specialties that have already adopted this innovation.
In dental radiology, several AI systems have been proposed for 2D imaging, and many studies have attempted to automate the diagnosis of pathology and therapy in panoramic [32-34] and intra-oral radiographs [25, 35, 36]. Furthermore, AI-based software has been employed to analyze and automatically recognize anatomical components in 3D images [37-39].
Bitewing radiography is a technique that captures parts of the maxillary and mandibular teeth on a single film. It helps diagnose inter-proximal caries and secondary caries under restorations, which are difficult to detect on extra-oral radiographs [5]. Deep learning-trained AI systems have begun to identify a wide range of dental conditions, including missing teeth, crowns, caries, periodontal diseases, and periapical pathologies, and some AI research has employed bitewing radiographs in the dental diagnostic process [5, 18, 40].
Furthermore, several studies using Bayesian approaches, linear models, or binary support vector machines have been conducted to detect and number teeth [41-43]. However, manual curation of image features is often required, and the majority of these systems perform an image-enhancing procedure before feature extraction and segmentation. This is time-consuming and produces a high workload. The quality of the extracted features has a significant impact on the performance of image recognition [44].
Lin et al. [43] developed a model to number and classify teeth based on tooth region and contour information from bitewing images. The study showed that tooth and pulp shapes were an important part of the classification for correct tooth numbering; therefore, segmentation improved classification. Morphological transformations are usually not considered in dental radiographs due to noise, low contrast, and non-uniform exposure. Although good results have been demonstrated, the classification of difficult images, such as those with only 3 or 4 teeth, inconsistent tooth arrangement in the maxilla and mandible, or overlapping teeth in both jaws, remains a major challenge.
A study by Aeini and Mahmoudi [42] proposed an algorithm to number and classify teeth using the universal numbering scheme based on mesiodistal distance. Their dataset of 476 dental radiographs exceeds those of several published studies in terms of the number of teeth, fractured teeth, and edentulous regions. They found that contour-based classification was preferable when there was less than a 0.3 percent difference between different teeth. However, if only one type of tooth was present in the jaw, the type of tooth could not be determined from the mesiodistal data.
Recently, AI and DL have made profound advances in computer vision, including object, face, and activity recognition, tracking, 3D mapping, and localization. Recent DL systems can automatically extract image characteristics from raw pixel data. They drastically reduce the amount of manual work required, and are able to detect features that even the human eye finds difficult to identify. The classification of medical images using CNN-based DL has recently gained importance [45]. For tooth detection and numbering, Chen et al. [35] utilized a Faster R-CNN in the TensorFlow framework, tested on a total of 1,250 periapical radiographs gathered by the authors. The study's outcome measures were insufficient before post-processing procedures were applied, and even after this improvement, cases in which two "half teeth" were mistaken for one tooth remained problematic.
Yasa et al. [18] reported an AI model to detect and number teeth with the Faster R-CNN algorithm. In the test data, 697 of the 715 teeth identified in the 109 bitewing images used in the research were precisely numbered and classified. Confusion matrix values were applied to measure the neural network's accuracy. However, it was not possible to identify the exact borders of teeth using the proposed method.
In a study published by Tekin et al. [19], the DL approach had a region-based neural network structure. A region-based automated segmentation system that supported the analysis with masking decreased the expert's workload. One technique for segmentation is Mask R-CNN, which, when employed on each ROI, predicts a pixel-by-pixel segmentation mask. The segmented teeth are then classified and numbered using FDI notation by the tooth numbering module. Their experimental results showed a precision of 100% and a mAP of 97.49% for segmentation; for tooth numbering, an accuracy of 94.35% and a mAP of 91.51% were achieved.
Imaging and analyzing the dental structure are essential for diagnosing disease, monitoring oral health, and planning treatment [3, 18]. To help professionals automatically identify and number teeth in bitewing radiographs, this study evaluated the clinical usefulness of an AI model in terms of accuracy. This research suggests that AI-based software can help improve efficiency in detecting teeth in bitewing images. However, due to the anatomical symmetry of the mouth and the alignment of the upper and lower teeth, teeth can be misclassified. Unlike panoramic radiographs, the bitewing radiographs used in this study image only certain areas of the mouth, not the whole mouth. In particular, canines and third molars can appear in a small area in some bitewing radiographs and larger areas in others. This reduces the detection capability of the model, as YOLO-v5 is not optimized to detect small objects [46].
Several AI research papers have suggested that using multiple neural networks for different detection tasks could improve CNN performance; this was shown to speed up learning and thus produce more accurate results [7, 32]. A study conducted by Leite et al. [7] showed positive results in detecting, numbering, and segmenting teeth in panoramic radiographs. In particular, the researchers noted that the best performance came from combining the DeepLabv3 CNN with the ResNet101 CNN. Two different types of neural networks have also been used in previous studies to improve CNN performance: researchers employed the latest version of the Faster R-CNN model to detect teeth in radiographs, and the VGG-16 architecture to number the detected dental structures [14, 32].
This study demonstrated the potential of YOLO-v5 for automated diagnostic processes in bitewing radiographs and dental image interpretation. In terms of tooth detection and numbering, the model showed high performance. However, the model performed worst on teeth only partially visible in bitewing radiographs (especially canines and third molars) and on teeth occupying image areas of varying size. In future studies, this problem may be overcome by combining one-stage and two-stage object detection algorithms.
The current study has several limitations. First, there were no complex images in our dataset; further studies could include primary teeth, implants, root fragments, and teeth with severely damaged crowns. Second, the performance of the experts was not assessed. More research is needed to support clinicians and provide meaningful clinical implications.

CONCLUSIONS

The impact of YOLO-v5 on tooth detection and classification was demonstrated by the experimental results of the proposed system. Despite the good results obtained, the classification of difficult images, such as those with inconsistent tooth arrangement in both jaws or overlapping teeth, remains a major challenge. Future research should test different types of neural networks and architectures to further improve tooth detection results.

DISCLOSURES

1. Institutional review board statement: The study was approved by the Institutional Review Board at Saveetha Dental College (decision date and approval number: SRB/SDC/Faculty/22/Endo/060).
2. Assistance with the article: None.
3. Financial support and sponsorship: None.
4. Conflicts of interest: The authors declare no potential conflicts of interest concerning the research, authorship, and/or publication of this article.
References
1. Abdalla-Aslan R, Yeshua T, Kabla D, Leichter I, Nadler C. An artificial intelligence system using machine-learning for automatic detection and classification of dental restorations in panoramic radiography. Oral Surg Oral Med Oral Pathol Oral Radiol 2020; 130: 593-602.
2. Chan M, Dadul T, Langlais R, Russell D, Ahmad M. Accuracy of extraoral bitewing radiography in detecting proximal caries and crestal bone loss. J Am Dent Assoc 2018; 149: 51-58.
3. Vandenberghe B, Jacobs R, Bosmans H. Modern dental imaging: a review of the current technology and clinical applications in dental practice. Eur Radiol 2010; 20: 2637-2655.
4. Schwendicke FA, Samek W, Krois J. Artificial intelligence in dentistry: chances and challenges. J Dent Res 2020; 99: 769-774.
5. Akarslan ZZ, Akdevelioglu M, Güngör K, Erten H. A comparison of the diagnostic accuracy of bitewing, periapical, unfiltered and filtered digital panoramic images for approximal caries detection in posterior teeth. Dentomaxillofac Radiol 2008; 37: 458-463.
6. Cantu AG, Gehrung S, Krois J, Chaurasia A, Rossi JG, Gaudin R, et al. Detecting caries lesions of different radiographic extension on bitewings using deep learning. J Dent 2020; 100: 103425. DOI: 10.1016/j.jdent.2020.103425.
7. Leite AF, Gerven AV, Willems H, Beznik T, Lahoud P, Gaêta-Araujo H, et al. Artificial intelligence-driven novel tool for tooth detection and segmentation on panoramic radiographs. Clin Oral Investig 2021; 25: 2257-2267.
8. Kunz F, Stellzig-Eisenhauer A, Zeman F, Boldt J. Artificial intelligence in orthodontics: evaluation of a fully automated cephalometric analysis using a customized convolutional neural network. J Orofac Orthop 2020; 81: 52-68.
9. Takahashi T, Nozaki K, Gonda T, Mameno T, Wada M, Ikebe K. Identification of dental implants using deep learning-pilot study. Int J Implant Dent 2020; 6: 53. DOI: 10.1186/s40729-020-00250-6.
10. Erpaçal B, Adıgüzel Ö, Cangül S. The use of micro-computed tomography in dental applications. Int J Dent Res 2019; 9: 78-91.
11. Bilgir E, Bayrakdar İŞ, Çelik Ö, Orhan K, Akkoca F, Sağlam H, et al. An artificial intelligence approach to automatic tooth detection and numbering in panoramic radiographs. BMC Med Imaging 2021; 21: 124. DOI: 10.1186/s12880-021-00656-7.
12. Mertens S, Krois J, Cantu AG, Arsiwala LT, Schwendicke F. Arti­ficial intelligence for caries detection: randomized trial. J Dent 2021; 115: 103849. DOI: 10.1016/j.jdent.2021.103849.
13. Mohammad-Rahimi H, Motamedian SR, Rohban MH, Krois J, Uribe SE, Mahmoudinia E, et al. Deep learning for caries detection: a systematic review. J Dent 2022; 106: 104115. DOI: 10.1016/j.jdent.2022.104115.
14. Bonfanti-Gris M, Garcia-Canas A, Alonso-Calvo R, Rodriguez-Manzaneque MPS, Ramiro GP. Evaluation of an artificial intelligence web-based software to detect and classify dental structures and treatments in panoramic radiographs. J Dent 2022; 126: 104301. DOI: 10.1016/j.jdent.2022.104301.
15. Kim C, Kim D, Jeong H, Yoon SJ, Youm S. Automatic tooth detection and numbering using a combination of a CNN and heuristic algorithm. Appl Sci 2020; 10: 5624. DOI: 10.3390/app10165624.
16. Nomir O, Abdel-Mottaleb M. Human identification from dental X-ray images based on the shape and appearance of the teeth. IEEE Trans Inf Forensics Secur 2007; 2: 188-197.
17. Tohnak S, Mehnert AJH, Mahoney M, Crozier S. Synthesizing dental radiographs for human identification. J Dent Res 2007; 86: 1057-1062.
18. Yasa Y, Celik O, Bayrakdar IS, Pekince A, Orhan K, Akarsu S, et al. An artificial intelligence proposal to automatic teeth detection and numbering in dental bite-wing radiographs. Acta Odontol Scand 2021; 79: 275-281.
19. Yaren Tekin B, Ozcan C, Pekince A, Yasa Y. An enhanced tooth segmentation and numbering according to FDI notation in bitewing radiographs. Comput Biol Med 2022; 146: 105547. DOI: 10.1016/j.compbiomed.2022.105547.
20. Baydar O, Różyło-Kalinowska I, Futyma-Gąbka K, Sağlam H. The U-Net approaches to evaluation of dental bite-wing radiographs: an artificial intelligence study. Diagnostics 2023; 13: 453. DOI: 10.3390/diagnostics13030453.
21. Redmon J, Farhadi A. YOLOv3: an incremental improvement. arXiv 2018; arXiv:1804.02767. DOI: 10.48550/arXiv.1804.02767.
22. Chung YL, Lin CK. Application of a model that combines the YOLOv3 object detection algorithm and canny edge detection algorithm to detect highway accidents. Symmetry 2020; 12: 1875. DOI: 10.3390/sym12111875.
23. Olorunshola OE, Irhebhude ME, Evwiekpaefe AE. A comparative study of YOLOv5 and YOLOv7 object detection algorithms. J Comput Soc Inform 2023; 2: 1-12.
24. Li Y, Wang H, Dang LM, Han D, Moon H, Nguyen TN. A deep learning-based hybrid framework for object detection and recognition in autonomous driving. IEEE Access 2020; 8: 194228-194239.
25. Lee JH, Kim DH, Jeong SN, Choi SH. Detection and diagnosis of dental caries using a deep learning-based convolutional neural network algorithm. J Dent 2018; 77: 106-111.
26. Casalegno F, Newton T, Daher R, Abdelaziz M, Lodi-Rizzini A, Schürmann F, et al. Caries detection with near-infrared transillumination using deep learning. J Dent Res 2019; 98: 1227-1233.
27. Aliaga IJ, Vera V, De Paz JF, García AE, Mohamad MS. Modelling the longevity of dental restorations by means of a CBR system. Biomed Res Int 2015; 2015: 540306. DOI: 10.1155/2015/540306.
28. Lin H, Chen H, Weng L, Shao J, Lin J. Automatic detection of oral cancer in smartphone-based images using deep learning for early diagnosis. J Biomed Opt 2021; 26: 086007. DOI: 10.1117/1.JBO.26.8.086007.
29. Zhang W, Li J, Li ZB, Li Z. Predicting postoperative facial swelling following impacted mandibular third molars extraction by using artificial neural networks evaluation. Sci Rep 2018; 8: 12281. DOI: 10.1038/s41598-018-29934-1.
30. Niño-Sandoval TC, Guevara Pérez SV, González FA, Jaque RA, Infante-Contreras C. Use of automated learning techniques for predicting mandibular morphology in skeletal class I, II and III. Forensic Sci Int 2017; 281: 187.e1-187.e7. DOI: 10.1016/j.forsciint.2017.10.004.
31. De Tobel J, Radesh P, Vandermeulen D, Thevissen PW. An automated technique to stage lower third molar development on panoramic ra-diographs for age estimation: a pilot study. J Forensic Odontostomatol 2017; 35: 42-54.
32. Tuzoff DV, Tuzova LN, Bornstein MM, Krasnov AS, Kharchenko MA, Nikolenko SI, et al. Tooth detection and numbering in panoramic radiographs using convolutional neural networks. Dentomaxillofac Radiol 2019; 48: 20180051. DOI: 10.1259/dmfr.20180051.
33. Lee JS, Adhikari S, Liu L, Jeong HG, Kim H, Yoon SJ. Osteoporosis detection in panoramic radiographs using a deep convolutional neural network-based computer-assisted diagnosis system: a preliminary study. Dentomaxillofac Radiol 2018; 48: 20170344. DOI: 10.1259/dmfr.20170344.
34. Kuwana R, Ariji Y, Fukuda M, Kise Y, Nozawa M, Kuwada C, et al. Performance of deep learning object detection technology in the detection and diagnosis of maxillary sinus lesions on panoramic radiographs. Dentomaxillofac Radiol 2021; 50: 20200171. DOI: 10.1259/dmfr.20200171.
35. Chen H, Zhang K, Lyu P, Li H, Zhang L, Wu J, Lee CH. A deep learning approach to automatic teeth detection and numbering based on object detection in dental periapical films. Sci Rep 2019; 9: 3840. DOI: 10.1038/s41598-019-40414-y.
36. Liu M, Wang S, Chen H, Liu Y. A pilot study of a deep learning approach to detect marginal bone loss around implants. BMC Oral Health 2022; 22: 11. DOI: 10.1186/s12903-021-02035-8.
37. Hatvani J, Horváth A, Michetti J, Basarab A, Kouamé D, Gyöngy M. Deep learning-based super-resolution applied to dental computed tomography. IEEE Trans Radiat Plasma Med Sci 2018; 3: 120-128.
38. Sin Ç, Akkaya N, Aksoy S, Orhan K, Öz U. A deep learning algorithm proposal to automatic pharyngeal airway detection and segmentation on CBCT images. Orthod Craniofac Res 2021; 24 (Suppl 2): 117-123.
39. Gillot M, Miranda F, Baquero B, Ruellas A, Gurgel M, Al Turkestani N, et al. Automatic landmark identification in cone-beam computed tomography. Orthod Craniofac Res 2023; 26: 560-567.
40. Lee S, Oh SI, Jo J, Kang S, Shin Y, Park JW. Deep learning for early dental caries detection in bitewing radiographs. Sci Rep 2021; 11: 16807. DOI: 10.1038/s41598-021-96368-7.
41. Mahoor MH, Abdel-Mottaleb M. Classification and numbering of teeth in dental bitewing images. Pattern Recognit 2005; 38: 577-586.
42. Aeini F, Mahmoudi F. Classification and numbering of posterior teeth in bitewing dental images. Int Conf Adv Comput Theory Eng 2010; 6: 67-72.
43. Lin PL, Lai YH, Huang PW. An effective classification and numbering system for dental bitewing radiographs using teeth region and contour information. Pattern Recognit 2010; 43: 1380-1392.
44. Wang P, Fan E, Wang P. Comparative analysis of image classification algorithms based on traditional machine learning and deep learning. Pattern Recognit Lett 2021; 141: 61-67.
45. Yadav SS, Jadhav SM. Deep convolutional neural network based medical image classification for disease diagnosis. J Big Data 2019; 6: 113. DOI: 10.1186/s40537-019-0276-2.
46. Diwan T, Anirudh G, Tembhurne JV. Object detection using YOLO: challenges, architectural successors, datasets and applications. Multimed Tools Appl 2023; 82: 9243-9275.
This is an Open Access journal; all articles are distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license (http://creativecommons.org/licenses/by-nc-sa/4.0/), allowing third parties to copy and redistribute the material in any medium or format and to remix, transform, and build upon the material, provided the original work is properly cited and its license is stated.
 