
GB2639582A - System and Method for Error Detection in Dental Imaging

System and Method for Error Detection in Dental Imaging

Info

Publication number
GB2639582A
GB2639582A
Authority
GB
United Kingdom
Prior art keywords
image
errors
error
features
region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2403787.1A
Other versions
GB202403787D0 (en)
Inventor
Tuhami Amro
Aboelkhair Asmaa
Mussa Mostafa
Elbahnihi Ahmed
Saramijou Safwan
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Voxel3di Ltd
Original Assignee
Voxel3di Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Voxel3di Ltd filed Critical Voxel3di Ltd
Priority to GB2403787.1A priority Critical patent/GB2639582A/en
Publication of GB202403787D0 publication Critical patent/GB202403787D0/en
Priority to PCT/GB2025/050526 priority patent/WO2025191278A1/en
Publication of GB2639582A publication Critical patent/GB2639582A/en
Pending legal-status Critical Current


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G06V 10/98: Detection or correction of errors, e.g. by rescanning the pattern or by human intervention; Evaluation of the quality of the acquired patterns
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10116: X-ray image
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20081: Training; Learning
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30004: Biomedical image processing
    • G06T 2207/30036: Dental; Teeth
    • G06T 2207/30168: Image quality inspection
    • G06V 2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03: Recognition of patterns in medical or anatomical images
    • G06V 2201/033: Recognition of patterns in medical or anatomical images of skeletal patterns

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Software Systems (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Image Analysis (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A system and method for error detection in panoramic dental images 401. The method comprises converting a dental panoramic image into a byte array, followed by identifying and extracting critical features from the byte array. Next, a plurality of feature map layers is generated for identifying errors at different scales. Region proposals for potential errors within each feature map layer are then generated, and fixed-size features are extracted from the region proposals. Features may be extracted from eight region proposals simultaneously. Features within each region proposal are classified, followed by regressing bounding boxes 401-403 around classified errors. Finally, the image is outputted with bounding boxes indicating errors within the image, along with an indication of whether the image is viable based on the errors. The output is ideally provided within 15 seconds of the input. Causes for each error and suggested corrections may also be outputted.

Description

System and Method for Error Detection in Dental Imaging

The present invention relates to a system and method for error detection, particularly to error detection in panoramic dental images.
Background
Within the dental field, diagnostic accuracy heavily relies on imaging techniques. Dental imaging plays a vital role in identifying and understanding many conditions. However, the complexity of interpreting these images can often lead to substantial time consumption and require significant expertise, with the inherent risk of human error potentially leading to inaccurate diagnoses, delayed treatments, and increased patient anxiety.
A panoramic radiograph, also known as an orthopantomogram (OPG), is a panoramic scanning dental X-ray of the upper and lower jaws and a crucial diagnostic scan that most dentists use as a routine patient check. OPG is technique-sensitive, as it requires precise parameters and a specific patient position inside the panoramic machine to produce a reliable image which can then be used for diagnosis.
If the panoramic scan is not set up correctly, the resulting scan is subject to numerous errors due to a variety of elements and potential distortions, such as tongue space, vertebrae shadow, reverse smile curve, patient movement, etc. These errors can significantly affect the quality of the images, resulting in difficulty in accurate diagnosis.
At present, there is no automated system to detect these errors, which subsequently burdens healthcare professionals, who must manually identify and correct these errors.
An error in the panoramic image can result in a misdiagnosis and an unnecessary treatment plan. The current manual process also delays the delivery of results, due to the time taken by healthcare professionals to attempt to identify errors in the images, leading to prolonged anxiety and uncertainty for patients. This inefficiency in the diagnostic process can lead to increased healthcare costs and suboptimal treatments.
It is an aim of the present invention to mitigate one or more of the above-mentioned problems.
Statements of invention
According to a first aspect of the invention there is provided a method for detecting errors in a dental panoramic image, comprising: converting a dental panoramic image into a byte array; identifying and extracting critical features from the byte array; generating a plurality of feature map layers; generating region proposals for potential errors within each feature map layer; extracting fixed-size features from the region proposals; classifying features within each region proposal; regressing bounding boxes around classified errors; outputting the image with bounding boxes indicating errors within the image; and outputting an indication of whether the image is viable based on the errors.
According to a second aspect of the invention, there is provided a system for detecting errors in a dental panoramic image, comprising: a user device for uploading images and converting them into a byte array; a backbone network for identifying and extracting critical features from the byte array; a feature pyramid network for generating a plurality of feature map layers; a region proposal network for generating region proposals for potential errors within each feature map layer; and a Fast R-CNN for extracting fixed-size features from the region proposals and classifying features within each region proposal, wherein the user device is configured to display a dental panoramic image with bounding boxes indicating errors within the image and an indication of whether the image is viable.
According to a third aspect of the invention there is provided a method for detecting errors in an image, comprising: converting an image into a byte array; identifying and extracting critical features from the byte array; generating a plurality of feature map layers; generating region proposals for potential errors within each feature map layer; extracting fixed-size features from the region proposals; classifying features within each region proposal; regressing bounding boxes around classified errors; outputting the image with bounding boxes indicating errors within the image; and outputting an indication of whether the image is viable based on the errors.
Detailed description
Practicable embodiments of the invention are described in further detail below with reference to the accompanying drawings, of which:
Figure 1 shows an overview of the system 100 in accordance with an example of the invention.
Figure 2 shows an overview of the method 200 in accordance with an example of the invention.
Figure 3 shows a first output image 300 in accordance with an example of the invention.
Figure 4 shows a second output image 400 in accordance with an example of the invention.
The error detection system and method of the present invention could be used to detect scanning errors in other imaging modalities and detect pathosis and anomalies in the radiographs. The example of the invention described herein relates to the error detection system and method of panoramic images in the dental field.
Overview of the system 100

Figure 1 shows an overview of a system 100 for error detection in panoramic dental images.
The system 100 comprises a user device 101 with a user interface.
The user uploads the panoramic image to the system via the user device 101. The user may upload the image to the system via a website. The user may have their own account on the website such that images can be stored and viewed on the website.
The system 100 further comprises a backbone network 102. Once uploaded to the system 100 the image enters a backbone network 102. The backbone network 102 is used for feature extraction.
The backbone network 102 may comprise a convolutional neural network. The backbone network 102 may comprise a residual network. The backbone network 102 may comprise a 50-layer convolutional neural network, the 50 layers comprising 48 convolutional layers, one MaxPool layer, and one average pool layer. The backbone network 102 may comprise ResNet-50.
The backbone network 102 is pre-trained. In this example, the backbone network 102 is pre-trained on panoramic dental images. The backbone network 102 may be pre-trained using any suitable method. The backbone network 102 may be pre-trained by being fed panoramic dental images that have been labelled with critical features.
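By way of illustration only, the following sketch shows how a pre-trained ResNet-50 might be truncated for use as a feature-extraction backbone; it assumes a PyTorch/torchvision environment and ImageNet weights, neither of which is prescribed by this specification.

```python
import torch
import torchvision
from torch import nn

# A minimal sketch, assuming torchvision's pre-trained ResNet-50 as the backbone.
# The final average-pool and fully connected layers are dropped so the network
# outputs spatial feature maps rather than classification logits.
resnet = torchvision.models.resnet50(weights="IMAGENET1K_V1")
backbone = nn.Sequential(*list(resnet.children())[:-2])  # keep convolutional stages only

# Example: a single panoramic-sized image tensor (batch, channels, height, width).
dummy_image = torch.randn(1, 3, 800, 1600)
with torch.no_grad():
    feature_map = backbone(dummy_image)
print(feature_map.shape)  # e.g. torch.Size([1, 2048, 25, 50]) at stride 32
```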
The system 100 further comprises a feature pyramid network 103, also referred to as FPN. The feature pyramid network 103 detects objects at different scales within the image.
The feature pyramid network 103 may comprise a feature extractor. The feature pyramid network 103 may be independent of the backbone network 102.
The system 100 further comprises a regional proposal network 104, also referred to as RPN. The regional proposal network 104 generates region proposals for objects within the image. The regional proposal network 104 may comprise a fully convolutional network.
The RPN 104 further comprises region of interest pooling, also referred to as RoI pooling. RoI pooling extracts fixed-size features from each region proposal generated by the RPN 104.
The system 100 further comprises a Fast Region-based Convolutional Neural Network 105, also referred to as a Fast R-CNN 105. The Fast R-CNN 105 takes in the fixed-size features extracted by RoI pooling and performs object classification and bounding box regression. The Fast R-CNN comprises a classifier and a regressor.
The Fast R-CNN 105 comprises at least one loss function. The loss function evaluates how well the system's algorithm models the dataset.
The system may comprise a plurality of loss functions. In this example, the system comprises three loss functions. The three loss functions comprise a classification loss, which penalises incorrect object classifications, a bounding box regression loss, which penalises inaccurate bounding box predictions, and an anchor matching loss, which penalises incorrect anchor matching during proposal generation.
In some object detection systems, a mask head is used for instance segmentation. In this example, the mask head is removed from the system.
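By way of illustration, the detection-only pipeline described above (backbone, feature pyramid network, region proposal network and Fast R-CNN head, with no mask head) could be assembled as in the following sketch. torchvision's Faster R-CNN implementation and the figure of 29 classes (28 error classes plus background) are assumptions drawn from this example, not a definitive implementation.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# A minimal sketch, assuming torchvision's Faster R-CNN (detection only, no mask head).
# 28 error classes plus one background class, as in the example described above.
NUM_CLASSES = 29

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the classification/regression head so it predicts the dental error classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)
```

Unlike Mask R-CNN, no mask branch is attached, which mirrors the reduced computational load described above.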
Overview of the method 200

Figure 2 shows a method 200 for error detection in panoramic dental images. A panoramic dental image of a patient is captured.
A panoramic dental image comprises a two-dimensional image that captures the entire mouth of a patient. The panoramic dental image comprises the teeth, upper and lower jaws, surrounding structures and tissues, of the patient. Particularly, the panoramic dental image comprises details of the bones and teeth of the patient.
201 Upload panoramic image to the system

The panoramic image is uploaded to the system 100 by a user. In this case, the user is likely to be a dentist or another healthcare professional.
In this example the image is uploaded to a website. The website comprises encryption protocols such that the image is encrypted when uploaded to the website, so that it remains secure. Alternatively, the image may be uploaded to an application downloaded onto the device.
The system can accept panoramic images in various formats as inputs, for example, JPEG, PNG, BMP, etc. Multiple panoramic images can be stored in the system for review at a later date. The users have a personalised profile allowing access to panoramic images of their patients stored on the system. The system prevents unauthorised access to panoramic images from other users of the system unless access is granted by the user who uploaded the image; this prevents the misuse of data. The system complies with all relevant healthcare regulations, including data privacy and security standards like HIPAA. It ensures the secure handling and storage of all patient data and images.
Once uploaded, the image is converted into bytes or pixels. The uploaded image may be converted into a byte array. The system 100 analyses the image in byte format. Converting the image into a byte array allows the analysis to be performed on binary data.
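A minimal sketch of this conversion step is given below; the specification does not prescribe a particular library, so the use of Pillow and NumPy, and the file name shown, are illustrative assumptions only.

```python
import io
import numpy as np
from PIL import Image

def image_to_bytes(path: str) -> bytes:
    """Read an uploaded panoramic image file (JPEG, PNG, BMP, ...) into a byte array."""
    with open(path, "rb") as f:
        return f.read()

def bytes_to_array(byte_data: bytes) -> np.ndarray:
    """Decode the byte array into a pixel array suitable for downstream analysis."""
    image = Image.open(io.BytesIO(byte_data)).convert("RGB")
    return np.asarray(image)  # shape: (height, width, 3)

# Example usage (hypothetical file name):
# raw = image_to_bytes("panoramic_scan.png")
# pixels = bytes_to_array(raw)
```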
202 Extract critical features

Once in byte format, the critical features of the image are identified and extracted 202 by the backbone network 102. The byte or pixel format of the image is the input layer.
The backbone network 102 identifies and extracts critical features as it was pre-trained to do.
The backbone network 102 detects patterns from the pre-training data set by analysing the colours, shapes, the distances between the shapes, where objects border each other, etc, so that it identifies a profile of what the critical features mean.
The pre-trained backbone network 102 then applies this profile of the critical features to the uploaded image to identify and extract critical features within the uploaded image that were labelled in the pre-training data set.
The backbone network 102 recognises patterns and applies parameters and weights to the byte array of the image. The model weights are initialised from the checkpoint URL of the same model configuration. The backbone network 102 leverages the pre-trained weights for faster convergence.
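A minimal sketch of initialising weights from a checkpoint URL is shown below; the URL is hypothetical and the use of torch.hub is an assumption, as the specification does not name a particular mechanism.

```python
import torch

# A minimal sketch, assuming PyTorch. The checkpoint URL below is hypothetical; the
# specification only states that weights are initialised from a checkpoint of the
# same model configuration.
CHECKPOINT_URL = "https://example.com/checkpoints/panoramic_error_detector.pth"  # hypothetical

state_dict = torch.hub.load_state_dict_from_url(CHECKPOINT_URL, map_location="cpu")
# model.load_state_dict(state_dict)  # 'model' assembled as in the earlier sketch
```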
In this example, the critical features of the panoramic dental image may comprise areas of the image where errors are known to occur. For example, the critical features may comprise one or a plurality of the following: earring ghost image, hair ghost image, earring, tongue ring, left condyle image cut off, right condyle image cut off, nose ring, vertebrae shadow, wide upper anterior teeth, lip space, removable appliance, thin upper anterior teeth, wide lower anterior, thin lower anterior teeth, premolars overlap (shifting), upper anterior teeth do not appear, the lower anterior root does not appear, short lower anterior teeth, short upper anterior teeth, tongue space, reverse smile curve, flat smile, lead apron ghost image, v-shape smile, hard palate shadow, patient movement, midline shift, glasses ghost image, etc.

203 Generate multi-scale feature maps.
Once the critical features have been identified and extracted by the backbone network 102, the Feature Pyramid Network (FPN) 103 generates multi-scale feature maps 203 for detecting objects at different scales.
The FPN 103 detects objects of various sizes at different scales within the uploaded image. The FPN 103 takes a single-scale image of an arbitrary size as input, and outputs proportionally sized feature maps at multiple levels. This process performed by the FPN is independent of that performed by the backbone network 102.
The FPN 103 builds feature pyramids to be used for object detection. The FPN generates multiple feature map layers (multi-scale feature maps) with better quality information than the regular feature pyramid for object detection.
The FPN creates a convolutional feature pyramid from the input image comprising multiple layers. The first layers are detail-aware (zoom in) and the last layers are context-aware (zoom out). For example, the second layer may be used for detecting small features, the central layers may be used for detecting medium-sized features, and the last layers may be used for detecting large features.
In this example, the first layers may be used for detecting earring ghost image, hair ghost image, earring, tongue ring, left condyle image cut off, right condyle image cut off, and nose ring. The central layers may be used for detecting vertebrae shadow, wide upper anterior teeth, lip space, removable appliance, thin upper anterior teeth, wide lower anterior, thin lower anterior teeth, premolars overlap (shifting), upper anterior teeth do not appear, the lower anterior root does not appear, short lower anterior teeth, short upper anterior teeth. The last layers may be used for detecting tongue space, reverse smile curve, flat smile, lead apron ghost image, v-shape smile, hard palate shadow, patient movement, midline shift, glasses ghost image, etc. Multi-scale feature maps are necessary to deal with objects of different sizes. In this example, as the image is a 2D representation of a curved 3D object, multi-scale feature maps allow for detecting various objects within the images at different scales, enhancing the detail and accuracy of the diagnostics.
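Purely as an illustration of such multi-scale feature maps, the sketch below assumes torchvision's ResNet-50 + FPN backbone; the tensor shapes printed are indicative only and are not values taken from this specification.

```python
import torch
import torchvision

# A minimal sketch, assuming torchvision's Faster R-CNN backbone (ResNet-50 + FPN).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.randn(1, 3, 800, 1600)  # dummy panoramic-sized input
with torch.no_grad():
    feature_maps = model.backbone(image)  # OrderedDict: one map per pyramid level

for level, fmap in feature_maps.items():
    print(level, tuple(fmap.shape))
# e.g. '0'    (1, 256, 200, 400)  detail-aware: small features
#      '1'    (1, 256, 100, 200)
#      '2'    (1, 256, 50, 100)   medium-sized features
#      '3'    (1, 256, 25, 50)    context-aware: large features
#      'pool' (1, 256, 13, 25)
```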
204 Generate region proposals.
The RPN 104 generates region proposals based on the feature maps generated by the FPN 103. The RPN performs anchor matching.
The RPN takes these feature maps and generates region proposals for specific areas in the images where potential anomalies or points of interest might be located. For example, there may be region proposals for the V shape smile, tongue space, lower anterior teeth, etc. The RPN predicts the probability that an object exists in each proposed region. The RPN predicts an objectness score for each proposed region. The RPN predicts bounding box offsets for each proposal. The RPN predicts feature bounds by learning from the feature maps generated by the FPN 103.
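As an illustrative sketch of proposal generation, the snippet below assumes torchvision's RPN components; the anchor sizes and aspect ratios shown are generic defaults, not values prescribed here.

```python
from torchvision.models.detection.rpn import AnchorGenerator

# A minimal sketch, assuming torchvision's RPN components. The sizes and aspect
# ratios below are illustrative defaults, not values from this specification.
anchor_generator = AnchorGenerator(
    sizes=((32,), (64,), (128,), (256,), (512,)),  # one anchor size per FPN level
    aspect_ratios=((0.5, 1.0, 2.0),) * 5,          # same ratios at every level
)

# Inside Faster R-CNN, the RPN scores every anchor for "objectness" (is there a
# potential error here?) and regresses box offsets, keeping the best proposals.
# With an assembled model this happens internally, roughly as:
#   proposals, rpn_losses = model.rpn(images, features, targets)
```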
205 Extract fixed-size features from each region proposal

In object detection, each proposal will be of a different shape; therefore, the next step of the method is to convert all the proposals to a fixed shape, as required by the fully connected layers.
The RPN further comprises RoI pooling. The RoI pooling extracts fixed-size features from each generated region proposal. The RoI pooling extracts fixed-size features to standardise the data for further analysis. The bounding box proposals from the RPN are used to pool features from the FPN feature map.
RoI pooling may comprise taking each region proposed by the RPN 104 and taking the section of the feature map generated by the FPN 103 which corresponds to that proposed region and converting the feature map section into a fixed dimension map. The RPN does this by taking the region corresponding to a proposal from the FPN feature map; dividing this region into a fixed number of sub-windows; and performing max-pooling over these sub-windows to give a fixed-size output.
RoI pooling reduces the amount of information in the image while maintaining the essential features necessary for accurate image recognition.
The output fixed-dimension map of the RoI pooling for every RoI depends neither on the input feature map nor on the proposal sizes; it depends solely on the layer parameters pooled_width, pooled_height, and spatial scale. Pooled_width and pooled_height are hyperparameters which can be changed based on the image provided. They indicate the number of grid cells into which the feature map corresponding to the proposal should be divided, and this determines the output dimension of this layer. The fixed-size input created by the RoI pooling allows the system to go from image classification to object detection.
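The following sketch illustrates RoI pooling with the pooled_width, pooled_height and spatial-scale parameters described above; torchvision's roi_pool operator is assumed and all values are illustrative.

```python
import torch
from torchvision.ops import roi_pool

# A minimal sketch, assuming torchvision's roi_pool. All values are illustrative.
feature_map = torch.randn(1, 256, 50, 100)   # one FPN level for a single image

# Region proposals in (batch_index, x1, y1, x2, y2) format, in input-image pixels.
proposals = torch.tensor([
    [0, 100.0, 120.0, 400.0, 360.0],
    [0, 500.0, 200.0, 900.0, 520.0],
])

pooled = roi_pool(
    feature_map,
    proposals,
    output_size=(7, 7),     # pooled_height x pooled_width
    spatial_scale=1 / 16,   # feature map stride relative to the input image
)
print(pooled.shape)  # torch.Size([2, 256, 7, 7]) - fixed size regardless of proposal size
```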
The batch size per image determines how many region proposals are processed at one time by the RPN performing the RoI pooling. The RPN may work at a batch size between 1 and 20 per image. The RPN may work at a batch size of between 5 and 10 per image. The RPN may work at a batch size of 8 per image.
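If torchvision's implementation were used, the number of proposals sampled per image could be expressed as a constructor argument, as in the following illustrative sketch; the parameter name is torchvision's, not one defined by this specification.

```python
import torchvision

# A minimal sketch, assuming torchvision's Faster R-CNN. 'box_batch_size_per_image'
# controls how many region proposals are sampled per image for the detection head;
# using 8 here mirrors the example batch size described above.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights="DEFAULT",
    box_batch_size_per_image=8,
)
```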
206 Object and error classification and bounding box regression

The Fast R-CNN 105 takes the fixed-size features extracted by RoI pooling and performs object and error classification and bounding box regression. This step of the method comprises the Fast R-CNN network processing the fixed-size features created by the RoI pooling stage of the RPN. The Fast R-CNN performs object classification (identifying what the object is) and bounding box regression (locating the object's position within the image).
The Fast R-CNN 105 classifier performs object and error classification and the Fast R-CNN 105 regressor performs bounding box regression.
The classifier classifies each box as object or background. The classifier then classifies each object as a class of objects. The objects are given classification scores by the classifier. The classification score comprises the probability of the object belonging to each class.
The classifier of the system may identify at least 10 classes in the image. The system may identify between 10 and 40 classes in the image. The system may identify 28 classes in the images. The system may identify the following classes in the image: tongue space, vertebrae shadow, reverse smile curve, flat smile, wide upper anterior teeth, lead apron ghost image, v-shape smile, earring ghost image, hair ghost image, earring, lip space, removable appliance, midline shift, tongue ring, thin upper anterior teeth, wide lower anterior, left condyle image cut off, right condyle image cut off, hard palate shadow, thin lower anterior teeth, premolars overlap (shifting), upper anterior teeth roots do not appear, the lower anterior roots do not appear, patient movement, glasses ghost image, nose ring, short lower anterior teeth, short upper anterior teeth.
Instead of extracting features independently for each region of interest, Fast R-CNN aggregates them into a single pass over the image, i.e. regions of interest from the same image share computation and memory.
The regressor performs bounding box regression. The regressor uses the results from the classifier as to whether each box is an object or background. The regressor adjusts each bounding box to ensure a precise fit around the object.
Faster R-CNN uses at least one loss function. Faster R-CNN may use a plurality of loss functions. In this example, there are three loss functions used in Faster R-CNN: Classification loss, which penalises incorrect object classifications.
Bounding box regression loss, which penalises inaccurate bounding box predictions.
Anchor matching loss, which penalises incorrect anchor matching during proposal generation.
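As an illustrative sketch of how these loss terms might be obtained and combined during training, the snippet below assumes torchvision's Faster R-CNN, which returns the losses as a dictionary; note that torchvision splits the anchor-matching term into an RPN objectness loss and an RPN box-regression loss, so the names differ slightly from the three listed above.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# A minimal sketch, assuming torchvision's Faster R-CNN with 28 error classes + background.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=29)
model.train()

images = [torch.rand(3, 800, 1600)]                          # one training image
targets = [{
    "boxes": torch.tensor([[100.0, 120.0, 400.0, 360.0]]),   # illustrative ground-truth box
    "labels": torch.tensor([3]),                              # illustrative error class id
}]

# In training mode the model returns a dictionary of loss terms, typically:
# 'loss_classifier', 'loss_box_reg', 'loss_objectness', 'loss_rpn_box_reg'.
loss_dict = model(images, targets)
total_loss = sum(loss_dict.values())                          # summed for backpropagation
total_loss.backward()
```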
In this example, there is no mask head. Eliminating the mask head reduces computational load, leading to faster processing times. With the system's focus narrowed to detection (not segmentation), resources are more efficiently utilised, potentially increasing the accuracy of object detection tasks. The system becomes more streamlined, benefiting deployment.
207 Panorama image output, with the bounding boxes and detections

The system determines the type of error within each bounding box and the size, shape and location of each bounding box. The output is the image in its initial quality, showing a set of bounding boxes, each labelled with its identified error type.
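Purely as an illustration of producing such an annotated output, the sketch below assumes the torchvision model from the earlier examples; the class names, confidence threshold and output file name are illustrative assumptions only.

```python
import torch
import torchvision
from torchvision.utils import draw_bounding_boxes
from torchvision.transforms.functional import to_pil_image

# A minimal sketch, assuming torchvision's Faster R-CNN. The class names below are an
# illustrative subset of the error classes; a model trained on dental data is assumed.
CLASS_NAMES = {1: "tongue space", 2: "vertebrae shadow", 3: "hard palate shadow"}

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 800, 1600)                       # placeholder panoramic image tensor
with torch.no_grad():
    prediction = model([image])[0]                      # dict with boxes, labels, scores

keep = prediction["scores"] > 0.5                       # illustrative confidence threshold
boxes = prediction["boxes"][keep]
labels = [CLASS_NAMES.get(int(l), "error") for l in prediction["labels"][keep]]

annotated = draw_bounding_boxes((image * 255).to(torch.uint8), boxes, labels=labels, width=3)
to_pil_image(annotated).save("annotated_panoramic.png")  # hypothetical output path
```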
Figure 3 shows a first output image 300 in accordance with an example of the invention.
The image comprises a panoramic dental image 301 labelled with a first bounding io box 302, a second bounding box 303, a third bounding box 304 and a fourth bounding box 305.
The first bounding box 302 categorises the error hard palate shadow. The second bounding box 303 categorises the error tongue space. The third bounding box 304 categorises the error V shape smile. The fourth bounding box 305 categorises the error short lower anterior teeth.
The system outputs the processed panoramic images with bounding boxes in a format that medical professionals can easily interpret. The images output maintain the quality and resolution of the original images.
Both the object classification and the bounding boxes are displayed on the user device.
The system also outputs the cause and correction for each error identified. For example, the error identified in the first bounding box 302 is a hard palate shadow, caused by the patient being in an incorrect vertical position; to correct this, position the Frankfurt laser line in the correct position. The error identified in the second bounding box 303 is tongue space, caused by the patient's tongue not touching the roof of the mouth; to correct this, instruct the patient to place the tongue on the roof of the mouth whilst a new image is being taken. The error identified in the third bounding box 304 is a V-shaped smile, caused by the patient's chin being tilted too far down; correct this by tilting the patient's chin up when retaking the image. The fourth bounding box 305 error is short lower anterior teeth, caused by the patient's lower anterior teeth not biting correctly in the indicated bite block; correct this by asking the patient to bite correctly, adjusting the patient's Frankfurt plane, and rescanning the patient.
Figure 4 shows a second output image 400 in accordance with an example of the invention.
The image comprises a panoramic dental image 401 labelled with a first bounding s box 402, a second bounding box 403.
The first bounding box 401 categorises the error vertebrae shadow. The second bounding box 402 categorises the error right condyle cut off. The third bounding box 403 categorises the error left condyle cut off.
The causes for each error will be shown on the user interface (not visible in Figure 4). The bounding box 401 error is vertebrae shadow, which occurs because the patient is in a slumped position in the machine so the spinal column is not well stretched, causing a ghost image of the spine superimposed in the centre of the image; it can be corrected by moving the patient a little forward and straightening their neck and spine. The bounding box 402 error is right condyle cut off; this happens due to an incorrect vertical position of the patient's head in the machine, and can be corrected by lowering the patient's chin support and raising the machine's C-arm. The bounding box 403 error is left condyle cut off; this happens due to the incorrect vertical position of the patient's head in the machine, and can be corrected by lowering the patient's chin support and raising the machine's C-arm.
Both the object classification and the bounding boxes are displayed on the user device.
The system provides detailed reports highlighting the detected errors, their locations, and the potential implications, aiding in the diagnostic process.
The system also outputs an indication of whether the image taken is viable based on errors. The system also outputs an indication of whether the errors identified will prevent accurate diagnosis.
The time the system takes, from the image being uploaded, to output an indication of whether the image taken is viable is less than one minute. The time taken may be between 5 and 25 seconds. The time taken may be 15 seconds.
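A minimal sketch of reporting the viability indication and the elapsed processing time is given below; the rule that any detected error above a confidence threshold makes the image non-viable is an illustrative assumption, since the specification only states that viability is judged based on the errors found.

```python
import time

# A minimal sketch. The viability rule below (any detected error above a confidence
# threshold renders the image non-viable) is an illustrative assumption only.
def assess_viability(detections: list, score_threshold: float = 0.5) -> dict:
    significant = [d for d in detections if d["score"] >= score_threshold]
    return {
        "viable": len(significant) == 0,
        "errors_found": [d["label"] for d in significant],
    }

start = time.perf_counter()
detections = [{"label": "tongue space", "score": 0.92}]    # placeholder model output
report = assess_viability(detections)
elapsed = time.perf_counter() - start
print(report, f"processed in {elapsed:.2f} s")              # target: well under 15 s
```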
If necessary, the user can instruct the image to be taken again. As the system provides the output within 15 seconds of the image being uploaded to the system, this allows for numerous retakes until the output image is acceptable for diagnosis.
The user can review the results provided, verify their accuracy, and note any diagnostic insights the system provides.
The system handles errors, provides informative error messages to users, and maintains a log of any issues for further investigation and improvement.
The method continually learns and improves its accuracy and error-detection capabilities over time. It is also easy to update with new findings, technological advancements, or regulation changes.
The system complies with all relevant healthcare regulations, including data privacy and security standards like HIPAA. It ensures the secure handling and storage of all patient data and images.

Claims (11)

  1. A method for detecting errors in a dental panoramic image, comprising: converting a dental panoramic image into a byte array; identifying and extracting critical features from the byte array using a pre-trained backbone network; generating a plurality of feature map layers for identifying errors at different scales; generating region proposals for potential errors within each feature map layer; extracting fixed sized features from the region proposals; classifying errors within each region proposal; regressing bounding boxes around said classified error; outputting the image with bounding boxes indicating the errors within the image; and outputting an indication whether image is viable based on the errors.
  2. The method of claim 1, further comprising outputting an indication of a cause for each error.
  3. The method of claim 2, further comprising outputting a suggested correction for each error.
  4. The method of any preceding claim, wherein the bounding box indicates the location of the error within the dental panoramic image.
  5. The method of any preceding claim, wherein the system is narrowed to detection.
  6. The method of any preceding claim, wherein the output is provided within 15 seconds of the input.
  7. The method of any preceding claim, wherein the critical features comprise areas of the image where errors are known to occur.
  8. The method of any preceding claim, wherein three multiple feature map layers are generated.
  9. The method of any preceding claim, wherein features are extracted from eight region proposals at one time.
  10. The method of any preceding claim, wherein there are twenty-eight classes of errors.
  11. A system for detecting errors in a dental panoramic image, comprising: a user device for uploading images and converting them into a byte array; a backbone network for identifying and extracting critical features from the byte array; a feature pyramid network for generating a plurality of feature map layers; a region proposal network for generating region proposals for potential errors within each feature map layer; and a Fast R-CNN for extracting fixed sized features from the region proposals and classifying errors within each region proposal and regressing bounding boxes around the error, wherein the user device is configured to display a dental panoramic image with bounding boxes indicating errors within the image and an indication whether image is viable.
GB2403787.1A 2024-03-15 2024-03-15 System and Method for Error Detection in Dental Imaging Pending GB2639582A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB2403787.1A GB2639582A (en) 2024-03-15 2024-03-15 System and Method for Error Detection in Dental Imaging
PCT/GB2025/050526 WO2025191278A1 (en) 2024-03-15 2025-03-14 System and method for error detection in dental imaging

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB2403787.1A GB2639582A (en) 2024-03-15 2024-03-15 System and Method for Error Detection in Dental Imaging

Publications (2)

Publication Number Publication Date
GB202403787D0 GB202403787D0 (en) 2024-05-01
GB2639582A true GB2639582A (en) 2025-10-01

Family

ID=90826082

Family Applications (1)

Application Number Title Priority Date Filing Date
GB2403787.1A Pending GB2639582A (en) 2024-03-15 2024-03-15 System and Method for Error Detection in Dental Imaging

Country Status (2)

Country Link
GB (1) GB2639582A (en)
WO (1) WO2025191278A1 (en)

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DMITRY V TUZOFF ET AL, "Tooth detection and numbering in panoramic radiographs using convolutional neural networks", DENTO-MAXILLO-FACIAL RADIOLOGY, vol. 48, 2019, Article No. 20180051 *
REN ET AL, "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 39, 2017, pages 1137-1149 *
SINGH DR CHEENA ET AL, "Artifacts in Dental Radiography: A Mini Review", INTERNATIONAL JOURNAL OF ADVANCED RESEARCH (INDORE), vol. 4, 2016, pages 958-960 *
MATTEA L WELCH ET AL, "Automatic classification of dental artifact status for efficient image veracity checks: effects of image resolution and convolutional neural network depth", PHYSICS IN MEDICINE AND BIOLOGY, vol. 65, 2020, page 15005 *

Also Published As

Publication number Publication date
WO2025191278A1 (en) 2025-09-18
GB202403787D0 (en) 2024-05-01

Similar Documents

Publication Publication Date Title
US11790643B2 (en) Deep learning for tooth detection and evaluation
US11398013B2 (en) Generative adversarial network for dental image super-resolution, image sharpening, and denoising
US11367188B2 (en) Dental image synthesis using generative adversarial networks with semantic activation blocks
US11553874B2 (en) Dental image feature detection
US20240087725A1 (en) Systems and methods for automated medical image analysis
US11189028B1 (en) AI platform for pixel spacing, distance, and volumetric predictions from dental images
US11366985B2 (en) Dental image quality prediction platform using domain specific artificial intelligence
US11311247B2 (en) System and methods for restorative dentistry treatment planning using adversarial learning
US10984529B2 (en) Systems and methods for automated medical image annotation
US12106848B2 (en) Systems and methods for integrity analysis of clinical data
US20200364624A1 (en) Privacy Preserving Artificial Intelligence System For Dental Data From Disparate Sources
US20210118132A1 (en) Artificial Intelligence System For Orthodontic Measurement, Treatment Planning, And Risk Assessment
US20200411167A1 (en) Automated Dental Patient Identification And Duplicate Content Extraction Using Adversarial Learning
US20200411201A1 (en) Systems And Method For Artificial-Intelligence-Based Dental Image To Text Generation
US20210357688A1 (en) Artificial Intelligence System For Automated Extraction And Processing Of Dental Claim Forms
Yeshua et al. Automatic detection and classification of dental restorations in panoramic radiographs
KR20200058316A (en) Automatic tracking method of cephalometric point of dental head using dental artificial intelligence technology and service system
Kuo et al. A convolutional neural network approach for dental panoramic radiographs classification
GB2639582A (en) System and Method for Error Detection in Dental Imaging
KR102333726B1 (en) System for supporting creation of dental radiographic reading
Carneiro Enhanced tooth segmentation algorithm for panoramic radiographs
Roopitha et al. Integration of Preprocessing Techniques and Artificial Intelligence for Accurate Inferior Alveolar Nerve Segmentation in Cone Beam Computed Tomography
Zdravković et al. Tooth detection with small panoramic radiograph images datasets and Faster RCNN model
KR102710043B1 (en) Predictive correction image providing method
US20250217983A1 (en) Method for the analysis of radiographic images, and in particular lateral-lateral teleradiographic images of the skull, and relative analysis system