WO2025202065A1 - An intraoral scanning system for improving composed scan information - Google Patents
- Publication number
- WO2025202065A1 (PCT/EP2025/057816)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- information
- color channels
- scan information
- internal structure
- scan
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61C—DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
- A61C9/00—Impression cups, i.e. impression trays; Impression methods
- A61C9/004—Means or methods for taking digitized impressions
- A61C9/0046—Data acquisition means or methods
- A61C9/0053—Optical means or methods, e.g. scanning the teeth by a laser or light beam
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0082—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes
- A61B5/0088—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence adapted for particular medical purposes for oral or dental tissue
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61C—DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
- A61C9/00—Impression cups, i.e. impression trays; Impression methods
- A61C9/004—Means or methods for taking digitized impressions
Definitions
- the disclosure relates to an intraoral scanning system that is configured to determine a composed scan information that includes enhanced internal structure information and/or enhanced textural information of a tooth. More specifically, the disclosure relates to an improved determination of composed scan information which includes further enhanced internal structural information, such as caries.
- the contrast enhancement algorithm may be one or more of the following non-linear contrast enhancement algorithms or a combination of two or more of the following non-linear contrast enhancement algorithms:
- the example where the three standard deviations include only positive pixel values, or only pixel values above the mean pixel value, may be defined as a half histogram distribution, and the example where the three standard deviations include both positive and negative pixel values, or pixel values both above and below the mean pixel value, may be defined as a full histogram distribution.
- the contrast enhancement algorithm may then extend the pixel values of the truncated histogram distribution such that the pixel values of the truncated histogram distribution are within a second pixel value range, and wherein the second pixel value range is larger than the first pixel value range.
- the second pixel value range may be between 0 and 255, 0 and 511 etc.
- the truncation and the extended distribution of the remaining pixels of the plurality of pixels over the second pixel value range results in an improved contrast of the internal structure information, the plurality of color channels and the composed scan information, i.e. the first composed scan information.
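The truncation and extension described above can be sketched in NumPy as follows. This is an illustrative implementation only: the function name, the default of three standard deviations, and the full-histogram assumption are choices made here, not taken from the disclosure.

```python
import numpy as np

def truncated_stretch(channel, num_std=3.0, out_max=255):
    """Truncate a channel's histogram at num_std standard deviations around
    the mean (full histogram distribution), then extend the surviving pixel
    values over the second pixel value range [0, out_max]."""
    mean = channel.mean()
    std = channel.std()
    lo, hi = mean - num_std * std, mean + num_std * std
    clipped = np.clip(channel.astype(np.float64), lo, hi)
    # Extend the truncated distribution over the larger pixel value range.
    span = clipped.max() - clipped.min() + 1e-12  # guard against flat images
    stretched = (clipped - clipped.min()) / span * out_max
    return stretched.round().astype(np.uint16)
```

Passing `out_max=511` instead of 255 corresponds to the larger second pixel value range mentioned in the text.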
- the first composed scan information may include a plurality of pixels, and the one or more processors may be configured to enhance contrast in the first composed scan information based on the contrast enhancement algorithm.
- a composed scan information may include a plurality of pixels, and the one or more processors may be configured to enhance contrast in the composed scan information based on the contrast enhancement algorithm.
- the one or more processors may be configured to determine different composed scan information based on the plurality of color channels and the internal structure information.
- the different composed scan information may at least include the first composed scan information and a second composed scan information which may be combined to form a fused scan information.
- the one or more processors may be configured to determine the second composed scan information which includes a difference between the internal structure information and one or more of the plurality of color channels with assigned weight coefficients, and wherein the one or more of the plurality of color channels of the first composed scan information is different from the one or more of the plurality of color channels of the second composed scan information.
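The first and second composed scan information described above differ in the direction of the weighted difference between the color channels and the internal structure information. A minimal sketch, with illustrative function and parameter names (the channel dictionary layout is an assumption, not the patent's data model):

```python
import numpy as np

def composed_scan(color_channels, nir, weights, reverse=False):
    """Weighted difference between color channels and the internal structure
    information (NIR).
    reverse=False: channel minus NIR (first composed scan information sketch)
    reverse=True:  NIR minus channel (second composed scan information sketch)"""
    out = np.zeros(nir.shape, dtype=np.float64)
    for name, channel in color_channels.items():
        if reverse:
            diff = nir.astype(np.float64) - channel
        else:
            diff = channel - nir.astype(np.float64)
        out += weights[name] * diff  # weight coefficient per color channel
    return out
```

Selecting different subsets of channels for the first and second composed scan information, as the text requires, amounts to passing different `color_channels` dictionaries.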
- the fused composed scan information may be combined with another fused composed scan information, and wherein the fused composed scan information is assigned to a first color and the other fused composed scan information is assigned to a second color.
- the fused composed scan information is assigned to a first color
- the other fused composed scan information is assigned to a second color.
- the first fused composed scan information includes a chromatic difference between the plurality of color channels of the visible light information and the internal structure information.
- a second (FCOM4) and third (FCOM5) fused composed scan information use the chromatic difference (FCOM1) between NIR and the white color channels of the plurality of color channels in combination with the fluorescence red (NFR) and the fluorescence green (NFG) composed scan information.
- a further example is illustrated in the below table.
- the below example illustrates an example where two composed scan information (COM2, COM3) are determined and combined into a fused composed scan information (FCOM6)
- the first composed scan information includes a difference between each of the plurality of color channels and the internal structure information (NIR)
- the second composed scan information includes a difference between the internal structure information (NIR) and each of the plurality of color channels.
- the contrast of the internal structure information (NIR) has been enhanced at least two times by the contrast enhancement algorithm.
- the contrast of each of the composed scan information (COM2, COM3) in the fused composed scan information (FCOM6) has been enhanced by the contrast enhancement algorithm.
- the contrast enhancements need not all be performed by the same contrast enhancement algorithm.
- each of the plurality of color channels is assigned a weight coefficient, which may have a value between 0 and 1.
- the plurality of color channels includes red (WhiteRed), green (WhiteGreen) and blue (WhiteBlue).
- the fused composed scan information (FCOM6) may be displayed as an image or as an overlay to an image that includes a composed scan information, internal structure information or visible light information.
- a composed scan information and a fused composed scan information may be displayed as an image or as an overlay to an image that includes composed scan information, visible light information, or internal structure information.
- the one or more processors may be configured to identify caries, cracks and fillings based on a contrast profile for an area-of-interest in the composed scan information or fused composed scan information.
- the area-of-interest overlaps a group of pixels of the composed scan information or fused composed scan information, and the area-of-interest corresponds to an area on a dental object.
- the one or more processors may be configured to determine the contrast profile based on the pixel values of the group of pixels, and wherein a difference between the pixel values determines the contrast profile of the area-of-interest.
- the contrast profile includes contrast differences between pixel values of selected pixels of the plurality of pixels of the first composed scan information or the fused composed scan information, wherein the selected pixels are within the area-of-interest.
- the one or more processors may be configured to determine a dental feature within the area-of-interest of the dental object by comparing the contrast profile to a reference contrast profile, wherein the reference contrast profile may be stored in a memory unit of the system, and wherein the reference contrast profile corresponds to a type of dental feature.
- the comparing may be based on a shape and/or a size of the contrast profile.
- the size of the contrast profile may be a difference between a maximum and a minimum pixel value of the group of pixels.
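The contrast profile and its size, as defined above, can be sketched for a line-shaped area-of-interest as follows; the function signature and the coordinate-list representation of the area-of-interest are illustrative assumptions:

```python
import numpy as np

def contrast_profile(image, rows, cols):
    """Return the pixel values of the selected pixels inside a line-shaped
    area-of-interest, and the profile size: the difference between the
    maximum and minimum pixel value of the group of pixels."""
    values = image[np.asarray(rows), np.asarray(cols)].astype(np.float64)
    size = float(values.max() - values.min())
    return values, size
```

Comparing `values` against a stored reference profile (by shape and by `size`) would correspond to the feature identification step described in the text.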
- the second composed scan information or the fused composed scan information may be displayed in the first window or a second window of the displaying unit.
- An example of a condition could be to remove a pixel with a pixel value that is above or below a pixel value threshold, or, to remove a pixel with a pixel value that is above or below a pixel value threshold for a certain color channel, or, to remove a group of pixels that are connected based on an area size of the grouped pixels in the first composed scan information or the fused composed scan information.
- the one or more processors may be configured to apply a linear minimization algorithm to the plurality of composed scan information to remove pixels that are above a maximum mask pixel value threshold.
- the maximum mask pixel value threshold may be at a level which corresponds to very bright pixel values.
- the level of the maximum mask pixel value threshold is larger than a pixel value threshold.
- the one or more processors may be configured to connect neighbouring pixels of the first composed scan information or a fused composed scan information when neighbouring pixels are above a pixel value threshold.
- the connected neighbouring pixels may correspond to a dental feature or an artifact inside a tooth that is caused by unwanted reflections of ambient light or emitted light from the intraoral scanning system.
- the connected neighbouring pixels may correspond to an area size in the first composed scan information or a fused composed scan information, and wherein the connected neighbouring pixels are categorized into a dental feature.
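Grouping neighbouring pixels above a pixel value threshold, as described above, is essentially connected-component labeling. A minimal sketch using 4-connectivity (an assumption; the disclosure does not specify the connectivity) and a breadth-first search:

```python
import numpy as np
from collections import deque

def connected_regions(image, threshold):
    """Connect 4-neighbouring pixels whose value is above the pixel value
    threshold; return one list of (row, col) pixels per connected region.
    The length of each list is the region's area size."""
    mask = image > threshold
    seen = np.zeros(mask.shape, dtype=bool)
    regions = []
    h, w = mask.shape
    for r in range(h):
        for c in range(w):
            if mask[r, c] and not seen[r, c]:
                queue, region = deque([(r, c)]), []
                seen[r, c] = True
                while queue:
                    y, x = queue.popleft()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            queue.append((ny, nx))
                regions.append(region)
    return regions
```

Categorizing each region by its area size (e.g. very small regions as reflection artifacts, larger ones as candidate dental features) would follow from the returned region lists.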
- FIG. 1 illustrates an intraoral scanning system
- FIGS. 3A and 3B illustrate examples of an intraoral scanning system
- FIGS. 4A, 4B, 4C, and 4D illustrate examples of one or more processors
- FIG. 5 illustrates an example of a contrast enhancement algorithm
- FIG. 7 illustrates different examples of a composed scan information and a fused composed scan information
- FIGS. 8A, 8B, 8C, and 8D illustrate a contrast profile of different images
- FIGS. 9A, 9B, 9C, and 9D illustrate a contrast profile of different images
- FIGS. 10A, 10B, and 10C illustrate a contrast profile of different images.
- the electronic hardware may include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure.
- Computer program shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
- a scanning for providing intra-oral scan data may be performed by a dental scanning system that may include an intraoral scanning device such as the TRIOS series scanners from 3Shape A/S.
- the dental scanning system may include a wireless capability as provided by a wireless network unit.
- the scanning device may employ a scanning principle such as triangulation-based scanning, confocal scanning, focus scanning, ultrasound scanning, x-ray scanning, stereo vision, structure from motion, optical coherent tomography OCT, or any other scanning principle.
- the scanning device is capable of obtaining surface information by projecting a pattern, translating a focus plane along an optical axis of the scanning device, and capturing a plurality of 2D images at different focus plane positions, such that each series of captured 2D images corresponding to a focus plane forms a stack of 2D images.
- the acquired 2D images are also referred to herein as raw 2D images, wherein raw in this context means that the images have not been subject to image processing.
- the focus plane position is preferably shifted along the optical axis of the scanning system, such that 2D images captured at several focus plane positions along the optical axis form said stack of 2D images (also referred to herein as a sub-scan) for a given view of the object.
- the scanning device is generally moved and angled relative to the dentition during a scanning session, such that at least some sets of sub-scans overlap at least partially, to enable reconstruction of the digital dental 3D model by stitching overlapping 3D subscans together in real-time and display the progress of the virtual 3D model on a display as feedback to the user.
- the result of stitching is the digital 3D representation of a surface larger than that which can be captured by a single sub-scan, i.e. which is larger than the field of view of the 3D scanning device.
- Stitching, also known as registration and fusion, works by identifying overlapping regions of 3D surface in the various sub-scans and transforming the sub-scans to a common coordinate system such that the overlapping regions match, finally yielding the digital 3D model.
- An Iterative Closest Point (ICP) algorithm may be used for this purpose.
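As a rough illustration of the ICP registration step mentioned above, the following is a minimal point-to-point ICP sketch with brute-force nearest-neighbour matching and a Kabsch/SVD rigid-transform solve. Production scanners use far more optimized variants; all names here are illustrative.

```python
import numpy as np

def best_fit_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iterations=20):
    """Minimal point-to-point ICP: match each source point to its nearest
    destination point, solve for the rigid transform, apply, repeat."""
    cur = src.copy()
    for _ in range(iterations):
        d = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d.argmin(axis=1)]
        R, t = best_fit_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```

With a small initial misalignment relative to the point spacing, the nearest-neighbour correspondences are correct and a single iteration recovers the transform exactly, which is why ICP works well for stitching largely overlapping sub-scans.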
- Another example of a scanning device is a triangulation scanner, where a time varying pattern is projected onto the dental object and a sequence of images of the different pattern configurations are acquired by one or more cameras located at an angle relative to the projector unit.
- the process of obtaining surface information in real time of a dental object to be scanned requires the scanning device to illuminate the surface and acquire a high number of 2D images.
- a high-speed camera is used with a framerate of 300-2000 2D frames per second, depending on the technology and 2D image resolution.
- the high amount of image data needs to be handled by the scanning device, which either directly forwards the raw image data stream to an external processing device or performs some image processing before transmitting the data to an external device or display.
- This process requires that multiple electronic components inside the scanner operate under a high workload, resulting in a high current demand.
- the scanning device comprises one or more light projectors configured to generate an illumination pattern to be projected on a three-dimensional dental object during a scanning session.
- the light projector(s) preferably comprises a light source, a mask signal having a spatial pattern, and one or more lenses such as collimation lenses or projection lenses.
- the light source may be configured to generate light of a single wavelength or a combination of wavelengths (mono- or polychromatic).
- the combination of wavelengths may be produced by using a light source configured to produce light (such as white light) comprising different wavelengths.
- the light projector(s) may comprise multiple light sources such as LEDs individually producing light of different wavelengths (such as red, green, and blue) that may be combined to form light comprising the different wavelengths.
- the light produced by the light source may be defined by a wavelength defining a specific color, or a range of different wavelengths defining a combination of colors such as white light.
- the scanning device comprises a light source configured for exciting fluorescent material of the teeth to obtain fluorescence data from the dental object.
- a light source may be configured to produce a narrow range of wavelengths.
- the light from the light source is infrared (IR) light, which is capable of penetrating dental tissue.
- the scanning device preferably further comprises optical components for directing the light from the light source to the surface of the dental object.
- the specific arrangement of the optical components depends on whether the scanning device is a focus scanning apparatus, a scanning device using triangulation, or any other type of scanning device.
- a focus scanning apparatus is further described in EP 2 442 720 B1 by the same applicant, which is incorporated herein in its entirety.
- the light reflected from the dental object in response to the illumination of the dental object is directed, using optical components of the scanning device, towards the image sensor(s).
- the image sensor(s) are configured to generate a plurality of images based on the incoming light received from the illuminated dental object.
- the image sensor unit may be a high-speed image sensor such as an image sensor configured for acquiring images with exposures of less than 1/1000 second or frame rates in excess of 250 frames per second (fps).
- the image sensor may be a rolling shutter or a global shutter sensor.
- the network unit may be configured to connect the dental scanning system to a network comprising a plurality of network elements including at least one network element configured to receive the processed data.
- the network unit may include a wireless network unit or a wired network unit.
- the wireless network unit is configured to wirelessly connect the dental scanning system to the network comprising the plurality of network elements including the at least one network element configured to receive the processed data.
- the wired network unit is configured to establish a wired connection between the dental scanning system and the network comprising the plurality of network elements including the at least one network element configured to receive the processed data.
- the dental scanning system preferably further comprises a processor configured to generate scan data (such as extra-oral scan data and/or intra-oral scan data) by processing the two-dimensional (2D) images acquired by the scanning device.
- the processor may be part of the scanning device.
- the processor may comprise a Field-programmable gate array (FPGA) and/or an Advanced RISC Machines (ARM) processor located on the scanning device.
- the scan data comprises information relating to the three- dimensional dental object.
- the scan data may comprise any of: 2D images, 3D point clouds, depth data, texture data, intensity data, color data, and/or combinations thereof.
- the scan data may comprise one or more point clouds, wherein each point cloud comprises a set of 3D points describing the three-dimensional dental object.
- the processing of the 2D images may comprise the step of determining which part of each of the 2D images is in focus in order to deduce/generate depth information from the images.
- the deduced depth information may be used to generate 3D point clouds comprising a set of 3D points in space, e.g., described by cartesian coordinates (x, y, z).
- the 3D point clouds may be generated by the processor or by another processing unit.
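One common way to decide which part of each 2D image is in focus is a local sharpness measure; the sketch below uses the squared response of a discrete Laplacian and picks, per pixel, the stack index where that response peaks. This is an illustrative focus measure chosen here, not necessarily the one used by the disclosed system.

```python
import numpy as np

def focus_measure(image):
    """Per-pixel focus score: squared response of a 4-neighbour discrete
    Laplacian (np.roll wraps at the borders; acceptable for a sketch).
    Sharper, in-focus regions give larger responses."""
    lap = (-4.0 * image
           + np.roll(image, 1, 0) + np.roll(image, -1, 0)
           + np.roll(image, 1, 1) + np.roll(image, -1, 1))
    return lap ** 2

def depth_from_stack(stack):
    """For each pixel, return the index of the stack image where the focus
    measure peaks; the focus plane position at that index gives the depth."""
    scores = np.stack([focus_measure(im.astype(np.float64)) for im in stack])
    return scores.argmax(axis=0)
```

Mapping the winning index back to a physical z-coordinate requires knowing the focus plane position of each image in the stack.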
- Each 2D/3D point may furthermore comprise a timestamp that indicates when the 2D/3D point was recorded, i.e., from which image in the stack of 2D images the point originates.
- the timestamp is correlated with the z-coordinate of the 3D points, i.e., the z-coordinate may be inferred from the timestamp.
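Inferring the z-coordinate from the timestamp, as stated above, reduces to a simple mapping if one assumes the focus plane is swept linearly along the optical axis during the stack acquisition (the linear-sweep assumption and the function signature are illustrative):

```python
def z_from_timestamp(t, t_start, t_end, z_start, z_end):
    """Map a capture timestamp to a z-coordinate, assuming the focus plane
    is swept linearly along the optical axis between z_start and z_end
    during the interval [t_start, t_end]."""
    return z_start + (t - t_start) / (t_end - t_start) * (z_end - z_start)
```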
- the output of the processor is the scan data, and the scan data may comprise image data and/or depth data, e.g. described by image coordinates and a timestamp (x, y, t) or alternatively described as (x, y, z).
- the scanning device may be configured to transmit other types of data in addition to the scan data. Examples of data include 3D information, texture information such as infra-red (IR) images, fluorescence images, reflectance color images, x-ray images, and/or combinations thereof.
- IR infra-red
- the projector unit 7 is configured to emit light with different wavelengths during time periods onto at least the dental object 2.
- the projector unit 7 includes a plurality of light sources that is configured to emit the different wavelengths.
- the plurality of light sources may be arranged within the handheld intraoral scanning device 10 at different locations.
- the light source of the plurality of light sources that is configured to emit infrared wavelength(s) is arranged in a tip housing which is configured to be inserted partly into a patient’s mouth.
- the remaining light sources of the plurality of light sources are arranged either in the tip housing or in a main housing which is configured to be handled by a user during at least a scanning session of the patient.
- the image sensor unit 5 is configured to capture one or more visible images that each includes visible light information and to capture internal images that each includes internal light information.
- the visible light information includes a plurality of color channels.
- the image sensor unit includes a plurality of pixels, in front of which a Bayer filter is arranged.
- the Bayer filter includes a plurality of color channels that corresponds to the plurality of color channels in the visible light information.
- the Bayer filter includes a plurality of color channels that includes red, green and blue channels, and wherein one or more of the plurality of color channels are configured to receive infrared light, such as internal structure information.
- FIGS. 3A and 3B illustrate an example of the intraoral scanning system 1.
- the visible light information 24, which the one or more processors 11 receive from the image sensor unit 5, is assigned weight coefficients 25 for each of the plurality of color channels of the visible light information 24, or for one or more of the plurality of color channels.
- the weight coefficients 25 may be predetermined and stored in a memory unit of the system 1, or determined in an iterative manner by minimizing the pixel values in the composed scan information 40 or the fused composed scan information 65 for the purpose of isolating features that are only present in the internal structure information. The minimizing of the pixel values is provided by adjusting the weight coefficients 25.
- the weight coefficients 25 may in other examples be replaced by a contrast enhancement algorithm.
- the contrast enhancement algorithm may be one or more of the following non-linear contrast enhancement algorithms or a combination of two or more of the following non-linear contrast enhancement algorithms:
- HE: Histogram Equalization
- AHE: Adaptive Histogram Equalization
- the contrast enhancement algorithms may be the same or different algorithms.
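For reference, the first algorithm in the list above, plain histogram equalization (HE), can be sketched as follows for an 8-bit channel; this is a textbook implementation, not code from the disclosure:

```python
import numpy as np

def histogram_equalization(channel):
    """Plain histogram equalization (HE) for an 8-bit channel: remap pixel
    values through the normalized cumulative histogram so the values
    spread over the full 0-255 range."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0].min()  # first nonzero cumulative count
    norm = (cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-12)
    lut = np.clip(np.round(norm * 255), 0, 255).astype(np.uint8)
    return lut[channel]
```

AHE and CLAHE apply the same idea per local tile (with contrast limiting in CLAHE), which is why they count as non-linear contrast enhancement.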
- FIG. 5 illustrates an example of a contrast enhancement algorithm 59.
- This example illustrates a histogram distribution 50 being determined for each color channel of the plurality of color channels, and/or of the internal structure information.
- the x-axis of the histogram distribution 50 includes pixel values and the y-axis includes pixel quantities.
- a mean pixel value is determined based on the plurality of pixel values of the plurality of pixels.
- the pixels that have a pixel value outside a range 54A defined by, for example, three standard deviations from the mean pixel value will be truncated. In other examples, two or more standard deviations may be used.
- the range 54A is covering positive pixel values (half histogram distribution).
- two ranges (54A, 54B) are defined covering both negative and positive pixel values (full histogram distribution).
- the remaining pixels that have a pixel value that are within the ranges (54A,54B) are extended (53, 55) to cover a pixel value range from 0 to 255, 0 to 511 etc.
- the extended (53, 55) pixels result in an enhanced contrast in the image/information.
- FIG. 6 illustrates different examples of fused composed scan information (65A, 65B and 65C) and an example of a composed scan information (61) determined by the one or more processors 11, and an infrared image 60 where the contrast has been improved with a contrast enhancement algorithm.
- the images (60, 65A, 65B, 65C, 61) show the same tooth of a patient. In each image, a caries lesion is seen. In the infrared image 60, the contrast between the caries and the surrounding is poor compared to the fused/composed scan information (65A, 65B, 65C, 61).
- In image 65A, the contrast of both fused composed scan information (FCOM1 and FCOM2) has not been enhanced by a contrast enhancement algorithm.
- the contrast of both fused composed scan information (FCOM1 and FCOM2) may be enhanced by a contrast enhancement algorithm that uses a histogram distribution with a half histogram distribution, i.e. the negative pixel values are all removed.
- the purpose of assigning each fused composed scan information to different colors is to explore the advantages of the chromatic differences and similarities there are between the two fused composed scan information.
- the green channel is being reduced in the NFG, and both blue and red channels are being reduced in NFR.
- FCOM1 = W6 · [(NWB − NWR) + (NWB − NWG)]
- FCOM2 = NFR − FCOM1
- FCOM3 = NFG − FCOM1
- Red: FCOM2
- Green: FCOM3
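The assignments above can be sketched as follows, reading NWB/NWR/NWG as the white-light-with-NIR channels and NFR/NFG as the fluorescence red/green composed scan information. The interpretation of W6 as a scalar weight and the empty blue display channel are assumptions made for this illustration:

```python
import numpy as np

def fuse_for_display(nwb, nwr, nwg, nfr, nfg, w6=1.0):
    """Sketch of the table above: FCOM1 is the chromatic difference of the
    blue channel against the red and green channels; FCOM2/FCOM3 subtract
    it from the fluorescence channels and are assigned to the red and green
    display colors. w6 is an assumed scalar weight; blue is left empty."""
    fcom1 = w6 * ((nwb - nwr) + (nwb - nwg))
    fcom2 = nfr - fcom1  # assigned to the first color (red)
    fcom3 = nfg - fcom1  # assigned to the second color (green)
    return np.dstack([fcom2, fcom3, np.zeros_like(fcom1)])
```

Assigning the two fused composed scan information to different display colors exposes their chromatic differences and similarities, as the text explains.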
- Image 65 includes a final fused composed scan information FCOMfinal that includes a difference between the internal structure information and the plurality of color channels, and vice versa.
- the final fused composed scan information FCOMfinal is determined according to the below table.
- the projector unit emits white light that is being captured by the image sensor unit.
- the image sensor unit provides a red channel WR, a blue channel WB and a green channel WG that includes the respective colors of the emitted white light that has been reflected and captured.
- the blue channel WB is enhanced by removing pixels of the blue channel, WB, with bright highlights, wherein a bright highlight has a pixel value of above a maximum mask pixel value threshold.
- two composed scan information (COM4, COM5) are determined, and the signal in the two composed scan information (COM4, COM5) has been enhanced by applying a mask signal to a green channel of both composed scan information (COM4, COM5) for removing pixels of the green channel.
- In the composed scan information COM4, the removed pixels of the green channel are below a minimum mask pixel value threshold that is determined solely for the green channel of the composed scan information (COM4).
- In the composed scan information COM5, the removed pixels of the green channel are above a maximum mask pixel value threshold that is determined solely for the green channel of the composed scan information (COM5).
- the at least two composed scan information are combined into a fused composed scan information (FCOM7).
- a different mask signal may be applied to the fused composed scan information Mask(FCOM1) and Mask(FCOM2) for removing pixels that fulfill different conditions determined by the different mask signal. For example, pixels of FCOM1 which have a pixel value above a first mask pixel value threshold may be removed, and pixels of FCOM2 which have a pixel value above a second mask pixel value threshold. Furthermore, a conditional mask signal is applied to the fused composed scan information FCOMx, where corresponding pixels from Mask(FCOM1) and Mask(FCOM2) are removed when a condition determined by the conditional mask signal is met. The fused composed scan information FCOMx is then combined with the fused composed scan information FCOM7 into another fused composed scan information (FCOM8).
- FCOM8: fused composed scan information
- a conditional mask signal is applied to each of the plurality of color channels of the fused composed scan information (FCOM8).
- the plurality of color channels includes red, green, blue, and infrared.
- the conditional mask signal is unique for each of the plurality of color channels, and wherein the conditional mask signal includes a mask pixel value threshold that is determined by a cumulative histogram distribution of the pixel values for each of the plurality of color channels.
- Corresponding pixels of the plurality of color channels are removed when all corresponding pixels fulfil a condition determined by the conditional mask signal.
- the condition may be to remove a pixel with a pixel value that is above the mask pixel value threshold.
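The per-channel threshold from a cumulative histogram and the all-channels condition described above can be sketched as follows; the cumulative fraction of 0.95 and all function names are illustrative assumptions:

```python
import numpy as np

def cumulative_threshold(channel, fraction=0.95):
    """Pick a mask pixel value threshold from the cumulative histogram of
    an 8-bit channel: the smallest pixel value at which the cumulative
    fraction of pixels reaches `fraction`."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = np.cumsum(hist) / channel.size
    return int(np.searchsorted(cdf, fraction))

def conditional_mask(channels, fraction=0.95):
    """Remove (here: zero out) pixels only where *all* channels exceed
    their own cumulative-histogram threshold, i.e. the condition of the
    conditional mask signal is fulfilled in every channel."""
    thresholds = {k: cumulative_threshold(v, fraction) for k, v in channels.items()}
    remove = np.ones(next(iter(channels.values())).shape, dtype=bool)
    for k, v in channels.items():
        remove &= v > thresholds[k]
    return {k: np.where(remove, 0, v) for k, v in channels.items()}, thresholds
```

Because each channel gets its own threshold from its own cumulative histogram, the conditional mask signal is unique per color channel, as the text requires.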
- a final fused composed scan information (FCOMfinal) is determined by combining the fused composed scan information (FCOM10) with the previously determined fused composed scan information (FCOM6), which comprises a combination of the two composed scan information (COM2 and COM3) that includes differences between the plurality of color channels of white light (visible light information) and near-infrared light (i.e. internal structure information).
- the final fused composed scan information (FCOMfinal) may be displayed as an image or as an overlay to an image that includes a composed scan information, internal structure information or visible light information.
- FIGS. 8A to 8D illustrate a contrast profile 82 of an image 80 that includes internal structure information that has been enhanced by a contrast enhancement algorithm, an image 81A that includes a composed scan information 81A, and an image 81B that includes the composed scan information 81B with enhanced contrast.
- the contrast profile is determined by the one or more processors for an area-of-interest 81 in the images (80, 81A, 81B).
- the contrast profile includes contrast differences between pixel values of selected pixels of the plurality of pixels of the first composed scan information, wherein the selected pixels are within the area-of-interest.
- the area-of-interest 81 is defined by a line.
- the area-of-interest 81 may be a box or another shape if a three-dimensional contrast profile is needed.
- the area-of-interest 81 is placed on a caries and partly on the enamel of a tooth.
- the contrast profile 82 includes the pixel number versus pixel values, and it is clearly seen that for both composed scan information (81A, 81B) the contrast profile is more box-shaped than for the internal structure information 80, i.e. the infrared image 80. Furthermore, comparing the composed scan information 81B (with enhanced contrast) with the internal structure information 80 (with enhanced contrast), a larger contrast is obtained between the maximum and minimum pixel value for the composed scan information with enhanced contrast.
- the contrast profile 82 illustrates clearly the advantage of using a composed scan information for identifying caries.
- FIGS. 9A to 9D illustrate a contrast profile 82 for different teeth where the area-of-interest 81 is applied to the fused composed scan information (FCOM3, 65) seen in FIG. 7.
- the contrast profile 82 is determined for the area-of-interest 81 placed across a caries of a tooth.
- the contrast profile includes a red channel of the fused composed scan information 92 and the internal structure information 91.
- the area-of-interest 81 is placed across a glare effect, and it is seen for the red channel 92 that no significant spike, which could be mistaken for a caries, appears in the contrast profile. In the internal structure information 91, a spike is seen, which could potentially create a false indication of a caries.
- the one or more processors 11 is configured to determine a dental feature within the area-of-interest 81 of the dental object by comparing the contrast profile 82 to a reference contrast profile, wherein the reference contrast profile is stored in a memory unit of the system, and wherein the reference contrast profile corresponds to a type of dental feature.
- FIGS. 10A to 10C illustrate an example where the one or more processors 11 is configured to change the position of the area-of-interest 81 for the purpose of scanning a dental object for caries or other types of dental features, such as cracks and fillings.
- the contrast profile 82 illustrated in FIG. 10B shows no caries. The contrast between maximum and minimum pixel values is not large enough to match with a caries.
- the contrast profile 82 in FIG. 10C clearly shows a caries where both the shape and the contrast between minimum and maximum pixel values have been used for identifying the caries.
- the one or more processors 11 may automatically identify the caries.
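The line-shaped area-of-interest and the contrast-based caries check described above could be sketched as follows. This is a minimal sketch assuming 2D grayscale images; the function names and the contrast threshold value are illustrative, not taken from the disclosure:

```python
import numpy as np

def contrast_profile(image, start, end, num_samples=100):
    """Sample pixel values along a line-shaped area-of-interest.

    `image` is a 2D array of pixel values; `start`/`end` are (row, col)
    endpoints of the line placed e.g. across a suspected caries.
    """
    rows = np.linspace(start[0], end[0], num_samples)
    cols = np.linspace(start[1], end[1], num_samples)
    # Nearest-neighbour sampling keeps the sketch simple.
    return image[rows.round().astype(int), cols.round().astype(int)]

def matches_caries(profile, min_contrast=80):
    """Flag a caries-like profile when the max/min pixel contrast is large.

    The threshold value is illustrative only; the disclosure also
    considers the shape of the profile.
    """
    return int(profile.max()) - int(profile.min()) >= min_contrast
```

A comparison to a stored reference profile, as described above, could additionally match the box-like shape of the profile rather than only its contrast.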
- An intraoral scanning system configured to provide a first composed scan information, the system includes a handheld intraoral scanning device that includes:
- a projector unit configured to emit light with different wavelengths during time periods onto at least a dental object, wherein the different wavelengths include an infrared wavelength and a visible wavelength
- an image sensor unit configured to capture visible light information and internal light information from at least the dental object caused by the visible wavelength and the infrared wavelength, respectively, and wherein the visible light information includes a plurality of color channels; wherein the system further comprises one or more processors operably connected to the image sensor unit, and the one or more processors is configured to:
- An intraoral scanning system configured to provide a first composed scan information, the system includes a handheld intraoral scanning device that includes:
- a projector unit configured to emit light with different wavelengths during time periods onto at least a dental object, wherein the different wavelengths include an infrared wavelength and a visible wavelength,
- an image sensor unit configured to capture one or more visible images that each includes visible light information and to capture internal images that each includes internal light information, and wherein the visible light information includes a plurality of color channels; wherein the system further comprises one or more processors operably connected to the image sensor unit, and the one or more processors is configured to:
- An intraoral scanning system configured to provide a first composed scan information, the system includes a handheld intraoral scanning device that includes:
- a projector unit configured to emit light with different wavelengths during time periods onto at least a dental object, wherein the different wavelengths include an infrared wavelength and a visible wavelength,
- an image sensor unit configured to capture one or more visible images that each includes visible light information and to capture internal images that each includes internal light information, and wherein the visible light information includes a plurality of color channels; wherein the system further comprises one or more processors at least operably connected to the image sensor unit, and the one or more processors is configured to:
- An intraoral scanning system configured to provide a first composed scan information, the system includes a handheld intraoral scanning device that includes:
- a projector unit configured to emit light with different wavelengths during time periods onto at least a dental object, wherein the different wavelengths include an infrared wavelength and a visible wavelength,
- an image sensor unit configured to capture one or more visible images that each includes visible light information and to capture internal images that each includes internal light information, and wherein the visible light information includes a plurality of color channels; wherein the system further comprises one or more processors at least operably connected to the image sensor unit, and the one or more processors is configured to:
- the plurality of color channels and the internal structure information includes a plurality of pixels
- the one or more processors is configured to enhance contrast of the plurality of color channels and/or in the internal structure information by applying a contrast enhancement algorithm.
- the contrast enhancement algorithm includes:
- the intraoral scanning system according to any of previous items, wherein the first composed scan information includes a plurality of pixels, and the one or more processors is configured to enhance contrast in the first composed scan information based on the contrast enhancement algorithm.
- the one or more processors is configured to determine a contrast profile for an area-of-interest in the composed scan information or a fused composed scan information.
- the contrast profile includes contrast differences between pixel values of selected pixels of the plurality of pixels of the first composed scan information, wherein the selected pixels are within the area-of-interest.
- the one or more processors is configured to determine a dental feature within the area-of-interest of the dental object by comparing the contrast profile to a reference contrast profile, wherein the reference contrast profile is stored in a memory unit of the system, and wherein the reference contrast profile corresponds to a type of dental feature.
- the intraoral scanning system according to any of previous items, wherein the comparing is based on a shape and/or a size of the contrast profile.
- the intraoral scanning system according to any of previous items, wherein the system includes a Bayer filter arranged in front of the image sensor unit, such that the visible light information and internal light information is forwarded to the image sensor unit by the Bayer filter, and wherein the plurality of color channels of the visible light information includes red, blue and green wavelengths filtered by the Bayer filter.
- the plurality of color channels of the visible light information includes fluorescence red and fluorescence green wavelengths filtered by the Bayer filter.
- the plurality of color channels includes red, blue, green, fluorescence red and fluorescence green wavelengths.
- the one or more processors is configured to determine a second composed scan information that includes a difference between the internal structure information and one or more of the plurality of color channels with assigned weight coefficients, and wherein the one or more of the plurality of color channels of the first composed scan information is different from the one or more of the plurality of color channels of the second composed scan information.
- the one or more processors is configured to determine a fused composed scan information that includes a difference or summation between the first composed scan information and the second composed scan information.
- the intraoral scanning system according to any of previous items wherein the contrast in the fused composed scan information is adjusted by the contrast enhancement algorithm.
- the intraoral scanning system according to any of previous items, comprising a displaying unit configured to display the first composed scan information and the second composed scan information in a window of the displaying unit, and wherein the first composed scan information is displayed having a first main color, and the second composed scan information is displayed having a second main color, and wherein the first main color is different from the second main color.
- the one or more processors is configured to determine a three-dimensional model of the dental object based on one or more of the plurality of color channels.
- the intraoral scanning system comprising a displaying unit configured to display the first composed scan information or the fused composed scan information in a first window of the displaying unit.
- the displaying unit is configured to display the first composed scan information and the 3D model of the dental object in the first window, or, to display the first composed scan information and/or the second composed scan information in a first window of the displaying unit and the 3D model in a second window of the displaying unit.
- the intraoral scanning system according to item 31, wherein the combination of the plurality of color channels includes corresponding pixels which for the plurality of color channels have a pixel value that is larger than zero.
- the histogram distribution is a cumulative histogram distribution, wherein a percentage of remaining pixels of each of the plurality of color channels and the internal structure information determines the level of the mean pixel value threshold.
- the one or more processors is configured to apply a plurality of conditional mask signals to one or more color channels of a plurality of composed scan information that includes the first composed scan information and at least a second composed scan information, wherein the plurality of conditional mask signals removes pixels of the plurality of composed scan information which do not fulfil the conditions determined by the plurality of conditional mask signals, and then the plurality of composed scan information is combined into a fused composed scan information.
- the one or more processors is configured to connect neighbouring pixels of the first composed scan information or a fused composed scan information when neighbouring pixels are above a pixel value threshold.
- the one or more processors is configured to remove connected neighbouring pixels from the first composed scan information or a fused composed scan information based on an area size of the connected neighbouring pixels.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Dentistry (AREA)
- Veterinary Medicine (AREA)
- Public Health (AREA)
- Animal Behavior & Ethology (AREA)
- Physics & Mathematics (AREA)
- Epidemiology (AREA)
- Medical Informatics (AREA)
- Heart & Thoracic Surgery (AREA)
- Molecular Biology (AREA)
- Surgery (AREA)
- Biomedical Technology (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Engineering & Computer Science (AREA)
- Pathology (AREA)
- Biophysics (AREA)
- Optics & Photonics (AREA)
- Endoscopes (AREA)
Abstract
The present disclosure relates to an intraoral scanning system configured to provide a first composed scan information. The system includes a handheld intraoral scanning device that includes a projector unit configured to emit light with different wavelengths during time periods onto at least a dental object, wherein the different wavelengths include an infrared wavelength and a visible wavelength, and an image sensor unit configured to capture one or more visible images that each includes visible light information and to capture internal images that each includes internal light information, and wherein the visible light information includes a plurality of color channels. The system further comprises one or more processors at least operably connected to the image sensor unit. The one or more processors is configured to receive the visible light information and the internal light information, determine internal structure information of the dental object from the internal light information, assign a weight coefficient to each of the plurality of color channels of the visible light information, determine a first composed scan information that includes a difference between the internal structure information and one or more color channels of the plurality of color channels with assigned weight coefficients, and enhance distinguishability of one or more internal structures of the dental object in the first composed scan information by adjusting one or more of the weight coefficients of the one or more color channels.
Description
AN INTRAORAL SCANNING SYSTEM FOR IMPROVING COMPOSED SCAN INFORMATION
FIELD
The disclosure relates to an intraoral scanning system that is configured to determine a composed scan information that includes enhanced internal structure information and/or enhanced textural information of a tooth. More specifically, the disclosure relates to an improved determination of composed scan information which includes further enhanced internal structural information, such as caries.
BACKGROUND
Many dental and orthodontic procedures can benefit from accurate three-dimensional (3D) descriptions of a patient's dentition and intraoral cavity. In particular, it would be helpful to provide a three-dimensional description of both the surface and internal structures of the teeth, including the enamel and dentin, as well as caries and the general internal composition of the tooth volume. Although pure surface representations of the 3D surfaces of teeth have proven extremely useful in the design and fabrication of dental restorations (e.g., crowns or bridges) the ability to image internal structures including the development of caries and dental cracks in the enamel and underlying dentin would be tremendously useful, particularly in conjunction with a surface topographical mapping.
State of the art, ionizing radiation (e.g., X-rays) has been used to image the teeth for diagnostic purposes. For example, X-ray bitewing radiographs are often used to provide non-quantitative images of the teeth's internal structures. However, in addition to the risk of ionizing radiation, such images are typically limited in their ability to show early tooth mineralization changes (e.g. initial caries) resulting in underestimation of the demineralization depth; they are unable to assess the presence or not of micro-cavitation; they result in frequent overlap of the approximal tooth surfaces which requires repetition of radiograph acquisition and thus may involve a lengthy and expensive procedure.
Some intraoral features such as soft tissues and dental plaque are usually not visualized via X-ray because of their low density. Other techniques, such as cone beam computed
tomography (CBCT) may provide tomographic images and be used to collect more information about the tissues and internal structure, but still require ionizing radiation. Furthermore, it is known that infrared (IR) light can be used for assessing internal structure of a tooth and tooth surface in the form of transillumination of teeth or light reflection and backscattering from teeth. The IR range offers a non-ionizing and safe approach to assess dental caries, restorations, cracks, enamel and dentin defects.
A tooth being exposed to infrared light would result in multiple reflections of the infrared light that at least partly would be captured by an image sensor unit, which results in an infrared image that includes the multiple reflected infrared light. The multiple reflected infrared light would include reflections from an outer surface of the tooth and from internal structures, such as dentin, enamel, caries, fillings, cracks etc. A caries lesion has different severity levels that at least are evaluated based on the placement of the caries. A caries lesion that is placed in the enamel can be detected on an infrared image; however, detection is impaired due to reflections from the tooth outer surface and reflections as well as absorption and scattering from internal regions of the tooth. Therefore, there is a need to remove at least the different reflections from the internal image in order to improve a user's or an algorithm's ability to detect a caries inside the tooth regardless of the placement of the caries.
SUMMARY
It is an aspect of the present disclosure to overcome the above-mentioned difficulty of detecting a caries on an infrared image.
It is a further aspect of the present disclosure to provide an automatic detection of internal structures of a tooth.
According to the aspects, an intraoral scanning system is disclosed. The intraoral scanning system is configured to provide a first composed scan information that includes enhanced internal structures. The system includes a handheld intraoral scanning device that includes a projector unit configured to emit light with different wavelengths during time periods onto at least a dental object, wherein the different wavelengths include an infrared wavelength and a visible wavelength. The visible wavelength may include wavelengths
between 350 nm and 750 nm. The infrared wavelengths may include one or more wavelengths between 800 nm and 1100 nm. The projector unit may include a plurality of light sources that is configured to emit the different wavelengths. The projector unit may be configured to switch between the different wavelengths by switching on and off the plurality of light sources or by turning up and down the power of the plurality of light sources. The time periods may include a first time period that is assigned for emitting light that includes visible wavelengths between 450 nm and 750 nm. The time periods may include a second time period that is assigned for emitting light that includes an infrared wavelength. The time periods may include a third time period that is assigned for emitting yet another visible wavelength, for example, between 350 nm and 450 nm. In the respective time periods the power of the light source that is not emitting light is either turned off or turned down in power.
The handheld intraoral scanning device may include a tip housing and a main housing, wherein the tip housing is configured to be inserted into an oral cavity of a patient, and the main housing is configured to be handheld by a user. The plurality of light sources may be arranged within the tip housing and/or the main housing. For example, one or more light sources of the plurality of light sources that is configured to emit infrared wavelengths may be arranged in the tip housing and the one or more light sources of the plurality of light sources that is configured to emit visible wavelengths may be arranged in the main housing or the tip housing. In yet another example, the plurality of light sources may be arranged in the tip housing and/or the main housing.
The handheld intraoral scanning device may include an image sensor unit configured to capture one or more visible images that each includes visible light information and to capture internal images that each includes internal light information. The system may include a Bayer filter arranged in front of the image sensor unit. The image sensor unit may include one or more cameras, wherein each of the one or more cameras include a pixel array that may be configured to isolate the received visible light into a plurality of color channels. Furthermore, one or more pixels of the pixel array may be configured to receive infrared light. Furthermore, one or more pixels of the pixel array may be configured to receive both visible light and infrared light.
The pixel array may include color pixels configured to receive red, green and blue wavelengths. Furthermore, the pixel array may include infrared pixels configured to receive infrared light, or, to receive both infrared light and visible light. One or more of the plurality of cameras may be arranged in the tip housing and/or in the main housing. For example, one of the plurality of cameras may be configured to solely receive infrared light and is arranged next to a light source of the plurality of light sources that is configured to emit light with infrared wavelengths.
The one or more visible images includes visible light information that is separated into a plurality of color channels. The visible light information includes the plurality of color channels which includes at least red, green and blue channels.
The system may further comprise one or more processors operably connected to at least the image sensor unit, and the one or more processors may be configured to receive the visible light information and the internal light information, wherein internal structure information of the dental object may be determined based on the internal light information. The internal light information may include imageable information that includes the internal structure information. The internal structure information may include imageable information of internal structures of the dental object, wherein the internal structures may be enamel, dentin, caries, cracks, fillings etc.
The one or more processors may further be configured to assign a weight coefficient for each of the plurality of color channels of the visible light information, and to determine a first composed scan information that includes a difference between the internal structure information and one or more color channels of the plurality of color channels with assigned weight coefficients.
The difference between the internal structure information and the one or more of the plurality of color channels is a non-linear operation and is not commutative, i.e. A-B is not the same as -(B-A). This is because of the limited extension of the space in which a pixel value exists, i.e. [0, 255]. Truncating both the internal structure information and the one or more
of the plurality of color channels the same way (i.e. at 0 and 255, respectively) without any expansion of the histogram will still be a non-commutative operation.
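A small numeric sketch of this non-commutativity under truncation, assuming 8-bit pixel values and NumPy clipping:

```python
import numpy as np

# 8-bit pixel values live in [0, 255]; subtraction followed by
# truncation at 0 is therefore not commutative: clip(A - B) is
# generally not equal to -clip(B - A), because the truncation
# discards the negative part of the difference.
A = np.array([200, 50], dtype=np.int16)
B = np.array([120, 130], dtype=np.int16)

ab = np.clip(A - B, 0, 255)   # [80, 0]
ba = np.clip(B - A, 0, 255)   # [0, 80]
print(ab, -ba)                # the two results differ
```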
The one or more processors may then be configured to enhance the distinguishability of one or more internal structures of the dental object in the first composed scan information by adjusting one or more of the weight coefficients of the one or more color channels. Thereby, the user of the system would be able to see internal structures of the dental object more easily. One or more of the weight coefficients may be adjusted in order to see caries, cracks and/or fillings more clearly. One way of enhancing the distinguishability is provided by the linear minimization of the difference between the internal structure information and the one or more color channels of the plurality of color channels by adjusting the weight coefficients. The one or more color channels and the internal structure information include a first group of pixels with common information and a second group of pixels with unique information, and by performing the linear minimization the first group of pixels is removed and the second group of pixels is retained in the composed scan information or the fused composed scan information. A further distinguishability in the form of enhanced contrast may be applied by a contrast enhancement algorithm.
The purpose of the linear minimization is to isolate an internal structure, such as enamel, dentin, caries, cracks or fillings. Specifically, the purpose is to isolate caries in the internal structure information. In another example, the purpose may be to isolate the enamel and dentin, which are then combined with the isolated caries.
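One plausible reading of this linear minimization is an ordinary least-squares fit of the weight coefficients, sketched below. The function name and the least-squares formulation are assumptions for illustration, not the disclosed method itself:

```python
import numpy as np

def minimize_weights(ir, channels):
    """Least-squares fit of per-channel weight coefficients.

    `ir` is the internal structure information (H x W); `channels` is a
    list of color channels (each H x W). The fitted weights minimize
    ||ir - sum_i w_i * channel_i||, so the residual (the composed scan
    information) suppresses pixels common to the infrared image and the
    color channels and keeps mostly the pixels unique to the infrared
    image, e.g. caries.
    """
    A = np.stack([c.ravel().astype(float) for c in channels], axis=1)
    b = ir.ravel().astype(float)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    com = b - A @ w              # residual = composed scan information
    return w, com.reshape(ir.shape)
```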
The system may comprise one or more processors at least operably connected to the image sensor unit, and the one or more processors may be configured to receive the visible light information and the internal light information, determine internal structure information of the dental object from the internal light information, determine a first composed scan information that includes a difference between the internal structure information and one or more color channels of the plurality of color channels, and enhance the contrast in the first composed scan information by applying a contrast enhancement algorithm to each of the one or more color channels of the plurality of color channels. In this example of the
one or more processors, the weight coefficient is set to 1 for each of the plurality of color channels. The enhancement of the contrast is applied by a contrast enhancement algorithm. Different or same contrast enhancement algorithms may be applied to the internal structure information and/or the first composed scan information. The contrast enhancement algorithm may be applied to each of the plurality of color channels, the internal structure information and the fused/composed scan information one or more times for improving the enhancement of the contrast.
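The disclosure does not fix a particular contrast enhancement algorithm; a percentile-based histogram stretch is one common choice and is sketched below. The function name and parameter values are illustrative:

```python
import numpy as np

def stretch_contrast(channel, low_pct=1, high_pct=99):
    """A minimal percentile-based contrast stretch.

    Pixel values between the low/high percentiles are linearly mapped
    to the full [0, 255] range, which widens the histogram of a color
    channel, the internal structure information, or a composed scan
    information. It may be applied repeatedly, as described above.
    """
    lo, hi = np.percentile(channel, [low_pct, high_pct])
    if hi <= lo:                       # flat input: nothing to stretch
        return channel.astype(np.uint8)
    out = (channel.astype(float) - lo) * 255.0 / (hi - lo)
    return np.clip(out, 0, 255).astype(np.uint8)
```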
According to the aspects, another intraoral scanning system is disclosed. The intraoral scanning system is configured to provide a first composed scan information. The system includes a handheld intraoral scanning device that includes a projector unit configured to emit light with different wavelengths during time periods onto at least a dental object, wherein the different wavelengths include an infrared wavelength and a visible wavelength, and an image sensor unit configured to capture visible light information and internal light information from at least the dental object caused by the visible wavelength and the infrared wavelength, respectively, and wherein the visible light information includes a plurality of color channels. The system may further comprise one or more processors operably connected to the image sensor unit, and the one or more processors is configured to receive the visible light information and the internal light information, determine internal structure information of the dental object from the internal light information, assign a weight coefficient for each of the plurality of color channels of the visible light information, and determine a first composed scan information that includes a minimized difference between the internal structure information and one or more color channels of the plurality of color channels with assigned weight coefficients, or vice versa.
The minimized difference may be between pixels of the internal structure information and corresponding pixels of the plurality of color channels with assigned weight coefficients. Each of the plurality of color channels corresponds to a color of a pixel of an image sensor unit. Furthermore, the combination of the plurality of color channels corresponds to the color of a pixel in the visible light information, i.e. in a color image.
The minimized difference may be determined by adjusting the weight coefficient for each of the one or more color channels.
For example, the first composed scan information (COM) may be determined by the following equations:

CVIS = W1*White[Blue] + W2*White[Red] + W3*White[Green],

COM = IR[IR] - CVIS or COM = CVIS - IR[IR].

In the above example, the projected visible light may be white light that is separated into a plurality of color channels that includes a blue channel [Blue], a red channel [Red] and a green channel [Green]. The three color channels have each been assigned a weighting coefficient (W1, W2 and W3). The infrared light IR is assigned to an infrared channel [IR]. The first composed scan information is then determined by the subtraction of the internal structure information from each of the plurality of color channels, or vice versa.
Another example is seen below. In this example, the infrared light IR is assigned to a color channel of the plurality of color channels, and in this specific example, the color channel is [Green]. In this specific example, the color channel [Green] is configured to receive both green and infrared light. The remaining color channels of the plurality of color channels include a filter for filtering out the infrared light.

CVIS = W1*White[Blue] + W2*White[Red] + W3*White[Green],

COM = IR[Green] - CVIS or COM = CVIS - IR[Green].
Yet another example is seen below. In this example the plurality of color channels includes a blue channel [Blue], a red channel [Red], a green channel [Green], a fluorescence red channel [Red] and a fluorescence green channel [Green], and each of the color channels is weighted (W1, W2, W3, W4, W5). The fluorescence red and fluorescence green may be combined in the internal light information, i.e. in a fluorescent image, and separated into a color channel of the plurality of color channels.

CVIS = W1*White[Blue] + W2*White[Red] + W3*White[Green] + W4*Fluo[Red] + W5*Fluo[Green],

COM = IR[Green] - CVIS or COM = CVIS - IR[Green].

The weighting coefficients (W1, W2, W3, W4, W5) may be between 0 and 1.
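The first of the equations above could be sketched as follows. The integer intermediates and the final truncation to [0, 255] are assumptions consistent with the truncation discussed earlier; the weight values in the usage example are illustrative:

```python
import numpy as np

def composed_scan(ir, blue, red, green, w=(1.0, 1.0, 1.0)):
    """Sketch of COM = IR[IR] - CVIS with per-channel weights.

    CVIS is the weighted sum of the color channels; COM is the
    truncated difference between the infrared channel and CVIS.
    """
    w1, w2, w3 = w
    cvis = (w1 * blue.astype(np.int16)
            + w2 * red.astype(np.int16)
            + w3 * green.astype(np.int16))
    # Truncate at [0, 255] as in the non-commutative subtraction above.
    com = np.clip(ir.astype(np.int16) - cvis, 0, 255)
    return com.astype(np.uint8)
```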
The plurality of color channels and the infrared channels may be generated by a Bayer filter arranged in front of the image sensor unit or in front of one or more cameras of the image sensor unit.
The system may include a Bayer filter arranged in front of the image sensor unit, such that the visible light information and internal light information is forwarded to the image sensor unit by the Bayer filter, and wherein the plurality of color channels of the visible light information includes red, blue and green wavelengths filtered by the Bayer filter. One or more of the color channels includes an infrared filter which is configured to remove infrared light from the one or more of the color channels. For example, the Bayer filter includes an infrared filter that is configured to remove infrared light from the red, the blue and one or more green channels of the plurality of color channels, which means that the remaining one or more green channels of the plurality of color channels are configured to receive infrared light. In another example, the Bayer filter includes an infrared channel configured to solely receive infrared light.
The projector unit may be configured to emit visible light that includes UV light, and the image sensor unit may be configured to receive fluorescence light, such as fluorescence red and fluorescence green. In this example, the red and green channels of the plurality of color channels are configured to receive fluorescence red and fluorescence green, respectively, as there is no difference in wavelength or nature between fluorescence red or green and regular red or green wavelengths.
The plurality of color channels of the visible light information includes fluorescence red and fluorescence green wavelengths filtered by the Bayer filter. The plurality of color channels includes color channels that comprise fluorescence red and fluorescence green wavelengths, respectively.

The plurality of color channels includes color channels that comprise red, blue, green, fluorescence red and fluorescence green wavelengths, respectively.
It has been shown that separating the white light into the plurality of color channels and providing a weighting coefficient to each of the plurality of color channels, rather than one generic weighting coefficient for all color channels, has improved the enhancement of distinguishability of one or more internal structures in the first composed scan information. This means that, for example, caries has become even more distinguishable in comparison to a regular infrared image or a composed scan information with the generic weighting coefficient for all color channels. The one or more color channels and the internal structure information include a first group of pixels with common information and a second group of pixels with unique information, and by performing the linear minimization the first group of pixels is removed and the second group of pixels is retained in the composed scan information or in the fused composed scan information.
The first composed scan information or the fused composed scan information may include channel-based color fusion which combines pixels of the plurality of color channels of the visible light information and the internal structure information that fulfil a conditional mask signal that is unique for each of the plurality of color channels. The plurality of color channels includes red, green, blue, fluorescent red, fluorescent green and infrared. The conditional mask signal is unique for each of the plurality of color channels, wherein the conditional mask signal includes a mask pixel value threshold that is determined for each of the plurality of color channels. To remove a corresponding pixel from the plurality of color channels, the condition defined by the conditional mask signal has to be fulfilled by the corresponding pixels in all of the plurality of color channels. For example, to remove high brightness pixels, the conditional mask signal may determine to remove pixels of the plurality of color channels which have a pixel value that is above the mask pixel value threshold. In another example, to remove dark pixels, the conditional mask signal may determine to remove pixels of the plurality of color channels which have a pixel value that is below the mask pixel value threshold, where the threshold for each of the plurality of color channels is different and unique.
The channel-based color fusion may include separating a fused composed scan information or a composed scan information into a plurality of wavelength channels
including red, green, blue and infrared wavelengths. A conditional mask signal may be applied to each of the plurality of wavelength channels, wherein the conditional mask signal is configured to remove corresponding pixels of the plurality of wavelength channels which have a pixel value that is below or above a mask pixel value threshold, where the threshold for each of the plurality of color channels is different and unique.
The mask pixel value threshold may be determined based on a histogram distribution of pixel values of a color channel. The histogram distribution may be a cumulative histogram distribution, an adaptive histogram distribution, a histogram equalization or a regular histogram distribution.
The mask pixel value threshold may be a mean pixel value determined by a cumulative histogram distribution, an adaptive histogram distribution, a histogram equalization or a regular histogram distribution. The histogram distribution may be one wherein a percentage of remaining pixels of each of the plurality of color channels determines the level of the mask pixel value threshold.
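As an illustration only, the per-channel threshold selection and the all-channels condition described above can be sketched in Python with NumPy; the function names and the fixed keep-fraction are assumptions for this sketch, not part of the system:

```python
import numpy as np

def channel_thresholds(channels, keep_fraction=0.3):
    # Mask pixel value threshold per channel, read off the cumulative
    # histogram: keep roughly the brightest `keep_fraction` of pixels.
    return {name: np.quantile(img, 1.0 - keep_fraction)
            for name, img in channels.items()}

def conditional_mask(channels, thresholds):
    # A corresponding pixel is removed (set to 0) only when the
    # condition -- here: value below the channel's own threshold --
    # is fulfilled in ALL channels simultaneously.
    names = list(channels)
    below = np.stack([channels[n] < thresholds[n] for n in names])
    remove = below.all(axis=0)
    return {n: np.where(remove, 0, channels[n]) for n in names}
```

With explicit thresholds, a pixel that is dark in every channel is removed from all of them, while a pixel that is bright in at least one channel survives everywhere.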
To improve the contrast or the distinguishability of an internal structure in a composed scan information or a fused composed scan information, a conditional mask signal or a regular mask signal may be applied to the pixels of the composed scan information or the fused composed scan information.
The regular mask signal is configured to remove corresponding pixels of the plurality of color channels which have a pixel value that is below or above a mask pixel value threshold. The mask pixel value threshold is the same for the plurality of color channels. The mask pixel value threshold is a predetermined threshold that, for example, depends on the type of the handheld intraoral scanning device, i.e., how the optical illumination and imaging system is designed, relative illumination powers and field-of-view of the different light sources, different placements of the different light sources, etc.
The conditional mask signal is configured to remove one or more pixels of the plurality of color channels based on a condition that includes a mask pixel value threshold that is
determined for each of the plurality of color channels. To remove one or more pixels of the plurality of color channels, the corresponding one or more pixels of the plurality of color channels have to fulfil the condition. The condition may be that the corresponding one or more pixels have a pixel value that is above a corresponding mask pixel value threshold.
The enhancement of the distinguishability of one or more internal structures in the first composed scan information may be determined by minimizing the difference between the internal structure information and one or more color channels of the plurality of color channels by adjusting one or more of the weight coefficients for the plurality of color channels. The weight coefficients adjusted for the purpose of minimizing the difference would result in, when combining the plurality of color channels, an image which resembles an infrared image to the extent that dentine and enamel are seen more clearly in the image with the adjusted weighting coefficients. In this example, the internal structures, i.e. caries, obtain a better distinguishability, which improves the visibility of, for example, a caries lesion.
The difference between the internal structure information and the one or more color channels may be between pixels of the internal structure information and corresponding pixels of the plurality of color channels.
The minimized difference may be determined based on a least squares method of the difference between the internal structure information and the one or more color channels of the plurality of color channels with assigned weight coefficients.
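A minimal sketch of such a least-squares weight determination, assuming NumPy arrays for the NIR image and the color channels (the function names and array layout are hypothetical):

```python
import numpy as np

def fit_channel_weights(nir, channels):
    # Solve min_w || nir - sum_c w_c * channel_c ||^2 (least squares).
    # `nir` is an HxW array; `channels` is a list of HxW arrays.
    A = np.stack([c.ravel() for c in channels], axis=1)  # pixels x channels
    b = nir.ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def composed_scan(nir, channels, w):
    # Composed scan information: difference between the internal
    # structure information and the weighted color channels.
    return nir - sum(wi * c for wi, c in zip(w, channels))
```

When the NIR image is an exact weighted combination of the channels, the fitted weights recover those coefficients and the composed scan difference is zero; in practice the residual carries the channel-unique structure.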
The system may include a displaying unit that is configured to display the first composed scan information either as an image or as an overlay to another image, wherein the other image may be a color image that includes one or more of the plurality of color channels or an infrared image that includes the internal structure information.
The plurality of color channels and the internal structure information include a plurality of pixels, wherein the one or more processors may be configured to enhance the contrast
of the plurality of color channels and/or of the internal structure information by applying a contrast enhancement algorithm.
The one or more processors may be configured to enhance the contrast of the first composed scan information by applying the contrast enhancement algorithm.
It would be beneficial to use the contrast enhancement algorithm on each of the plurality of color channels, the internal structure information and the first composed scan information. Performing contrast enhancement both before and after determining the first composed scan information would improve the contrast even further. It may be beneficial to repeatedly use the contrast enhancement algorithm before and after determining the first composed scan information, to the extent that the image quality does not decrease.
The contrast enhancement algorithm may be a nonlinear enhancement algorithm like a histogram equalization or an adaptive histogram equalization. When the histogram of an image, such as the internal structure information or the visible light information, is equalized, all pixel values of the image are redistributed so that there is approximately an equal number of pixels in each of the user-specified output grayscale classes (e.g., 32, 64, and 256). Contrast is increased at the most populated range of brightness values of the histogram (or "peaks"). It automatically reduces the contrast in very light or dark parts of the image associated with the tails of a normally distributed histogram. Histogram equalization separates pixels into distinct groups if there are few output values over a wide range. Histogram equalization is effective only when the original image has poor contrast to start with, such as for the internal structure information in relation to identifying internal structures, such as caries, fillings, and cracks.
In the adaptive histogram equalization, the image is divided into sections, such as rectangular domains, wherein an equalizing histogram is determined and modified for each of the sections, so that lightness values of the pixels are redistributed in the image.
In another example, the contrast enhancement algorithm may be a linear enhancement algorithm like a piecewise contrast stretching that is applied to each piece of intensity
intervals of the pixels. For example, pixels with intensities between a minimum and a maximum intensity are linearly scaled to, for example, the range 0 to 1 or 0 to 255, without changing pixels outside that interval.
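As a hedged illustration of linear stretches of this kind, two variants named in the list below can be sketched as follows (function names, output ranges and percentile values are assumptions of this sketch):

```python
import numpy as np

def minmax_stretch(img, out_min=0.0, out_max=255.0):
    # Minimum-Maximum Linear Contrast Stretch (MMLC): map the observed
    # [min, max] intensity range linearly onto [out_min, out_max].
    lo, hi = img.min(), img.max()
    if hi == lo:  # flat image: nothing to stretch
        return np.full_like(img, out_min, dtype=float)
    return (img - lo) / (hi - lo) * (out_max - out_min) + out_min

def percent_stretch(img, p_low=2, p_high=98, out_max=255.0):
    # Percentage Linear Contrast Stretch (PLC): clip to the
    # [p_low, p_high] percentile range before stretching.
    lo, hi = np.percentile(img, [p_low, p_high])
    return np.clip((img - lo) / (hi - lo), 0, 1) * out_max
```

The percentage variant is more robust to a few extreme glare or noise pixels, since the stretch is anchored at percentiles instead of the absolute extrema.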
The contrast enhancement algorithm may be one or more of following linear contrast enhancement algorithms or a combination of two or more of the following linear contrast enhancement algorithms:
• Minimum-Maximum Linear Contrast Stretch (MMLC),
• Percentage Linear Contrast Stretch (PLC), and
• Piecewise Linear Contrast Stretch (PWLC).
The contrast enhancement algorithm may be one or more of following non-linear contrast enhancement algorithms or a combination of two or more of the following non-linear contrast enhancement algorithms:
• Histogram distribution (HD),
• Histogram Equalizations (HE),
• Adaptive Histogram Equalization (AHE) and
• Homomorphic Filters (HF) including a combination of low pass and high pass filtering.
The contrast enhancement algorithm may be a combination of one or more of the nonlinear contrast enhancement algorithms and one or more of the linear contrast enhancement algorithms.
The contrast enhancement algorithm may include determining a histogram distribution of a plurality of pixels. The histogram distribution includes a distribution of a plurality of pixel values of the plurality of pixels, wherein the plurality of pixel values is within a first pixel value range. The algorithm may further determine a mean pixel value based on the plurality of pixel values, which is subtracted from each of the plurality of pixel values, and then truncate one or more of the plurality of pixels which have a pixel value that is outside three standard deviations from zero, wherein the three standard deviations include
pixel values above the mean pixel value, or both above and below the mean pixel value. The example where the three standard deviations include only positive pixel values, or only pixel values above the mean pixel value, may be defined as a half histogram distribution, and the example where the three standard deviations include both positive and negative pixel values, or pixel values both above and below the mean pixel value, may be defined as a full histogram distribution. The contrast enhancement algorithm may then extend the pixel values of the truncated histogram distribution such that the pixel values of the truncated histogram distribution are within a second pixel value range, wherein the second pixel value range is larger than the first pixel value range. The second pixel value range may be between 0 and 255, 0 and 511, etc.
The three standard deviations are an example; they could be replaced by a single standard deviation, or two, four, five standard deviations and so on.
The truncation and the extended distribution of the remaining pixels of the plurality of pixels over the second pixel value range results in an improved contrast of the internal structure information, the plurality of color channels and the composed scan information, i.e. the first composed scan information.
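The mean-subtraction, truncation and range-extension steps described above can be sketched as follows; the `full` flag switches between the full and half histogram variants, and all names, defaults and the clipping interpretation of "truncating" are assumptions of this sketch:

```python
import numpy as np

def truncated_stretch(img, k=3.0, out_range=(0.0, 255.0), full=True):
    # 1) subtract the mean pixel value,
    # 2) truncate values farther than k standard deviations from zero
    #    (full histogram: both tails; half histogram: upper tail only,
    #    keeping values above the mean),
    # 3) extend the surviving range onto the larger `out_range`.
    x = img.astype(float) - img.mean()
    s = img.std()
    if full:
        x = np.clip(x, -k * s, k * s)
    else:
        x = np.clip(np.clip(x, None, k * s), 0, None)
    lo, hi = x.min(), x.max()
    if hi == lo:
        return np.full_like(x, out_range[0])
    return (x - lo) / (hi - lo) * (out_range[1] - out_range[0]) + out_range[0]
```

After the stretch the surviving pixels span the full second pixel value range, which is what yields the improved contrast.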
The first composed scan information may include a plurality of pixels, and the one or more processors may be configured to enhance contrast in the first composed scan information based on the contrast enhancement algorithm.
A composed scan information may include a plurality of pixels, and the one or more processors may be configured to enhance contrast in the composed scan information based on the contrast enhancement algorithm.
The one or more processors may be configured to determine different composed scan information based on the plurality of color channels and the internal structure information. The different composed scan information may at least include the first composed scan information and a second composed scan information which may be combined to form a fused scan information.
The one or more processors may be configured to determine the second composed scan information which includes a difference between the internal structure information and one or more of the plurality of color channels with assigned weight coefficients, and wherein the one or more of the plurality of color channels of the first composed scan information is different from the one or more of the plurality of color channels of the second composed scan information.
The first composed scan information and the second composed scan information may be combined into a fused scan information. In this example, both the second composed scan information and the fused scan information may have the contrast enhanced by the contrast enhancement algorithm. The contrast in the fused composed scan information may be adjusted by the contrast enhancement algorithm.
The first composed scan information may include a difference between one color channel of the plurality of color channels and the internal structure information, and the second composed scan information may include a difference between multiple color channels of the plurality of color channels and the internal structure information. For example, the first composed scan information includes a difference between the internal structure information and the fluorescent red channel of the plurality of color channels, and the second composed scan information includes a difference between the internal structure information and a blue channel of the plurality of color channels and a difference between the internal structure information and a green channel of the plurality of color channels. The fused composed scan information may be displayed as an image. The contrast of the fused composed scan information is improved by applying a mask signal to the fused composed scan information. The mask signal removes pixels of the plurality of pixels with a pixel value that is below a minimum pixel threshold and/or above a maximum pixel threshold.
The fused composed scan information may be combined with another fused composed scan information, wherein the fused composed scan information is assigned a first color and the other fused composed scan information is assigned a second color. Thereby, it would be possible to isolate an internal structure, such as caries, in the fused
composed scan information and visualize the caries with the first color, while the other fused composed scan information includes internal structures, such as the dentine and enamel, that are visualized with the second color. Alternatively, the two composed scan information behave differently when it comes to unwanted reflections from the internal structure information, e.g. near-infrared light, but behave similarly when it comes to caries. Then it is possible to separate the caries lesion from the reflections even though they look similar in raw images. In this example, the displaying of the combined fused composed scan information would improve the user's ability to identify caries.
The visualization of both fused composed scan information may be on a displaying unit. The user may determine the color of both the first and second color as the user may have preferences on color combinations.
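One possible way to render two fused composed scan information in a single image with user-selected first and second colors is sketched below; the function name, the RGB triples and the linear blend are illustrative assumptions, not the system's actual rendering:

```python
import numpy as np

def two_color_overlay(fcom_a, fcom_b, color_a=(1, 1, 0), color_b=(0, 1, 0)):
    # Combine two fused composed scans (values in [0, 1]) into one RGB
    # image: fcom_a is tinted with the first (user-chosen) color and
    # fcom_b with the second, so e.g. caries and dentine/enamel can be
    # told apart by color.
    rgb = np.zeros(fcom_a.shape + (3,))
    for i in range(3):
        rgb[..., i] = color_a[i] * fcom_a + color_b[i] * fcom_b
    return np.clip(rgb, 0.0, 1.0)
```

Since the colors are plain parameters, the user preference on color combinations mentioned above maps directly onto `color_a` and `color_b`.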
The one or more processors may be configured to determine a fused composed scan information that includes a difference or summation between the first composed scan information and the second composed scan information. The fused composed scan information may include further composed scan information. The contrast in the fused composed scan information may be adjusted by the contrast enhancement algorithm.
The table below illustrates an example of a channel-based color fusion of two fused composed scan information (FCOM1 and FCOM2) that are being fused into a single image, i.e. a new fused composed scan information:
In the above example, the internal structure information is represented by near-infrared light (NIR). In the above example, the fluorescent red channel (FluoRed) and the fluorescent green channel (FluoGreen) are subtracted (NFR and NFG) from the internal structure
information (NIR). The green, red and blue channels (WhiteGreen, WhiteRed, WhiteBlue) of the visible light information are subtracted (NWG, NWR, NWB) from the internal structure information (NIR). The red color channel of the first fused composed scan information (FCOM1) and the green color channel of the second fused composed scan information (FCOM2) are being fused together in a new fused composed scan information. The new fused composed scan information provides a visualization of a caries with a yellowish color, the enamel with black and the dentine with a greenish color. For improving the contrast further, a first mask signal may be applied to the new fused composed scan information. The first mask signal removes pixels with pixel values that are below a mean pixel value threshold. A second mask signal may be applied to the already masked fused composed scan information. The second mask signal may remove colors that are not relevant for the caries. Or, both mask signals remove pixels with pixel values that are above or below a mean pixel value threshold, and a third mask signal is applied for retaining the pixels that survived the previously applied mask signals, i.e. provided by the first and the second mask signal. The user may be able to apply these mask signals manually via a user interface of the system, or the one or more processors may automatically apply these mask signals for improving the visibility of the caries on the displaying unit. The red and green colors assigned to the first and the second fused composed scan information are just examples. It can be any colors that enhance the visibility of one or more internal structures of a tooth.
The table below illustrates another example of a channel-based color fusion where two single color channels of two fused composed scan information (FCOM4 and FCOM5) are being fused into a single image, i.e. a new fused composed scan information:
In the example above, the first fused composed scan information (FCOM3) includes a chromatic difference between the plurality of color channels of the visible light information and the internal structure information. A second (FCOM4) and a third (FCOM5) fused composed scan information use the chromatic difference (FCOM3) between NIR and the white color channels of the plurality of color channels in combination with the fluorescent red (NFR) and the fluorescent green (NFG) composed scan information. This approach creates fewer instances where proximal signals are accidentally subtracted away, and fewer false highlights (background signals that are highlighted instead of being removed) appear, while still maintaining the desirable information in pixels, such as information that relates to caries, cracks, fillings etc., at a desirable amplitude level. To further improve the visibility of caries, cracks and fillings, a mask signal is applied to the red color channel of the second fused composed scan information (FCOM4) and the green color channel of the third fused composed scan information (FCOM5). The mask signal removes pixels with pixel values under a mean pixel value threshold, wherein the threshold is different for each of the two color channels. The mask signal may include a dynamic pixel value threshold for each of the two color channels. The improved second (FCOM4) and third (FCOM5) fused composed scan information may be fused into a new fused composed scan information, i.e. into an image.
Yet another example is seen in the below table. The first composed scan information (COM1) is determined by a difference between each of the plurality of color channels (WhiteRed, WhiteGreen and WhiteBlue) and the internal structure information (NIR), wherein the contrast of the internal structure information (NIR) has been enhanced at least two times by the contrast enhancement algorithm. Each of the plurality of color channels is combined with a weighting coefficient (W1, W2, W3) which may have a value between 0 and 1. Additionally, the contrast of the first composed scan information (COM1) may then be enhanced by the contrast enhancement algorithm. The contrast enhancement algorithm may be the same or different between each of the contrast enhancements applied to the first composed scan information (COM1) and the internal structure information (NIR). A color map may be applied to the first composed scan information (COM1) for displaying purposes. The first composed scan information may be displayed as an image or as an
overlay to an image that includes a composed scan information, internal structure information or visible light information.
A further example is illustrated in the below table. The below example illustrates an example where two composed scan information (COM2, COM3) are determined and combined into a fused composed scan information (FCOM6). The first composed scan information includes a difference between each of the plurality of color channels and the internal structure information (NIR), and the second composed scan information includes a difference between the internal structure information (NIR) and each of the plurality of color channels. In both of the composed scan information (COM2, COM3) the contrast of the internal structure information (NIR) has been enhanced at least two times by the contrast enhancement algorithm. The contrast of each of the composed scan information (COM2, COM3) in the fused composed scan information (FCOM6) has been enhanced by the contrast enhancement algorithm. Once again, each of the contrast enhancements need not be performed by the same contrast enhancement algorithm. In the example, each of the plurality of color channels is assigned a weight coefficient which may have a value between 0 and 1. The plurality of color channels includes red (WhiteRed), green (WhiteGreen) and blue (WhiteBlue). The fused composed scan information (FCOM6) may be displayed as an image or as an overlay to an image that includes a composed scan information, internal structure information or visible light information.
In the below example, the projector unit emits white light that is being captured by the image sensor unit. The image sensor unit provides a red channel WR, a blue channel WB and a green channel WG that include the respective colors of the emitted white light that has been reflected and captured. The blue channel WB is enhanced by removing pixels of the blue channel WB with bright highlights, wherein a bright highlight has a pixel value above a maximum mask pixel value threshold. Furthermore, at least two composed scan information (COM4, COM5) are determined, and the signal of caries in the two composed scan information (COM4, COM5) has been enhanced by applying a mask signal to a green channel of both composed scan information (COM4, COM5) for removing pixels of the green channel. For the composed scan information COM4, the removed pixels of the green channel are below a minimum mask pixel value threshold that is determined solely for the green channel of the composed scan information (COM4). For the composed scan information COM5, the removed pixels of the green channel are above a maximum mask pixel value threshold that is determined solely for the green channel of the composed scan information (COM5). The at least two composed scan information are combined into a fused composed scan information (FCOM7). A different mask signal may be applied to the fused composed scan information, Mask(FCOM1) and Mask(FCOM2), for removing pixels that fulfil different conditions determined by the different mask signals. For example, pixels of FCOM1 which have a pixel value above a first mask pixel value threshold may be removed, and pixels of FCOM2 which have a pixel value above a second mask pixel value threshold may be removed. Furthermore, a conditional mask signal is applied to the fused composed scan information FCOMx, where corresponding pixels from Mask(FCOM1) and Mask(FCOM2) are removed when a condition determined by the conditional mask signal is met.
The fused composed scan information FCOMx is then combined with the fused composed scan information FCOM7 into another fused composed scan information (FCOM8). A conditional mask signal is applied to each of the plurality of color channels
of the fused composed scan information (FCOM8). The plurality of color channels includes red, green, blue, and infrared. The conditional mask signal is unique for each of the plurality of color channels, wherein the conditional mask signal includes a mask pixel value threshold that is determined by a cumulative histogram distribution of the pixel values for each of the plurality of color channels. Corresponding pixels of the plurality of color channels are removed when all corresponding pixels fulfil a condition determined by the conditional mask signal. The condition may be to remove a pixel with a pixel value that is above the mask pixel value threshold. The pixel differences in the fused composed scan information are then linearly minimized by a linear minimization algorithm, resulting in yet another fused composed scan information (FCOM9). The two fused composed scan information FCOM9 and FCOM8 are combined into a further fused composed scan information (FCOM10). A final fused composed scan information (FCOMfinal) is determined by combining the fused composed scan information (FCOM10) with the previously determined fused composed scan information (FCOM6), which comprises a combination of the two composed scan information (COM2 and COM3) that includes differences between the plurality of color channels of white light (visible light information) and near-infrared light (i.e. internal structure information). The final fused composed scan information (FCOMfinal) may be displayed as an image or as an overlay to an image that includes a composed scan information, internal structure information or visible light information.
A final fused composed scan information may be determined by combining different fused composed scan information, wherein the different fused composed scan information have been determined by different composed scan information, and different mask signals have been applied to the different fused composed scan information.
A composed scan information and a fused composed scan information may be displayed as an image or as an overlay to an image that includes composed scan information, visible light information, or internal structure information.
Artifacts on the internal structure information may be captured by the image sensor unit. The artifacts may be glare which can be reduced by applying a first polarization to the emitted infrared light which is different from a second polarization of the reflected infrared light that is captured by the image sensor unit.
The artifacts may be glare which can be reduced by applying a second polarization to the captured infrared light, wherein the second polarization is different from a first polarization of the emitted infrared light.
By having different polarization between the emitted and reflected infrared light the amount of glare that may occur in the internal structure information is reduced.
The one or more processors may be configured to identify caries, cracks and fillings based on a contrast profile for an area-of-interest in the composed scan information or fused composed scan information. The area-of-interest overlaps a group of pixels of the composed scan information or fused composed scan information, and the area-of-interest corresponds to an area on a dental object. The one or more processors may be configured to determine the contrast profile based on the pixel values of the group of pixels, wherein a difference between the pixel values determines the contrast profile of the area-of-interest. The contrast profile includes contrast differences between pixel values of selected pixels of the plurality of pixels of the first composed scan information or the fused composed scan information, wherein the selected pixels are within the area-of-interest.
The one or more processors may be configured to determine a dental feature within the area-of-interest of the dental object by comparing the contrast profile to a reference contrast profile, wherein the reference contrast profile may be stored in a memory unit of the system, and wherein the reference contrast profile corresponds to a type of dental feature.
The one or more processors may be configured to determine a plurality of contrast profiles for different positions of the area-of-interest on a dental object, and to compare each of the plurality of contrast profiles to the reference contrast profile. The one or more processors may be configured to select a contrast profile of the plurality for which a correlation coefficient between the reference contrast profile and the selected contrast profile is above a correlation threshold.
The comparing may be based on a shape and/or a size of the contrast profile. The size of the contrast profile may be a difference between a maximum and a minimum pixel value of the group of pixels.
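A simple sketch of such a profile comparison, using sorted pixel values as the profile representation and a correlation coefficient as the match criterion (both representational choices are assumptions of this sketch, not prescribed by the system):

```python
import numpy as np

def contrast_profile(patch):
    # Contrast profile of an area-of-interest: here the sorted pixel
    # values of the patch; its "size" is max - min.
    return np.sort(patch.ravel())

def matches_reference(profile, reference, corr_threshold=0.9):
    # Compare a measured profile to a stored reference profile for a
    # dental feature type; accept when the correlation coefficient
    # exceeds the correlation threshold.
    r = np.corrcoef(profile, reference)[0, 1]
    return r > corr_threshold, r
```

Comparing by shape via correlation makes the match insensitive to overall brightness and gain, while the max-minus-min size can be checked separately.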
The system may include a displaying unit configured to display the first composed scan information and the second composed scan information in a window of the displaying unit, and wherein the first composed scan information may be displayed having a first main color, and the second composed scan information is displayed having a second main color, and wherein the first main color is different from the second main color.
The one or more processors may be configured to determine a three-dimensional model of the dental object based on one or more of the plurality of color channels.
The system may include a displaying unit configured to display the first composed scan information or the fused composed scan information in a first window of the displaying unit.
The displaying unit may be configured to display the first composed scan information and the 3D model of the dental object in the first window, or, to display the first composed scan information and/or the second composed scan information in a first window of the displaying unit and the 3D model in a second window of the displaying unit.
The second composed scan information or the fused composed scan information may be displayed in the first window or a second window of the displaying unit.
The one or more processors may include a glare reduction algorithm that is configured to remove a glare effect from the internal structure information, wherein the glare reduction algorithm is configured to:
• determine a common object in the internal structure information of the captured internal images and a subsequent internal structure information,
• determine a change in position of the common object in the internal structure information and the subsequent internal structure information, and
• remove the common object from the internal structure information and the subsequent internal structure information if the position of the common object has changed.
Removing the glare effect would result in fewer false identifications of caries, cracks or fillings in the composed scan information or fused composed scan information.
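A deliberately simplified sketch of the glare-reduction idea above, approximating the "common object" by bright-pixel masks in two consecutive internal structure images and "changed position" by non-overlap of those masks (the threshold and all names are assumptions):

```python
import numpy as np

def remove_moving_glare(frame_a, frame_b, bright_thresh=200):
    # Bright regions that appear at different positions in the two
    # frames are treated as glare (glare moves with the scanner tip,
    # anatomy largely does not) and are removed from both frames.
    glare_a = frame_a > bright_thresh
    glare_b = frame_b > bright_thresh
    moved = glare_a ^ glare_b  # bright in only one frame -> position changed
    out_a = np.where(moved & glare_a, 0, frame_a)
    out_b = np.where(moved & glare_b, 0, frame_b)
    return out_a, out_b
```

A bright region that stays put in both frames (e.g. a genuine high-reflectance feature) survives, while a highlight that jumps between frames is zeroed out.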
In the determination of the fused composed scan information, a plurality of conditional mask signals may be applied to one or more color channels of the plurality of composed scan information that is used for generating the fused composed scan information. The one or more processors may be configured to apply a plurality of conditional mask signals to one or more color channels of a plurality of composed scan information that includes the first composed scan information and at least a second composed scan information. The plurality of conditional mask signals removes pixels of the plurality of composed scan information which do not fulfil the conditions determined by the plurality of conditional mask signals, and then the plurality of composed scan information is combined into a
fused composed scan information. An example of a condition could be to remove a pixel with a pixel value that is above or below a pixel value threshold, or, to remove a pixel with a pixel value that is above or below a pixel value threshold for a certain color channel, or, to remove a group of pixels that are connected based on an area size of the grouped pixels in the first composed scan information or the fused composed scan information.
The one or more processors may be configured to apply a linear minimization algorithm to the plurality of composed scan information to remove pixels that are above a maximum mask pixel value threshold. The maximum mask pixel value threshold may be at a level which corresponds to very bright pixel values. The level of the maximum mask pixel value threshold is larger than a pixel value threshold.
The one or more processors may be configured to connect neighbouring pixels of the first composed scan information or a fused composed scan information when neighbouring pixels are above a pixel value threshold. The connected neighbouring pixels may correspond to a dental feature or an artifact inside a tooth that is caused by unwanted reflections of ambient light or emitted light from the intraoral scanning system.
The connected neighbouring pixels may correspond to an area size in the first composed scan information or a fused composed scan information, wherein the connected neighbouring pixels are categorized as a dental feature.
The connected neighbouring pixels may be highlighted with a specific color in the first composed scan information or a fused composed scan information.
The one or more processors may be configured to remove connected neighbouring pixels from the first composed scan information or a fused composed scan information based on an area size of the connected neighbouring pixels. For example, if the area size is irregular in comparison to an area size of a dental feature, then these connected neighbouring pixels should be removed or identified for the purpose of not being visible in the first composed scan information or the fused composed scan information when displaying the first composed scan information or the fused composed scan information.
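The area-based removal of connected neighbouring pixels may, for illustration, be sketched as a flood fill over a binary mask; the 4-neighbour connectivity and the zeroing of components below a minimum area are illustrative assumptions:

```python
from collections import deque

def remove_small_components(mask, min_area):
    """Remove groups of 4-connected pixels whose area is below min_area.
    Sketch of the area-based artifact filtering described above."""
    h, w = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    seen = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if out[y][x] and not seen[y][x]:
                # flood-fill one connected component, collecting its pixels
                comp, q = [], deque([(y, x)])
                seen[y][x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and out[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(comp) < min_area:       # too small: treat as artifact
                    for cy, cx in comp:
                        out[cy][cx] = 0
    return out

mask = [[1, 1, 0, 1],
        [1, 0, 0, 0],
        [0, 0, 0, 0]]
cleaned = remove_small_components(mask, min_area=2)  # isolated pixel removed
```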
BRIEF DESCRIPTION OF THE FIGURES
Aspects of the disclosure may be best understood from the following detailed description taken in conjunction with the accompanying figures. The figures are schematic and simplified for clarity, and they show only details which improve the understanding of the claims, while other details are left out. Throughout, the same reference numerals are used for identical or corresponding parts. The individual features of each aspect may each be combined with any or all features of the other aspects. These and other aspects, features and/or technical effects will be apparent from and elucidated with reference to the illustrations described hereinafter in which:
FIG. 1 illustrates an intraoral scanning system;
FIG. 2 illustrates an example of one or more processors;
FIGS. 3A and 3B illustrate examples of an intraoral scanning system;
FIGS. 4A, 4B, 4C, and 4D illustrate examples of one or more processors;
FIG. 5 illustrates an example of a contrast enhancement algorithm;
FIG. 6 illustrates different examples of a fused composed scan information and a composed scan information;
FIG. 7 illustrates different examples of a composed scan information and a fused composed scan information;
FIGS. 8A, 8B, 8C, and 8D illustrate a contrast profile of different images;
FIGS. 9A, 9B, 9C, and 9D illustrate a contrast profile of different images; and
FIGS. 10A, 10B, and 10C illustrate a contrast profile of different images.
DETAILED DESCRIPTION
The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. Several aspects of the devices, systems, mediums, programs and methods are described by various blocks, functional units, modules, components,
circuits, steps, processes, algorithms, etc. (collectively referred to as “elements”). Depending upon the particular application, design constraints or other reasons, these elements may be implemented using electronic hardware, computer programs, or any combination thereof.
The electronic hardware may include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. Computer program shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
A scanning for providing intra-oral scan data may be performed by a dental scanning system that may include an intraoral scanning device such as the TRIOS series scanners from 3Shape A/S. The dental scanning system may include a wireless capability as provided by a wireless network unit. The scanning device may employ a scanning principle such as triangulation-based scanning, confocal scanning, focus scanning, ultrasound scanning, x-ray scanning, stereo vision, structure from motion, optical coherence tomography (OCT), or any other scanning principle. In an embodiment, the scanning device is capable of obtaining surface information by projecting a pattern and translating a focus plane along an optical axis of the scanning device and capturing a plurality of 2D images at different focus plane positions, such that each series of captured 2D images corresponding to each focus plane forms a stack of 2D images. The acquired 2D images are also referred to herein as raw 2D images, wherein raw in this context means that the images have not been subject to image processing. The focus plane position is preferably shifted along the optical axis of the scanning system, such that 2D images captured at several focus plane positions along the optical axis form said stack of 2D images (also referred to herein as a sub-scan) for a given view of the object, i.e. for a given arrangement of the scanning system relative to the object. After moving the scanning
device relative to the object or imaging the object at a different view, a new stack of 2D images for that view may be captured. The focus plane position may be varied by means of at least one focus element, e.g., a moving focus lens. The scanning device is generally moved and angled relative to the dentition during a scanning session, such that at least some sets of sub-scans overlap at least partially, to enable reconstruction of the digital dental 3D model by stitching overlapping 3D sub-scans together in real-time and displaying the progress of the virtual 3D model on a display as feedback to the user. The result of stitching is the digital 3D representation of a surface larger than that which can be captured by a single sub-scan, i.e. which is larger than the field of view of the 3D scanning device. Stitching, also known as registration and fusion, works by identifying overlapping regions of 3D surface in various sub-scans and transforming sub-scans to a common coordinate system such that the overlapping regions match, finally yielding the digital 3D model. An Iterative Closest Point (ICP) algorithm may be used for this purpose. Another example of a scanning device is a triangulation scanner, where a time-varying pattern is projected onto the dental object and a sequence of images of the different pattern configurations is acquired by one or more cameras located at an angle relative to the projector unit.
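The step of transforming sub-scans into a common coordinate system can, for illustration, be sketched as the SVD-based (Kabsch) rigid alignment used inside an ICP iteration; point correspondences are assumed already known here, which a full ICP would establish and refine iteratively:

```python
import numpy as np

def rigid_align(src, dst):
    """One Kabsch/SVD step as used inside ICP: find rotation R and
    translation t minimizing ||R @ p + t - q|| over corresponded
    3D points p in src and q in dst (rows of Nx3 arrays)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy check: a sub-scan shifted by a pure translation
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
t_true = np.array([2., -1., 0.5])
dst = src + t_true
R, t = rigid_align(src, dst)
```

In a full ICP loop this alignment would alternate with a nearest-neighbour correspondence search until the overlapping regions match.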
Color texture of the dental object may be acquired by illuminating the object using different monochromatic colors such as individual red, green and blue colors, or by illuminating the object using multi-chromatic light such as white light. A 2D image may be acquired during a flash of white light.
Generally, the process of obtaining surface information in real time of a dental object to be scanned requires the scanning device to illuminate the surface and acquire a high number of 2D images. Typically, a high-speed camera is used with a framerate of 300-2000 2D frames per second, depending on the technology and 2D image resolution. The high amount of image data needs to be handled by the scanning device, which either directly forwards the raw image data stream to an external processing device or performs some image processing before transmitting the data to an external device or display. This process requires that multiple electronic components inside the scanner operate with a high workload, thus requiring a high current demand.
The scanning device comprises one or more light projectors configured to generate an illumination pattern to be projected on a three-dimensional dental object during a scanning session. The light projector(s) preferably comprises a light source, a mask signal having a spatial pattern, and one or more lenses such as collimation lenses or projection lenses. The light source may be configured to generate light of a single wavelength or a combination of wavelengths (mono- or polychromatic). The combination of wavelengths may be produced by using a light source configured to produce light (such as white light) comprising different wavelengths. Alternatively, the light projector(s) may comprise multiple light sources such as LEDs individually producing light of different wavelengths (such as red, green, and blue) that may be combined to form light comprising the different wavelengths. Thus, the light produced by the light source may be defined by a wavelength defining a specific color, or a range of different wavelengths defining a combination of colors such as white light. In an embodiment, the scanning device comprises a light source configured for exciting fluorescent material of the teeth to obtain fluorescence data from the dental object. Such a light source may be configured to produce a narrow range of wavelengths. In another embodiment, the light from the light source is infrared (IR) light, which is capable of penetrating dental tissue. The light projector(s) may be DLP projectors using a micro mirror array for generating a time-varying pattern, or a diffractive optical element (DOE), or back-lit mask signal projectors, wherein the light source is placed behind a mask signal having a spatial pattern, whereby the light projected on the surface of the dental object is patterned.
The back-lit mask signal projector may comprise a collimation lens for collimating the light from the light source, said collimation lens being placed between the light source and the mask signal. The mask signal may have a checkerboard pattern, such that the generated illumination pattern is a checkerboard pattern. Alternatively, the mask signal may feature other patterns such as lines or dots, etc.
The scanning device preferably further comprises optical components for directing the light from the light source to the surface of the dental object. The specific arrangement of the optical components depends on whether the scanning device is a focus scanning apparatus, a scanning device using triangulation, or any other type of scanning device. A
focus scanning apparatus is further described in EP 2 442 720 B1 by the same applicant, which is incorporated herein in its entirety.
The light reflected from the dental object in response to the illumination of the dental object is directed, using optical components of the scanning device, towards the image sensor(s). The image sensor(s) are configured to generate a plurality of images based on the incoming light received from the illuminated dental object. The image sensor unit may be a high-speed image sensor such as an image sensor configured for acquiring images with exposures of less than 1/1000 second or frame rates in excess of 250 frames per second (fps). As an example, the image sensor may be a rolling shutter sensor (e.g. CMOS) or a global shutter sensor (e.g. CCD). The image sensor(s) may be a monochrome sensor including a color filter array such as a Bayer filter and/or additional filters that may be configured to substantially remove one or more color components from the reflected light and retain only the other non-removed components prior to conversion of the reflected light into an electrical signal. For example, such additional filters may be used to remove a certain part of a white light spectrum, such as a blue component, and retain only red and green components from a signal generated in response to exciting fluorescent material of the teeth.
The network unit may be configured to connect the dental scanning system to a network comprising a plurality of network elements including at least one network element configured to receive the processed data. The network unit may include a wireless network unit or a wired network unit. The wireless network unit is configured to wirelessly connect the dental scanning system to the network comprising the plurality of network elements including the at least one network element configured to receive the processed data. The wired network unit is configured to establish a wired connection between the dental scanning system and the network comprising the plurality of network elements including the at least one network element configured to receive the processed data.
The dental scanning system preferably further comprises a processor configured to generate scan data (such as extra-oral scan data and/or intra-oral scan data) by processing the two-dimensional (2D) images acquired by the scanning device. The processor may be
part of the scanning device. As an example, the processor may comprise a Field-Programmable Gate Array (FPGA) and/or an Advanced RISC Machines (ARM) processor located on the scanning device. The scan data comprises information relating to the three-dimensional dental object. The scan data may comprise any of: 2D images, 3D point clouds, depth data, texture data, intensity data, color data, and/or combinations thereof. As an example, the scan data may comprise one or more point clouds, wherein each point cloud comprises a set of 3D points describing the three-dimensional dental object. As another example, the scan data may comprise images, each image comprising image data e.g. described by image coordinates and a timestamp (x, y, t), wherein depth information can be inferred from the timestamp. The image sensor(s) of the scanning device may acquire a plurality of raw 2D images of the dental object in response to illuminating said object using the one or more light projectors. The plurality of raw 2D images may also be referred to herein as a stack of 2D images. The 2D images may subsequently be provided as input to the processor, which processes the 2D images to generate scan data. The processing of the 2D images may comprise the step of determining which part of each of the 2D images is in focus in order to deduce/generate depth information from the images. The depth information may be used to generate 3D point clouds comprising a set of 3D points in space, e.g., described by Cartesian coordinates (x, y, z). The 3D point clouds may be generated by the processor or by another processing unit. Each 2D/3D point may furthermore comprise a timestamp that indicates when the 2D/3D point was recorded, i.e., from which image in the stack of 2D images the point originates. The timestamp is correlated with the z-coordinate of the 3D points, i.e., the z-coordinate may be inferred from the timestamp.
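The "which part of each 2D image is in focus" step may, for illustration, be sketched with a simple per-pixel sharpness measure; the squared-Laplacian focus measure and the use of the slice index as the timestamp are illustrative simplifications of the focus-scanning processing described above:

```python
import numpy as np

def depth_from_focus(stack):
    """For each pixel, pick the index of the 2D image in the focus stack
    where a simple sharpness measure (squared Laplacian) is largest.
    The returned index plays the role of the timestamp from which the
    z-coordinate may be inferred."""
    sharp = []
    for img in stack:
        # 4-neighbour Laplacian as a cheap focus measure (wrap-around edges)
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
               + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
        sharp.append(lap ** 2)
    return np.argmax(np.stack(sharp), axis=0)  # per-pixel focus-plane index

blurred = np.full((5, 5), 8.0)
focused = blurred.copy()
focused[2, 2] = 100.0          # a sharp detail appears only in slice 1
depth = depth_from_focus([blurred, focused])
```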
Accordingly, the output of the processor is the scan data, and the scan data may comprise image data and/or depth data, e.g. described by image coordinates and a timestamp (x, y, t) or alternatively described as (x, y, z). The scanning device may be configured to transmit other types of data in addition to the scan data. Examples of data include 3D information, texture information such as infra-red (IR) images, fluorescence images, reflectance color images, x-ray images, and/or combinations thereof.
FIG. 1 illustrates an example of the intraoral scanning system 1, which includes a handheld intraoral scanning device 10 that is configured to scan a dental object 2 and provide a composed scan information (40, not shown), i.e. an image, that includes enhanced contrast information about, for example, caries, cracks and/or fillings. Furthermore, the handheld intraoral scanning device 10 is configured to determine a three-dimensional model of the dental object. In this example, the handheld intraoral scanning device 10 includes a projector unit 7 and an image sensor unit 5. The system 1 includes one or more processors 11 that are operably connected to the image sensor unit 5 and the projector unit 7. The one or more processors 11 may be arranged within the handheld intraoral scanning device 10, an external computer 4, and/or a server 6. The handheld intraoral scanning device 10 may be connected to the external computer 4 and/or the server 6 via a wireless communication link 3 which may be based on Wi-Fi, a 60 GHz communication link, Bluetooth Low Energy, or a combination thereof.
The projector unit 7 is configured to emit light with different wavelengths during time periods onto at least the dental object 2. The projector unit 7 includes a plurality of light sources that is configured to emit the different wavelengths. The plurality of light sources may be arranged within the handheld intraoral scanning device 10 at different locations. For example, the light source of the plurality of light sources that is configured to emit infrared wavelength(s) is arranged in a tip housing which is configured to be inserted partly into a patient’s mouth. The remaining light sources of the plurality of light sources are arranged either in the tip housing or in a main housing which is configured to be handled by a user during at least a scanning session of the patient. The image sensor unit 5 is configured to capture one or more visible images that each includes visible light information and to capture internal images that each includes internal light information. The visible light information includes a plurality of color channels. The image sensor unit includes a plurality of pixels in front of which a Bayer filter is arranged. The Bayer filter includes a plurality of color channels that corresponds to the plurality of color channels in the visible light information. In another example, the Bayer filter includes a plurality of color channels that includes red, green and blue channels, and wherein one or more of the plurality of color channels are configured to receive infrared light, such as internal structure information. For example, a first group of green channels of the plurality of color channels may be configured to receive the infrared light, and a second group of green channels of the plurality of color channels is configured to not receive the infrared light. An infrared filter blocker may be applied on the second group of green channels.
FIG. 2 illustrates an example of the one or more processors 11. The one or more processors is configured to receive 20A the visible light information and the internal light information and to determine 20B internal structure information of the dental object from the internal light information. The one or more processors 11 may be configured to then assign 20C a weight coefficient (W) to each of the plurality of color channels of the visible light information. A first composed scan information is determined 20D based on a difference between the internal structure information and one or more color channels of the plurality of color channels. The one or more processors 11 is configured to enhance 20E the distinguishability of one or more internal structures of the dental object in the first composed scan information 40 by adjusting the weight coefficients of the one or more color channels, and/or by applying a contrast enhancement algorithm to the first composed scan information, and/or by applying a mask signal or a conditional mask signal to the first composed scan information. In one example, the enhancement of the distinguishability in the first composed scan information is determined by minimizing the difference between the internal structure information and the one or more color channels of the plurality of color channels by adjusting one or more of the weight coefficients for the plurality of color channels. The difference is between pixels of the internal structure information and corresponding pixels of the one or more channels with assigned weight coefficients. The minimized difference is determined based on a least square method of the difference between the internal structure information and the one or more color channels with assigned weight coefficients.
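The least-square minimization of the difference between the internal structure information and the weighted color channels may, for illustration, be sketched as an ordinary linear least-squares fit over all pixels; the array names and the synthetic data are illustrative:

```python
import numpy as np

def fit_channel_weights(internal, channels):
    """Least-squares choice of weight coefficients W so that the weighted
    sum of color channels best matches the internal structure information,
    i.e. minimizing ||internal - sum_c W_c * channel_c||^2 over all pixels."""
    A = np.stack([c.ravel() for c in channels], axis=1)  # pixels x channels
    w, *_ = np.linalg.lstsq(A, internal.ravel(), rcond=None)
    return w

# Synthetic check: internal structure image built from two known weights
rng = np.random.default_rng(0)
red, green = rng.random((8, 8)), rng.random((8, 8))
internal = 0.3 * red + 0.7 * green
w = fit_channel_weights(internal, [red, green])
```

Subtracting the weighted channels from the internal structure information then suppresses everything the visible channels can explain, isolating features that are present only in the internal structure information.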
In another example, the enhancement of the contrast in the composed scan information is determined by one or more contrast enhancement algorithms, such as one or more linear contrast enhancement algorithms and/or one or more non-linear contrast enhancement algorithms.
FIGS. 3A and 3B illustrate an example of the intraoral scanning system 1. In both examples, the visible light information 24 which the one or more processors 11 receives from the image sensor unit 5 is assigned weight coefficients 25, one for each of the plurality of color channels of the visible light information 24 or for one or more of the plurality of color channels. The weight coefficients 25 may be predetermined and stored in a memory unit of the system 1, or determined in an iterative 35 manner by minimizing the pixel values in the composed scan information 40 or the fused composed scan information 65 for the purpose of isolating features that are only present in the internal structure information. The minimizing of the pixel values is provided by adjusting the weight coefficients 25. The weight coefficients 25 may in other examples be replaced by a contrast enhancement algorithm. In FIG. 3A the composed scan information 40 is determined by subtracting the weighted one or more color channels 25 from the internal structure information 22. In FIG. 3B, the composed scan information includes a sum of the plurality of color channels (C1 - C5), each assigned a weight coefficient (W1 - W5), that is subtracted from the internal structure information 22. For example, the plurality of color channels includes red, green, blue, fluo-red and fluo-green.
FIGS. 4A to 4D illustrate different examples of the one or more processors 11 being configured to perform contrast enhancement 42 to improve the contrast in the composed scan information 40. In the examples, a weight coefficient 25 is applied to each of the plurality of color channels for performing a linear minimization of the pixel values in the fused or the composed scan information (40, 65). In other examples, the assignment of weight coefficients to the plurality of color channels may be replaced by a contrast enhancement algorithm 42. The contrast enhancement 42 may be based on a contrast enhancement algorithm that is either a linear or a non-linear contrast enhancement algorithm, or a combination of a linear and a non-linear contrast enhancement algorithm. The linear contrast enhancement algorithm may be one or more of the following linear contrast enhancement algorithms or a combination of two or more of the following linear contrast enhancement algorithms:
• Minimum-Maximum Linear Contrast Stretch (MMLC),
• Percentage Linear Contrast Stretch (PLC), and
• Piecewise Linear Contrast Stretch (PWLC).
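For illustration, the first algorithm in the list above, the Minimum-Maximum Linear Contrast Stretch, may be sketched as follows; the output range and the handling of flat images are illustrative choices:

```python
import numpy as np

def minmax_stretch(image, out_max=255.0):
    """Minimum-Maximum Linear Contrast Stretch (MMLC): map the image's
    [min, max] range linearly onto [0, out_max]."""
    lo, hi = float(image.min()), float(image.max())
    if hi == lo:                              # flat image: nothing to stretch
        return np.zeros_like(image, dtype=float)
    return (image - lo) * (out_max / (hi - lo))

img = np.array([[50., 100.], [150., 200.]])   # narrow dynamic range
stretched = minmax_stretch(img)               # now spans the full 0..255 range
```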
The contrast enhancement algorithm may be one or more of the following non-linear contrast enhancement algorithms or a combination of two or more of the following non-linear contrast enhancement algorithms:
• Histogram Distribution (HD),
• Histogram Equalization (HE),
• Adaptive Histogram Equalization (AHE), and
• Homomorphic Filters (HF), including a combination of low pass and high pass filtering.
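For illustration, global Histogram Equalization from the list above may be sketched as a lookup table built from the normalized cumulative histogram; the 8-bit level count is an illustrative choice:

```python
import numpy as np

def hist_equalize(image, levels=256):
    """Global Histogram Equalization (HE): remap gray levels through the
    normalized cumulative histogram so the values spread over the full
    output range."""
    hist = np.bincount(image.ravel(), minlength=levels)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0].min()                    # first occupied bin
    norm = np.clip((cdf - cdf_min) / (cdf[-1] - cdf_min), 0.0, 1.0)
    lut = np.round(norm * (levels - 1)).astype(np.uint8)
    return lut[image]

# Toy image whose two gray levels are pushed to the extremes of the range
img = np.array([[52, 52], [52, 154]], dtype=np.uint8)
eq = hist_equalize(img)
```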
In FIG. 4A, the contrast enhancement algorithm is applied to the plurality of pixels of the composed scan information 40. In FIG. 4B, the contrast enhancement is applied to both the internal structure information (22,42A) and to each of the one or more color channels of the plurality of color channels (C1-C3) 42B, and in FIG. 4C, the composed scan information 40 is further enhanced 42C. In FIG. 4D, the contrast of the internal structure information 22 is being enhanced at least two times (42A,42B) and each of the one or more color channels of the plurality of color channels in the visible light information are also enhanced by a contrast enhancement algorithm 42C. In this example, also the composed scan information 40 is enhanced by a contrast enhancement algorithm 42D. In any of the examples illustrated in FIGS. 4B to 4D, the contrast enhancement algorithms (42A, 42B, 42C, 42D) may be the same or different algorithms. For example, it would be of an advantage to use a contrast enhancement algorithm based on an adaptive histogram equalization for the internal structure information and a contrast enhancement algorithm based on a histogram equalization for the visible light information. Thus, it may be of an advantage to combine histogram equalizations with linear contrast stretch for contrast enhancing the composed scan information 40.
FIG. 5 illustrates an example of a contrast enhancement algorithm 59. This example illustrates a histogram distribution 50 being determined for each color channel of the plurality of color channels, and/or of the internal structure information. The x-axis of the histogram distribution 50 includes pixel values and the y-axis includes pixel quantities. A mean pixel value is determined based on the plurality of pixel values of the plurality of pixels. In 51 the pixels that have a pixel value outside a range 54A defined by, for example, three standard deviations from the mean pixel value will be truncated. In other examples two or more standard deviations may be used. In 51 the range 54A covers positive pixel values (half histogram distribution). In 52, two ranges (54A, 54B) are defined covering both negative and positive pixel values (full histogram distribution). The remaining pixels that have a pixel value within the ranges (54A, 54B) are extended (53, 55) to cover a pixel value range from 0 to 255, 0 to 511 etc. The extended (53, 55) pixels result in an enhanced contrast in the image/information.
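The truncate-then-stretch enhancement of FIG. 5 may, for illustration, be sketched as follows; clipping the truncated pixels to the range limits (rather than discarding them) and the exact range handling are illustrative choices:

```python
import numpy as np

def sigma_stretch(image, n_sigma=3.0, out_max=255.0, half=False):
    """FIG. 5-style enhancement: truncate pixel values outside n_sigma
    standard deviations of the mean, then stretch the remaining range to
    [0, out_max]. With half=True only the positive part of the
    distribution is kept (half histogram distribution)."""
    mu, sd = float(image.mean()), float(image.std())
    lo = max(mu - n_sigma * sd, 0.0) if half else mu - n_sigma * sd
    hi = mu + n_sigma * sd
    clipped = np.clip(image, lo, hi)              # truncate outliers
    return (clipped - lo) * (out_max / (hi - lo)) # extend to 0..out_max

img = np.array([0., 10., 20., 30., 1000.])        # one extreme outlier
out = sigma_stretch(img, n_sigma=1.0)
out_half = sigma_stretch(img, n_sigma=1.0, half=True)
```

The outlier is pinned to the top of the output range while the remaining values regain usable contrast.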
FIG. 6 illustrates different examples of fused composed scan information (65A, 65B and 65C) and an example of a composed scan information (61) determined by the one or more processors 11, and an infrared image 60 where the contrast has been improved with a contrast enhancement algorithm. The images (60, 65A, 65B, 65C, 61) show the same tooth of a patient. In each image, a caries lesion is seen. In the infrared image 60, the contrast between the caries and the surroundings is not good when comparing to the fused/composed scan information (65A, 65B, 65C, 61). Image 65A includes a combination of two fused composed scan information where a first fused composed scan information (FCOM1) is assigned to red, and a second fused composed scan information (FCOM2) is assigned to green. The two fused composed scan information (FCOM1, FCOM2) are defined as:
FCOM1 = NFR - (NWB + NWR) and FCOM2 = NFG - W6*NWG, where NFR = NIR - W1*Fluo[Red], NFG = NIR - W2*Fluo[Green], NWB = NIR - W3*White[Blue], NWR = NIR - W4*White[Red], and NWG = NIR - W5*White[Green].
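For illustration, the FCOM1/FCOM2 definitions above may be evaluated per pixel as follows; the dictionary layout, the single-pixel arrays, and the unit weight values are illustrative:

```python
import numpy as np

def fused_composed(nir, fluo, white, W):
    """Evaluate, per pixel:
      NFR = NIR - W1*Fluo[Red],   NFG = NIR - W2*Fluo[Green],
      NWB = NIR - W3*White[Blue], NWR = NIR - W4*White[Red],
      NWG = NIR - W5*White[Green],
      FCOM1 = NFR - (NWB + NWR),  FCOM2 = NFG - W6*NWG."""
    nfr = nir - W["W1"] * fluo["red"]
    nfg = nir - W["W2"] * fluo["green"]
    nwb = nir - W["W3"] * white["blue"]
    nwr = nir - W["W4"] * white["red"]
    nwg = nir - W["W5"] * white["green"]
    return nfr - (nwb + nwr), nfg - W["W6"] * nwg

# Single-pixel example with all weights set to 1.0 for readability
nir = np.array([10.0])
fluo = {"red": np.array([2.0]), "green": np.array([4.0])}
white = {"blue": np.array([1.0]), "red": np.array([3.0]),
         "green": np.array([5.0])}
W = {k: 1.0 for k in ("W1", "W2", "W3", "W4", "W5", "W6")}
fcom1, fcom2 = fused_composed(nir, fluo, white, W)
```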
In image 65A the contrast of both fused composed scan information (FCOM1 and FCOM2) has not been enhanced by a contrast enhancement algorithm. In another example, the contrast of both fused composed scan information (FCOM1 and FCOM2) may be enhanced by a contrast enhancement algorithm that uses a histogram distribution with a half histogram distribution, i.e. the negative pixel values are all removed. The purpose of assigning each fused composed scan information to different colors is to exploit the chromatic differences and similarities between the two fused composed scan information. In image 65A, the green channel is being reduced in NFG, and both blue and red channels are being reduced in NFR. As a result, the caries in 65A obtains a yellowish color and the dentine obtains a more greenish color while the remains are black. Thereby, combining two fused composed scan information and assigning them to different colors makes it easier for a user to identify the caries with a color that is different from the remaining colors on the tooth. Furthermore, although image 65A is converted into a black and white scale, the image 65A still illustrates that the contrast of the caries has become more profound than in the infrared image 60.
In image 65B the caries distinguishability has improved significantly. Image 65B includes yet another example of a combination of two fused composed scan information (FCOM1 and FCOM2) that once again are assigned to red and green. The contrast of the two fused composed scan information has been improved by a contrast enhancement algorithm. In this example, the contrast enhancement algorithm uses a histogram distribution with a half histogram distribution, i.e. the pixel values below the mean are all removed. The example in image 65B shows that the caries is more or less isolated, meaning that the enamel and dentine of the tooth are more or less gone. The two fused composed scan information (FCOM1, FCOM2) are defined as:
FCOM1 = W6*[(NWB - NWR) + (NWB - NWG)], FCOM2 = NFR - FCOM1 and FCOM3 = NFG - FCOM1, Red = FCOM2 and Green = FCOM3, where
NFR = NIR - W1*Fluo[Red], NFG = NIR - W2*Fluo[Green], NWB = NIR - W3*White[Blue], NWR = NIR - W4*White[Red], NWG = NIR - W5*White[Green].
In image 65B the caries is yet again visualized with a yellowish color and the dentine with a greenish color. Yet again, although image 65B is converted into a black and white scale, the image 65B still illustrates that the contrast of the caries has become more profound than in the infrared image 60.
Image 61 shows an example of a composed scan information 61 which includes a difference between the internal structure information, i.e. the infrared image 60, and a visible light information wherein the plurality of color channels includes a fluorescence green channel. Another important difference is that the full histogram distribution is stretched when doing the contrast adjustment. Using the full distribution is good for displaying the entire tooth rather than just looking for ways to isolate a signal, e.g. a caries lesion, since the half histogram distribution approach tends to leave some areas of the image very dark and in some cases hard to see.
Image 65C includes a combination of images 65B and 61 where it is clearly seen that the contrast of the caries has improved significantly without generating artifacts on the image 65C that can be misunderstood as being caries. In this example, the one or more processors is configured to:
• determine a fused composed scan information that includes differences between visible light information and internal structure information, wherein the visible light information includes a plurality of color channels, and the plurality of color channels includes red, blue, green, fluorescence red and fluorescence green,
• determine a composed scan information that includes a difference between fluorescence green and internal structure information,
• enhance the contrast of the fused composed scan information by applying a contrast enhancement algorithm with a half-histogram distribution to the fused composed scan information and a conditional mask signal to each of the plurality of color channels of the fused composed scan information and the composed scan information for removing corresponding pixels of the plurality of color channels with pixel values that fulfil a condition determined by the conditional mask signal,
• enhance the contrast of the composed scan information by applying a contrast enhancement algorithm with a full-histogram distribution to the composed scan information, and
• combine the enhanced fused composed scan information with the enhanced composed scan information for improving the brightness of the caries.
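For illustration, the half-histogram and full-histogram enhancements followed by the combination step may be sketched as follows; the normalization to [0, 1] and the additive combination are illustrative assumptions, not the claimed processing:

```python
import numpy as np

def half_hist_stretch(img):
    """Half histogram distribution: keep only values above the mean and
    stretch the survivors to [0, 1], isolating the bright signal."""
    kept = np.clip(img - img.mean(), 0.0, None)
    return kept / kept.max() if kept.max() > 0 else kept

def full_hist_stretch(img):
    """Full histogram distribution: stretch the whole value range to
    [0, 1], keeping the entire tooth visible."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def combine(fused, composed):
    """Sketch of the final step above: half-histogram enhance the fused
    scan, full-histogram enhance the composed scan, then sum so the
    bright (caries) region is reinforced."""
    return half_hist_stretch(fused) + full_hist_stretch(composed)

fused = np.array([[0.1, 0.2], [0.3, 0.9]])     # bright caries-like pixel
composed = np.array([[0.2, 0.4], [0.6, 0.8]])
out = combine(fused, composed)
```

The half-histogram branch zeroes everything below the mean, so only the isolated bright region adds brightness on top of the full-range composed image.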
Enhancing the contrast of the fused composed scan information based on a contrast enhancement algorithm with a half-histogram distribution results in an improved isolation of the caries in the image, i.e. in the enhanced fused composed scan information.
FIG. 7 illustrates different examples of composed scan information (71A and 71B) and a fused composed scan information 65 determined by the one or more processors 11, and an infrared image 70 (internal structure information) where the contrast has been improved with a contrast enhancement algorithm. Image 71A includes composed scan information 71A with a difference between internal structure information and each of the plurality of color channels including red, blue and green. The contrast of the internal structure information has been enhanced two times with a contrast enhancement algorithm, and the composed scan information has been enhanced twice with a contrast enhancement algorithm. In the enhancement of both the internal structure information and the composed scan information a full histogram distribution approach has been utilized. In image 71B, the weight coefficient of each color channel of the plurality of color channels of the visible light information has been lowered. Image 65 includes a final fused composed scan information FCOMfinal that includes a difference between the internal structure information and the plurality of color channels, and vice versa. The final fused composed scan information FCOMfinal is determined according to the below table. In the example explained in the below table, the projector unit emits white light that is being captured by the image sensor unit. The image sensor unit provides a red channel WR, a blue channel WB and a green channel WG that includes the respective colors of the emitted white light that has been reflected and captured. The blue channel WB is enhanced by removing pixels of the blue channel WB with bright highlights, wherein a bright highlight has a pixel value above a maximum mask pixel value threshold. Furthermore, two composed scan information (COM4, COM5) are determined, and the signal in the two composed scan information (COM4, COM5) has been enhanced by applying a mask signal to a green channel of both composed scan information (COM4, COM5) for removing pixels of the green channel. For the composed scan information COM4 the removed pixels of the green channel are below a minimum mask pixel value threshold that is determined solely for the green channel of the composed scan information COM4. For the composed scan information COM5 the removed pixels of the green channel are above a maximum mask pixel value threshold that is determined solely for the green channel of the composed scan information COM5.
The at least two composed scan information are combined into a fused composed scan information (FCOM7). A different mask signal may be applied to the fused composed scan information, Mask(FCOM1) and Mask(FCOM2), for removing pixels that fulfil different conditions determined by the different mask signals. For example, pixels of FCOM1 which have a pixel value above a first mask pixel value threshold may be removed, and pixels of FCOM2 which have a pixel value above a second mask pixel value threshold may be removed. Furthermore, a conditional mask signal is applied to the fused composed scan information FCOMx, where corresponding pixels from Mask(FCOM1) and Mask(FCOM2) are removed when a condition determined by the conditional mask signal
is met. The fused composed scan information FCOMx is then combined with the fused composed scan information FCOM7 into another fused composed scan information (FCOM8). A conditional mask signal is applied to each of the plurality of color channels of the fused composed scan information (FCOM8). The plurality of color channels includes red, green, blue, and infrared. The conditional mask signal is unique for each of the plurality of color channels, and the conditional mask signal includes a mask pixel value threshold that is determined by a cumulative histogram distribution of the pixel values for each of the plurality of color channels. Corresponding pixels of the plurality of color channels are removed when all corresponding pixels fulfil a condition determined by the conditional mask signal. The condition may be to remove a pixel with a pixel value that is above the mask pixel value threshold. The pixel differences in the fused composed scan information are then linearly minimized by a linear minimization algorithm, resulting in yet another fused composed scan information (FCOM9). The two fused composed scan information FCOM9 and FCOM8 are combined into a further fused composed scan information (FCOM10). A final fused composed scan information (FCOMfinal) is determined by combining the fused composed scan information (FCOM10) with the previously determined fused composed scan information (FCOM6), which comprises a combination of the two composed scan information (COM2 and COM3) that includes differences between the plurality of color channels of white light (visible light information) and near-infrared light (i.e. internal structure information). The final fused composed scan information (FCOMfinal) may be displayed as an image or as an overlay to an image that includes a composed scan information, internal structure information or visible light information.
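The masking and combination steps above can be sketched in a few lines. The sketch below applies two mask signals with different thresholds to the same fused composed scan information and then combines the two masked variants, removing corresponding pixels where either mask condition was met; the mean as the combination rule is an assumption for illustration, as is the specific form of the conditional removal.

```python
import numpy as np

def mask_above(img, threshold):
    # Mask signal: remove (zero) pixels above a mask pixel value threshold.
    return np.where(img > threshold, 0.0, img)

def conditional_combine(masked_a, masked_b):
    # Combine two masked variants of the same fused composed scan
    # information; corresponding pixels are removed when either mask
    # removed them (an assumed reading of the conditional mask step).
    cond = (masked_a == 0.0) | (masked_b == 0.0)
    fused = 0.5 * (masked_a + masked_b)  # mean fusion: an assumption
    return np.where(cond, 0.0, fused)
```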
FIGS. 8A to 8D illustrate a contrast profile 82 of an image 80 that includes internal structure information that has been enhanced by a contrast enhancement algorithm, an image 81A that includes a composed scan information 81A and an image 81B that includes the composed scan information 81B with enhanced contrast. The contrast profile is determined by the one or more processors for an area-of-interest 81 in the images (80, 81A, 81B). The contrast profile includes contrast differences between pixel values of selected pixels of the plurality of pixels of the first composed scan information, wherein the selected pixels are within the area-of-interest. In these examples, the area-of-interest 81 is defined by a line. In another example, the area-of-interest 81 may be a box or another shape if a three-dimensional contrast profile is needed. The area-of-interest 81 is placed on a caries and partly on the enamel of a tooth. The contrast profile 82 plots pixel number versus pixel values, and it is clearly seen that for both composed scan information (81A, 81B) the contrast profile is more box shaped than for the internal structure information 80, i.e. the infrared image 80. Furthermore, comparing the composed scan information 81B (incl. enhanced contrast) with the internal structure information 80 (incl. enhanced contrast), a larger contrast is obtained between the maximum and minimum pixel value for the composed scan information with enhanced contrast. The contrast profile 82 clearly illustrates the advantage of using a composed scan information for identifying caries.
FIGS. 9A to 9D illustrate a contrast profile 82 for different teeth where the area-of-interest 81 is applied to the fused composed scan information (FCOM3, 65) seen in FIG. 7. In FIGS. 9A and 9B, the contrast profile 82 is determined for the area-of-interest 81 placed across a caries of a tooth. The contrast profile includes a red channel of the fused composed scan information 92 and the internal structure information 91. Once again it is clearly seen that the contrast between maximum and minimum pixel values is more significant for the fused composed scan information 92. In FIGS. 9C and 9D, the area-of-interest 81 is placed across a glare effect, and it is seen that for the red channel 92 no significant spike which could be misunderstood as a caries appears in the contrast profile. In the internal structure information 91 a spike is seen, which potentially could create a false indication of a caries.
The one or more processors 11 is configured to determine a dental feature within the area-of-interest 81 of the dental object by comparing the contrast profile 82 to a reference contrast profile, wherein the reference contrast profile is stored in a memory unit of the system, and wherein the reference contrast profile corresponds to a type of dental feature. FIGS. 10A to 10C illustrate an example where the one or more processors 11 is configured to change the position of the area-of-interest 81 for the purpose of scanning a dental object for caries or other types of dental features, such as cracks and fillings. The contrast profile 82 illustrated in FIG. 10B shows no caries. The contrast between maximum and minimum pixel values is not large enough to match with a caries. The contrast profile 82 in FIG. 10C clearly shows a caries, where both the shape and the contrast between minimum and maximum pixel values have been used for identifying the caries. The one or more processors 11 may automatically identify the caries.
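The profile extraction and reference comparison described above can be sketched as follows. The line-shaped area-of-interest, the max-minus-min contrast measure, and normalized cross-correlation as the shape comparison are assumptions chosen for illustration; the source only states that both the shape and the contrast of the profile are compared to a stored reference.

```python
import numpy as np

def line_profile(image, row, col_start, col_end):
    # Area-of-interest defined by a line: pixel values along one row.
    return image[row, col_start:col_end].astype(float)

def profile_contrast(profile):
    # Contrast between maximum and minimum pixel value in the profile.
    return profile.max() - profile.min()

def matches_reference(profile, reference, min_contrast, min_corr=0.8):
    # A dental feature (e.g. a caries) is indicated when both the
    # contrast and the shape match the stored reference profile.
    if profile_contrast(profile) < min_contrast:
        return False
    # Shape match via normalized cross-correlation (an assumption).
    p = (profile - profile.mean()) / (profile.std() + 1e-9)
    r = (reference - reference.mean()) / (reference.std() + 1e-9)
    return float(np.mean(p * r)) >= min_corr
```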
Many modifications and other embodiments of the inventions set forth herein will come to mind of one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative embodiments without departing from the scope of the appended claims. In this regard, for example, different combinations of elements and/or functions than those explicitly described above are also contemplated as may be set forth in some of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.
ITEMS
1A. An intraoral scanning system configured to provide a first composed scan information, the system includes a handheld intraoral scanning device that includes:
• a projector unit configured to emit light with different wavelengths during time periods onto at least a dental object, wherein the different wavelengths include an infrared wavelength and a visible wavelength,
• an image sensor unit configured to capture visible light information and internal light information from at least the dental object caused by the visible wavelength and the infrared wavelength, respectively, and wherein the visible light information includes a plurality of color channels; wherein the system further comprises one or more processors operably connected to the image sensor unit, and the one or more processors is configured to:
• receive the visible light information and the internal light information,
• determine internal structure information of the dental object from the internal light information,
• assign a weight coefficient for each of the plurality of color channels of the visible light information, and
• determine a first composed scan information that includes a minimized difference between the internal structure information and one or more color channels of the plurality of color channels with assigned weight coefficients, or vice versa.
2. An intraoral scanning system configured to provide a first composed scan information, the system includes a handheld intraoral scanning device that includes:
• a projector unit configured to emit light with different wavelengths during time periods onto at least a dental object, wherein the different wavelengths include an infrared wavelength and a visible wavelength,
• an image sensor unit configured to capture one or more visible images that each includes visible light information and to capture internal images that each includes internal light information, and wherein the visible light information includes a plurality of color channels;
wherein the system further comprises one or more processors operably connected to the image sensor unit, and the one or more processors is configured to:
• receive the visible light information and the internal light information,
• determine internal structure information of the dental object from the internal light information,
• assign a weight coefficient for each of the plurality of color channels of the visible light information,
• determine a first composed scan information that includes a difference between the internal structure information and one or more color channels of the plurality of color channels with assigned weight coefficients or vice versa, and
• enhance distinguishability of one or more internal structures of the dental object in the first composed scan information by adjusting the weight coefficients of the one or more color channels.
3. An intraoral scanning system configured to provide a first composed scan information, the system includes a handheld intraoral scanning device that includes:
• a projector unit configured to emit light with different wavelengths during time periods onto at least a dental object, wherein the different wavelengths include an infrared wavelength and a visible wavelength,
• an image sensor unit configured to capture one or more visible images that each includes visible light information and to capture internal images that each includes internal light information, and wherein the visible light information includes a plurality of color channels; wherein the system further comprises one or more processors at least operably connected to the image sensor unit, and the one or more processors is configured to:
• receive the visible light information and the internal light information,
• determine internal structure information of the dental object from the internal light information,
• assign a weight coefficient to each of the plurality of color channels of the visible light information,
• determine a first composed scan information that includes a difference between the internal structure information and one or more color channels of the plurality of color channels with assigned weight coefficients, and
• enhance distinguishability of one or more internal structures of the dental object in the first composed scan information by applying a channel-based color fusion to the first composed scan information.
4. An intraoral scanning system configured to provide a first composed scan information, the system includes a handheld intraoral scanning device that includes:
• a projector unit configured to emit light with different wavelengths during time periods onto at least a dental object, wherein the different wavelengths include an infrared wavelength and a visible wavelength,
• an image sensor unit configured to capture one or more visible images that each includes visible light information and to capture internal images that each includes internal light information, and wherein the visible light information includes a plurality of color channels; wherein the system further comprises one or more processors at least operably connected to the image sensor unit, and the one or more processors is configured to:
• receive the visible light information and the internal light information,
• determine internal structure information of the dental object from the internal light information,
• determine a first composed scan information that includes a difference between the internal structure information and one or more color channels of the plurality of color channels, and
• enhance the contrast in the first composed scan information by applying a contrast enhancement algorithm to each of the one or more color channels of the plurality of color channels.
5. The intraoral scanning system according to any of previous items, wherein a contrast enhancement algorithm is applied to the internal structure information and/or the first composed scan information.
6. The intraoral scanning system according to any of previous items, wherein the minimized difference is between pixels of the internal structure information and corresponding pixels of the plurality of color channels with assigned weight coefficients.
7. The intraoral scanning system according to any of previous items, wherein the enhancement of distinguishability of one or more internal structures of the dental object in the first composed scan information is determined by minimizing the difference between the internal structure information and the plurality of color channels by adjusting one or more of the weight coefficients for the plurality of color channels.
8. The intraoral scanning system according to any of previous items, wherein the minimized difference is determined by adjusting the weight coefficient for each of the plurality of color channels.
9. The intraoral scanning system according to any of previous items, wherein the minimized difference is determined based on a least square method of the difference between the internal structure information and the plurality of color channels with assigned weight coefficients.
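The least square method of item 9 can be sketched directly with ordinary least squares: solve for the weight coefficients that minimize the squared difference between the internal structure information and the weighted sum of color channels over all pixels. The stacking of flattened channels into a design matrix is an illustrative choice, not language from the items.

```python
import numpy as np

def fit_channel_weights(ir, channels):
    # Solve for weights w minimizing ||IR - sum_i w_i * C_i||^2
    # over all pixels (ordinary least squares).
    A = np.stack([c.ravel() for c in channels], axis=1)  # (pixels, channels)
    b = ir.ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

def composed_scan(ir, channels, weights):
    # Composed scan information: the (minimized) difference between the
    # internal structure information and the weighted color channels.
    return ir - sum(w * c for w, c in zip(weights, channels))
```

With the fitted weights, the residual image highlights exactly the pixels where the internal structure information deviates from what the visible channels predict, which is the quantity the composed scan information is built from.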
10. The intraoral scanning system according to any of previous items, wherein the plurality of color channels and the internal structure information includes a plurality of pixels, wherein the one or more processors is configured to enhance contrast of the plurality of color channels and/or in the internal structure information by applying a contrast enhancement algorithm.
11. The intraoral scanning system according to any of previous items, wherein the contrast enhancement algorithm includes:
• determining a histogram distribution of a plurality of pixel values of the plurality of pixels, wherein the plurality of pixel values is within a first pixel value range,
• determining a mean pixel value based on the plurality of pixel values,
• truncating one or more of the plurality of pixels which has a pixel value that is outside at least three standard deviations from the mean pixel value, wherein the at least three standard deviations include positive pixel values or both positive and negative pixel values, and
• extending pixel values of the truncated histogram distribution such that the pixel values of the truncated histogram distribution are within a second pixel value range, and wherein the second pixel value range is larger than the first pixel value range.
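The contrast enhancement algorithm of item 11 can be sketched as follows: truncate pixel values beyond a chosen number of standard deviations of the mean, then stretch the surviving range onto a wider output range. Clipping the outliers to the truncation bounds (rather than discarding those pixels) is an assumption made so that the output image keeps its shape.

```python
import numpy as np

def contrast_enhance(channel, n_std=3.0, out_range=(0.0, 255.0)):
    # Truncate values outside n_std standard deviations of the mean,
    # then extend the truncated histogram onto the wider output range.
    mean, std = channel.mean(), channel.std()
    lo, hi = mean - n_std * std, mean + n_std * std
    truncated = np.clip(channel.astype(float), lo, hi)
    lo2, hi2 = truncated.min(), truncated.max()
    span = max(hi2 - lo2, 1e-9)
    out_lo, out_hi = out_range
    return (truncated - lo2) / span * (out_hi - out_lo) + out_lo
```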
12. The intraoral scanning system according to any of previous items, wherein the first composed scan information includes a plurality of pixels, and the one or more processors is configured to enhance contrast in the first composed scan information based on the contrast enhancement algorithm.
13. The intraoral scanning system according to any of previous items, wherein the one or more processors is configured to determine a contrast profile for an area-of-interest in the composed scan information or a fused composed scan information.
14. The intraoral scanning system according to any of previous items, wherein the contrast profile includes contrast differences between pixel values of selected pixels of the plurality of pixels of the first composed scan information, wherein the selected pixels are within the area-of-interest.
15. The intraoral scanning system according to any of previous items, wherein the one or more processors is configured to determine a dental feature within the area-of-interest of the dental object by comparing the contrast profile to a reference contrast profile, wherein the reference contrast profile is stored in a memory unit of the system, and wherein the reference contrast profile corresponds to a type of dental feature.
16. The intraoral scanning system according to any of previous items, wherein the comparing is based on a shape and/or a size of the contrast profile.
17. The intraoral scanning system according to any of previous items, wherein the system includes a Bayer filter arranged in front of the image sensor unit, such that the visible light information and internal light information is forwarded to the image sensor unit by the Bayer filter, and wherein the plurality of color channels of the visible light information includes red, blue and green wavelengths filtered by the Bayer filter.
18. The intraoral scanning system according to any of previous items, wherein the plurality of color channels of the visible light information includes fluorescence red and fluorescence green wavelengths filtered by the Bayer filter.
19. The intraoral scanning system according to any of previous items, wherein the plurality of color channels includes red, blue, green, fluorescence red and fluorescence green wavelengths.
20. The intraoral scanning system according to any of previous items, wherein the one or more processors is configured to determine a second composed scan information that includes a difference between the internal structure information and one or more of the plurality of color channels with assigned weight coefficients, and wherein the one or more of the plurality of color channels of the first composed scan information is different from the one or more of the plurality of color channels of the second composed scan information.
21. The intraoral scanning system according to any of previous items, wherein the one or more processors is configured to determine a fused composed scan information that includes a difference or summation between the first composed scan information and the second composed scan information.
22. The intraoral scanning system according to any of previous items, wherein the contrast in the fused composed scan information is adjusted by the contrast enhancement algorithm.
23. The intraoral scanning system according to any of previous items, comprising a displaying unit configured to display the first composed scan information and the second composed scan information in a window of the displaying unit, and wherein the first composed scan information is displayed having a first main color, and the second composed scan information is displayed having a second main color, and wherein the first main color is different from the second main color.
24. The intraoral scanning system according to any of previous items, wherein the one or more processors is configured to determine a three-dimensional model of the dental object based on one or more of the plurality of color channels.
25. The intraoral scanning system according to any of previous items, comprising a displaying unit configured to display the first composed scan information or the fused composed scan information in a first window of the displaying unit.
26. The intraoral scanning system according to any of previous items, wherein the displaying unit is configured to display the first composed scan information and the 3D model of the dental object in the first window, or, to display the first composed scan information and/or the second composed scan information in a first window of the displaying unit and the 3D model in a second window of the displaying unit.
27. The intraoral scanning system according to any of previous items, and wherein the second composed scan information or the fused composed scan information is displayed in the first window or a second window of the displaying unit.
28. The intraoral scanning system according to any of previous items, wherein the infrared wavelength is emitted at a first polarization, and the captured internal images include a second polarization, wherein the first polarization is orthogonal to the second polarization.
29. The intraoral scanning system according to any of previous items, wherein the visible wavelength is emitted at a third polarization, wherein the third polarization is similar to the first polarization.
30. The intraoral scanning system according to any of previous items, wherein the one or more processors include a glare reduction algorithm configured to remove a glare effect from the internal structure information, wherein the glare reduction algorithm is configured to:
• determine a common object in the internal structure information of the captured internal images and a subsequent internal structure information,
• determine a change in position of the common object in the internal structure information and the subsequent internal structure information, and
• remove the common object from the internal structure information and the subsequent internal structure information if the position of the common object has changed.
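The glare reduction of item 30 can be sketched by tracking a bright object across two consecutive internal-structure frames: if the object moved, it tracks the scanner rather than the tooth and is treated as glare. Locating the common object as the centroid of pixels above a brightness threshold is a simplification for illustration; the items do not specify how the common object is detected.

```python
import numpy as np

def bright_centroid(frame, threshold):
    # Locate the bright "common object" as the centroid of pixels above
    # a brightness threshold (an assumed, simplified detector).
    ys, xs = np.nonzero(frame > threshold)
    if len(ys) == 0:
        return None
    return ys.mean(), xs.mean()

def remove_moving_glare(frame, next_frame, threshold, min_shift=1.0):
    # If the bright object changed position between consecutive frames,
    # remove it from both frames (item 30's moving-glare condition).
    c0 = bright_centroid(frame, threshold)
    c1 = bright_centroid(next_frame, threshold)
    if c0 is None or c1 is None:
        return frame, next_frame
    shift = np.hypot(c0[0] - c1[0], c0[1] - c1[1])
    if shift >= min_shift:
        frame = np.where(frame > threshold, 0.0, frame)
        next_frame = np.where(next_frame > threshold, 0.0, next_frame)
    return frame, next_frame
```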
31. The intraoral scanning system according to any of previous items, wherein the channel-based color fusion includes:
• determine a plurality of color channels of the first composed scan information or a fused composed scan information,
• determine a mean pixel value threshold for each of the plurality of color channels and the internal structure information based on a histogram distribution of pixel values of each of the plurality of color channels and the internal structure information,
• remove corresponding pixels of each of the plurality of color channels and the internal structure information where the respective pixel values are above the mean pixel value threshold for each of the plurality of color channels and the internal structure information, and
• combine the plurality of color channels into a new composed scan information or a new fused composed scan information.
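The channel-based color fusion of items 31 to 33 can be sketched as follows: each signal is masked against its own threshold derived from its cumulative histogram (the fraction of pixels kept sets the threshold, per item 33), the color channels are then combined, and only corresponding pixels that are positive in every channel survive (per item 32). Summation as the combination step is an assumption; the items only say "combine".

```python
import numpy as np

def channel_threshold(channel, keep_fraction):
    # Threshold from the cumulative histogram: the pixel value below
    # which keep_fraction of the pixels remain.
    return np.quantile(channel, keep_fraction)

def channel_based_color_fusion(channels, internal, keep_fraction=0.9):
    # Mask each color channel and the internal structure information
    # against its own threshold, then combine the color channels,
    # keeping only pixels positive in every channel.
    masked = [np.where(s > channel_threshold(s, keep_fraction), 0.0, s)
              for s in list(channels) + [internal]]
    color = masked[:-1]
    keep = np.all(np.stack(color) > 0.0, axis=0)
    return np.where(keep, np.sum(color, axis=0), 0.0)
```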
32. The intraoral scanning system according to item 31, wherein the combination of the plurality of color channels includes corresponding pixels which for each of the plurality of color channels have a pixel value that is larger than zero.
33. The intraoral scanning system according to item 31, wherein the histogram distribution is a cumulative histogram distribution, wherein a percentage of remaining pixels of each of the plurality of color channels and the internal structure information determines the level of the mean pixel value threshold.
34. The intraoral scanning system according to item 33, wherein the percentage of remaining pixels is above 10%, 40%, 60%, or 90%.
35. The intraoral scanning system according to any of previous items, wherein the one or more processors is configured to apply a plurality of conditional mask signals to one or more color channels of a plurality of composed scan information that includes the first composed scan information and at least a second composed scan information, wherein the plurality of conditional mask signals removes pixels of the plurality of composed scan information which do not fulfil the conditions determined by the plurality of conditional mask signals, and then the plurality of composed scan information is combined into a fused composed scan information.
36. The intraoral scanning system according to item 35, wherein the one or more processors is configured to apply a linear minimization algorithm to the plurality of composed scan information to remove pixels that are above a maximum mask pixel value threshold.
37. The intraoral scanning system according to any of previous items, wherein the one or more processors is configured to connect neighbouring pixels of the first composed scan information or a fused composed scan information when neighbouring pixels are above a pixel value threshold.
38. The intraoral scanning system according to item 37, wherein the connected neighbouring pixels corresponds to an area size in the first composed scan information, a plurality of composed scan information, or a fused composed scan information, and wherein the connected neighbouring pixels are categorized into a dental feature.
39. The intraoral scanning system according to item 37 or 38, wherein the connected neighbouring pixels are highlighted with a specific color in the first composed scan information or a fused composed scan information.
40. The intraoral scanning system according to any of items 37 to 39, wherein the one or more processors is configured to remove connected neighbouring pixels from the first composed scan information or a fused composed scan information based on an area size of the connected neighbouring pixels.
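The connected-pixel processing of items 37 to 40 can be sketched with a small flood-fill labeling step: pixels above a threshold are grouped into 4-connected regions, the area of each region is measured, and regions below a minimum area are removed. The hand-rolled labeling below is an illustrative stand-in for a library routine such as scipy.ndimage.label; 4-connectivity and the minimum-area removal rule are assumptions.

```python
import numpy as np

def connected_components(mask):
    # 4-connected flood-fill labeling of a boolean mask.
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue
        current += 1
        stack = [start]
        while stack:
            y, x = stack.pop()
            if not (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]):
                continue
            if not mask[y, x] or labels[y, x]:
                continue
            labels[y, x] = current
            stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

def remove_small_regions(image, pixel_threshold, min_area):
    # Remove connected regions whose area is below min_area, keeping
    # larger regions that may correspond to a dental feature.
    labels, n = connected_components(image > pixel_threshold)
    out = image.copy()
    for lab in range(1, n + 1):
        region = labels == lab
        if region.sum() < min_area:
            out[region] = 0.0
    return out
```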
Claims
1. An intraoral scanning system configured to provide a first composed scan information, the system includes a handheld intraoral scanning device that includes:
• a projector unit configured to emit light with different wavelengths during time periods onto at least a dental object, wherein the different wavelengths include an infrared wavelength and a visible wavelength,
• an image sensor unit configured to capture one or more visible images that each includes visible light information from reflected visible wavelength and to capture internal images that each includes internal light information from reflected infrared wavelength, and wherein the visible light information includes a plurality of color channels; wherein the system further comprises one or more processors at least operably connected to the image sensor unit, and the one or more processors is configured to:
• receive the visible light information and the internal light information,
• determine internal structure information of the dental object from the internal light information,
• assign a weight coefficient to each of the plurality of color channels of the visible light information,
• determine a first composed scan information that includes a difference between the internal structure information and one or more color channels of the plurality of color channels with assigned weight coefficients, and
• enhance distinguishability of one or more internal structures of the dental object in the first composed scan information by adjusting one or more of the weight coefficients of the one or more color channels.
2. The intraoral scanning system according to claim 1, wherein the distinguishability in the first composed scan information is enhanced by minimizing the difference between the internal structure information and the one or more color channels by adjusting one or more of the weight coefficients for the one or more color channels.
3. The intraoral scanning system according to claim 2, wherein the minimized difference is determined based on a least square method of the difference between the internal structure information and the one or more color channels with assigned weight coefficients.
4. The intraoral scanning system according to any of the previous claims, wherein the plurality of color channels and the internal structure information includes a plurality of pixels, wherein the one or more processors is configured to enhance contrast of the one or more color channels and/or in the internal structure information by applying a contrast enhancement algorithm.
5. The intraoral scanning system according to claim 4, wherein the contrast enhancement algorithm includes:
• determining a histogram distribution of a plurality of pixels, wherein the histogram distribution includes numbers of pixels of the plurality of pixels with a plurality of pixel values, wherein the plurality of pixel values is within a first pixel value range,
• determining a mean pixel value based on the plurality of pixel values,
• truncating one or more of the plurality of pixels which has a pixel value that is outside at least two standard deviations from the mean pixel value, and
• extending positive pixel values of the truncated histogram distribution, or, both negative and positive pixel values of the truncated histogram distribution, such that the plurality of pixel values of the truncated histogram distribution are within a second pixel value range, and wherein the second pixel value range is larger than the first pixel value range.
6. The intraoral scanning system according to claim 5, wherein the first composed scan information includes a plurality of pixels, and the one or more processors is configured to enhance contrast in the first composed scan information based on the contrast enhancement algorithm.
7. The intraoral scanning system according to any of the previous claims, wherein the system includes a Bayer filter arranged in front of the image sensor unit, such that the visible light information and internal light information is forwarded to the image sensor unit by the Bayer filter, and wherein the plurality of color channels of the visible light information includes red, blue and green wavelengths filtered by the Bayer filter.
8. The intraoral scanning system according to any of the previous claims, wherein the one or more processors is configured to determine a second composed scan information that includes a difference between the internal structure information and one or more of the plurality of color channels with assigned weight coefficients, and wherein the one or more of the plurality of color channels of the first composed scan information is different from the one or more of the plurality of color channels of the second composed scan information.
9. The intraoral scanning system according to claim 8, wherein the one or more processors is configured to determine a fused composed scan information that includes a difference or summation between the first composed scan information and the second composed scan information.
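The composed and fused scan information of claims 8 and 9 can be sketched as weighted per-channel arithmetic on image arrays. A minimal illustration, assuming simple element-wise operations; the function names, weights, and channel selection are illustrative, not taken from the application:

```python
import numpy as np

def composed_scan(internal, channels, weights):
    """Sketch of claim 8: difference between the internal structure
    information and a weighted combination of color channels."""
    weighted = sum(w * np.asarray(c, dtype=np.float64)
                   for w, c in zip(weights, channels))
    return np.asarray(internal, dtype=np.float64) - weighted

def fused_scan(first, second, mode="sum"):
    """Sketch of claim 9: fuse two composed scans by summation
    or difference."""
    first = np.asarray(first, dtype=np.float64)
    second = np.asarray(second, dtype=np.float64)
    return first + second if mode == "sum" else first - second
```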
10. The intraoral scanning system according to claim 9 and any of claims 4 to 6, wherein the contrast in the fused composed scan information is adjusted by the contrast enhancement algorithm.
11. The intraoral scanning system according to any of the previous claims, wherein the infrared wavelength is emitted at a first polarization, and the captured internal images include a second polarization, wherein the first polarization is orthogonal to the second polarization.
12. The intraoral scanning system according to any of the previous claims, wherein the one or more processors include a glare reduction algorithm that is configured to remove a glare effect from the internal structure information, wherein the glare reduction algorithm is configured to:
• determine a common object in the internal structure information of the captured internal images and a subsequent internal structure information,
• determine a change in position of the common object in the internal structure information and the subsequent internal structure information, and
• remove the common object from the internal structure information and the subsequent internal structure information if the position of the common object has changed.
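The glare reduction steps of claim 12 can be sketched as follows. This is a simplified stand-in: detecting the "common object" as a bright region above a fixed intensity threshold, and comparing centroids between two frames, are assumptions; the application does not specify the detection method or thresholds.

```python
import numpy as np

def remove_moving_glare(frame_a, frame_b, bright=200.0, min_shift=2.0):
    """Sketch of claim 12: if a common bright object appears at a
    different position in two consecutive internal images, treat it
    as glare and remove it from both."""
    a = np.asarray(frame_a, dtype=np.float64).copy()
    b = np.asarray(frame_b, dtype=np.float64).copy()
    mask_a, mask_b = a > bright, b > bright
    if not (mask_a.any() and mask_b.any()):
        return a, b  # no common bright object detected in both frames
    # Centroid of the bright region in each frame
    ca = np.array(np.nonzero(mask_a)).mean(axis=1)
    cb = np.array(np.nonzero(mask_b)).mean(axis=1)
    if np.linalg.norm(ca - cb) >= min_shift:
        # The object's position changed -> classify as glare, remove it
        a[mask_a] = 0.0
        b[mask_b] = 0.0
    return a, b
```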
13. The intraoral scanning system according to any of the previous claims, wherein the one or more processors is configured to determine a three-dimensional model of the dental object based on one or more of the plurality of color channels.
14. The intraoral scanning system according to any of the previous claims, wherein distinguishability of one or more internal structures of the dental object in the first composed scan information is further enhanced by applying a channel-based color fusion to the first composed scan information.
15. The intraoral scanning system according to claim 14, wherein the channel-based color fusion includes:
• determining a plurality of color channels of the first composed scan information or a fused composed scan information,
• determining a mean pixel value threshold for each of the plurality of color channels and the internal structure information based on a histogram distribution of pixel values of each of the plurality of color channels and the internal structure information,
• removing corresponding pixels of each of the plurality of color channels and the internal structure information where the respective pixel values are above the mean pixel value threshold for each of the plurality of color channels and the internal structure information, and
• combining the plurality of color channels into a new composed scan information or a new fused composed scan information.
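The channel-based color fusion of claim 15 can be sketched per channel as follows. A minimal illustration only: using the plain per-channel mean as the "mean pixel value threshold", and zeroing (rather than otherwise removing) pixels above it, are assumptions not specified in the application.

```python
import numpy as np

def channel_based_fusion(channels):
    """Sketch of claim 15: for each channel, derive a mean-based
    threshold from its pixel values, suppress pixels above that
    threshold, and recombine the channels."""
    fused = []
    for ch in channels:
        ch = np.asarray(ch, dtype=np.float64).copy()
        threshold = ch.mean()   # mean pixel value threshold
        ch[ch > threshold] = 0.0  # remove pixels above the threshold
        fused.append(ch)
    # Combine the processed channels into a new composed image
    return np.stack(fused, axis=-1)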
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| DKPA202470093 | 2024-03-27 | ||
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025202065A1 (en) | 2025-10-02 |
Family
ID=95151569
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/EP2025/057816 (WO2025202065A1, pending) | An intraoral scanning system for improving composed scan information | 2024-03-27 | 2025-03-21 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025202065A1 (en) |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP2442720B1 (en) | 2009-06-17 | 2016-08-24 | 3Shape A/S | Focus scanning apparatus |
| US20210128282A1 (en) * | 2016-07-27 | 2021-05-06 | Align Technology, Inc. | Methods and apparatuses for forming a three-dimensional volumetric model of a subject's teeth |
| WO2025078622A1 (en) * | 2023-10-11 | 2025-04-17 | 3Shape A/S | System and method for intraoral scanning |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| JP5814698B2 (en) | Automatic exposure control device, control device, endoscope device, and operation method of endoscope device | |
| JP5968944B2 (en) | Endoscope system, processor device, light source device, operation method of endoscope system, operation method of processor device, operation method of light source device | |
| CN106456292B (en) | System, method, apparatus for collecting color information about an object undergoing 3D scanning | |
| JP6478984B2 (en) | Intraoral imaging method and system using HDR imaging and removing highlights | |
| CN113499160A (en) | Intraoral scanner with dental diagnostic capability | |
| CN114445388B (en) | Multispectral image recognition method, device and storage medium | |
| CN112752535B (en) | Medical image processing device, endoscope system, and method for operating medical image processing device | |
| JP7335399B2 (en) | MEDICAL IMAGE PROCESSING APPARATUS, ENDOSCOPE SYSTEM, AND METHOD OF OPERATION OF MEDICAL IMAGE PROCESSING APPARATUS | |
| WO2022014235A1 (en) | Image analysis processing device, endoscopy system, operation method for image analysis processing device, and program for image analysis processing device | |
| JP7231139B2 (en) | External light interference elimination method | |
| WO2025202065A1 (en) | An intraoral scanning system for improving composed scan information | |
| WO2025078622A1 (en) | System and method for intraoral scanning | |
| JP7214886B2 (en) | Image processing device and its operating method | |
| JP2025509666A (en) | Computerized Dental Visualization | |
| WO2024260907A1 (en) | An intraoral scanning system for determining a visible colour signal and an infrared signal | |
| US20250390995A1 (en) | System and method for optimizing a three-dimensional model of a dental object | |
| WO2024146786A1 (en) | An intraoral scanning system for determining composed scan information | |
| WO2024260743A1 (en) | An intraoral scanning system for determining an infrared signal | |
| WO2025202067A1 (en) | An intraoral scanning system with improved scan sequence schedules | |
| US20250387075A1 (en) | Method for determining optical parameters to be displayed on a three-dimensional model | |
| WO2024260906A1 (en) | Volumetric measurements of an inner region of a dental object | |
| WO2025125551A1 (en) | An intraoral scanning system with aligned focused images to a 3d surface model | |
| EP4684361A1 (en) | Intraoral scanner system and method for superimposing a 2d image on a 3d model | |
| WO2024261153A1 (en) | A graphical representation of an inner volume of a dental object | |
| WO2024256722A1 (en) | System and method for intraoral scanning |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | | Ep: the epo has been informed by wipo that ep was designated in this application. Ref document number: 25714312; Country of ref document: EP; Kind code of ref document: A1 |