
WO2019178617A1 - Dento-craniofacial clinical cognitive diagnosis and treatment system and method - Google Patents


Info

Publication number
WO2019178617A1
WO2019178617A1 (PCT application PCT/US2019/023504)
Authority
WO
WIPO (PCT)
Prior art keywords
visual
report
images
ray
dento
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2019/023504
Other languages
English (en)
Inventor
Budi KUSNOTO
Ahmed KABOUDAN
Christoph Bourauel
Sameh Mohamed Talaat Taha MOHAMED
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Digibrain4 Inc
Original Assignee
Digibrain4 Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Digibrain4 Inc filed Critical Digibrain4 Inc
Publication of WO2019178617A1
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current


Classifications

    • A HUMAN NECESSITIES
      • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
        • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
          • A61B6/00 Apparatus or devices for radiation diagnosis; apparatus or devices for radiation diagnosis combined with radiation therapy equipment
            • A61B6/50 Apparatus or devices specially adapted for specific body parts; specially adapted for specific clinical applications
              • A61B6/51 Apparatus or devices specially adapted for dentistry
              • A61B6/501 Apparatus or devices specially adapted for diagnosis of the head, e.g. neuroimaging or craniography
            • A61B6/52 Devices using data or image processing specially adapted for radiation diagnosis
              • A61B6/5211 Devices involving processing of medical diagnostic data
                • A61B6/5217 Devices extracting a diagnostic or physiological parameter from medical diagnostic data
    • G PHYSICS
      • G06 COMPUTING OR CALCULATING; COUNTING
        • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
          • G06T7/00 Image analysis
            • G06T7/0002 Inspection of images, e.g. flaw detection
              • G06T7/0012 Biomedical image inspection
          • G06T2207/00 Indexing scheme for image analysis or image enhancement
            • G06T2207/10 Image acquisition modality
              • G06T2207/10116 X-ray image
            • G06T2207/20 Special algorithmic details
              • G06T2207/20081 Training; Learning
              • G06T2207/20084 Artificial neural networks [ANN]
            • G06T2207/30 Subject of image; Context of image processing
              • G06T2207/30004 Biomedical image processing
                • G06T2207/30036 Dental; Teeth
      • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
        • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
          • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
            • G16H50/30 ICT for calculating health indices; for individual health risk assessment

Definitions

  • the present system and method provide artificial intelligence systems and methods for automatic identification, localization, recognition, understanding, labelling, analyzing, assessing, deciding and creating a report deriving an outcome for patient treatment and consultation related to Dento-Craniofacial Visual Assets (‘DCVA’) media.
  • the present DCVA system and method may be capable of automatic identification, understanding, localization, recognition, labelling and metadata generation of DCVA, including anatomical modifiers like upper and lower, right side and left side for creating a report for patient treatment and consultation.
  • the present DCVA system and method may be capable of recognizing and rejecting non-relevant dento-craniofacial visual assets for creating a report for patient treatment and consultation.
  • the present DCVA system and method also may utilize a unique artificial intelligence search engine capable of accurately recognizing and picking the correct dento-craniofacial visual assets from large repositories for creating a report for patient treatment and consultation.
  • the present DCVA system and method may also utilize a unique artificial intelligence system, the “DCVA Classifier”, capable of auto-labelling, auto-generating metadata and persisting the results in query-friendly formats, like RDBMS and NoSQL databases and structured text formats, for creating a report for patient treatment and consultation.
  • the reports may be printed physical reports in an embodiment.
  • the data may further be used to teach other artificial intelligence systems.
  • the present DCVA system and method may also utilize a unique artificial intelligence search engine filter called a “DCVA Search Engine Booster Filter”, which may be capable of filtering and boosting the efficiency of the results returned from other World Wide Web search engines, such as Google®. Assets matching the searched terms are selected, while all others not relevant to the searched terms are discarded.
  • the results may be used for creating a report deriving an outcome for patient treatment and consultation.
  • the present DCVA system and method may utilize a unique artificial intelligence system called a “Dental Insurance Treatment Auto-Authorizer”, which may be capable of auto-generation of dental-insurance treatment authorization, by auto-generation of a Handicapping Labio-Lingual Deviation Index (HLD) score report, by automatic identification, localization, recognition, understanding, analyzing, and assessing quantitatively and qualitatively a patient’s multiple DCVA for creating a report for patient treatment and consultation.
  • the present DCVA system and method may utilize a unique artificial intelligence system called a “Landmarks Localizer”, which may be capable of auto-identification and localization of dento-craniofacial landmarks, including performing quantitative analysis based on landmark coordinates and relative positions for creating a report for patient treatment and consultation.
  • the reports may be physical reports in an embodiment.
  • the present DCVA system and method may also utilize a unique artificial intelligence system called an “Ectopic Eruption Discoverer”, which may be capable of discovering and localizing ectopically erupted and impacted teeth, by auto-inspection of panoramic x-rays for creating a report for patient treatment and consultation.
  • the present DCVA system and method may utilize unique artificial intelligence systems called a “Smart Composer” and a “Smart Decomposer”, which may be capable of auto-decomposing standard Orthodontics composite clinical images into constituent component images for creating a report for patient treatment and consultation.
  • the Smart Composer and Smart Decomposer systems may also be capable of reversing the operation, by correctly selecting the proper clinical images and creating a standard Orthodontics composite clinical image for creating a report for patient treatment and consultation.
  • the present DCVA system and method may utilize a unique artificial intelligence system called a “Smart Anonymizer”, which may be capable of anonymizing a patient’s data, by removing all textual and facial identifications from visual assets for creating a report for patient treatment and consultation.
  • the multiple artificial intelligence systems and methods in the present system and method are aimed at solving all the foregoing problems.
  • This present system and method provide artificial intelligence systems and methods for automatic identification, localization, recognition, understanding, labelling, analyzing, assessing, deciding and planning related to dento-craniofacial visual assets (‘DCVA’) for creating a report deriving an outcome for patient treatment and consultation.
  • the reports may be physical reports in one embodiment.
  • the Dental Classifier portion of the present system and method includes the steps of automatically: (1) typing DCVA, such as, for example, x-rays and clinical images of a patient, input into a computer; (2) categorizing each discovered type of asset relative to its nature (e.g., intra-oral, extra-oral, etc.); (3) classifying items pertaining to each type and category (e.g., bitewing, panoramic, etc.); (4) auto-correcting the orientation of the relevant asset according to standards; (5) recognizing anatomical modifiers (e.g., upper, lower, right, left) of the patient; (6) auto-generating accurate metadata relative to each asset; and finally (7) saving metadata to different types of query-ready formats for creating a report for patient treatment and consultation.
  • the Search Engine Filter Booster portion of the present system and method includes the steps of: (1) integrating and boosting generic web search engines; (2) automatically filtering results returned from generic search engines, like the Google® search engine, and presenting only the proper results to the end user; and (3) installing the results of the search locally to accurately query any existing DCVA repositories for creating a report for patient treatment and consultation.
  • the Smart Decomposer portion of the present system and method may include the steps of: (1) recognizing composite images of a patient; (2) discovering the constituent images presented in the composite image; (3) extracting (decomposing) each individual image from the composite image; and (4) saving each extracted image in its proper folder, using the proper identification, for creating a report for patient treatment and consultation.
  • the Smart Composer portion of the present system and method may include the steps of: (1) locating the proper image views within any computer folders; (2) creating a new composite image containing the proper views according to the required type of composite image; and (3) saving the newly created composite image in its proper folder, using the proper identification for creating a report for patient treatment and consultation.
  • the Smart Anonymizer portion of the present system and method may include the steps of: (1) recognizing textual and facial identifiers in DCVA; (2) discarding the recognized identifiers; (3) creating a new asset, free from any textual or facial identifier; and (4) saving the newly created anonymized asset in its proper folder, using the proper identification, for creating a report for patient treatment and consultation.
  • the Dental Insurance Treatment Auto-Authorizer portion of the present system and method may include the steps of: (1) accepting a patient’s lateral cephalometric x-ray; (2) accepting a patient’s panoramic x-ray; (3) accepting a patient composite image including five or eight clinical views; (4) feeding each type of presented patient asset to the proper present system and method AI engine (i.e., (i) the Landmarks Localizer, (ii) the Ectopic Eruption Discoverer, (iii) the Dental Arch Inspector); (5) analyzing the proper asset and producing the proper sections of the HLD Score Sheet; (6) consolidating all results in a single detailed report; (7) generating a summary report listing “Accepted” and “Rejected” cases; (8) providing the information via downloadable or non-downloadable versions; and (9) saving the consolidated report in its proper folder, using the proper identification, for creating a report for patient treatment and consultation.
  • the Landmarks Localizer portion of the present system and method may include the steps of: (1) recognizing and differentiating between right Lateral Cephalometric x-rays (standard view) and left view; (2) correcting the left view, by mirroring it, to the standard right view; (3) discovering and localizing major cephalometric landmarks related to dental insurance; (4) marking (drawing) each localized landmark; (5) performing the proper quantitative analysis on the localized points; (6) generating the HLD score report relative to the findings; (7) consolidating the results with the output results of other system and method engines to produce the final “Acceptance/Rejection” report; and (8) saving the generated report in its proper folder, using the proper identification, along with the marked x-rays for creating a report for patient treatment and consultation.
  • the Ectopic Eruption Discoverer portion of the present system and method may include the steps of: (1) recognizing panoramic x-rays; (2) analyzing the panoramic x-ray and localizing (i) Ectopic Eruptions (ii) Impactions (iii) Mixed Dentition; (3) marking (drawing) the area for each localized occurrence; (4) labelling each panoramic x-ray with the proper metadata; and (5) saving the generated report in its proper folder, using the proper identification, along with the marked x-rays for creating a report for patient treatment and consultation.
  • the present system and method does not rely on dental arch inspector analysis.
  • FIG. 1 illustrates a schematic diagram depicting how the datasets (DCVA) are acquired, and digitized into machine-readable format, if required.
  • FIG. 2 illustrates a schematic diagram of the computer system infrastructure, including all relevant layers.
  • FIG. 3 illustrates a block diagram summarizing the process of data preparation, labelling, one-hot-encoding, randomizing and dividing of the datasets into three functional groups of the present system and method.
  • FIG. 4 illustrates a schematic representing the hierarchy of DCVA classes, categories and types, including anatomical modifiers of the present system and method. Seventeen classes are depicted and are used to label the datasets, and translate the final inference results into English dental terms equivalent to the internal one-hot-encoded identifiers.
  • FIG. 5 illustrates schematics representing the technical building blocks and technical concepts of the neural network designed, implemented and used in one embodiment of the present system and method.
  • FIG. 6 illustrates a continuation of FIG. 5, showing additional technical concepts used and their explanation in one embodiment of the present system and method.
  • FIGS. 7, 8 and 9 illustrate a detailed listing of all layers of the neural network designed, implemented and used in one embodiment of the present system and method for the DCVA Classifier, Booster Filter, Decomposer, Composer, and Anonymizer.
  • FIG. 10 illustrates a detailed diagram of the neural network designed, implemented and used in one embodiment of the present system and method for the DCVA Localizer and Ectopic Discoverer.
  • FIG. 11 illustrates one embodiment of a block diagram describing the DCVA classifier mode of action, including input and output data of the present system and method.
  • FIG. 12 illustrates one embodiment of a block diagram describing the Google search-engine filter-booster, including input and output.
  • FIG. 13 illustrates one embodiment of a block diagram describing dental insurance treatment authorizer process, all tasks, and input/output data of the present system and method.
  • FIG. 14 illustrates one embodiment of a block diagram describing landmarks localizer process, tasks, and input/output data of the present system and method.
  • FIG. 15 illustrates one embodiment of a block diagram describing ectopic eruption discoverer process, tasks, and input/output data of the present system and method.
  • FIG. 16 illustrates one embodiment of a block diagram describing the smart decomposer process, tasks, and input/output data of the present system and method.
  • FIG. 17 illustrates one embodiment of a block diagram describing the smart composer and smart anonymizer processes, tasks, and input/output data of the present system and method.
  • the present system and method provide artificial intelligence systems and methods for automatic identification, localization, recognition, understanding, labelling, analyzing, assessing, deciding and planning related to dento-craniofacial visual assets (“DCVA”).
  • the term “object” refers to any physical item that may be suitable for scanning and imaging with, for example, an x-ray scanner.
  • examples of objects may include, but are not limited to, portions of the body of a human or animal, or models that correspond to the body of the human or animal.
  • objects include the interior of a mouth of the patient, a negative dental impression formed in compliance with the interior of the mouth, and a dental image of the teeth in occlusion.
  • the term “element” refers to a portion of the object, and an object comprises one or more elements.
  • at least one element is referred to as a “static” or “reference” element that remains in a fixed location relative to other elements in the object.
  • a “dynamic” element is another type of element that may move over time in relation to other elements in the object.
  • the palate is an example of a static element
  • the teeth are examples of dynamic elements.
  • the present system and method 100 is illustrated as a schematic diagram depicting how the dento-craniofacial visual assets (DCVA) are generated.
  • the DCVA are generally either x-rays or images (such as photographs).
  • X-rays 150 are generally generated from a dental x-ray machine, a panoramic x-ray machine, a cephalometric x-ray machine, a CT or CBCT scanner, and/or an MRI machine 152. Older modalities generally produced only printed films, which would then have to be digitized using an x-ray scanner (x-ray digitizer) 158 and stored in proper file formats 160. Images, collected from cameras 154, mobile cameras 156 or other sources, may also be digitized 159 to an acceptable format 160, if they are not already in machine-readable formats.
  • the present method and system utilizes a computer 200.
  • the computer 200 may include processors 210, random access memory (RAM) 211, a non-volatile data storage device (disk) 240, an output display device, and one or more input devices 220.
  • the processor 210 may include a central processing unit (CPU) 212 and multiple graphical processing units (GPU) 214.
  • the CPU 212 may be a fast processor from, for example, the Intel x86 family, with 12 virtual cores.
  • Two GPUs 214 may include digital processing hardware that is configured for floating point number crunching, in a massively parallel configuration.
  • Each GPU 214 (NVIDIA Volta) may include 5120 CUDA cores, 640 Tensor cores and 12 GB HBM2 RAM, for a total of, for example, 10240 CUDA cores, 1280 Tensor cores and 24 GB HBM2 RAM.
  • the CPU 212 and the two GPUs 214 may be discrete components that communicate using an input-output (I/O) interface, such as a PCI Express data bus.
  • the processor 210 may be operatively connected to the storage disk 240 to store and retrieve digital data from the storage disk 240 during operation of the present system and method.
  • the storage disk 240 may be a solid-state data storage device, backed by magnetic disks or other suitable devices, that stores digital data for storage and retrieval by the processor 210.
  • the storage disk 240 may be a non-volatile data storage device that retains stored data in the absence of electrical power. While the storage disk 240 is depicted in the computer 200, some or all of the data stored in the storage disk 240 may optionally be stored in one or more external data storage devices that are operatively connected to the computer 200 by, for example, multiple universal serial bus (USB) units and/or through a local area network (LAN).
  • the present software applications 244 operate in conjunction with an underlying operating system (OS) 249 and software libraries 248.
  • the storage disk 240 may also store digitized data (dento-craniofacial visual assets) 241 and/or labels 242 that the computer 200 receives from different sources 100.
  • the storage disk 240 may store the programming languages 246 used in aspects of the current system and method, mainly a C++ compiler and Python 3.5 interpreters.
  • the storage disk 240 may additionally store the frameworks 248 used to build the neural networks of the current system and method, mainly TensorFlow 1.10.
  • the frameworks may be built over different libraries 248, mainly CUDA, CUDNN and other NVIDIA GPU based libraries. All these libraries may be also stored on the storage disk 240.
  • the present RAM 211 of the system and method may include one or more volatile data storage devices having dynamic RAM devices.
  • the processor 210 may be operatively connected to the RAM 211 to, for example, enable storage and retrieval of digital data.
  • the CPU 212 and the GPU 214 are each connected to separate RAM devices.
  • the processor 210 and data processing devices in the computer 200 store and retrieve data from the RAM 211.
  • both the RAM 211 and the storage disk 240 are referred to as a “memory”, and program data, scanned sensor data, graphics data, and any other data processed in the computer 200 are stored in either or both of the storage disk 240 and RAM 211 during operation.
  • the display device is operatively connected to one of the GPUs 214 in the processor 210 and is configured to display textual and graphics objects and elements in the objects.
  • the process 300 depicted in FIG. 3, outlines the different tasks required in order to generate the proper datasets for training, validation and testing.
  • the first task/step in the present system and method is the preparation of data 300 by data inspection 310.
  • Each DCVA may be inspected to ensure it conforms to the standards required by the present system and method and to ensure it belongs to one of the targeted classes.
  • the data may then be labeled 312, using the English (or alternative) terms identifiers from FIG. 4.
  • the English-term identifiers are then translated to one-hot-encoding 316 and saved to the storage disk 240 in comma separated values (CSV) format 314.
  • one-hot-encoding 316 is performed.
  • One-hot-encoding 316 is a process by which categorical identifiers (variables) are converted into a mathematical form that can be provided to deep learning algorithms, enabling better training and prediction.
  • One-hot-encoding translates each identifier into a group of bits among which the legal combinations of values are only those with a single high (1) bit and all the others low (0).
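As a minimal illustration of this encoding (plain Python; the class names below are hypothetical stand-ins for the FIG. 4 identifiers):

```python
# A minimal sketch of one-hot-encoding: map each class identifier to a bit
# vector where exactly one bit is high (1) and all others are low (0).
classes = ["panoramic", "bitewing", "lateral_cephalometric"]

def one_hot(label, classes):
    """Return a bit vector with a single high (1) bit for `label`."""
    vec = [0] * len(classes)
    vec[classes.index(label)] = 1
    return vec

print(one_hot("bitewing", classes))  # [0, 1, 0]
```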
  • the fully inspected datasets may then be randomized 320 and distributed into, for example, three datasets 321.
  • the training 326 dataset may be 85% of the total assets, while the validation 324 dataset may be 10% and the testing 322 dataset may be 5% of the total assets.
  • CSV files 327, 325, 323 are generated for each group dataset and saved to the storage disk 240.
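A sketch of this randomize-and-split step, assuming the labeled assets are (filename, label) pairs and using the 85/10/5 proportions described above; file and variable names are illustrative:

```python
import csv
import random

# Hypothetical labeled assets: (filename, class identifier) pairs produced
# by the labelling step 312.
labeled_assets = [("pano_%03d.jpg" % i, "panoramic") for i in range(100)]

random.shuffle(labeled_assets)                       # randomization 320
n = len(labeled_assets)
train = labeled_assets[: int(0.85 * n)]              # 85% training 326
val = labeled_assets[int(0.85 * n): int(0.95 * n)]   # 10% validation 324
test = labeled_assets[int(0.95 * n):]                # 5% testing 322

for name, rows in (("train.csv", train), ("val.csv", val), ("test.csv", test)):
    with open(name, "w", newline="") as f:
        csv.writer(f).writerows(rows)                # CSV files 327, 325, 323
```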
  • In FIG. 4, in an embodiment, a schematic representing the hierarchy of DCVA classes, categories and types, including anatomical modifiers, is provided. Sixteen classes are depicted and are used to label the datasets, and translate the final inference results into English dental terms equivalent to the internal one-hot-encoded identifiers.
  • In FIG. 5, in an embodiment, a schematic representation of all the technical building blocks of the neural network designed, implemented and used in the current system and method is provided.
  • Block 501 represents the building block of convolutional neural networks (CNN) designed and implemented in an embodiment of the present system and method.
  • Convolutional networks were inspired by biological processes [1] in that the connectivity pattern between neurons resembles the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a restricted region of the visual field known as the receptive field. The receptive fields of different neurons partially overlap such that they cover the entire visual field.
  • a CNN consists of an input and an output layer, as well as multiple hidden layers.
  • the hidden layers of a CNN typically consist of convolutional layers, RELU 502, 604 layer (activation function), pooling layers 608, fully connected layers and normalization layers.
  • the graph 604 plots the behavior of the ReLU function.
  • Pooling layers 504, 505 combine the outputs of neuron clusters at one layer into a single neuron in the next layer [2]. For example 608, max pooling uses the maximum value from each cluster of neurons at the prior layer, while average pooling uses the average value from each cluster of neurons at the prior layer.
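A toy NumPy illustration of the two pooling functions on a made-up 4x4 feature map with 2x2 clusters:

```python
import numpy as np

# A made-up 4x4 feature map pooled over 2x2 clusters: max pooling keeps the
# maximum of each cluster, average pooling keeps the mean.
fmap = np.array([[1, 3, 2, 0],
                 [5, 2, 1, 1],
                 [0, 4, 7, 2],
                 [1, 1, 3, 3]], dtype=float)

blocks = fmap.reshape(2, 2, 2, 2)      # axes: row block, row, col block, col
print(blocks.max(axis=(1, 3)))         # max pooling -> [[5. 2.] [4. 7.]]
print(blocks.mean(axis=(1, 3)))        # avg pooling -> [[2.75 1.  ] [1.5  3.75]]
```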
  • Fully connected layers 505 connect every neuron in one layer to every neuron in another layer. It is in principle the same as the traditional multi-layer perceptron neural network (MLP).
  • Dropout 506 refers to dropping out units (both hidden and visible) in a neural network. Dropout is a regularization technique for reducing overfitting in neural networks by preventing complex co-adaptations on training data. An example showing the original network 530 before dropout, and the network 532 after dropout is shown in FIG. 5.
  • Cross-entropy loss 602 measures the performance of a classification model whose output is a probability value between 0 and 1. Cross-entropy loss increases as the predicted probability diverges from the actual label. So predicting a probability of .012 when the actual observation label is 1 would be bad and result in a high loss value. A perfect model would have a log loss of 0.
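The numeric example above can be reproduced directly; the loss is the negative log of the probability assigned to the true class:

```python
import math

# Cross-entropy (log) loss for a single prediction: -log of the probability
# assigned to the true class.
def cross_entropy(p_true_class):
    return -math.log(p_true_class)

print(cross_entropy(0.012))  # ~4.42: predicting 0.012 for a true label is costly
print(cross_entropy(1.0))    # 0.0: the perfect model
```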
  • the SoftMax function 507 takes an un-normalized vector and normalizes it into a probability distribution. That is, prior to applying SoftMax, some vector elements could be zero, or greater than one, and might not sum to one; but after applying SoftMax, each element lies in the interval [0, 1] and all elements sum to one. An illustration of the SoftMax function 606 is shown in FIG. 6.
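A minimal NumPy sketch of the SoftMax normalization, with arbitrary example scores:

```python
import numpy as np

# SoftMax: normalize an arbitrary real vector into a probability distribution.
def softmax(z):
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()

scores = np.array([2.0, 1.0, 0.1, -1.0])
probs = softmax(scores)
print(probs, probs.sum())       # every element in [0, 1]; the sum is 1.0
```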
  • Class bins 508 are the last nodes in the network; their number is exactly equal to the number of classes the network is able to predict. After prediction, each bin contains the probability of its class for the asset under test.
  • the network designed and implemented in the current system and method is conceived as a deep network more than 40 layers deep. A standard implementation of convolutional neural networks of such depth would result in approximately 60 million trainable parameters, increasing the computational and time requirements for training to a prohibitive level.
  • the present system and method implement a compression repeating-block 520, 522, inspired by the Network in Network model [4] and the SqueezeNet model [5]. This design reduced the total number of trainable parameters from approximately 60 million down to 729,165. The total memory requirements and training time fell within the capabilities of the computer 200, processor 210, and the two NVIDIA Volta GPUs 214.
  • FIGS. 7, 8, and 9 are a full listing of the architectural model of the full neural network designed and implemented in the present system and method.
  • Layer names are printed, as well as their types and operations.
  • the input layer expects square images of 227 pixels per side and a channel depth of three.
  • the present system and method use, for example, three color channels to ensure that the system and method can deal with color and grayscale images equally easily.
  • subsequent layers are mostly composed by a repeating structure 512, 522, FIG. 5, inspired by the Network In Network model [4] and the SqueezeNet model [5].
  • Each repeating structure starts with X channels of 1x1-filter convolutions, followed by two parallel convolutions: the first with 4X channels of 1x1 filters, the second with 4X channels of 3x3 filters. This is followed by concatenation of the results of both parallel convolutions into an 8X-channel structure.
  • the variable X ranges from 16 to 128, resulting in structure composition depths starting at 16, 64, 128 and ending at 64, 256, 512 depth channels.
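A plausible Keras rendering of this repeating compression block is sketched below (TensorFlow 1.x-era API, matching the TensorFlow 1.10 framework named earlier). The layer arrangement follows the description above, while the function name and activation placement are assumptions; the authoritative layer listing is the one in FIGS. 7, 8 and 9:

```python
import tensorflow as tf

def compression_block(inputs, x):
    """X channels of 1x1 convolutions feeding two parallel expansions
    (4X channels of 1x1 and 4X channels of 3x3 filters), concatenated
    into an 8X-channel output (cf. blocks 520, 522)."""
    squeeze = tf.keras.layers.Conv2D(x, (1, 1), activation="relu")(inputs)
    expand_1x1 = tf.keras.layers.Conv2D(4 * x, (1, 1), activation="relu")(squeeze)
    expand_3x3 = tf.keras.layers.Conv2D(4 * x, (3, 3), padding="same",
                                        activation="relu")(squeeze)
    return tf.keras.layers.concatenate([expand_1x1, expand_3x3])

inputs = tf.keras.layers.Input(shape=(227, 227, 3))  # square 227-pixel, 3-channel input
features = compression_block(inputs, 16)             # X = 16 -> 64 + 64 = 128 channels
```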
  • Rectified linear units (ReLU) 502, 604 are used throughout all relevant layers. Two pooling functions were implemented: maximum pooling (MaxPooling), used throughout the network, except for the layer before the last, where average pooling (AvgPooling) is used.
  • Average pooling is more meaningful and interpretable, as it enforces correspondence between feature maps and categories, which is made possible by a stronger local modeling [4]. Dropout 506 was used only at the full depth of the network. SoftMax 507, 606 is applied at the last layer, feeding the final class bins 508.
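Continuing the sketch above, a hypothetical network tail with average pooling before the last layer, dropout only at the full depth, and SoftMax feeding the class bins; the pool size, dropout rate and class count here are assumptions:

```python
# Hypothetical tail of the network: average pooling before the last layer,
# dropout at the full depth, SoftMax feeding the class bins. Pool size (2x2),
# dropout rate (0.5) and class count (17) are illustrative assumptions.
x = tf.keras.layers.AveragePooling2D(pool_size=(2, 2))(features)
x = tf.keras.layers.Dropout(0.5)(x)
x = tf.keras.layers.Flatten()(x)
class_bins = tf.keras.layers.Dense(17, activation="softmax")(x)
model = tf.keras.Model(inputs, class_bins)
```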
  • the neural network in FIGS. 7, 8, and 9 designed and implemented in the present system and method was trained using a dataset composed, on average, of about 3,000 x-rays/images for each type/category/class, including the same number of miscellaneous images from the Common Objects in Context collection [6], for a total of about 40,000 images.
  • Original images have a width within the range of 1000-2000 pixels and a height within 1000-2000 pixels. All images were converted to three-channel JPG format.
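A minimal Pillow sketch of this conversion, with placeholder file names, also resizing to the 227x227 input the network expects:

```python
from PIL import Image

# Convert an original scan (1000-2000 pixels per side) to three-channel JPG,
# then resize to the 227x227 network input. File names are placeholders.
img = Image.open("pano_001.tif").convert("RGB")   # force three channels
img.resize((227, 227)).save("pano_001.jpg", "JPEG")
```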
  • the neural network in FIGS. 7, 8, and 9 designed and implemented in the present system and method was trained for 500 epochs.
  • the present system and method use the Root Mean Square Propagation (RMSprop) optimizer in order to adapt the learning rate for each of the parameters.
  • The starting learning rate is 0.0001.
  • Weights are initialized using the Glorot uniform initializer, which draws samples from a uniform distribution within [-limit, limit], where limit = sqrt(6 / (M + L)), M is the number of input units in the weight tensor, and L is the number of output units in the weight tensor.
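Put together, a hypothetical training configuration matching this description; Glorot-uniform initialization is the Keras default for convolutional and dense kernels, and the data arrays are assumed to come from the earlier preparation steps:

```python
# RMSprop with a starting learning rate of 0.0001, categorical cross-entropy
# and the accuracy metrics reported below; trained for 500 epochs.
# `model`, `x_train`, `y_train`, `x_val`, `y_val` are assumed from earlier steps.
model.compile(optimizer=tf.keras.optimizers.RMSprop(lr=1e-4),
              loss="categorical_crossentropy",
              metrics=["categorical_accuracy", "top_k_categorical_accuracy"])
model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=500)
```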
  • the neural network designed and implemented in the present system and method achieved a near-perfect accuracy score: loss 0.0120, categorical accuracy 0.9961, top-k categorical accuracy 1.0000, validation loss 0.1282, validation categorical accuracy 0.9749, and validation top-k categorical accuracy 0.9985.
  • the trained neural network in FIGS. 7, 8, and 9, designed and implemented in the present system and method, is saved to a Hierarchical Data Format version 5 (HDF5) file on disk 240.
  • the saved model contains the model's configuration (topology), the model's weights, and optimizer's state.
  • the saved model can be re-instantiated in the exact same state, without any of the code used for model definition or training. This file is used to seed all subsequent predictions.
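A sketch of this save/re-instantiate round trip; the file name is illustrative:

```python
# The HDF5 file carries the topology, weights and optimizer state, so the
# model can be restored without any of the training code.
model.save("dcva_classifier.h5")                           # saved to disk 240
restored = tf.keras.models.load_model("dcva_classifier.h5")
```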
  • the block diagram shown in FIG. 11 summarizes the process of predicting the correct type/category/class of DCVA, which can range from a single asset up to any number of assets 1102 stored and fed from disk storage.
  • the neural network in FIGS. 7, 8, and 9, designed, implemented and trained in the present system and method, is retrieved from the HDF5 file 1103 saved at the end of training, containing the model's configuration (topology), the model's weights, and the optimizer's state. Once the file is loaded into memory 211 (RAM), it represents a re-instantiation of the trained model in the exact same state, ready for prediction. The average prediction time is less than 0.02 sec.
  • the re-instantiated classifier 1104 of the present system and method will predict the correct type/category/class for each asset and automatically generate the corresponding metadata, to be saved in many formats, mainly CSV files 1110, a relational database 1112 or a NoSQL database 1114.
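A hypothetical sketch of this predict-and-persist step; `asset_batch`, `file_names` and `class_names` are assumed inputs:

```python
import csv
import numpy as np

# Classify a batch of assets and persist the metadata as CSV rows; a
# relational or NoSQL database could be written instead.
probs = restored.predict(asset_batch)                 # N x num_classes probabilities
labels = [class_names[i] for i in np.argmax(probs, axis=1)]
with open("metadata.csv", "w", newline="") as f:
    csv.writer(f).writerows(zip(file_names, labels))
```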
  • the re-instantiated classifier 1104 of the present system and method will recognize and reject all non-relevant assets 1108.
  • the classifier 1104 may be used to search for a specific DCVA class from within any repositories, returning only the assets pertaining to the searched-for class.
  • the present search-engine filter-booster shown in FIG. 12 may be a plugin capable of filtering the results returned from generic search engines.
  • the filtration process will return only the assets pertaining to the searched-for terms.
  • the search engine will submit the search phrase (terms) to the Google search engine 1202 transparently in the background, then collect the results returned by the Google search engine.
  • the returned results 1204 will be automatically submitted to the search-engine filter-booster of the present system and method 1206, which is re-instantiated from the training HDF5 file 1103.
  • the booster will filter the Google results and return only the searched-for assets as per the search phrase (terms) 1208.
  • the neural network diagram depicted in FIG. 10 represents the deep network used in the Landmarks Localizer and Ectopic Eruption Discoverer.
  • the neural network architecture 1002 is inspired by the object detection and localization network described in [7] with slight modifications.
  • An important feature is the use of the residual information collected and abstracted at the depth of the convolutional layers and injected directly into the dense layers at the bottom 1006 of the network.
  • FIG. 13 is a block diagram of the process implemented by the dental insurance treatment authorizer.
  • Two modes of input 1302, 1303 are provided to upload the required DCVA: (1) a single case-by-case web or desktop page 1302 for the user to upload single-case assets, or (2) a batch mode where the user can upload all DCVA relative to any number of patient cases.
  • the remaining tasks of the process are the same; the only difference is that for mode (1) the results for a single patient are returned, whereas for mode (2) the results for the full batch of patients are returned.
  • the dental insurance treatment authorizer 1304 will recognize each asset type, category and class automatically and may then direct each asset type to the proper present system and method AI engines: (1) the landmarks localizer 1306, (2) the ectopic eruption discoverer 1308 and (3) the smart arch inspector 1310 (not disclosed in the current document). Blocks 1306 and 1308 will be detailed later in the present system and method.
  • the AI engines 1306, 1308 and 1310 may work in parallel and each will produce its own results. All results are then consolidated in a single report 1314, detailing the authorization status. Another summary report may also be generated, as depicted in FIG. 13, box 1314. Reports for each patient, in addition to the summary reports, may all be saved in a computer file in a selected location 1320.
  • FIG. 14 is a block diagram depicting the process and tasks of the Landmarks Localizer engine 1306 of the present system and method.
  • the Landmarks Localizer engine 1306 may accept both a single lateral cephalometric x-ray 1302 and a batch 1303 of any number of lateral cephalometric x-rays.
  • the Classifier portion 1104 of the present system and method may recognize and discover the lateral cephalometric x-ray orientation.
  • Right side lateral cephalometric x-rays 1404 will be accepted without modification.
  • Left side lateral cephalometric x-rays 1406 may be horizontally auto-flipped 1408 to the standard right side view 1404, using common image processing techniques.
  • the proper right side lateral cephalometric x-rays 1404 may then be fed to the localizer 1306 AI engine, which may then scan the full x-ray, recognizing and localizing the required landmarks 1410 for dental insurance treatment authorization. Localized Landmarks are then drawn 1412 and overlaid on a copy of the original x-ray.
  • the localized landmarks will be used to calculate, in millimeters or degrees as appropriate: (1) Inter-incisal angle, (2) Overjet, (3) Overbite, (4) Crossbite, (5) Anterior teeth inclination, (6) Lower incisor inclination relative to the mandibular plane, (7) 7 degrees Frankfurt horizontal, (8) Mandibular angle, and (9) Maxillary and mandibular protrusion/retrusion.
  • Results for each x-ray are generated and saved in the proper location 1420, with patient identification along with overlaid x-rays copies.
  • FIG. 15 is a block diagram depicting the process and tasks of the Ectopic Eruption Discoverer 1308 of the present system and method.
  • the Ectopic Eruption Discoverer 1308 may accept both a single panoramic x-ray 1302 and a batch 1303 of any number of panoramic x-rays.
  • the present Ectopic Eruption Discoverer 1308 portion of the present system and method may then scan the full x-ray, localizing the areas found to exhibit any of the following conditions: (1) Ectopic Eruption, (2) Impacted teeth and (3) Mixed dentition cases.
  • the Ectopic Eruption Discoverer 1308 may then draw the discovered anomalies and overlay the drawings 1512 on a copy of the original x-ray. Results for each x-ray are generated and saved in the proper location 1510, with patient identification along with overlaid x-rays copies. Ectopic Eruption Discoverer 1308 may then correctly interpret normal panoramic x-rays without searched-for anomalies and will not annotate those 1520.
  • FIG. 16 is a block diagram for the Smart Decomposer 1301 process of the present system and method.
  • the Smart Decomposer 1301 may accept both (1) single patient composite image and (2) batch of composite images of any number.
  • the Smart Decomposer may recognize and decompose both: (1) five-image views 1602 and (2) eight-image views 1603.
  • Eight image composites may or may not contain textual patient demographic data. Regardless of the composite type, component images will be recognized, decomposed and saved as individual image files 1604.
  • FIG. 17 is a block diagram for the Smart Composer and Smart Anonymizer portion of the present system and method. Both the Smart Composer and Smart Anonymizer portions may accept single 1702 and batch 1703 composite images. The decomposer may recognize and decompose the image, through the Classifier 1104 portion of the present system and method, into its constituent views 1706. Non-relevant assets 1730 may then be discarded, and only the proper views 1734 will be selected, including textual demographics.
  • the Smart Composer 1708 may then collect the properly identified image views, through present system and method Classifier 1104, and create a new composite image 1710 showing only the required image views in the proper places. The Smart Composer 1708 may then save the newly created composites 1710 in the selected location 1720.

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Biomedical Technology (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Optics & Photonics (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Epidemiology (AREA)
  • Neurosurgery (AREA)
  • Neurology (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Primary Health Care (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Physiology (AREA)
  • Medical Treatment And Welfare Office Work (AREA)

Abstract

The present invention relates to a system and method providing artificial intelligence systems and methods for automatic identification, localization, recognition, understanding, labelling, analysis, assessment, decision-making and planning related to dento-craniofacial visual assets (“DCVA”) for creating a report for patient treatment and consultation.
PCT/US2019/023504 2018-03-15 2019-03-22 Dento-craniofacial clinical cognitive diagnosis and treatment system and method Ceased WO2019178617A1 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201816354446A 2018-03-15 2018-03-15
US16/354,446 2018-03-15
US201862648284P 2018-03-26 2018-03-26
US62/648,284 2018-03-26

Publications (1)

Publication Number Publication Date
WO2019178617A1 true WO2019178617A1 (fr) 2019-09-19

Family

ID=67907265

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2019/023504 Ceased WO2019178617A1 (fr) Dento-craniofacial clinical cognitive diagnosis and treatment system and method

Country Status (1)

Country Link
WO (1) WO2019178617A1 (fr)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190295710A1 (en) * 2018-03-26 2019-09-26 Budi Kusnoto Dento-craniofacial clinical cognitive diagnosis and treatment system and method
CN111387938A (zh) * 2020-02-04 2020-07-10 华东理工大学 Patient heart failure death risk prediction system based on a feature-rearrangement one-dimensional convolutional neural network

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130217996A1 (en) * 2010-09-16 2013-08-22 Ramot At Tel-Aviv University Ltd. Method and system for analyzing images
US20150320320A1 (en) * 2014-05-07 2015-11-12 Align Technology, Inc. Identification of areas of interest during intraoral scans
US20150359614A1 (en) * 2001-04-13 2015-12-17 Orametrix, Inc. Unified three dimensional virtual craniofacial and dentition model and uses thereof
US20170200064A1 (en) * 2009-09-28 2017-07-13 D.R. Systems, Inc. Rules-based rendering of medical images
US20180061054A1 (en) * 2016-08-29 2018-03-01 CephX Technologies Ltd. Automated Cephalometric Analysis Using Machine Learning

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150359614A1 (en) * 2001-04-13 2015-12-17 Orametrix, Inc. Unified three dimensional virtual craniofacial and dentition model and uses thereof
US20170200064A1 (en) * 2009-09-28 2017-07-13 D.R. Systems, Inc. Rules-based rendering of medical images
US20130217996A1 (en) * 2010-09-16 2013-08-22 Ramot At Tel-Aviv University Ltd. Method and system for analyzing images
US20150320320A1 (en) * 2014-05-07 2015-11-12 Align Technology, Inc. Identification of areas of interest during intraoral scans
US20180061054A1 (en) * 2016-08-29 2018-03-01 CephX Technologies Ltd. Automated Cephalometric Analysis Using Machine Learning

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190295710A1 (en) * 2018-03-26 2019-09-26 Budi Kusnoto Dento-craniofacial clinical cognitive diagnosis and treatment system and method
US10878954B2 (en) * 2018-03-26 2020-12-29 Digibrain4, Inc. Dento-craniofacial clinical cognitive diagnosis and treatment system and method
CN111387938A (zh) * 2020-02-04 2020-07-10 华东理工大学 Patient heart failure death risk prediction system based on a feature-rearrangement one-dimensional convolutional neural network

Similar Documents

Publication Publication Date Title
US10878954B2 (en) Dento-craniofacial clinical cognitive diagnosis and treatment system and method
Johnson et al. MIMIC-CXR-JPG, a large publicly available database of labeled chest radiographs
Qayyum et al. Medical image retrieval using deep convolutional neural network
Kadry et al. Automated segmentation of leukocyte from hematological images—a study using various CNN schemes
EP3734604A1 Method and system for supporting medical decision-making
Kumar et al. Lungcov: A diagnostic framework using machine learning and Imaging Modality
Alshayeji et al. Lung cancer classification and identification framework with automatic nodule segmentation screening using machine learning
Hatua et al. Early detection of diabetic retinopathy from big data in hadoop framework
Xue et al. Using deep learning for detecting gender in adult chest radiographs
Khayatian et al. Histopathology image analysis for gastric cancer detection: a hybrid deep learning and catboost approach
Cheng et al. Instance-level medical image classification for text-based retrieval in a medical data integration center
CN118072969A System and method for automatic extraction of medical events from clinical text based on a large language model
WO2019178617A1 (fr) Dento-craniofacial clinical cognitive diagnosis and treatment system and method
Shinde et al. Deep learning for COVID-19: COVID-19 Detection based on chest X-ray images by the fusion of deep learning and machine learning techniques
Truong et al. Exploring AI-based System Design for Pixel-level Protected Health Information Detection in Medical Images
Lee et al. Attention-based automated chest CT image segmentation method of COVID-19 lung infection
Chhabra et al. Image pattern recognition for an intelligent healthcare system: An application area of machine learning and big data
Alhassan Thresholding Chaotic Butterfly Optimization Algorithm with Gaussian Kernel (TCBOGK) based segmentation and DeTrac deep convolutional neural network for COVID-19 X-ray images
US9646138B2 (en) Bioimaging grid
CN110738266A Method for extraction and retrieval of medical image features
CN101517584A Accessing medical image databases using anatomical shape information
Kumar et al. CSR-NeT: lung segmentation from chest radiographs using transfer learning technique
CN114943695A Anomaly detection method, apparatus, device and storage medium for medical sequence images
Moirangthem et al. Content based medical image retrieval (CBMIR): A survey of region of interest (ROI) and perceptual hash values
Jadhav et al. Bone fracture detection using image processing techniques.

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19766617

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19766617

Country of ref document: EP

Kind code of ref document: A1