
CN119138832A - Oral multi-professional endoscope and image artificial intelligent processing auxiliary diagnosis method - Google Patents


Info

Publication number
CN119138832A
CN119138832A (application CN202411632710.6A)
Authority
CN
China
Prior art keywords
image
oral
treatment
matched
professional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202411632710.6A
Other languages
Chinese (zh)
Other versions
CN119138832B (en)
Inventor
李珍
郑颖
李倩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xingguang Future Medical Technology Co ltd
Original Assignee
Peking Union Medical College Hospital Chinese Academy of Medical Sciences
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking Union Medical College Hospital Chinese Academy of Medical Sciences
Priority to CN202411632710.6A
Publication of CN119138832A
Application granted
Publication of CN119138832B
Active legal status
Anticipated expiration

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/24 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the mouth, i.e. stomatoscopes, e.g. with tongue depressors; Instruments for opening or keeping open the mouth
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00002 Operational features of endoscopes
    • A61B1/00004 Operational features of endoscopes characterised by electronic signal processing
    • A61B1/00009 Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00064 Constructional details of the endoscope body
    • A61B1/00071 Insertion part of the endoscope body
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/06 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements
    • A61B1/07 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements using light-conductive means, e.g. optical fibres
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/24 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the mouth, i.e. stomatoscopes, e.g. with tongue depressors; Instruments for opening or keeping open the mouth
    • A61B1/247 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor for the mouth, i.e. stomatoscopes, e.g. with tongue depressors; Instruments for opening or keeping open the mouth with means for viewing areas outside the direct line of sight, e.g. dentists' mirrors
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C19/00 Dental auxiliary appliances
    • A61C19/04 Measuring instruments specially adapted for dentistry
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C19/00 Dental auxiliary appliances
    • A61C19/04 Measuring instruments specially adapted for dentistry
    • A61C19/041 Measuring instruments specially adapted for dentistry for measuring the length of the root canal of a tooth
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C19/00 Dental auxiliary appliances
    • A61C19/04 Measuring instruments specially adapted for dentistry
    • A61C19/043 Depth measuring of periodontal pockets; Probes therefor
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C19/00 Dental auxiliary appliances
    • A61C19/06 Implements for therapeutic treatment
    • A61C19/063 Medicament applicators for teeth or gums, e.g. treatment with fluorides
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C8/00 Means to be fixed to the jaw-bone for consolidating natural teeth or for fixing dental prostheses thereon; Dental implants; Implanting tools
    • A61C8/0089 Implanting tools or instruments
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H30/00 ICT specially adapted for the handling or processing of medical images
    • G16H30/20 ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20 ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Public Health (AREA)
  • General Health & Medical Sciences (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Epidemiology (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Molecular Biology (AREA)
  • Primary Health Care (AREA)
  • Orthopedic Medicine & Surgery (AREA)
  • Databases & Information Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Endoscopes (AREA)

Abstract

The invention relates to the technical field of oral treatment instruments and provides an oral multi-professional endoscope and an artificial-intelligence image-processing auxiliary diagnosis method. The oral multi-professional endoscope comprises an insertion part, an operation part, a signal wire and an optical fiber. The front end of the operation part is connected with the insertion part; an image acquisition element arranged at the front end of the insertion part is connected with the signal wire, which in turn is connected with a host. A light source is arranged in the operation part or in the host; when the light source is located in the host, light is transmitted to the insertion part through the optical fiber. The lens tube of the insertion part is hook-shaped, bent, straight, or any other shape suitable for observation. Embodiments of the invention can directly inspect fine dental structures such as the pulp-chamber root canal, the periodontium, implant cavities and salivary ducts.

Description

Oral multi-professional endoscope and image artificial intelligent processing auxiliary diagnosis method
Technical Field
The invention relates to the technical field of oral treatment instruments, and in particular to an oral multi-professional endoscope and an artificial-intelligence image-processing auxiliary diagnosis method.
Background
The oral cavity is part of the body as a whole, and oral diseases not only impair the function of the oral organs but can also cause or aggravate lesions in other organs, significantly affecting overall health. Caries and periodontal disease, the two most common conditions harming residents' oral health in China, damage the hard tissues of the teeth and the supporting tissues around them and impair chewing, speech and appearance. In addition, pathogenic microorganisms that persist in the oral cavity can cause or aggravate certain systemic diseases, such as coronary heart disease and diabetes, endangering general health.
Identifying oral diseases, reaching a definitive diagnosis and delivering accurate treatment are particularly important in oral clinical work. The dental mouth mirror, one of the earliest and most frequently used oral examination instruments, uses light reflected from its glass surface for illumination and to enlarge the field of view. Because it is only a simple glass mirror, however, it cannot reveal the fine structures of oral tissues. The dental operating microscope, introduced into oral clinics internationally in the 1990s, is now widely used in stomatology; it provides illumination and magnification and thereby improves the precision of oral treatment. Nevertheless, dental operating microscopes have several drawbacks in use:
1. Imaging requires a dedicated microscope, which increases medical expenditure and limits the operating area. Because light travels in straight lines, some positions cannot be observed with a microscope, such as a curved root canal or the palatal side of the root apex during apical surgery, so the doctor cannot view these fine structures.
2. During long procedures, observing fine lesions requires higher magnification and stronger light, and the doctor's eyes easily become fatigued, with symptoms such as dizziness and nausea.
3. The patient must keep the mouth wide open, cooperation is often poor, and a stable position is hard to maintain. Once the patient moves, the microscope's view of the lesion blurs or disappears and the doctor must readjust, wasting time and reducing operating efficiency and treatment quality.
The oral endoscope is an important examination tool, but existing oral endoscope probes are bulky, with limited focusing and low imaging definition. They remain at the level of elementary digital oral viewers, cannot meet the needs of examining and treating the pulp root canal, periodontal pocket, implant cavity and salivary glands, and cannot strictly be called medical-grade oral endoscopes.
The oral multi-professional endoscope of the invention is a miniature camera system whose image acquisition element is placed inside the oral cavity. The system stores the signals in a computer, conveniently changes the focal length and imaging range as needed, clearly shows the fine structure of tissues such as the pulp root canal, periodontal pocket, implant cavity and salivary glands, and allows lesions to be found and treated in time. The oral cavity has a complex structure with a wide variety of diseases and conditions, and existing oral endoscopes cannot meet clinical examination and treatment needs; the oral multi-professional endoscope, a further extension of the dentist's eyes, solves this clinical problem. In view of this, the present invention has been proposed.
Disclosure of Invention
The invention aims to provide an oral multi-professional endoscope and an artificial-intelligence image-processing auxiliary diagnosis method to solve the above problems.
In one aspect, an embodiment of the invention provides an oral multi-professional endoscope comprising an insertion part, an operation part, an image acquisition element, a signal wire and an optical fiber;
The front end of the operation part is connected with the insertion part; the front end of the insertion part is provided with the image acquisition element, which is connected with the signal wire; the signal wire is connected with a host;
A light source is arranged in the operation part or the host; when the light source is located in the host, light is transmitted to the insertion part through the optical fiber;
The lens tube of the insertion part is hook-shaped, bent, straight, or any other shape suitable for observation.
In another aspect, an embodiment of the invention provides an image artificial-intelligence auxiliary diagnosis method based on the above oral multi-professional endoscope, comprising the following steps:
acquiring an oral image of a patient;
processing the oral image with an AI algorithm model and medical image algorithms to automatically identify oral diseases, including diseases of the pulp root canal, mucosa, periodontium, salivary glands and implant cavity.
In another aspect, an embodiment of the invention provides an image artificial-intelligence processing method based on the above oral multi-professional endoscope, comprising the following steps:
acquiring an image of the treatment area before treatment as a standard image;
after treatment, adjusting the light-source parameters and shooting images of the treatment area under the different parameters as a plurality of images to be matched;
calculating the histogram matching degree between each image to be matched and the standard image;
selecting the image to be matched with the highest matching degree, performing histogram matching against the standard image, and comparing the matched image with the standard image and/or displaying the treatment effect.
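The patent does not specify which histogram-similarity measure or matching algorithm is used. The steps above can be sketched as follows, assuming histogram intersection as the matching degree and classic histogram specification for the final match; all function names are illustrative, not the patent's implementation:

```python
import numpy as np

def histogram(img, bins=256):
    """Normalized grayscale histogram of an 8-bit image."""
    h, _ = np.histogram(img, bins=bins, range=(0, bins))
    return h / h.sum()

def match_score(img, ref, bins=256):
    """Histogram intersection: 1.0 means identical histograms, 0.0 no overlap."""
    return np.minimum(histogram(img, bins), histogram(ref, bins)).sum()

def match_histogram(src, ref):
    """Remap src's intensities so its histogram follows ref's (histogram specification)."""
    s_vals, s_idx, s_cnt = np.unique(src.ravel(), return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(ref.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / src.size
    r_cdf = np.cumsum(r_cnt) / ref.size
    # For each source intensity, find the reference intensity with the closest CDF value.
    mapped = np.interp(s_cdf, r_cdf, r_vals)
    return mapped[s_idx].reshape(src.shape)

def select_and_match(candidates, standard):
    """Pick the candidate image whose histogram is closest to the standard image,
    then histogram-match it to the standard for side-by-side comparison."""
    best = max(candidates, key=lambda c: match_score(c, standard))
    return match_histogram(best, standard)
```

In this sketch, shooting under several light-source settings and keeping the closest-matching frame compensates for illumination differences between the pre- and post-treatment images before they are compared.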
The embodiment of the invention has the beneficial effects that:
1. In pulp and root-canal treatment, the endoscope's image acquisition element can directly show the fine structure of the pulp root canal: finding hidden pit-and-fissure cracks, locating caries and judging the extent of carious tissue, finding pulp-exposure perforations, assisting in removing the pulp-chamber roof and cleaning calcifications when opening the pulp chamber, locating root-canal orifices, observing the state of the pulp and root canal, and probing the position and extent of soft- and hard-tissue lesions outside the root apex during apical surgery, thereby meeting clinical needs in the examination and treatment of dental diseases.
2. In periodontal treatment, the endoscope can clearly detect periodontal conditions such as periodontitis and dental plaque on the root and in the root furcation, as well as abnormalities of the outer root surface such as cracks, pits, hyperplasia and external resorption, and can be used for treatments such as irrigation and drug delivery.
3. In dental implantology, for patients whose maxillary implant placement requires a sinus lift, the endoscope can reach deep into the implant cavity to inspect the mucosa at the floor of the maxillary sinus, allowing the sinus-lift operation to be completed safely and efficiently and assisting in the treatment of peri-implant inflammation.
4. In obstructive and inflammatory diseases of the salivary glands, the endoscope can advance into the salivary duct to locate the lesion, and treatments such as irrigation and drug delivery can be performed through the endoscope.
5. Embodiments of the invention use artificial-intelligence technology to automatically identify oral diseases from oral images, serving as auxiliary and intelligent diagnosis.
Drawings
To describe the technical solutions of the embodiments of the present invention more clearly, the drawings used in the embodiments or in the description of the prior art are briefly introduced below. The drawings described below show only some embodiments of the invention; a person skilled in the art may derive other drawings from them without inventive effort.
Fig. 1 is a schematic view of the overall structure of an oral multi-professional endoscope according to an embodiment of the present invention.
Fig. 2 is a schematic view of the front end of the insertion portion in fig. 1.
Fig. 3 is a schematic view of an insertion portion lens tube according to an embodiment of the present invention.
Fig. 4 is a schematic view of another shape of an insertion portion lens tube according to an embodiment of the present invention.
Fig. 5 is a schematic view of another shape of an insertion portion lens tube according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of the overall structure of another oral multi-professional endoscope according to an embodiment of the present invention.
Fig. 7 is a schematic view of the structure of the front end of the insertion portion in fig. 6.
FIG. 8 is a flowchart of an image artificial intelligence processing aided diagnosis method according to an embodiment of the present invention.
FIG. 9 is a flowchart of an image artificial intelligence processing method according to an embodiment of the present invention.
Wherein, the reference numerals in the figures are as follows:
1 - insertion part, 11 - image acquisition element, 12 - optical fiber, 13 - lens tube;
2 - operation part;
3 - operation button;
4 - signal wire;
5 - outer sheath;
6 - multipurpose hollow channel.
Detailed Description
To make the technical problems to be solved, the technical solutions and the beneficial effects clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
It will be understood that when an element is referred to as being "mounted" or "disposed" on another element, it can be directly or indirectly on the other element. When an element is referred to as being "connected to" another element, it can be directly or indirectly connected to the other element. Directions or positions indicated by terms such as "upper", "lower", "left", "right", "front", "rear", "vertical", "horizontal", "top", "bottom", "inner" and "outer" are based on the drawings, are used merely for convenience of description, and are not to be construed as limiting the technical solution. The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features. "A plurality of" means two or more, and "a number" means any number including one, unless specifically defined otherwise.
With reference to figs. 1 and 2, an embodiment of the present application provides an oral multi-professional endoscope that supports examination and treatment across oral specialties, including the pulp root canal, periodontal pocket, implant cavity and salivary glands. The endoscope comprises an insertion part 1, an operation part 2, a signal wire 4 and an optical fiber 12. The front end of the operation part 2 is connected with the insertion part 1; an image acquisition element 11 is arranged at the front end of the insertion part 1 or inside the operation part 2 and is connected with the signal wire 4, which in turn is connected with a host (not shown). A light source is arranged in the operation part 2 or in the host, and its light is transmitted into the insertion part 1 through the optical fiber 12. To meet the different needs of the oral specialties, the lens tube 13 of the insertion part 1 is hook-shaped, bent, straight, or any other shape suitable for observation. The endoscope can directly inspect fine structures such as the pulp root canal, periodontium, implant cavity and salivary ducts.
Specifically, fig. 3 shows a hook-shaped lens tube, similar to a probe; fig. 4 shows a bent lens tube with a bend angle between 60° and 150°; and fig. 5 shows a straight-rod lens tube.
Optionally, the oral multi-professional endoscope is made of rigid material, and an outer sheath 5 of flexible material may be fitted over the insertion part 1. In clinical use, the outer sheath 5 helps prevent cross-infection.
Optionally, an operation button 3 is disposed on the operation part 2, and the operation button 3 is used for adjusting parameters of the light source and controlling shooting operation.
Optionally, an image processing unit (not shown) may be added to the operation part 2 according to clinical needs.
Alternatively, the image operation buttons on the operation part 2 may be moved to the host or replaced by a foot pedal according to clinical needs.
Alternatively, the image acquisition element 11 may be a miniature CCD or CMOS sensor arranged at the distal end of the insertion part 1.
Optionally, with reference to figs. 6 and 7, the operation part 2 may further have a multipurpose hollow channel 6 that runs from the operation part 2 through to the insertion part 1 and is used to inject irrigation liquid and/or medicine into oral tissues, or to treat the affected area through the channel with a laser fiber.
The oral multi-professional endoscope of this embodiment uses a miniature image acquisition element 11 as the image receiver; the image acquisition element 11 and the optical fiber 12 are packaged in the insertion part 1, and the image acquisition element 11 is connected through the signal wire 4 to an image processing system or host, which displays the images on a monitor. The operation part 2 carries an operation button 3, with which the doctor can capture and store images of the affected area.
In summary, this embodiment provides an ultra-fine endoscope for multi-professional oral examination and treatment, developed from years of clinical experience. It is a miniature imaging system that stores signals in a computer, changes focal length and imaging range as needed, can be used for the examination and treatment of oral tissues such as the pulp root canal, periodontal pocket, implant cavity, salivary glands and mucosa, and offers AI image processing and auxiliary diagnosis. The endoscope clearly shows the fine structure of the oral cavity, tooth surfaces, pulp-chamber root canal, periodontal pocket, implant cavity, salivary glands and mucosa; as a further extension of the dentist's eyes with image AI processing, auxiliary diagnosis and intelligent diagnosis functions, it can lighten the doctor's burden and support accurate diagnosis and treatment. Specifically, the endoscope achieves the following benefits in the different oral specialties:
1. In pulp and root-canal examination and treatment, the endoscope can directly show hidden pit-and-fissure cracks on the tooth, the extent of carious tissue and pulp-exposure perforations; when the pulp chamber is opened, it assists in removing the chamber roof, cleaning calcifications in the pulp chamber, locating root-canal orifices, and observing the shape of the canal and the state of the pulp below each orifice. In apical surgery it can probe the extent of soft- and hard-tissue lesions outside the root apex, covering the clinical range of endodontic disease. With the endoscope's clear imaging, the doctor can accurately diagnose the disease type, distinguish caries from pulp disease, and judge the degree of caries removal, pulp exposure, residual pulp-chamber calcification, root-orifice position, and residual inflammatory tissue around the resected apex. Clear, accurate diagnosis and precise operation under the endoscope lead to successful treatment.
2. In periodontal examination and treatment, the endoscope clearly shows the patient's periodontal lesions so that targeted treatment can be given. Entering the periodontal pocket, it can detect dental calculus on the root and in the furcation area, and abnormalities of the outer root surface such as cracks, pits, hyperplasia and external resorption. The multipurpose hollow channel can inject irrigation liquid and medicine into the periodontal pocket or guide a laser fiber for precise treatment. With the endoscope and instruments such as curettes and periodontal scalers, the doctor can clean and curette the periodontium, remove calculus and plaque from the pocket, and treat periodontitis; flap surgery, bone grafting and guided tissue regeneration can also be performed under the endoscope to help the periodontium recover. The doctor can evaluate the treatment result with the endoscope, since only thorough treatment ensures a good periodontal outcome.
3. In dental implant surgery, the endoscope can reach the bottom of the implant cavity to inspect its depth and judge whether nerves or blood vessels have been touched. For patients whose maxillary implant requires a sinus lift through a crestal window, the endoscope lens can reach deep into the implant cavity to view the sinus-floor mucosa directly, lift it accurately, avoid iatrogenic perforation of the maxillary sinus floor, and complete the implant operation efficiently.
4. In salivary gland examination and the treatment of obstructive and inflammatory disease, the endoscope can advance into the salivary duct to locate the lesion and assess its state; irrigation, drug delivery and suction through the multipurpose hollow channel avoid perforating the duct wall, clear the duct precisely, and treat glandular inflammation effectively. Combined with surgical instruments such as a stone basket, grasping forceps and a laser fiber, it supports lithotripsy, basket retrieval and negative-pressure suction to remove stones from the duct. Its AI image-processing function can assist the diagnosis of conditions such as duct stones, malformation and inflammation.
Based on the oral multi-professional endoscope above, fig. 8 is a flowchart of the image artificial-intelligence auxiliary diagnosis method provided by an embodiment of the invention. The method can be executed by an AI auxiliary diagnosis module in the endoscope or by other image processing equipment, providing auxiliary and intelligent diagnosis. As shown in fig. 8, the method includes the following steps:
S110, acquiring an oral cavity image of a patient.
After the oral image of the patient is received, it is first preprocessed and enhanced to improve image quality. Optionally, operations such as denoising, smoothing, and contrast enhancement can be performed on the oral image.
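The denoising and contrast-enhancement operations mentioned here can be sketched with NumPy alone; this is a minimal illustration (the function name is hypothetical, and a real pipeline would more likely use dedicated OpenCV routines such as non-local-means denoising or CLAHE):

```python
import numpy as np

def preprocess_oral_image(img: np.ndarray) -> np.ndarray:
    """Hypothetical preprocessing sketch: 3x3 mean-filter denoising
    followed by a min-max contrast stretch to the full 0-255 range.
    `img` is a 2-D uint8 grayscale array.
    """
    f = img.astype(np.float64)
    # Pad by edge replication, then average each pixel with its 8 neighbours.
    p = np.pad(f, 1, mode="edge")
    den = sum(p[i:i + f.shape[0], j:j + f.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    # Min-max contrast stretch back to the full 0-255 range.
    lo, hi = den.min(), den.max()
    if hi > lo:
        den = (den - lo) / (hi - lo) * 255.0
    return den.round().astype(np.uint8)
```

The smoothing step suppresses sensor noise while the stretch restores the dynamic range that smoothing compresses.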
S120, processing the oral image with an AI algorithm model and a medical image algorithm to automatically identify oral diseases, including diseases of the dental pulp and root canal, mucosa, salivary glands, periodontium, and implant cavity.
This embodiment adopts a lightweight AI recognition model obtained through knowledge distillation and transfer learning. Its input is the preprocessed oral image, and its output is the oral tissue type, a box marking the lesion area, and the disease type in the image. For example, the recognition result for a caries image marks the lesion position in the image with a box, with "caries" annotated beside it.
In one embodiment, the training method of the model includes:
And S1, data preparation, namely collecting oral cavity image data and constructing an oral cavity image data set.
S1-1, the oral image dataset covers a plurality of oral tissue types and contains a plurality of pictures. A professional dentist annotates each oral picture with the true oral tissue type and disease type, so each picture carries two labels.
S1-2, data preprocessing and augmentation: the oral images collected in S1 are normalized, converting the raw data into the corresponding standard form.
S1-3, dataset division: the image dataset is divided into a training set, used for training the network model, and a validation set, used for verifying the model's recognition performance.
S2, determining the teacher and student models. Optionally, the backbone of the teacher model can be a larger YOLOv model with higher accuracy and better stability, while the student model can be the smaller yolo_nano network.
S3, training the teacher model with the transfer learning method to obtain a suitable teacher model. Optionally, the teacher model first loads pre-training weights obtained on a large public dataset, retains the network structure up to the last layer, and appends a multi-layer perceptron after it, so that the final output of the multi-layer perceptron comprises the two classification labels of oral tissue type and disease type. The multi-layer perceptron is then trained with the oral image dataset constructed in S1 to obtain the trained teacher model.
S4, initializing the student model. The same transfer learning approach is used: pre-training weights from a large public dataset are loaded, the network structure up to the last layer is retained, and a multi-layer perceptron is appended so that its final output comprises the two classification labels of oral tissue type and disease type.
S5, guiding the student model with the trained teacher model by the knowledge distillation method to obtain the optimal student model.
S6, inputting the images in the validation set into the student model for testing; the recognition result is obtained by forward propagation. The trained student model can be integrated into a chip or small device, making the equipment convenient to use anytime and anywhere, which is especially suitable when the AI auxiliary diagnosis module is deployed on mobile or small devices.
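Step S5 names knowledge distillation but does not give the loss used to let the teacher guide the student. A minimal NumPy sketch of the classic temperature-scaled distillation loss is shown below as an assumption (the patent does not specify the exact formulation; the temperature `T` and weight `alpha` are illustrative):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; T > 1 softens the distribution."""
    z = np.asarray(z, dtype=np.float64) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_label,
                      T=4.0, alpha=0.7):
    """Hinton-style KD loss (an assumption; not given in the source).

    Combines cross-entropy against softened teacher targets (weight alpha,
    with the usual T^2 scaling) and cross-entropy against the hard label.
    """
    p_t = softmax(teacher_logits, T)           # softened teacher targets
    log_p_s = np.log(softmax(student_logits, T))
    soft = -(p_t * log_p_s).sum() * T * T      # teacher-guided term
    hard = -np.log(softmax(student_logits)[hard_label])  # ground-truth term
    return alpha * soft + (1.0 - alpha) * hard
```

During training, this loss would be minimized over the student's weights, so the small yolo_nano-style network learns to mimic the larger teacher's output distribution while still fitting the annotated labels.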
In a specific embodiment, with reference to fig. 9, for situations where images before and after treatment need to be compared to evaluate the treatment effect, this embodiment further provides an image artificial intelligence processing method that realizes treatment-effect comparison and display through the following steps:
S210, acquiring an image of the treatment area before treatment as a standard image.
The treatment area is the oral region to be treated. In dental treatment in particular, it can refer to treatment sites such as the pits and fissures of the tooth surface or the pulp cavity. Before treatment, the stomatologist adjusts the light source parameters through the imaging system and takes an image with good visual effect, captured under a certain parameter combination, as the standard image. The parameters include at least one of the power, brightness, and color temperature of the light source; when only one parameter is adjustable, each value of that single parameter is regarded as a parameter combination.
S220, adjusting light source parameters after treatment, and shooting images of the treatment area under different parameters to serve as a plurality of images to be matched.
After treatment, the light source parameters are adjusted repeatedly, and one post-treatment image is captured under each parameter combination; these serve as the plurality of images to be matched.
S230, calculating the histogram matching degree of each image to be matched and the standard image.
The histogram matching degree between each image to be matched and the standard image is calculated; it measures how similar the visual effect of an image to be matched is to that of the standard image. Optionally, the two images can be converted to grayscale and the matching degree computed between the two grayscale histograms, or the matching degree can be computed separately on the red, green, and blue channels and the three values averaged as the final histogram matching degree.
Specifically, the histogram matching degree may be expressed by the correlation of the histogram data, the chi-square test, histogram intersection, or the Bhattacharyya distance; these representations are all established prior art. Taking correlation as an example, denote the two histograms H1 and H2; the histogram matching degree d(H1, H2) of the two images is:

d(H1, H2) = Σ_I (H1(I) − H̄1)(H2(I) − H̄2) / sqrt( Σ_I (H1(I) − H̄1)² · Σ_I (H2(I) − H̄2)² )

where H̄u = (1/N) Σ_J Hu(J),

and u = 1 or 2 is the histogram index, N is the number of gray-level bins in each histogram, I and J are gray-level bin indices, and Hu(I) and Hu(J) are the values of bins I and J in histogram u.
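The correlation-based matching degree described here can be sketched in a few lines of NumPy (the function name is illustrative; it mirrors what OpenCV's `cv2.compareHist` computes with the `HISTCMP_CORREL` method):

```python
import numpy as np

def hist_correlation(h1, h2):
    """Correlation-based histogram matching degree d(H1, H2):
    the Pearson correlation of the two mean-centred histograms.
    Returns a value in [-1, 1]; 1 means identical shape.
    """
    h1 = np.asarray(h1, dtype=np.float64)
    h2 = np.asarray(h2, dtype=np.float64)
    d1, d2 = h1 - h1.mean(), h2 - h2.mean()
    return float((d1 * d2).sum() / np.sqrt((d1 ** 2).sum() * (d2 ** 2).sum()))
```

For the per-channel variant described above, this function would simply be applied to the red, green, and blue histograms separately and the three results averaged.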
S240, selecting the image to be matched with the highest histogram matching degree, performing histogram matching between it and the standard image, and using the matched image for comparison with the standard image and/or for displaying the treatment effect.
After the histogram matching degree between each image to be matched and the standard image is calculated, the image with the highest matching degree is selected. Because its histogram is already similar to that of the standard image, using it for histogram matching avoids large adjustments to the histogram and reduces the detail distortion that histogram matching introduces.
Specifically, histogram matching makes the histogram distribution of one image approximately match that of another, so the two images keep a consistent visual effect, such as a largely consistent hue, which facilitates comparison of the treatment effect. The basic principle of histogram matching is as follows:
First, the image to be matched and the standard image are each converted to grayscale, called the grayscale image to be matched and the standard grayscale image. Equalizing the histogram of the grayscale image to be matched yields the transformation s_k = T(r_k), where s_k is the equalized gray level, r_k is the original gray level, k indexes the gray levels of that histogram before and after equalization, and T() is the transformation function. Likewise, equalizing the histogram of the standard grayscale image yields the transformation v_q = G(z_q), where v_q is the equalized gray level, z_q is the gray level of the standard image, q indexes the gray levels of that histogram before and after equalization, and G() is the transformation function.
Ideally, both equalizations are performed on the same treatment area, so the results should be equal:

s_k = v_q

Since v_q = G(z_q), applying the inverse transformation and rearranging gives:

z_q = G⁻¹(v_q) = G⁻¹(s_k) = G⁻¹(T(r_k))

where G⁻¹() denotes the inverse of the transformation function G().
In this way, the mapping between the original gray level r_k and the specified gray level z_q is obtained through equalization as an intermediate result. In practice, the inverse transformation of G need not be computed explicitly: since gray levels are integers (for example, 0-255 for an 8-bit image), it is simpler to evaluate v_q = G(z_q) for all q = 0, 1, 2, …, L−1 (L−1 = 255), and histogram matching can then be achieved by the following steps:
Step one: equalize the gray histogram of the image to be matched, rounding each s_k to an integer in [0, L−1], and equalize the histogram of the standard image, rounding each v_q to an integer in [0, L−1].
Step two: for each s_k, find the v_q closest to s_k, giving a mapping from s_k to z_q. When more than one z_q satisfies this for a given s_k, the smallest z_q is chosen. Using the resulting mapping between s_k and z_q, the equalized gray levels of the image to be matched are mapped to the gray levels of the standard image, forming the matched image.
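The two steps above can be sketched directly in NumPy; this is a minimal implementation of histogram specification via the two equalization CDFs (the function name is illustrative):

```python
import numpy as np

def match_histogram(src: np.ndarray, ref: np.ndarray, L: int = 256) -> np.ndarray:
    """Map each source level r_k to the reference level z_q whose
    equalized value v_q is closest to s_k; argmin takes the smallest
    z_q on ties, as specified in step two above.
    """
    def cdf_levels(img):
        # Rounded, scaled cumulative histogram: the s_k (or v_q) values.
        hist = np.bincount(img.ravel(), minlength=L).astype(np.float64)
        cdf = hist.cumsum() / hist.sum()
        return np.round(cdf * (L - 1)).astype(np.int64)

    s = cdf_levels(src)   # s_k = T(r_k), rounded to [0, L-1]
    v = cdf_levels(ref)   # v_q = G(z_q), rounded to [0, L-1]
    # For each s_k, find the z_q minimizing |v_q - s_k|.
    mapping = np.abs(v[None, :] - s[:, None]).argmin(axis=1).astype(np.uint8)
    return mapping[src]
```

Applying `match_histogram` to the selected post-treatment image against the standard image yields the matched image used for comparison in S240.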
The above is the histogram matching algorithm. Although histogram matching makes the visual effects of the pre- and post-treatment images consistent and thus convenient to compare, the gray-level conversion it introduces necessarily causes some detail distortion. Therefore, this embodiment first raises the histogram matching degree between the pre- and post-treatment images by adjusting the light source parameters, and then selects the post-treatment image with the highest matching degree for histogram matching with the pre-treatment image; this reduces the adjustment amplitude of the histogram and the detail distortion compared with directly matching an arbitrary post-treatment image to the pre-treatment image.
Further, for each treatment, the histogram H1 of the standard image and its light source parameters R1, the histogram H2 of the first captured image to be matched and its light source parameters R2, and the finally determined adjustment ⊿R of the light source parameters of the highest-matching image relative to the first image to be matched can be recorded as historical data. At the same time, [H1, R1, H2, R2] is taken as the state variable and ⊿R as the action variable; the historical state variable and historical action variable from the same treatment form one sample, and a sample library is constructed to learn the relation between the state variable and the action variable. After learning, a new standard image is taken before treatment and a new image to be matched after treatment; the histograms of the two images and the light source parameters at capture form a new state variable, which is substituted into the learned relation to obtain the optimal action variable. The light source parameters used when capturing the new image to be matched are adjusted according to this optimal action variable; the adjusted parameters are the optimal light source parameters, and the image to be matched captured under them is the one with the highest matching degree to the new standard image. Histogram matching is then performed between this image and the new standard image, and the matched image keeps a visual effect consistent with the new standard image, facilitating comparison of the effects before and after treatment. The time span of "one treatment" here may be longer or shorter: it may refer to one continuous treatment operation by the doctor, or to one course of treatment; this embodiment is not limited in this respect.
Optionally, because accumulating treatment data takes a long time, learning easily suffers from insufficient samples. To overcome this, the embodiment learns the relation between state and action variables through the clustering characteristics of the sample set. Clustering serves two purposes: it simplifies the relation, reducing the mapping between individual state and action variables to a mapping between classes of state variables and classes of action variables, which lowers the learning difficulty; and it ensures that after clustering each class contains a certain number of historical samples, reducing the adverse effect of missing or unbalanced samples in any one class. In a specific embodiment, taking brightness and color temperature as the adjusted light source parameters (other light source parameter combinations are handled similarly and are not described again), learning the relation between state and action variables through the clustering characteristics of the sample set may include the following steps:
Step one: cluster the value combinations of the action variables to obtain a plurality of cluster clusters. Compared with the state variables, the action variables contain fewer parameters (two: the brightness adjustment value and the color temperature adjustment value) and span a limited action space, so this embodiment clusters the action variables. Optionally, the brightness and color temperature ranges can each be divided into intervals: the most commonly used brightness and color temperature values in the historical data are selected, and the intervals are divided around these common values, denser in the middle and sparser toward the two ends. The midpoints of the brightness intervals and the midpoints of the color temperature intervals are combined pairwise, each combination being one value combination of the light source parameters, which yields all value combinations of the light source parameters. Taking the difference of any two of these combinations (brightness minus brightness, color temperature minus color temperature) yields one value combination of the action variables, and exhausting all pairs of light source parameter combinations yields all value combinations of the action variables. All value combinations of the action variables are then clustered into a plurality of cluster clusters; any standard clustering method can be selected, and the resulting clusters satisfy the characteristics of minimizing within-class combination differences and maximizing between-class combination differences.
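The text leaves the clustering algorithm unspecified; k-means is one common choice that satisfies the stated goal of compact, well-separated clusters. A minimal NumPy sketch for clustering the (⊿L, ⊿T) action combinations, as an assumption:

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means over 2-D (brightness-delta, color-temp-delta) points.
    One possible clustering method; the patent does not name one.
    Returns (cluster centers, per-point labels).
    """
    pts = np.asarray(points, dtype=np.float64)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = ((pts[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster emptied.
        new = np.array([pts[labels == j].mean(axis=0) if (labels == j).any()
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers, labels
```

Each resulting cluster then serves as one "first cluster" of action combinations in the steps that follow.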
Step two: construct a fully connected layer for extracting the implicit features of the state variables. Because of insufficient sample numbers or interference among the elements of the state variables, the historical state variables corresponding to historical action variables in the same cluster may not satisfy the same clustering relation; that is, even when the action variables A1, A2 … Aw are similar enough to be clustered into one class, their corresponding historical state variables S1, S2 … Sw may not exhibit similar characteristics and cannot be clustered directly. This embodiment therefore constructs a fully connected layer to mine the implicit features of the historical state variables through feature expansion or dimensionality reduction, and to learn the clustering rule among the state variables through these implicit features.
Step three: for each cluster, input the historical state variables that correspond in the sample set to the action variables of that cluster into the fully connected layer to obtain the implicit feature of each historical state variable, and train the fully connected layer by minimizing the implicit-feature differences within the same cluster and maximizing those between different clusters, so that the implicit features produced by the trained layer exhibit the clustering relation of the corresponding action variables; that is, if the action variables A1, A2 … Aw are clustered into one class, the implicit features of the corresponding state variables S1, S2 … Sw can also be clustered into one class. Specifically, the input of the fully connected layer is the state variable [H1, R1, H2, R2]. The values of Hu (u = 1 or 2, the histogram index) in the N gray-level intervals are arranged into a sequence in increasing order of gray level; the two parameters of each Ru are likewise arranged as a two-dimensional sequence [⊿Lu, ⊿Tu], where ⊿Lu and ⊿Tu denote the brightness adjustment value and the color temperature adjustment value in Ru, respectively. The four sequences are then arranged in the order H1, [⊿L1, ⊿T1], H2, [⊿L2, ⊿T2] to form the state variable [H1, ⊿L1, ⊿T1, H2, ⊿L2, ⊿T2]. For any cluster, the historical action variables falling within the value-combination range of that cluster can be retrieved from the sample set, and the historical state variables corresponding to those historical actions are then extracted from the sample set. This gives a set of historical state variables for each cluster, part of which is selected as the training set and part as the test set.
The historical state variables in the training set are fed into the fully connected layer in batches to train its parameters, which are updated through the following loss function:
Loss = Σ_n Σ_{m≠n} Σ_{i,j} sim(f_i^n, f_j^m) − Σ_n Σ_{i≠j} sim(f_i^n, f_j^n)

where Loss is the loss function value; n and m index the clusters, with m ≠ n; i and j index the historical action variables within a cluster; f_i^n is the implicit feature of the historical state variable corresponding to the historical action variable with index i in the cluster with index n, and f_j^m and f_j^n have analogous meanings; sim(·, ·) measures the similarity of two vectors. By minimizing Loss, the implicit-feature differences within the same cluster are minimized and those between different clusters are maximized, so that the clustering relation among the state variables is reflected through the implicit features.
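The training objective of step three, pulling implicit features of the same cluster together and pushing different clusters apart, can be sketched as follows. Cosine similarity is assumed for sim(·, ·), since the source does not specify it:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity, one possible choice for sim(.,.)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def cluster_contrastive_loss(features):
    """Inter-cluster similarity minus intra-cluster similarity.

    `features` maps a cluster index n to the list of implicit-feature
    vectors f_i^n of that cluster. Lower values mean tighter clusters
    that are better separated from each other.
    """
    intra = sum(cosine(f[i], f[j])
                for f in features.values()
                for i in range(len(f)) for j in range(len(f)) if i != j)
    inter = sum(cosine(fi, fj)
                for n in features for m in features if m != n
                for fi in features[n] for fj in features[m])
    return inter - intra
```

In actual training this value would be backpropagated through the fully connected layer; the sketch only evaluates the objective on fixed feature vectors.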
Step four: after training, the historical state variables corresponding to each cluster are fed into the trained fully connected layer to obtain the final implicit feature of each historical state variable; the final implicit features belonging to the same cluster form a new cluster, and the centers of these new clusters are fixed. For ease of distinction and description, this embodiment calls the clusters of action variables from step one the first clusters and the clusters of implicit features the second clusters; the second clusters express the clustering characteristics of the state variables through the implicit features, each first cluster corresponds to one second cluster, and this correspondence together with the trained fully connected layer expresses the relation between the state variables and the action variables. Optionally, after the fully connected layer is trained in step three, the test set can be divided into several batches; the historical state variables of each batch are fed into the trained layer to obtain implicit features, which correspond to a number of second clusters whose centers together form one group of cluster centers for that batch. Each batch's group of cluster centers is compared with the cluster centers obtained by feeding the training set's historical state variables through the trained layer; if the differences between the groups of cluster centers stay within a set range, the fully connected layer extracts the clustering features accurately and can be used. Otherwise, retraining is required.
After the second-cluster centers and the trained fully connected layer are obtained and fixed in this way, a new standard image is taken before treatment in a new treatment; once a new image to be matched is taken after treatment, the two images yield a new state variable, which is fed into the trained fully connected layer to obtain a new implicit feature. The distance from the new implicit feature to the center of each second cluster is then computed, and the nearest second cluster is taken as its cluster. Denoting the first cluster corresponding to that second cluster as A, the value combination of the action variable closest to the cluster center of A is extracted as the optimal action variable.
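The inference procedure just described, nearest second-cluster lookup followed by picking the most central action combination of the paired first cluster, can be sketched as follows (the fully connected layer is assumed to have already produced `new_feature`; all names are illustrative):

```python
import numpy as np

def select_optimal_action(new_feature, second_centers, first_clusters):
    """Pick the optimal action variable for a new implicit feature.

    second_centers: (K, d) array of fixed second-cluster centers.
    first_clusters: list of K arrays; entry k holds the (dL, dT) action
    combinations of the first cluster paired with second cluster k.
    Returns the combination nearest the centroid of the chosen cluster.
    """
    # Nearest second cluster to the new implicit feature.
    k = int(np.linalg.norm(second_centers - new_feature, axis=1).argmin())
    # Most central action combination of the corresponding first cluster A.
    actions = np.asarray(first_clusters[k], dtype=np.float64)
    centroid = actions.mean(axis=0)
    return actions[np.linalg.norm(actions - centroid, axis=1).argmin()]
```

The returned pair would then be applied as the brightness and color-temperature adjustment before recapturing the image to be matched.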
In summary, this embodiment records the histogram and light source parameters of the standard image, the histogram and light source parameters of the first image to be matched, and the adjustment of the light source parameters that gave the highest matching degree; the first four items serve as the state variable, the last as the action variable, and the relation between them is learned. After learning, in a new treatment, the light source adjustment can be obtained directly from the first four items of data, yielding the image best suited for histogram matching. In particular, to overcome the shortage of medical data samples, the embodiment simplifies the state-action relation through clustering and balances the limited samples across the variable classes, improving learning accuracy and hence the accuracy of light source parameter control and image histogram matching.
The foregoing description of the preferred embodiments of the invention is not intended to be limiting, but rather is intended to cover all modifications, equivalents, and alternatives falling within the spirit and principles of the invention.

Claims (10)

1. An oral multi-professional endoscope is characterized by comprising an insertion part (1), an operation part (2), an operation button (3), an image acquisition element (11), a signal wire (4) and an optical fiber (12);
The front end of the operation part (2) is connected with the insertion part (1), an image acquisition element (11) is arranged at the front end of the insertion part (1), the image acquisition element (11) is connected with the signal wire (4), and the signal wire (4) is connected with a host;
A light source is arranged in the operation part (2) or the host, and when the light source is positioned in the host, light rays are transmitted to the insertion part (1) through the optical fiber (12);
The lens tube (13) of the insertion part (1) is in a hook shape, a bent shape, a straight rod shape or other shapes which can be observed.
2. The oral multi-professional endoscope according to claim 1, wherein the oral multi-professional endoscope is composed of a hard material, wherein a casing (5) is attached to the outside of the insertion portion (1), and wherein the casing (5) is composed of a flexible material.
3. The oral multi-professional endoscope according to claim 1, wherein an operation button (3) is provided on the operation portion (2), and the operation button (3) is used for adjusting a light source parameter and controlling a photographing operation.
4. Oral multi-professional endoscope according to claim 1, characterized in that the operating portion (2) is provided with a multipurpose hollow channel (6), which multipurpose hollow channel (6) extends through to the insertion portion (1);
The multipurpose hollow channel (6) is used for injecting flushing liquid and/or medicine into oral tissues or carrying out treatment on the affected part by laser optical fibers through the channel.
5. Oral multi-professional endoscope according to claim 1, characterized in that an image processing unit is added to the operating section (2) according to clinical need, said image processing unit being adapted to process images.
6. An image artificial intelligence processing aided diagnosis method, characterized in that, based on the oral multi-professional endoscope according to any one of claims 1-5, the method comprises:
acquiring an oral image of a patient;
And processing the oral cavity image by using an AI algorithm model and a medical image algorithm, and automatically identifying oral cavity diseases including dental pulp root canal, mucous membrane, periodontal, salivary gland and implant cavity diseases.
7. The image artificial intelligence processing aided diagnosis method of claim 6, wherein before said processing said oral image by using AI algorithm model and medical image algorithm, automatically identifying oral disease, further comprising:
collecting oral cavity image data and constructing an oral cavity image data set;
Training a teacher model by using a transfer learning method;
According to the knowledge distillation method, a trained teacher model is utilized to guide a student model, the trained student model is used as a final AI algorithm model, and the model is output as an oral tissue type, a lesion area block diagram and a disease type.
8. A method of image artificial intelligence processing, characterized in that it is based on an oral multi-professional endoscope according to any of claims 1-5, said method comprising:
Acquiring an image of a treatment area before treatment as a standard image;
adjusting light source parameters after treatment, and shooting images of the treatment area under different parameters to serve as a plurality of images to be matched;
calculating the histogram matching degree of each image to be matched with the standard image;
and selecting an image to be matched with highest matching degree, carrying out histogram matching with the standard image, and comparing the matched image with the standard image and/or displaying the treatment effect.
9. The image artificial intelligence processing method of claim 8, further comprising:
Taking a histogram and a light source parameter of a standard image in the same treatment, a histogram and a light source parameter of a first shot image to be matched as state variables, taking a light source parameter adjusting value of the image to be matched with the highest final matching degree relative to the first image to be matched as an action variable, taking a historical state variable and a historical action variable in each historical treatment as samples, and utilizing a sample set to learn the relation between the state variable and the action variable;
In the new treatment, a new standard image is shot before the treatment, a new image to be matched is shot after the treatment, a new state variable is formed together, the new state variable is substituted into the learned relation to obtain an optimal action variable, the light source parameter is regulated according to the optimal action variable, another new image to be matched is shot, the histogram matching is carried out on the another new image to be matched and the new standard image, and the matched image is used for comparing with the new standard image and/or displaying the treatment effect.
10. The method of image artificial intelligence processing according to claim 9, wherein learning the relationship between the state variable and the action variable using the sample set comprises:
clustering various value combinations of the action variables to obtain a plurality of first clustering clusters;
Respectively inputting the historical state variables corresponding to the action variables included in each first cluster in the sample set into a full-connection layer to obtain the hidden characteristics of each historical state variable, and training the full-connection layer by minimizing the hidden characteristic differences corresponding to the same first cluster and maximizing the hidden characteristic differences corresponding to different first clusters;
Respectively inputting the historical state variables corresponding to each first cluster into a trained full-connection layer to obtain final hidden characteristics of each historical action variable, and taking each final hidden characteristic corresponding to the same first cluster as a second cluster;
the corresponding relation between each second cluster and each first cluster and the trained full-connection layer are commonly used for representing the relation between the state variable and the action variable.
CN202411632710.6A 2024-11-15 2024-11-15 Oral multi-professional endoscope and image artificial intelligent processing auxiliary diagnosis method Active CN119138832B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411632710.6A CN119138832B (en) 2024-11-15 2024-11-15 Oral multi-professional endoscope and image artificial intelligent processing auxiliary diagnosis method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411632710.6A CN119138832B (en) 2024-11-15 2024-11-15 Oral multi-professional endoscope and image artificial intelligent processing auxiliary diagnosis method

Publications (2)

Publication Number Publication Date
CN119138832A true CN119138832A (en) 2024-12-17
CN119138832B CN119138832B (en) 2025-04-04

Family

ID=93815488

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411632710.6A Active CN119138832B (en) 2024-11-15 2024-11-15 Oral multi-professional endoscope and image artificial intelligent processing auxiliary diagnosis method

Country Status (1)

Country Link
CN (1) CN119138832B (en)

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN2617308Y (en) * 2003-04-28 2004-05-26 南开大学 Digital optical fibre root canal microendoscope device
CN1541608A (en) * 2003-04-28 2004-11-03 南开大学 Digital Fiber Optic Root Canal Microendoscope Device
CN1889894A (en) * 2003-12-08 2007-01-03 株式会社摩利塔制作所 Dental diagnostic device
JP2007296249A (en) * 2006-05-03 2007-11-15 Microdent:Kk Dental disease confirmation device
JP2010075269A (en) * 2008-09-24 2010-04-08 Fujinon Corp Endoscope system and assisting tool
CN109414153A (en) * 2016-05-26 2019-03-01 口腔智能镜公司 Dental-mirrors and its application with integrated camera
CN110198656A (en) * 2017-01-25 2019-09-03 株式会社吉田制作所 Pole thin diameter endoscope
CN112316287A (en) * 2020-11-27 2021-02-05 马梅伍 Accurate intelligent medicine feeding device for department of stomatology
CN113017868A (en) * 2021-02-26 2021-06-25 西安交通大学口腔医院 Orthodontic anterior-posterior skull side film registration method and orthodontic anterior-posterior skull side film registration equipment
CN113288023A (en) * 2021-04-30 2021-08-24 傅建华 Wearing formula oral cavity pathological change tissue observation apparatus
JP2021168845A (en) * 2020-04-17 2021-10-28 キヤノンメディカルシステムズ株式会社 Medical information processing device, medical information processing method and medical information processing program
CN215959954U (en) * 2020-09-27 2022-03-08 谈斯聪 Remote and autonomously controlled oral disease data collection, diagnosis and treatment robotic device
CN116228639A (en) * 2022-12-12 2023-06-06 杭州电子科技大学 Dental caries segmentation method based on semi-supervised multi-level uncertainty perception
CN219538241U (en) * 2023-03-13 2023-08-18 四方众联医疗科技(北京)有限公司 Periodontal inspection system assembly
CN118212474A (en) * 2024-04-18 2024-06-18 华中科技大学同济医学院附属同济医院 Caries three-dimensional image automatic classification and auxiliary decision-making method based on deep learning
CN118648860A (en) * 2024-05-24 2024-09-17 深圳大学 An oral endoscope for real-time visualization of dental plaque and a dental plaque identification method

Also Published As

Publication number Publication date
CN119138832B (en) 2025-04-04

Similar Documents

Publication Publication Date Title
ES2992946T3 (en) Medical image processing device, medical image processing system, medical image processing method, and program
CN109948671B (en) Image classification method, device, storage medium and endoscopic imaging equipment
JP7789745B2 (en) Digital Image Optimization for Ophthalmic Surgery
Herr Max Nitze, the cystoscope and urology
US8666135B2 (en) Image processing apparatus
KR20200026135A (en) The method for measuring microcirculation in cochlea and the apparatus thereof
CN109893258A (en) The outer visor laparoscope system of integration
CN114903634A (en) An operating microscope diagnosis and treatment system
CN119138832B (en) Oral multi-professional endoscope and image artificial intelligent processing auxiliary diagnosis method
CN109893092B (en) Laparoscope external vision mirror device capable of scanning abdominal cavity
CN215219313U (en) Operating microscope
Engelke et al. In vitro visualization of human endodontic structures using different endoscope systems
US12070195B2 (en) Systems and methods for design and 3-D fabrication of laryngoscopes, pharyngoscopes, and oral cavity retractors
CN216090895U (en) Surgical microscope diagnosis and treatment system
CN119026474A (en) A facial morphology intelligent prediction method and system based on a dual-discriminant generative adversarial network model
CN118975765A (en) A multi-fluorescence imaging endoscope system
CN109349988A (en) Portable ENT endoscope
CN114903635B (en) A dental microscopic diagnostic system
CN109965987A (en) Visor outside a kind of robot with common focus point migration function
TWI703961B (en) Oral image analysis system and method
CN111449614A (en) Clinical examination system for oral cavity department
JP7600247B2 (en) LEARNING DEVICE, LEARNING METHOD, PROGRAM, TRAINED MODEL, AND ENDOSCOPE SYSTEM
CN118948180B (en) Disposable confocal microscopic imaging probe catheter, use method and imager
CN115273591B (en) A training system and method for quantifying interventional operation behavior
CN2717385Y (en) Electronic endoscope

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20251027

Address after: Room 528, 5th Floor, Building D, Building 33, No. 99 Kechuang 14th Street, Beijing Economic-Technological Development Zone, Daxing District, Beijing, 100176

Patentee after: Beijing Xingguang Future Medical Technology Co.,Ltd.

Country or region after: China

Address before: 100730 Beijing city Dongcheng District Wangfujing Park No. 1

Patentee before: PEKING UNION MEDICAL COLLEGE Hospital

Country or region before: China