
CN116916807A - Method for endoscopic diagnostic support and system for endoscopic diagnostic support

Info

Publication number: CN116916807A
Application number: CN202180093283.7A
Authority: CN (China)
Prior art keywords: lesion, image, data, canvas, endoscopic
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 野里博和, 河内祐太, 坂无英德, 村川正宏, 池田笃史
Original assignees: National Institute of Advanced Industrial Science and Technology (AIST); University of Tsukuba
Current assignees: National Institute of Advanced Industrial Science and Technology (AIST); University of Tsukuba

Classifications

    • G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining, for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H 30/40 - ICT specially adapted for the handling or processing of medical images, e.g. editing
    • G16H 50/70 - ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients
    • G16H 15/00 - ICT specially adapted for medical reports, e.g. generation or transmission thereof
    • A61B 1/05 - Endoscopes combined with photographic or television appliances, characterised by the image sensor, e.g. camera, being in the distal end portion
    • A61B 1/000094 - Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope, extracting biological structures
    • A61B 1/000096 - Operational features of endoscopes characterised by electronic signal processing of image signals during a use of endoscope, using artificial intelligence
    • G06T 7/0012 - Image analysis; biomedical image inspection
    • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10068 - Endoscopic image
    • G06T 2207/20081 - Training; learning
    • G06T 2207/20084 - Artificial neural networks [ANN]
    • G06T 2207/30096 - Tumor; lesion
    • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections
    • G06V 10/764 - Image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/82 - Image or video recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06V 20/20 - Scene-specific elements in augmented reality scenes
    • G06V 20/40 - Scene-specific elements in video content
    • G06V 2201/03 - Recognition of patterns in medical or anatomical images
    • G06V 2201/031 - Recognition of patterns in medical or anatomical images of internal organs

Abstract

Provided is an endoscopic diagnosis support method capable of clearly distinguishing between an examined area and an unexamined area. After the preparation step S1 of the observation canvas is performed in advance, the frame marking step S2, the key point calculation step S3, the front-rear frame displacement amount calculation step S4, and the front-rear frame marking step S5 are performed, whereby the observation record is made. In the image diagnosis support step IDS, whether or not a lesion is present in the organ is determined based on a plurality of position data, obtained by adding markers for a plurality of frames to the observation canvas data, and on the endoscopic images in the respective frames, thereby providing support for image diagnosis.

Description

Method for endoscopic diagnostic support and system for endoscopic diagnostic support
Technical Field
The present invention relates to an endoscopic diagnosis support method and an endoscopic diagnosis support system that target an organ having a cavity, such as the bladder.
Background
For example, the two-year recurrence rate of bladder cancer after TURBT surgery is said to be 50%. This is because small lesions or flat lesions located around the raised lesion are not completely removed, mainly owing to areas being "unobserved" (parts of the bladder that should be observed are not observed) and lesions being "overlooked" (the area is observed, but the lesion is not recognized at the time of examination). The accuracy of the examination thus depends in practice on the skill and experience of the examiner. To reduce the recurrence rate, it is important to improve the accuracy of lesion detection in cystoscopy, and diagnostic accuracy needs to be improved by supplementing skill and experience with the support of digital techniques.
In view of the above, the following are known as techniques for recording the state of an endoscopy: a system disclosed in JP5771757B (Patent Document 1) that records by pasting endoscopic images onto a model image of the organ based on position and orientation information acquired by a sensor attached to the distal end portion of the endoscope; a program disclosed in JP6704095B (Patent Document 2) that estimates the future state of a target site from the amount of degradation between map data, generated by 3D texture mapping from distance information or images of the body part derived from endoscopic images, and historical map data; and a method disclosed in JP2017-534322A (Patent Document 3) that generates a panoramic image of the cavity of the target organ by stitching endoscopic images together. As techniques for supporting image diagnosis by artificial intelligence, there are known an endoscopic observation support apparatus disclosed in JP2019-180966A (Patent Document 4) that detects and tracks a predetermined lesion based on endoscopic images, and an image diagnosis support apparatus disclosed in JP2020-73081A (Patent Document 5) that evaluates the kind and position of a lesion present in a gastrointestinal endoscopic image, together with accuracy information, by using a convolutional neural network trained with lesions predetermined in a large number of gastrointestinal tumor endoscopic images as training data. Further, Non-Patent Document 1 discloses a self-position estimation technique in which a map is created from the information of a camera or sensors on a mobile body and is used to estimate the position of the mobile body within the map.
CITATION LIST
Patent literature
PTL 1: JP5771757B
PTL 2: JP6704095B
PTL 3: JP2017-534322A
PTL 4: JP2019-180966A
PTL 5: JP2020-73081A
Non-patent literature
NPL 1: sumikura, s., shibiuya, m., & Sakurada, k.). OpenVSLAM: a Versatile Visual SLAM framework of procedures of the 27th ACM International Conference on Multimedia (multifunction visual SLAM framework. 27th international multimedia conference corpus): 2292-2295, 2019.
Disclosure of Invention
Technical problem
In conventional endoscopy, the doctor operating the endoscope determines lesions by directly observing the inside of the target organ, takes endoscopic images of suspicious sites, and records them in an examination report. Surgery, medical treatment, and the like are then performed on the premise that the inside of the target organ was thoroughly examined and that where each captured image was taken is correctly recorded. In existing endoscope systems, however, whether the entire portion of the target organ to be observed was actually observed, and where each captured image was taken, are recorded in the report from the doctor's memory or notes at the time of examination, so the accuracy of the examination depends on the skill and experience of the doctor. As for artificial intelligence, its diagnostic accuracy generally improves with the quality and quantity of the training data, but collecting high-quality medical image training data is costly: the images must have good image quality and must be combined with accurate annotation information from the doctor. It is therefore essential to collect a large number of images and to add accurate annotation information to each of them. Among medical examinations in Japan, some, such as gastrointestinal endoscopy and X-ray examination, are included in routine medical checkups and are performed in relatively large numbers, whereas for others, such as cystoscopy, the numbers of examinations and of patients are each an order of magnitude smaller than for gastrointestinal endoscopy, and it is difficult to collect images. Consequently, even if an artificial intelligence for diagnosis support that can discriminate the presence of a lesion in an image with high accuracy is created by preparing a large number of medical images and annotation information for training, such an artificial intelligence cannot be applied to examinations, such as cystoscopy, for which training data are hard to collect. Even where it is applicable, it is difficult to reveal all lesions of the patient during the examination and to convey the examination results correctly as information used at the time of surgery, unless it can be correctly recorded where in the organ each acquired image was taken and whether the entire interior of the organ was imaged. To realize highly accurate diagnosis support by artificial intelligence in actual endoscopy, two problems therefore need to be solved: correctly recording the observations, and achieving highly accurate artificial intelligence even when a sufficient amount of training data cannot be collected.
An object of the present invention is to provide an endoscopic diagnosis support method and an endoscopic diagnosis support system capable of clearly distinguishing an examined area from an unexamined area.
In addition to the above object, another object of the present invention is to provide an endoscopic diagnosis support method and an endoscopic diagnosis support system capable of improving diagnostic accuracy without newly adding training data, even when the available training data are few.
Solution to the problem
The present invention provides an endoscopic diagnosis support method in which, when an imaging device provided at the distal end portion of an endoscope is inserted into the cavity of an organ of a subject, a computer supports diagnosis of the presence of a lesion in the organ based on a plurality of frames of endoscopic images taken by the imaging device. In the present invention, the computer executes the following first to sixth steps by means of an installed computer program.
In the first step, observation canvas data of an observation canvas for endoscopic images of the cavity in the organ is prepared. As the observation canvas, a simulated expanded observation canvas may be used in which the positions of one or more openings and of the top (anatomical structures) in the organ cavity are roughly specified and one opening is arranged at the center. The observation canvas data is made by converting the observation canvas into electronic data.
In the second step, a key frame containing at least one anatomical structure that can specify a position in the organ cavity is determined, and the key frame position data of the key frame is marked in the observation canvas data. Here, the key frame corresponds to at least one anatomical structure (in the case of the bladder, the two ureteral openings, the urethral opening, or the top where air bubbles accumulate) serving as a reference point when determining relative positions in the cavity. The key frame position data is data on the position of the at least one anatomical structure serving as a reference point on the observation canvas. In particular, when the observation canvas is a simulated expanded observation canvas, the key frame position data is data on the position of the at least one anatomical structure determined by position coordinates on the simulated expanded observation canvas. Marking the key frame position data of a key frame means storing the position information and the frame number in association with the observation canvas data.
In the third step, the key frame is set as the first preceding frame, and three or more key points present in both the preceding frame and the following frame are determined, and the coordinates of these key points in the endoscopic image are calculated. The calculation of the coordinates of these key points in the endoscopic image is performed by using image feature points as used in the known self-position estimation technique disclosed in Non-Patent Document 1.
In the fourth step, the displacement amount between the preceding frame and the following frame is calculated based on the coordinates of the three or more key points in the endoscopic image. Here, the displacement amount includes the direction and angle in which the three or more key points move, and the distances between the three or more key points in the preceding frame and those in the following frame.
In the fifth step, the determined position data of a plurality of following frames are marked in the observation canvas data based on at least the displacement amount, the first key frame position data marked first in the second step, and the next key frame position data marked subsequently in the second step. In this step, the positions of the plurality of following frames are treated as temporary position data until the next key frame position data is determined; when the next key frame position data is determined, the determined position data of the following frames are marked so that the temporary position data of the plurality of following frames fit between the first key frame position data and the next key frame position data. Here, the determined position data includes absolute position information, with the center of the observation canvas as the origin, and a frame number. This is because, once the first key frame position data and the next key frame position data are determined, the relative intervals of the plurality of following frames lying between the two key frame positions are determined. The temporary position data of a following frame includes relative position information with respect to the first key frame position data and a frame number. For example, the relative position information is formed by adding symbols representing the presence and type of coordinate position data, using the coordinate positions of segments in a matrix formed by arranging a plurality of segments of the same size and shape assumed on the simulated expanded observation canvas, with one anatomical structure serving as a key frame used as the reference point. When such a matrix is used, the relative position information can easily be plotted sequentially on the expanded view; this gives the advantage that which part of the organ has been observed can easily be recorded without complicated processing such as stitching the observation images together or three-dimensional mapping. Then, when the determined position data is marked in the observation canvas data for the inner wall of the target organ, the examined area and the unexamined area can be clearly distinguished; the inside of the target organ can therefore be observed thoroughly, and where each captured image was taken can be recorded correctly.
In the sixth step, in parallel with or after the execution of the second to fifth steps for the plurality of frames, support for image diagnosis of the presence of a lesion in the organ is provided based on the plurality of position data marked in the observation canvas data and the endoscopic images in the plurality of frames.
For example, the sixth step may be implemented with at least one of the trained image diagnosis models described below. Specifically, the sixth step may be performed by using a trained image diagnosis model obtained by training with the data recorded in an endoscopic image database. When the endoscopic image database includes image data with annotation information, expanded annotation information obtained by expanding the annotation information with an annotation expansion model can be included in the endoscopic image database, so that diagnostic accuracy can be improved without newly adding training data, even when the training data are few.
As the annotation expansion model, an annotation expansion model based on an autoencoder composed of an encoder and a decoder is preferably used. The annotation expansion model is trained as follows. Using the image diagnosis model as a feature extractor, features extracted from an intermediate layer of the image diagnosis model, with a lesion endoscopic image recorded in the endoscopic image database as input, are fed into the encoder together with the annotation information corresponding to that lesion endoscopic image. The latent variable output from the encoder and the features are then inversely computed by the decoder to generate expanded annotation information. When such an annotation expansion model is used, useful expanded annotation information can be newly obtained from the endoscopic images in the original endoscopic image database without newly adding annotation information, even when the training data are few.
When creating the annotation expansion model, training is preferably performed so as to reduce the cross entropy between the annotation information input to the encoder and the expanded annotation information. In this way, ambiguous annotation criteria can advantageously be learned so that the annotation information of the original endoscopic database is reproduced as faithfully as possible. In addition, the ambiguity at the boundary between lesional and normal tissue is also reproduced in the expanded annotation information to be generated.
Preferably, the expanded annotation information is expanded randomly in the annotation expansion model. Random expansion does not mean that all of the obtainable expanded annotation information is adopted, but that expanded annotation information selected at random from the obtainable expanded annotation information is adopted. In this way, well-balanced expanded annotation information can be obtained without generating more expanded annotation information than necessary.
The endoscopic image database may further include an expanded data set containing expanded data, obtained by applying a data expansion technique to the lesion endoscopic images recorded in the endoscopic image database, together with expanded annotation information. When such an expanded data set is included in the endoscopic image database, training accuracy can be further improved with little data and without newly adding training data.
In the sixth step, diagnosis support may be performed by detecting a region in which the possibility of a lesion in the endoscopic image is high, and determining whether that region is a lesion, by using a trained image diagnosis model obtained by training with the data recorded in an endoscopic image database. An endoscopic image includes both portions assumed to be normal and portions assumed to be lesional. Therefore, diagnosing whether a region is normal or lesional by evaluating only the regions in which the possibility of a lesion is high improves diagnostic accuracy compared with evaluating the entire image.
In the above case, it is preferable to use a trained image diagnosis model configured to extract image features for all pixels of the endoscopic image, specify a region in which the possibility of a lesion is high, calculate lesion features in that region by using the image features of the pixels located within it, and classify the region as normal or lesional according to the lesion features. Preferably, the trained image diagnosis model is configured to include: a trained lesion-area-detection image diagnosis model that outputs a lesion accuracy map; a binarization processing section that creates a lesion candidate mask by binarizing the lesion accuracy map; a region-defined feature calculation section that calculates, from the image features and the lesion candidate mask, features confined to the region in which the possibility of a lesion is high; a lesion candidate feature calculation section that calculates a lesion candidate feature for that region by averaging the region-defined features; and a lesion-classification image diagnosis model that classifies the region as normal or lesional based on the lesion candidate feature. When such a trained image diagnosis model is used, the determination accuracy in regions where the possibility of a lesion is high can be improved. The image features in this case are preferably obtained from an intermediate layer of the lesion-area-detection image diagnosis model.
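To make the data flow of this region-defined configuration concrete, the following is a minimal Python/PyTorch sketch, assuming a hypothetical detector that returns per-pixel image features together with a lesion accuracy map, and a separate classifier head; all names, shapes, and the threshold value are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch of the region-defined pipeline: detect -> binarize ->
# mask-average features -> classify. Not the patent's actual implementation.
import torch

def classify_lesion_candidates(detector, classifier, image, threshold=0.5):
    feats, lesion_map = detector(image)           # (N, M, H, W), (N, 1, H, W)
    mask = (lesion_map > threshold).float()       # binarized lesion candidate mask
    area = mask.sum(dim=(2, 3)).clamp(min=1.0)    # candidate-region pixel count
    region_feats = (feats * mask).sum(dim=(2, 3)) / area  # masked feature average
    return classifier(region_feats)               # normal-vs-lesion logits
```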
It is also preferable to display at least one of the following on the display screen of a display device: an observation position display that shows the plurality of observed areas on a view similar to the observation canvas; a lesion position display that shows the observed areas in which lesions are present on a view similar to the observation canvas; a diagnosis result display that shows the malignancy and type of a lesion in an observed area in which the lesion is present; and a display of a chart for the subject. The observation results and the diagnosis results can thereby be checked on the display screen.
In another aspect, the present invention may be understood as an endoscopic diagnostic support system.
Drawings
Fig. 1 is a flowchart showing an outline when an endoscope diagnosis support method according to the present invention is executed by using a computer.
Fig. 2 is a flowchart showing an algorithm of the observation recording step.
Fig. 3 illustrates an exemplary observation canvas in a situation in which the bladder is the target of observation.
Fig. 4 shows matrix-like observation canvas data.
Fig. 5 illustrates the temporary marking of the first key frame position data on the observation canvas data.
Fig. 6 (A) to (D) show endoscopic images in which the anatomical structures in the bladder that serve as key frames are captured.
Fig. 7 (A) and (B) show examples of key points calculated from a preceding frame and a following frame.
Fig. 8 (A) to (C) explain an example of the temporary marking of the position data of a plurality of following frames.
Fig. 9 (A) to (C) explain that the relative intervals between a plurality of following frames are determined when the next key frame position data is determined.
Fig. 10 (A) illustrates exemplary observation canvas data.
Fig. 10 (B) shows the marking state of the observation canvas corresponding to the observation canvas data.
Fig. 11 is a diagram showing a basic configuration of the image diagnosis support system.
Fig. 12 is a diagram showing a configuration of an image diagnosis support system with annotation expansion.
Fig. 13 is a diagram showing a flow for training the annotation information expansion model.
Fig. 14 (A) and (B) show annotation information corresponding to bladder endoscopic images and the annotation information after expansion.
Fig. 15 is a graph comparing diagnostic accuracy before and after annotation expansion.
Fig. 16 shows a flowchart for creating a lesion classification image diagnostic model in an image diagnostic model having defined lesion areas.
Fig. 17 (A) and (B) show the results of lesion classification before and after using the lesion-classification image diagnosis model in the image diagnosis model having a defined lesion region.
Fig. 18 shows an exemplary display screen on which the observation results and the diagnosis results are displayed.
Fig. 19 shows another exemplary display screen on which observation results and diagnosis results are displayed.
FIG. 20 illustrates an exemplary output report.
Detailed Description
Hereinafter, embodiments of the endoscopic diagnosis support method and the endoscopic diagnosis support system according to the present invention will be explained with reference to the drawings. In the endoscopic diagnosis support method and system according to the present invention, an imaging device mounted at the distal end portion of an endoscope is inserted into the cavity of an organ of a subject, and a computer supports diagnosis of the presence of a lesion in the organ based on a plurality of frames of endoscopic images taken by the imaging device. Fig. 1 is a flowchart showing an outline of the steps performed when the endoscopic diagnosis support method of the present invention is implemented by using a computer. As shown in fig. 1, an endoscopic image EI obtained from an existing endoscope system ES is processed in an observation recording step ORS and an image diagnosis support step IDS. In the observation recording step ORS, an observation record is obtained from the endoscopic image EI, and the record is stored as observation record information ORI in a storage device of the computer. In the image diagnosis support step IDS, support information used when diagnosing the presence of lesions in the organ based on the endoscopic image EI is stored as an image diagnosis result IDR in a storage device of the computer. The diagnosis support information display step SID, implemented by a computer, outputs a diagnosis report including at least one of the observation record information ORI and the image diagnosis result IDR to the screen of a display device, a chart, or the like. The format of the output is arbitrary; for example, the image diagnosis result IDR may be displayed as video on the screen.
[ observation recording step ]
Fig. 2 shows the specific process flow of the observation recording step ORS in an embodiment of the endoscopic diagnosis support method according to the present invention. In the observation recording step ORS of this embodiment, a preparation step (first step) S1 of the observation canvas is performed in advance. Then, a frame marking step (second step) S2, a key point calculation step (third step) S3, a front-rear frame displacement amount calculation step (fourth step) S4, and a front-rear frame marking step (fifth step) S5 are performed, whereby the observation record is made. In an embodiment of the endoscopic diagnosis support system according to the present invention, a computer program for executing the above steps is installed in a computer, and a plurality of means for executing the respective steps are implemented inside the computer. The computer program for the endoscopic diagnosis support system is configured to include algorithms for realizing the above steps.
In the preparation step (first step) S1 of the observation canvas, observation canvas data for an observation canvas for endoscopic images of the cavity in the organ is prepared as electronic data in a memory of the computer. The observation canvas data is made by converting the observation canvas into electronic data. As the observation canvas, a simulated expanded observation canvas SOC may be used in which the positions of the openings and of the top in the organ cavity are roughly specified and one opening is arranged at the center.
Fig. 3 illustrates an exemplary simulated expanded observation canvas in the case where the bladder is the organ to be observed. As a virtual bladder corresponding to the inner wall of the bladder observed through the cystoscope, the bladder is assumed to be a sphere, and the two hemispheres on the anterior wall side (abdomen) and the posterior wall side (back) of the bladder are each laid out as a circle. On this observation canvas, the left and right ureteral openings (e.g., at positions 3/8 from the bottom of the upper circle and 1/4 to the left and right), the interureteric ridge between the openings, the urethral opening (at the center of the junction of the two circles), and the top (the uppermost part of the upper circle and the lowermost part of the lower circle) are depicted.
Fig. 4 is a conceptual diagram in the case where the simulated expanded observation canvas SOC of fig. 3 is converted into observation canvas data. In the example of fig. 4, the observation canvas data is formed by adding symbols representing the presence and type of position data to a matrix MX formed by arranging a plurality of segments (square segments in this example) of the same size and shape assumed on the simulated expanded observation canvas. A simulated expanded observation canvas in which the two hemispheres are aligned horizontally may also be used, and ellipses may be arranged instead of hemispheres. As shown in fig. 5, in all segments sg of the matrix MX of the observation canvas data, a flag (0), indicating that the segment has not been observed and that no position data is present, is set as the initial value. To each segment sg, position information (coordinates) in a two-dimensional array with the urethral opening as the origin on the simulated expanded observation canvas SOC is individually assigned. In the example of fig. 5, a flag (1), indicating that the left ureteral opening was observed (but not yet determined), is added to the corresponding segments.
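A minimal sketch of such observation canvas data follows, assuming a numpy array of square segments with the flag values used in the text (0 = unobserved, 1 = observation candidate, 2 = observed/determined); the grid size and the circular field-of-view marking are illustrative assumptions.

```python
import numpy as np

UNOBSERVED, CANDIDATE, DETERMINED = 0, 1, 2

# Matrix MX of segments sg; every segment starts with flag (0).
canvas = np.zeros((64, 64), dtype=np.uint8)

def mark_circle(canvas, center, radius, flag):
    """Set the flag of every segment within `radius` of `center`,
    approximating the circular field of view seen through the cystoscope."""
    rr, cc = np.ogrid[:canvas.shape[0], :canvas.shape[1]]
    inside = (rr - center[0]) ** 2 + (cc - center[1]) ** 2 <= radius ** 2
    canvas[inside] = flag

# e.g., mark the region around the left ureteral opening as a candidate
mark_circle(canvas, center=(24, 16), radius=3, flag=CANDIDATE)
```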
In the frame marking step (second step) S2, a key frame containing at least one anatomical structure that can specify a position in the organ cavity is determined, and the key frame position data of the key frame is marked in the observation canvas data. Here, a key frame is a frame in which at least one anatomical structure (in the case of the bladder, the two ureteral openings, the urethral opening, or the top where air bubbles accumulate) serving as a reference point for determining relative positions in the cavity is imaged. The position data is either absolute position information with respect to the origin of the observation canvas data, or relative position information with respect to a reference point, together with a frame number.
Fig. 6 (A) to (D) show practical examples of endoscopic images of frames in which the left and right ureteral openings, the urethral opening, and the top where air bubbles gather are imaged. The key frame position data relates to the position of the at least one anatomical structure serving as a reference point on the observation canvas. In particular, when the observation canvas is a simulated expanded observation canvas, the key frame position data relates to the position of the at least one anatomical structure determined by position coordinates on the simulated expanded observation canvas. Marking the key frame position data of a key frame means storing the position information (the coordinates of the segments sg) and the frame number of the image in association with the observation canvas data.
For example, a frame in which either the left or right ureteral opening is captured among the frame images taken by the cystoscope shown in fig. 6 is determined to be the starting key frame, and the corresponding region on the observation canvas is marked as the starting key frame position [the coordinates of the segments sg with the flag (1) in fig. 5 are added]. Here, marking means setting the flags of the corresponding region (segments) on the observation canvas to the flag (1) indicating an observation candidate, and associating the position information with the frame image, as shown in fig. 5. In the example of fig. 5, the flag (1) is given, as the initially marked region, to the segments included in a circle 1/10 the size of the circle of the observation canvas, as a measure of the field of view that actually enters the image when observing through the cystoscope.
In the key point calculation step (third step) S3, the starting key frame is set as the first preceding frame, and three or more key points present in both the preceding frame and the following frame are determined, and their coordinates in the endoscopic image are calculated. Here, the key points correspond to pixels indicating the same positions on the inner wall of the organ imaged in consecutive preceding and following frames. The coordinates of the key points in the endoscopic image can be calculated by using the image feature points used in known self-position estimation techniques such as Visual SLAM (Simultaneous Localization and Mapping), which is applied to automated driving and robot vision. Distinctive portions imaged in a frame are identified as feature points, and the coordinates of the key points in the endoscopic image are calculated by using these feature points as common portions.
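As one concrete stand-in for this feature-point computation, the following sketch uses ORB features and brute-force matching in OpenCV to obtain corresponding key points in the preceding and following frames; the patent itself relies on the image feature points of Visual SLAM (NPL 1), so this pipeline is only an illustrative assumption.

```python
import cv2

def matched_keypoints(prev_frame, next_frame, max_matches=50):
    """Return corresponding key-point coordinates in two consecutive frames."""
    orb = cv2.ORB_create()
    kp1, des1 = orb.detectAndCompute(prev_frame, None)
    kp2, des2 = orb.detectAndCompute(next_frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts_prev = [kp1[m.queryIdx].pt for m in matches[:max_matches]]
    pts_next = [kp2[m.trainIdx].pt for m in matches[:max_matches]]
    return pts_prev, pts_next   # at least 3 pairs are needed for the fourth step
```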
In the front-rear frame displacement amount calculation step (fourth step) S4, the displacement amount between the three or more key points in the preceding frame and those in the following frame is calculated from their coordinates in the endoscopic image. Here, the displacement amount includes the direction and angle in which the three or more key points move, and the respective distances of the three or more key points between the preceding frame and the following frame. Based on the displacement amount, the relative position information of the following frame is calculated from the position information of the preceding frame marked on the observation canvas, the position information is associated with the frame image, and the following frame is marked as a frame continuing from the preceding frame. The observation candidate flag (1) is marked on the observation canvas while this process is repeated until the next key frame is detected.
Fig. 7 (A) and (B) show endoscopic images of a preceding frame and a following frame obtained by imaging the inner wall of the bladder. In this example, the three points A, B, and C in the preceding frame correspond to the three points A', B', and C' in the following frame. When the coordinates of A, B, and C are written as $A(x_A, y_A)$, $B(x_B, y_B)$, and $C(x_C, y_C)$, and the coordinates of A', B', and C' as $A'(x_{A'}, y_{A'})$, $B'(x_{B'}, y_{B'})$, and $C'(x_{C'}, y_{C'})$, the distance and direction in which the three points move between the preceding and following frames can be calculated as the difference vector $G' - G$ between the centroid $G$ of A, B, and C in the preceding frame and the centroid $G'$ of A', B', and C' in the following frame, where the centroids are given by formula (1).
[Formula 1]
$$G = \left(\frac{x_A + x_B + x_C}{3},\ \frac{y_A + y_B + y_C}{3}\right), \qquad G' = \left(\frac{x_{A'} + x_{B'} + x_{C'}}{3},\ \frac{y_{A'} + y_{B'} + y_{C'}}{3}\right) \tag{1}$$
The scale displacement from the preceding frame to the following frame can be calculated as the difference between the mean distances from the three points to their centroid, as shown in formula (2).
[Formula 2]
$$\Delta d = \frac{1}{3}\sum_{P' \in \{A',B',C'\}} \lVert P' - G' \rVert - \frac{1}{3}\sum_{P \in \{A,B,C\}} \lVert P - G \rVert \tag{2}$$
Further, the rotation can be calculated as the average of the angles formed between the vectors from the centroid to the three points in the preceding frame and the corresponding vectors in the following frame, as shown in formula (3).
[Formula 3]
$$\theta = \frac{1}{3}\sum_{P \in \{A,B,C\}} \angle\left(P' - G',\ P - G\right) \tag{3}$$
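The three displacement components can be transcribed directly from formulas (1) to (3); the following numpy sketch (function and variable names are illustrative) computes the translation vector, the scale difference, and the mean rotation angle for any set of three or more corresponding key points.

```python
import numpy as np

def frame_displacement(pts_prev, pts_next):
    P = np.asarray(pts_prev, dtype=float)   # key points A, B, C, ... in the preceding frame
    Q = np.asarray(pts_next, dtype=float)   # corresponding points A', B', C', ...
    g, g2 = P.mean(axis=0), Q.mean(axis=0)  # centroids G and G'
    translation = g2 - g                    # formula (1): G' - G
    scale = (np.linalg.norm(Q - g2, axis=1).mean()
             - np.linalg.norm(P - g, axis=1).mean())      # formula (2)
    a1 = np.arctan2(P[:, 1] - g[1], P[:, 0] - g[0])       # angles about G
    a2 = np.arctan2(Q[:, 1] - g2[1], Q[:, 0] - g2[0])     # angles about G'
    diff = np.arctan2(np.sin(a2 - a1), np.cos(a2 - a1))   # wrap to [-pi, pi]
    rotation = diff.mean()                  # formula (3)
    return translation, scale, rotation
```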
In the front-rear frame marking step (fifth step) S5, the determined position data of the plurality of following frames are marked in the observation canvas data based on at least the displacement amount, the first key frame position data marked first in the second step, and the next key frame position data marked subsequently in the second step. As shown in fig. 8 (A) to (C), the positions of the plurality of following frames are marked as temporary position data until the next key frame position data is determined. As shown in fig. 8 (B) and (C), the flag in each corresponding segment indicating the relative position of the corresponding frame remains the observation candidate flag (1). In the specific example of fig. 8, when the area in which a frame following a key frame is to be marked overlaps any key frame area on the observation canvas, it is determined that the frame has not yet reached the next key frame, and the relative position on the observation canvas of the frame to which the observation candidate flag (1) is added is corrected so that the marked area size and the moving distance become 1/2 of those in the relative information accumulated from the starting key frame marked up to that time. That is, the coefficients in the calculation formulas for the moving distance and the region size are set to 1/2, thereby freeing up space on the observation canvas so that the subsequent marking steps can continue. The coefficient of 1/2 is chosen for convenience, to prevent the plurality of following frames that are observation candidates from running off the matrix, and is not limited to this value.
Then, as shown in fig. 9 (A) to (C), when the next key frame position data (the position data of the top) is determined, the relative positions of the plurality of temporary following frames are adjusted so that the temporary (observation candidate) position data fit between the first key frame position data (the position data of the left ureteral opening) and the next key frame position data, and the flags of the position data of the following frames are determined. Once the first key frame position data and the next key frame position data are determined, the relative positions of the plurality of following frames lying between the two key frame positions are fixed, and their absolute positions are thereby determined. As shown in fig. 9 (B) and (C), the marking information of the consecutive frames from the first key frame to the next key frame is corrected so as to be arranged between the key frames on the observation canvas, and the flags of the segments corresponding to the absolute positions on the observation canvas are changed from the observation candidate flag (1) to the flag (2) indicating that the absolute position has been determined. In this embodiment, the determined position data of each observed frame includes the absolute position information (coordinates) of the plurality of segments sg at the observation position and the frame number. The absolute position information is represented by coordinates determined by rows and columns in an array with the urethral opening as the origin, on the matrix MX formed by arranging a plurality of segments sg of the same size and shape assumed on the simulated expanded observation canvas.
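A minimal sketch of this adjustment follows, under the assumption that the temporary chain of positions is simply rescaled linearly so that its last element lands on the next key-frame position; the actual correction in the embodiment may differ, and all names are illustrative.

```python
import numpy as np

def fit_between_keyframes(first_kf, next_kf, temp_coords):
    """Rescale temporary (row, col) positions so the chain ends on next_kf;
    the caller then promotes the flags from candidate (1) to determined (2)."""
    temp = np.asarray(temp_coords, dtype=float)
    start = np.asarray(first_kf, dtype=float)
    span = temp[-1] - start
    span[span == 0] = 1.0                    # avoid division by zero
    scale = (np.asarray(next_kf, dtype=float) - start) / span
    return [tuple(start + (c - start) * scale) for c in temp]
```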
When the above second to fifth steps are repeated while the position of the endoscope relative to the inner wall of the organ is changed, each segment in the observation canvas data ends up with either the flag (0), indicating that marking has not been performed, or the flag (2), indicating that marking has been performed, as shown in fig. 10 (A). Therefore, when the areas of the observation canvas are displayed classified by color, with the segments having the flag (0) displayed in black and the segments having the flag (2) displayed in gray, as shown in fig. 10 (B), the presence and position of the areas not observed by the endoscope (the black areas) can be clearly displayed.
For a frame determined to possibly contain a lesion in the image diagnosis support step (sixth step) described later, the imaging position of the frame including the lesion can be specified by using the absolute position information of the segments corresponding to that frame and the frame number recorded in the front-rear frame marking step (fifth step) S5. Thus, when a subsequent detailed examination or operation is performed, the exact location of the lesion can be given.
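Because each determined frame stores both its frame number and its absolute segment coordinates, localizing a lesion-bearing frame reduces to a simple lookup, as in this small illustrative sketch (all names and values hypothetical):

```python
def locate_lesion(frame_no, determined_positions):
    """Return the canvas segment (row, col) where the frame was imaged,
    or None if the frame's absolute position was never determined."""
    return determined_positions.get(frame_no)

determined_positions = {120: (3, 5), 121: (3, 6)}   # frame number -> segment
print(locate_lesion(121, determined_positions))      # (3, 6)
```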
[ image diagnosis support step (sixth step) ]
The image diagnosis support step IDS (sixth step) shown in fig. 1 is performed in parallel with the second to fifth steps performed for the plurality of frames, and supports image diagnosis of the presence of a lesion in the organ based on the plurality of position data marked in the observation canvas data and the endoscopic images in the plurality of frames. Alternatively, the presence of a lesion in the organ may be diagnosed after the second to fifth steps have been performed, based on the plurality of position data marked in the observation canvas data and the endoscopic images in the plurality of frames.
In the image diagnosis support step IDS (sixth step) shown in fig. 1, image diagnosis support is performed by using a trained image diagnosis model as follows. The trained image diagnosis model is implemented in a computer and, together with a database, constitutes so-called artificial intelligence.
[ image diagnosis support System Using trained image diagnosis model ]
As shown in fig. 11, the image diagnosis support step (sixth step) can be realized by an image diagnosis support system having artificial intelligence that uses a trained image diagnosis model TDM generated by training an image diagnosis model DM with the data recorded in an endoscopic image database DB as training data. In the example of fig. 11, the endoscopic image database DB records, as training data, normal endoscopic image data, lesion endoscopic image data including lesions, and annotation information data provided by a doctor indicating whether a lesion is included in each image. The sixth step (support system) is implemented for the observed endoscopic image EI by using the trained image diagnosis model TDM obtained by training the image diagnosis model DM with these data. In general, a deep learning model for image classification or object detection, such as GoogLeNet, an Inception model, U-Net, ResNet, YOLO, or SSD, can be used as the image diagnosis model. The diagnostic accuracy of artificial intelligence improves with the quality and quantity of the training data, and good training data for medical images preferably combines not only good image quality but also accurate annotation information from the doctor. Therefore, in this example, normal endoscopic images, lesion endoscopic images, and annotation information are recorded in the endoscopic image database.
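A minimal training sketch follows, assuming a PyTorch segmentation-style model (e.g., a U-Net) and a data loader yielding (image, annotation mask) pairs from the endoscopic image database; hyperparameters and names are illustrative assumptions, not the patent's settings.

```python
import torch
import torch.nn as nn

def train_image_diagnosis_model(model, loader, epochs=10, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()        # pixel-wise normal/lesion classes
    model.train()
    for _ in range(epochs):
        for image, mask in loader:         # image: (N, 3, H, W); mask: (N, H, W)
            opt.zero_grad()
            logits = model(image)          # (N, C, H, W) class logits
            loss = loss_fn(logits, mask)
            loss.backward()
            opt.step()
    return model
```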
[ support for image diagnosis by annotation expansion ]
For many examinations, such as cystoscopy, the numbers of examinations and of patients are each an order of magnitude smaller than those of gastrointestinal endoscopy, and it is difficult to collect the examination images targeted for diagnosis support as training data. Therefore, even when a large number of medical images and annotation information can be collected for training, creating an artificial intelligence for diagnosis support capable of discriminating the presence of lesions in images with high accuracy, that artificial intelligence is difficult to apply directly to diagnosis support for organs or examinations, such as cystoscopy, for which training data are hard to collect. To realize image diagnosis support by highly accurate artificial intelligence in actual examinations, this problem must be solved.
Fig. 12 shows a configuration of an image diagnosis support system that uses an annotation expansion model AEM in the image diagnosis support step IDS (sixth step) in order to solve the above problem. In this system, the annotation expansion model AEM is provided in the endoscopic image database DB to create expanded annotation information. Fig. 13 shows the specific flow for training the annotation expansion model AEM. The annotation expansion model AEM shown in fig. 13 is based on an autoencoder composed of an encoder E and a decoder D. The autoencoder learns its parameters so that, when the latent variable obtained by the encoder E, which once compresses the dimensionality of the input information, is decoded by the decoder D, the output returns to the same information as the input. The annotation expansion model calculates features (H, W, M) corresponding to each pixel from an intermediate layer of the image diagnosis model DM as follows. The image diagnosis model DM serves as a feature extractor FE, and a lesion endoscopic image LEI recorded in the endoscopic image database is input into it. The obtained features (H, W, M) are input into the encoder E together with the annotation information (H, W, C) corresponding to the lesion endoscopic image LEI. Then, the latent variable (1, L) output from the encoder E and the features (H, W, M) obtained from the feature extractor FE are inversely computed by the decoder D to generate expanded annotation information (H, W, C') as new annotation information. Here, the latent variable is a variable that affects the interpretation of relationships between variables; for example, it captures habits of the annotation work performed on the endoscopic images in the endoscopic image database. "H" is the feature map height, the extent of the pixel array in the height direction of a feature map of the convolutional neural network. "W" is the feature map width, the extent in the width direction of the feature map. "M" is the feature map depth, the extent in the depth (channel) direction of the feature map. "C" is the number of classes assigned to each pixel in the annotation information. "L" is the latent vector length of the latent variable. When creating the annotation expansion model AEM, training is preferably performed so as to reduce the cross entropy between the annotation information input to the encoder E and the expanded annotation information output from the decoder D. That is, in this embodiment, the annotation expansion model AEM is trained to reduce the cross entropy between the probability distribution of the annotation information (H, W, C) and that of the expanded annotation information (H, W, C').
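The following is a schematic PyTorch sketch of such a VAE-style annotation expansion model: the encoder compresses the concatenated features (M channels) and annotation map (C channels) into a latent vector of length L, and the decoder reconstructs an annotation map from the latent vector and the features alone. Layer sizes and module names are illustrative assumptions, not the patent's architecture.

```python
import torch
import torch.nn as nn

class AnnotationExpander(nn.Module):
    def __init__(self, M=256, C=2, L=16):
        super().__init__()
        self.enc = nn.Sequential(                    # encoder E
            nn.Conv2d(M + C, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.mu = nn.Linear(64, L)
        self.logvar = nn.Linear(64, L)
        self.dec = nn.Sequential(                    # decoder D
            nn.Conv2d(M + L, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, C, 1))                     # expanded annotation logits

    def forward(self, feats, annot):                 # (N, M, H, W), (N, C, H, W)
        h = self.enc(torch.cat([feats, annot], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # sample latent
        zmap = z[:, :, None, None].expand(-1, -1, *feats.shape[2:])
        return self.dec(torch.cat([feats, zmap], dim=1))

# Training minimizes cross entropy between `annot` and the output (plus the
# usual KL term); at expansion time, new latent vectors are sampled randomly.
```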
When the above annotation expansion model AEM is used, useful expanded annotation information can be newly obtained from the endoscopic images in the original endoscopic image database, without adding new annotation information and without imposing extra work on the doctor, even when the training data are few. A GAN (generative adversarial network) can also be used as the annotation expansion model AEM. In this embodiment, an annotation expansion model AEM based on a VAE (variational autoencoder), which uses a probability distribution as the latent variable, is used. Fig. 14 (A) and (B) show examples of pieces of expanded annotation information generated by the trained annotation expansion model for the respective target endoscopic images. As these views show, multiple pieces of expanded annotation information approximating the original annotation information added by the doctor can be generated from the same endoscopic image.
Preferably, the annotation expansion model AEM expands annotation information randomly. Expanded annotation information is generated from the probability distribution defined by the latent variable of the trained annotation expansion model. Random expansion here does not mean generating every piece of expanded annotation information that the distribution admits; it means generating expanded annotation information by randomly sampling the latent variable. Concretely, in the examples of Fig. 14 (A) and (B), one piece of expanded annotation information is generated at random from, for example, five expandable candidates. In this way, well-balanced expanded annotation information is obtained without adding more expanded annotation information than necessary. Furthermore, when an existing M-fold data expansion method is combined with an L-fold annotation expansion on a data set of N items, the expanded data set grows to L × M × N items, as the sketch below illustrates.
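A minimal sketch of such random expansion, reusing the hypothetical AnnotationExpansionVAE above: each draw of the latent variable from its prior, decoded against the fixed image features, yields one expanded annotation, and the last lines illustrate the L × M × N count.

```python
import torch

@torch.no_grad()
def expand_annotations(aem, feats, num_variants=5):
    """Generate expanded annotations by randomly sampling the latent variable.

    aem: trained AnnotationExpansionVAE (sketch above); feats: (B, M, H, W)
    features from the diagnosis model. Names are assumptions, not the patent's.
    """
    variants = []
    for _ in range(num_variants):
        z = torch.randn(feats.size(0), aem.fc_mu.out_features)  # z ~ N(0, I)
        zmap = z[:, :, None, None].expand(-1, -1, feats.size(2), feats.size(3))
        logits = aem.dec(torch.cat([feats, zmap], dim=1))
        variants.append(logits.argmax(dim=1))  # (B, H, W) expanded annotation
    return variants

# Combined with conventional data expansion: a data set of N images with
# M-fold image expansion and L-fold annotation expansion grows to L * M * N.
N, M_fold, L_fold = 1000, 8, 5
print(L_fold * M_fold * N)  # 40000 training samples
```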
Fig. 15 shows the diagnostic accuracy F when the image diagnosis model is trained with training data to which expansion data obtained by existing data expansion have been added, and when it is trained with training data to which both expansion data obtained by existing data expansion and data obtained by annotation expansion have been added. The horizontal axis represents the ratio of training data used: "baseline 1.0" is the case in which training uses the expansion data obtained by existing data expansion applied to 100% of the training data in the endoscopic image database, and "student 1.0" is the case in which training uses both the existing data expansion and the annotation expansion applied to 100% of the training data. Fig. 15 shows that the diagnostic accuracy F improves when annotation expansion is used in training. Furthermore, when annotation expansion is applied with the amount of original training data reduced by 10%, the accuracy exceeds that of existing data expansion applied to 100% of the training data; the annotation expansion method therefore improves training accuracy with small data.
According to this embodiment, the limited data set can be used efficiently for training without collecting additional training data to improve diagnostic accuracy: annotation information is newly generated for the endoscopic images of the training data set (rather than by a physician) using the annotation expansion model AEM trained on that same data set, and the expanded annotation information is combined with the original annotation information to form an expanded data set. This further improves the training accuracy of the image diagnosis model DM in the small-data case.
[Image diagnosis support by region-defined lesion determination]
In the image diagnosis support system that performs the sixth step, diagnosis may be supported by detecting a region in the endoscopic image in which the likelihood of a lesion is high, and by determining whether that region is a lesion, using a trained image diagnosis model trained with the data recorded in the endoscopic image database as training data. An endoscopic image contains both portions assumed to be normal and portions assumed to be lesions. Therefore, restricting the evaluation target to a region in which the likelihood of a lesion is high, rather than evaluating the entire image, improves the accuracy of determining whether that region is a normal portion or a lesion.
Fig. 16 shows one specific example of creating the trained image diagnosis models (a trained lesion area detection image diagnosis model and a trained lesion classification image diagnosis model) used for region-defined lesion determination in the sixth step. The lesion area detection image diagnosis model LADM used in Fig. 16 extracts the image features (H, W, M) of all pixels from the lesion endoscopic image LEI and the normal endoscopic image NEI. Using the image features of the pixels lying in a region in which the likelihood of a lesion is high, the model specifies from the endoscopic images LEI and NEI both that region and its region-defining features (H, W, M×0/1). Next, the lesion candidate feature (1, avg(M×1)) is calculated from the region-defining features (H, W, M×0/1): it is the average of each pixel's feature M over the part where the lesion candidate mask (H, W, 0/1) equals "1". Then, using the lesion classification image diagnosis model LCDM, the region in which the likelihood of a lesion is high is classified into a normal portion or a lesion according to the lesion candidate feature (1, avg(M×1)).
More specifically, the image diagnosis model from which the trained image diagnosis model for this image diagnosis support is created consists of the lesion area detection image diagnosis model LADM, a binarization processing section BP, a region-defining feature calculating section ALFC, a lesion candidate feature calculating section LFC, and the lesion classification image diagnosis model LCDM.
The lesion area detection image diagnosis model LADM creates a lesion accuracy map (H, W, L) from the endoscopic image together with the image features (H, W, M) of all pixels in the image. ResNet-50, a convolutional neural network 50 layers deep, can be used as the lesion area detection image diagnosis model LADM. The binarization processing section BP creates the lesion candidate mask (H, W, 0/1) by binarizing the lesion accuracy map; Otsu's method, an image binarization method, can be used here. The region-defining feature calculating section ALFC calculates the region-defining features (H, W, M×0/1), restricted to the region in which the likelihood of a lesion is high, by multiplying the image features (H, W, M) by the lesion candidate mask (H, W, 0/1). The lesion candidate feature calculating section LFC calculates the lesion candidate feature (1, avg(M×1)) of that region by averaging the region-defining features over the defined region. The lesion classification image diagnosis model LCDM then classifies the region into a normal portion or a lesion based on the lesion candidate feature (1, avg(M×1)); a multilayer perceptron (MLP) with a softmax activation function can be used as the LCDM. In this example, the image features (H, W, M) of each pixel are obtained from an intermediate layer of the lesion area detection image diagnosis model LADM. A sketch of this pipeline follows.
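The following is a minimal sketch of this pipeline, assuming NumPy feature maps and scikit-image's Otsu thresholding; the shapes, the two-class MLP sizes, and the helper name region_defined_classification are illustrative assumptions, not the patent's implementation.

```python
import numpy as np
import torch
import torch.nn as nn
from skimage.filters import threshold_otsu  # Otsu binarization

def region_defined_classification(features, lesion_map, mlp):
    """Sketch of region-defined lesion determination (illustrative only).

    features:   (M, H, W) per-pixel image features from an intermediate
                layer of the lesion area detection model LADM.
    lesion_map: (H, W) lesion accuracy map from the LADM head.
    mlp:        lesion classification model LCDM (normal vs. lesion).
    """
    # Binarization section BP: lesion candidate mask (H, W) in {0, 1}.
    mask = (lesion_map >= threshold_otsu(lesion_map)).astype(np.float32)
    if mask.sum() == 0:
        return None  # no lesion candidate region in this frame
    # Region-defining feature calculation ALFC: zero non-candidate pixels.
    region_feat = features * mask[None, :, :]                   # (M, H, W)
    # Lesion candidate feature LFC: average over the masked region only.
    candidate_feat = region_feat.sum(axis=(1, 2)) / mask.sum()  # (M,)
    # Lesion classification LCDM: MLP with a softmax over {normal, lesion}.
    logits = mlp(torch.from_numpy(candidate_feat).unsqueeze(0).float())
    return torch.softmax(logits, dim=1)  # (1, 2) class probabilities

# A two-class MLP in the spirit of the LCDM (sizes are assumptions):
mlp = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))
```

Averaging features only inside the candidate mask is what restricts the classifier's evidence to the region in which the likelihood of a lesion is high.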
In the image diagnosis support system that performs the sixth step, a trained image diagnosis model composed of the trained lesion area detection image diagnosis model and the trained lesion classification image diagnosis model, obtained by training the lesion area detection image diagnosis model LADM and the lesion classification image diagnosis model LCDM, is used. With such a trained image diagnosis model, the determination accuracy in regions where the likelihood of a lesion is high can be improved.
Fig. 17 (A) and (B) show the lesion determination results without region definition and with region definition, respectively. In both figures, the vertical axis represents the evaluation index IoU and the horizontal axis represents the size of the lesion region. IoU ≥ 0.4 indicates that the lesion was determined correctly, while IoU < 0.4 indicates that the lesion was missed. Comparing Fig. 17 (A) and (B): without region definition there are 11 misses among the smallest lesions (area 0-100), whereas with region definition the misses among the smallest lesions drop to 6. These results show that determination accuracy can be improved by detecting a region in the endoscopic image in which the likelihood of a lesion is high and then determining whether that region is a lesion. The IoU criterion itself is sketched below.
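For reference, a minimal sketch of the IoU criterion used in this evaluation; the 0.4 threshold is taken from the text, and the toy masks are illustrative.

```python
import numpy as np

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """IoU between predicted and ground-truth lesion masks (boolean arrays)."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(inter) / float(union) if union > 0 else 0.0

pred = np.array([[1, 1], [0, 0]], dtype=bool)
gt   = np.array([[1, 0], [0, 0]], dtype=bool)
print(iou(pred, gt) >= 0.4)  # True: IoU = 0.5, counted as a correct detection
```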
[Display screen of the display device]
Fig. 18 shows an example of the display screen of the display device of the image diagnosis support system performing the image diagnosis support method. In this example, the display screen shows the patient's chart information D1; an observation position display D2 in which the observed regions and the positions where lesions were detected are displayed on a view resembling the observation canvas; a diagnosis result display D3 showing the malignancy and type of the lesion in each observation region in which a lesion exists; the original endoscopic image D4 containing the lesion; and an endoscopic image diagnosis support image D5 obtained by superimposing the image diagnosis support result at the time the lesion was determined onto the endoscopic image. With this display, the observation results and the diagnosis results can be confirmed on the display screen.
Fig. 19 shows another example of the display screen of the display device. Here too, the screen shows the patient chart information D1, the observation position display D2 (observed regions and lesion positions on a view resembling the observation canvas), the diagnosis result display D3 (malignancy and type of the lesion in each observation region in which a lesion exists), the original endoscopic image D4 containing the lesion, and the endoscopic image diagnosis support image D5 with the superimposed diagnosis support result. In addition, this example shows a processing state display D6 and lesion candidate thumbnail images D7. The processing state display D6 sequentially shows the observation time and the presence of lesions as processing proceeds; clicking a vertical line in D6 displays the endoscopic image diagnosis support image at that time as a lesion candidate thumbnail D7. The thumbnails in D7 show, as lesion candidates, the images that appeared on the endoscopic image diagnosis support image D5 as diagnosis results during observation. When a thumbnail is selected, the displays D1, D2, and D3 change according to the selection. Fig. 20 illustrates an example of an output report: the images whose check boxes were checked among the lesion candidate thumbnails D7 of Fig. 19 are included. The output report is not limited to the example of Fig. 20.
The constituent features of several configurations disclosed in the present specification are exemplified below.
[1] An endoscopic diagnosis support system in which an imaging device provided at a distal end portion of an endoscope is inserted into a cavity of an organ of a subject, and the presence of lesions in the organ is diagnosed by using a computer based on a plurality of frames including an endoscopic image taken by the imaging device, wherein the computer is configured to perform: a first step of preparing observation canvas data of an observation canvas for the endoscopic image of the cavity; a second step of determining a key frame containing at least one anatomical structure that can specify a position in the cavity of the organ, and marking key frame position data of the key frame in the observation canvas data; a third step of setting the key frame as the first previous frame, and determining three or more key points present on both a previous frame and a subsequent frame among the plurality of frames to calculate the coordinates of the key points in the endoscopic image; a fourth step of calculating the displacement amount between the previous frame and the subsequent frame based on the coordinates of the three or more key points in the endoscopic image; a fifth step of marking the determined position data of a plurality of subsequent frames in the observation canvas data based on at least the displacement amount, the first key frame position data marked first in the second step, and the next key frame position data marked subsequently in the second step; and a sixth step of supporting image diagnosis of the presence of a lesion in the organ, in parallel with performing the second to fifth steps for the plurality of frames or after performing them, based on the plurality of determined position data marked in the observation canvas data and the endoscopic images in the plurality of frames.
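The fourth step of [1] does not spell out how the displacement is computed from the key point coordinates; below is a minimal sketch under the assumption that a single 2-D translation is estimated as the mean motion of the matched key points.

```python
import numpy as np

def displacement(prev_pts: np.ndarray, next_pts: np.ndarray) -> np.ndarray:
    """Estimate the frame-to-frame displacement from matched key points.

    prev_pts, next_pts: (K, 2) coordinates of the same K >= 3 key points
    in the previous and the subsequent frame. Averaging the per-point
    motion into one translation is an assumption; the text only requires
    that the displacement be computed from the key point coordinates.
    """
    assert prev_pts.shape == next_pts.shape and prev_pts.shape[0] >= 3
    return (next_pts - prev_pts).mean(axis=0)  # (dx, dy)

# Fifth step (illustrative): the canvas position of each subsequent frame
# can then be accumulated from the last marked key frame position, e.g.
#   pos[i + 1] = pos[i] + displacement(pts[i], pts[i + 1])
```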
[2] The endoscopic diagnosis support system of [1] above, wherein the computer further executes a step of displaying, on a display screen of a display device, at least one of: an observation position display on which a plurality of observation regions are displayed on a view resembling the observation canvas; a lesion position display on which the observation regions in which lesions exist are displayed on a view resembling the observation canvas; a diagnosis result display on which the malignancy and type of the lesion in the observation region in which the lesion exists are displayed; and a display of a medical chart of the subject.
[3] An endoscopic diagnosis support system in which an imaging device provided at a distal end portion of an endoscope is inserted into a cavity of an organ of a subject, and the presence of lesions in the organ is diagnosed by using a computer based on a plurality of frames including an endoscopic image captured by the imaging device, wherein a region in which the likelihood of a lesion is high is detected in the endoscopic image, and whether that region is a lesion is determined by using a trained image diagnosis model obtained by using the data recorded in an endoscopic image database as training data.
[4] A computer program for implementing an endoscopic diagnosis support system in which an imaging device provided at a front end portion of an endoscope is inserted into a cavity of an organ of a subject, and the presence of lesions in the organ is diagnosed by using a computer based on a plurality of frames including an endoscopic image captured by the imaging device, wherein the computer program is recorded in a computer-readable recording medium and installed in the computer, and the computer is configured to perform: a first step of preparing observation canvas data of an observation canvas for the endoscopic image of the cavity; a second step of determining a key frame containing at least one anatomical structure that can specify a position in the cavity of the organ, and marking key frame position data of the key frame in the observation canvas data; a third step of setting the key frame as the first previous frame, and determining three or more key points present on both a previous frame and a subsequent frame among the plurality of frames to calculate the coordinates of the key points in the endoscopic image; a fourth step of calculating the displacement amount between the previous frame and the subsequent frame based on the coordinates of the three or more key points in the endoscopic image; a fifth step of marking the determined position data of a plurality of subsequent frames in the observation canvas data based on at least the displacement amount, the first key frame position data marked first in the second step, and the next key frame position data marked subsequently in the second step; and a sixth step of supporting image diagnosis of the presence of a lesion in the organ, in parallel with performing the second to fifth steps for the plurality of frames or after performing them, based on the plurality of determined position data marked in the observation canvas data and the endoscopic images in the plurality of frames, wherein the sixth step executes at least one of: a first support system that performs diagnosis support by using a trained image diagnosis model obtained by using training data including image data and annotation information recorded in an endoscopic image database; and a second support system that performs diagnosis support by detecting a region in which the likelihood of a lesion is high and determining whether that region is a lesion, using a trained image diagnosis model obtained by using the data recorded in the endoscopic image database as training data.
Industrial applicability
According to the invention, marking is performed on the observation canvas data of the inner wall of the target organ; therefore, examined and unexamined regions can be clearly distinguished, the inside of the target organ can be observed thoroughly, and where each captured image was taken can be recorded correctly. Further, image diagnosis of the presence of lesions in the organ can be supported based on the plurality of position data marked in the observation canvas data and the endoscopic images in the plurality of frames.
REFERENCE SIGNS LIST
ES: endoscope system
EI: endoscopic image
IDS: image diagnosis support step
ORS: observation and recording step
SID: diagnosis support information display unit SOC: simulation unfolding observation canvas
MX: matrix array
DB: endoscopic image database
DM: image diagnosis model
TDM: trained diagnostic imaging model
AEM: annotating an expansion model
E: encoder with a plurality of sensors
D: decoder
TADM: Normal-image additional-training type image diagnosis model
TDM 1: Trained image diagnosis model
DM 1: Image diagnosis model
TDM 2: Trained image diagnosis model
LADM: Lesion area detection image diagnosis model
LCDM: Lesion classification image diagnosis model
BP: binarization processing unit
ALFC: region definition feature calculation unit
LFC: lesion candidate feature calculation unit
LSEDM: similar image determination model
Claims (amended under Article 19 of the PCT)
1. An endoscopic diagnosis support method for supporting endoscopic diagnosis by using a computer, wherein an imaging device provided at a distal end portion of an endoscope is inserted into a cavity of an organ of a subject, and presence of a lesion in the organ is diagnosed by using the computer based on a plurality of frames including an endoscopic image taken by the imaging device, the computer performing the steps of:
a first step of: preparing viewing canvas data for a viewing canvas of the endoscopic image of the cavity,
and a second step of: determining a keyframe containing at least one anatomical structure capable of specifying a location in a lumen of the organ in a frame, and tagging keyframe location data of the keyframe in the viewing canvas data,
and a third step of: setting the key frame as the first previous frame, and determining three or more key points each present on both a previous frame and a subsequent frame defined among the plurality of frames, to calculate the coordinates of the key points in the endoscopic image,
a fourth step of: calculating a displacement amount between the previous frame and the subsequent frame based on the coordinates of the three or more key points in the endoscopic image,
a fifth step of: marking the determined position data of a plurality of subsequent frames in the viewing canvas data based at least on the displacement amount, first key frame position data marked first in the second step, and next key frame position data marked subsequently in the second step, and
a sixth step of: in parallel with performing the second to fifth steps for the plurality of frames or after performing the second to fifth steps for the plurality of frames, supporting an image diagnosis of the presence of the lesion in the organ based on the plurality of determined position data marked in the viewing canvas data and the endoscopic images in the plurality of frames.
2. The endoscopy supporting method of claim 1, wherein
The position data of the subsequent frame includes relative position information for the key frame position data and a frame number.
3. The endoscopy supporting method of claim 1 or 2, wherein
the viewing canvas is a simulated expanded viewing canvas in which the positions of a plurality of openings and the top of the cavity of the organ are specified by a general method, with one opening arranged in the center, and
in the fifth step, a plurality of temporary position data are used as the position data of the plurality of subsequent frames until the next key frame position data is determined, and when the next key frame position data is determined, the determined position data of the plurality of subsequent frames are marked so that the plurality of temporary position data of the plurality of subsequent frames fit between the first key frame position data and the next key frame position data.
4. The endoscopy supporting method of claim 2, wherein
the relative position information consists of coordinate position data with an attached symbol representing a type, wherein a plurality of segments of the same size and shape are assumed on the simulated expanded viewing canvas and arranged to form a matrix, and the coordinate position of the segment in which the at least one anatomical structure is located is defined as a reference point of the matrix.
5. The endoscopy supporting method of claim 1, wherein the second step and the third step are implemented using a self-position estimating technique.
6. The endoscopy supporting method of claim 1, wherein in the sixth step,
executing at least one of a first support method and a second support method, the first support method supporting diagnosis by using a trained image diagnosis model that has been trained with data recorded as training data in an endoscopic image database including image data with annotation information, and
the second support method supporting diagnosis by using a trained image diagnosis model, trained with data recorded as training data in the endoscopic image database, to detect a region in the endoscopic image in which the possibility of a lesion is high and to determine whether the region in which the possibility of the lesion is high is a lesion.
7. The endoscopy diagnosis support method of claim 6, wherein the endoscope image database further comprises expanded annotation information obtained by expanding annotation information using an annotation expansion model.
8. The endoscopy supporting method of claim 7, wherein
The annotation expansion model is an annotation expansion model based on an automatic encoder composed of an encoder and a decoder, and
the annotation expansion model is trained to estimate expanded annotation information by inputting into the encoder a set consisting of features extracted from an intermediate layer of the image diagnosis model, into which a lesion endoscopic image recorded in the endoscopic image database has been input with the image diagnosis model serving as a feature extractor, and the annotation information corresponding to the lesion endoscopic image, and by causing the decoder to perform an inverse operation on the latent variable output from the encoder and the features.
9. The endoscopy supporting method of claim 8, wherein training is performed to reduce cross entropy between annotation information input to the encoder and the expanded annotation information.
10. The endoscopy diagnosis support method of claim 8, wherein the annotation expansion model randomly expands the expanded annotation information.
11. The endoscopy supporting method of claim 7, wherein the endoscopic image database further comprises an expanded data set containing expanded data, obtained by expanding the data of the lesion endoscopic images recorded in the endoscopic image database using a data expansion technique, and expanded annotation information.
12. The endoscopy supporting method of claim 11, wherein
the trained image diagnosis model used in the second support method is configured to extract image features of all pixels from the endoscopic image, specify from the endoscopic image a region in which the likelihood of the lesion is high, calculate a lesion candidate feature for that region by using the image features of a plurality of pixels located in it, and classify the region into a normal portion or the lesion from the lesion candidate feature.
13. The endoscopy supporting method of claim 12, wherein the trained image diagnosis model is comprised of:
a lesion area detection image diagnostic model that creates a lesion accuracy map from the image features and the endoscopic image,
a binarization processing section that creates a lesion candidate mask by performing binarization processing of the lesion accuracy map,
a region-defining feature calculating section that calculates a region-defining feature restricted to the region in which the likelihood of the lesion is high, based on the image features and the lesion candidate mask,
a lesion candidate feature calculating section that calculates a lesion candidate feature for the region in which the likelihood of the lesion is high by averaging the region-defining features, and
a lesion classification image diagnosis model that classifies the region in which the likelihood of the lesion is high into the normal portion or the lesion based on the lesion candidate feature.
14. The endoscopy supporting method of claim 1, wherein the computer further performs the step of displaying at least one of the following on a display screen of a display device:
a viewing position display that displays a plurality of viewing areas on a view similar to the viewing canvas,
a lesion location display displaying an observation area in which lesions are present on a view similar to the observation canvas,
a diagnosis result display showing the malignancy and type of the lesion in the observation area in which the lesion exists, and
display of a medical chart of the subject.
15. An endoscopy support system provided with a computer having means for performing the endoscopy support method of claim 1, wherein the computer comprises:
first means for performing the first step of: preparing viewing canvas data for a viewing canvas of the endoscopic image of the cavity,
Second means for performing the following second step: determining a keyframe containing at least one anatomical structure capable of specifying a location in a lumen of the organ in a frame, and tagging keyframe location data of the keyframe in the viewing canvas data,
third means for performing the third step of: setting the key frame as a first previous frame, and determining three or more key points existing on previous and subsequent frames defined in the plurality of frames, respectively, to calculate coordinates of the key points in the endoscopic image,
fourth means for performing the fourth step of: the displacement amount between the preceding frame and the following frame is calculated based on coordinates of the three or more key points in the endoscopic image,
fifth means for performing the fifth step of: marking position data of the determined plurality of subsequent frames in the viewing canvas data based at least on the displacement amount, first key frame position data marked first in the second step, and next key frame position data marked subsequently in the second step, and
sixth means for performing the sixth step of: in parallel with or after performing the second to fifth steps for the plurality of frames, supporting an image diagnosis of the presence of the lesion in the organ based on a plurality of the determined position data marked in the observation canvas data and the endoscopic images in the plurality of frames.

CN202180093283.7A 2020-12-08 2021-12-07 Method for endoscopic diagnostic support and system for endoscopic diagnostic support Pending CN116916807A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2020203765 2020-12-08
JP2020-203765 2020-12-08
PCT/JP2021/045003 WO2022124315A1 (en) 2020-12-08 2021-12-07 Endoscopic diagnosis assistance method and endoscopic diagnosis assistance system

Publications (1)

Publication Number Publication Date
CN116916807A true CN116916807A (en) 2023-10-20

Family

ID=81974510

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202180093283.7A Pending CN116916807A (en) 2020-12-08 2021-12-07 Method for endoscopic diagnostic support and system for endoscopic diagnostic support

Country Status (4)

Country Link
US (1) US12488896B2 (en)
JP (1) JP7388648B2 (en)
CN (1) CN116916807A (en)
WO (1) WO2022124315A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118918170A * 2024-10-10 2024-11-08 Honor Device Co., Ltd. Displacement detection method, screen detection method and detection device

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117173075B * 2022-05-24 2026-01-23 Hon Hai Precision Industry Co., Ltd. Medical image detection method and related equipment
US20240005532A1 (en) * 2022-07-04 2024-01-04 Hefei University Of Technology Dynamic tracking methods for in-vivo three-dimensional key point and in-vivo three-dimensional curve
EP4461252A1 (en) * 2023-05-10 2024-11-13 Stryker Corporation Systems and methods for tracking locations of interest in endoscopic imaging

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2314794A1 (en) * 2000-08-01 2002-02-01 Dimitre Hristov Apparatus for lesion or organ localization
WO2014168128A1 (en) 2013-04-12 2014-10-16 オリンパスメディカルシステムズ株式会社 Endoscope system and operation method for endoscope system
WO2016044624A1 (en) 2014-09-17 2016-03-24 Taris Biomedical Llc Methods and systems for diagnostic mapping of bladder
JP2017205343A (en) * 2016-05-19 2017-11-24 オリンパス株式会社 Endoscope device and method for operating endoscope device
JP2018050890A (en) * 2016-09-28 2018-04-05 富士フイルム株式会社 Image display apparatus, image display method, and program
SG11202003973VA (en) 2017-10-30 2020-05-28 Japanese Found For Cancer Res Image diagnosis assistance apparatus, data collection method, image diagnosis assistance method, and image diagnosis assistance program
JP2019180966A (en) 2018-04-13 2019-10-24 学校法人昭和大学 Endoscope observation support apparatus, endoscope observation support method, and program
CN109146884B (en) * 2018-11-16 2020-07-03 青岛美迪康数字工程有限公司 Endoscopic examination monitoring method and device
US20220095889A1 (en) 2019-07-23 2022-03-31 Hoya Corporation Program, information processing method, and information processing apparatus

Also Published As

Publication number Publication date
JPWO2022124315A1 (en) 2022-06-16
WO2022124315A1 (en) 2022-06-16
US20240038391A1 (en) 2024-02-01
US12488896B2 (en) 2025-12-02
JP7388648B2 (en) 2023-11-29

Similar Documents

Publication Publication Date Title
CN116916807A (en) Method for endoscopic diagnostic support and system for endoscopic diagnostic support
JP4631057B2 (en) Endoscope system
US12193634B2 (en) Method for real-time detection of objects, structures or patterns in a video, an associated system and an associated computer readable medium
Spyrou et al. Video-based measurements for wireless capsule endoscope tracking
CN111214255A (en) Medical ultrasonic image computer-aided diagnosis method
WO2017027638A1 (en) 3d reconstruction and registration of endoscopic data
IE20090299A1 (en) An endoscopy system
TW202322744A (en) Computer-implemented systems and methods for analyzing examination quality for an endoscopic procedure
US20110187707A1 (en) System and method for virtually augmented endoscopy
CN102065744A (en) Image processing device, image processing program, and image processing method
US12433478B2 (en) Processing device, endoscope system, and method for processing captured image
WO2021250951A1 (en) Program, information processing method, and information processing device
WO2022194126A1 (en) Method for building image reading model based on capsule endoscope, device, and medium
JPWO2022124315A5 (en)
CN116958147B (en) Target area determining method, device and equipment based on depth image characteristics
CN111311626A (en) Automatic detection method of skull fracture based on CT image and electronic medium
CN107204045A (en) Virtual endoscope system based on CT images
Lurie et al. Registration of free-hand OCT daughter endoscopy to 3D organ reconstruction
Barbour et al. Surface reconstruction of the pediatric larynx via structure from motion photogrammetry: a pilot study
JP2007105352A (en) Differential image display device, differential image display method and program thereof
CN119184591A (en) Bleeding point positioning method, positioning device, electronic equipment and medium
KR20240059417A (en) Urinary tract location estimation system
Figueiredo et al. Dissimilarity measure of consecutive frames in wireless capsule endoscopy videos: A way of searching for abnormalities
JP7768398B2 (en) Endoscopic examination support device, endoscopic examination support method, and program
CN112862754A (en) System and method for prompting missing detection of retained image based on intelligent identification

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination