
WO2024240089A1 - Endoscope image display method and apparatus, terminal device and storage medium - Google Patents

Endoscope image display method and apparatus, terminal device and storage medium

Info

Publication number
WO2024240089A1
Authority
WO
WIPO (PCT)
Prior art keywords
contour
image
target tissue
tissue
white light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/CN2024/094009
Other languages
English (en)
Chinese (zh)
Inventor
郭毅军
严崇源
刘顿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Xishan Science and Technology Co Ltd
Original Assignee
Chongqing Xishan Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Xishan Science and Technology Co Ltd filed Critical Chongqing Xishan Science and Technology Co Ltd
Publication of WO2024240089A1
Legal status: Pending

Links

Classifications

    • A — HUMAN NECESSITIES
    • A61 — MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B — DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 1/00 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B 1/04 — Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor; combined with photographic or television appliances

Definitions

  • the present application relates to the field of image processing, and in particular to an endoscopic image display method, apparatus, terminal device and storage medium.
  • the related fluorescence endoscope imaging display modes are mainly divided into white light mode, fluorescence mode and fusion mode.
  • the white light mode can only display tissue information but cannot display the location of fluorescently marked tumor lesions; the fluorescence mode can only display black and white fluorescent images that mark the location of tumor lesions but cannot display tissue information; the traditional fusion mode is to superimpose and fuse the fluorescence image onto the white light image. Doctors can observe the fluorescently marked lesions in the white light mode, but because the superposition of the fluorescence image covers the original color and structure of the epidermis, blood vessels and other tissues, doctors must constantly switch display modes during the operation to avoid removing important blood vessels and normal tissues, which affects the efficiency of the operation and increases the risk of the operation.
  • an endoscopic image display method, apparatus, terminal device, and storage medium are provided.
  • the present application provides an endoscopic image display method, the endoscopic image display method comprising:
  • the target tissue outline is fused with the original white light image and displayed to obtain a white light image with the target tissue outline.
  • the step of extracting tissue contours from the tissue image to obtain the target tissue contour comprises:
  • the contour of the binary image is extracted based on an edge finding algorithm to obtain the contour of the target tissue.
  • the step of processing the tissue image by a binary method to obtain a binary image further includes:
  • the step of extracting the contour of the binary image based on an edge finding algorithm to obtain the contour of the target tissue comprises:
  • the contour of the target tissue is obtained by performing contour extraction on the smoothed binary image based on an edge finding algorithm.
  • the step of processing the tissue image by a binary method to obtain a binary image includes:
  • according to a comparison result between the grayscale of each pixel in the tissue image and a grayscale threshold, the tissue image is segmented into a target tissue area and a non-target tissue area;
  • a binary image is obtained according to the target tissue area and the non-target tissue area.
  • the step of segmenting the tissue image into a target tissue area and a non-target tissue area according to a comparison result between the grayscale of each pixel in the tissue image and the grayscale threshold comprises:
  • for each pixel point in the tissue image, if the grayscale of the pixel point is greater than the grayscale threshold, the pixel point is determined as a pixel point of the target tissue area;
  • otherwise, the pixel point is determined as a pixel point of the non-target tissue area;
  • the target tissue area and the non-target tissue area in the tissue image are determined according to the pixel points of the target tissue area and the pixel points of the non-target tissue area.
  • the step of extracting tissue contours from the tissue image to obtain the target tissue contour comprises:
  • the step of fusing the target tissue contour with the original white light image to obtain a white light image with the target tissue contour comprises:
  • the enhanced target tissue contour is fused with the original white light image to obtain the white light image with the target tissue contour.
  • the step of enhancing the target tissue contour to obtain the enhanced target tissue contour comprises:
  • a coefficient enhancement matrix is calculated by a coefficient enhancement algorithm
  • a nonlinear adjustment coefficient is calculated by a nonlinear adjustment algorithm
  • the enhanced target tissue contour is calculated based on the nonlinear adjustment coefficient, the channel contour pixel set, the tissue contour pixel set, and a contour enhancement algorithm.
  • the step of acquiring the original white light image of the endoscope and the marked tissue image comprises:
  • the step of fusing the enhanced target tissue contour with the original white light image to obtain the white light image with the target tissue contour comprises:
  • the enhanced target tissue contour is fused with the new white light image to obtain the white light image with the target tissue contour.
  • the step of obtaining the original white light image of the endoscope and the marked tissue image further includes:
  • the step of extracting tissue contours from the tissue image to obtain the target tissue contour comprises:
  • the step of fusing the target tissue contour with the original white light image to obtain a white light image with the target tissue contour comprises:
  • the target tissue contour is fused with the standard white light image and displayed to obtain a white light image with the target tissue contour.
  • the present application also provides an endoscopic image display device, the endoscopic image display device comprising:
  • An image acquisition module used for acquiring the original white light image of the endoscope and the marked tissue image
  • a feature extraction module used to extract tissue contours from the tissue image to obtain a target tissue contour
  • the image processing module is used to fuse the target tissue outline with the original white light image for display, so as to obtain a white light image with the target tissue outline.
  • the feature extraction module is specifically used to:
  • the contour of the binary image is extracted based on an edge finding algorithm to obtain the contour of the target tissue.
  • the feature extraction module is specifically used to:
  • the contour of the target tissue is obtained by performing contour extraction on the smoothed binary image based on an edge finding algorithm.
  • the feature extraction module is specifically used to:
  • the tissue image is segmented into a target tissue area and a non-target tissue area;
  • a binary image is obtained according to the target tissue area and the non-target tissue area.
  • the feature extraction module is specifically used to:
  • for each pixel point in the tissue image, if the grayscale of the pixel point is greater than the grayscale threshold, the pixel point is determined as a pixel point of the target tissue area;
  • otherwise, the pixel point is determined as a pixel point of the non-target tissue area;
  • the target tissue area and the non-target tissue area in the tissue image are determined according to the pixel points of the target tissue area and the pixel points of the non-target tissue area.
  • the endoscopic image display device is further used for:
  • the step of fusing the target tissue contour with the original white light image to obtain a white light image with the target tissue contour comprises:
  • the enhanced target tissue contour is fused with the original white light image to obtain the white light image with the target tissue contour.
  • the endoscopic image display device is further used for:
  • a coefficient enhancement matrix is calculated by a coefficient enhancement algorithm
  • a nonlinear adjustment coefficient is calculated by a nonlinear adjustment algorithm
  • the enhanced target tissue contour is calculated based on the nonlinear adjustment coefficient, the channel contour pixel set, the tissue contour pixel set, and a contour enhancement algorithm.
  • the endoscopic image display device is further used for:
  • the step of fusing the enhanced target tissue contour with the original white light image to obtain the white light image with the target tissue contour comprises:
  • the enhanced target tissue contour is fused with the new white light image to obtain the white light image with the target tissue contour.
  • the device is further used for:
  • the step of extracting tissue contours from the tissue image to obtain the target tissue contour comprises:
  • the step of fusing the target tissue contour with the original white light image to obtain a white light image with the target tissue contour comprises:
  • the target tissue contour is fused with the standard white light image and displayed to obtain a white light image with the target tissue contour.
  • An embodiment of the present application also proposes a terminal device, which includes a memory, a processor, and an endoscopic image display program stored in the memory and executable on the processor, wherein the endoscopic image display program implements the steps of the endoscopic image display method as described above when executed by the processor.
  • An embodiment of the present application further proposes a computer-readable storage medium, on which an endoscopic image display program is stored.
  • the endoscopic image display program is executed by a processor, the steps of the endoscopic image display method as described above are implemented.
  • FIG. 1 is a schematic diagram of functional modules of a terminal device to which an endoscopic image display device belongs in one or more embodiments.
  • FIG. 2 is a flow chart of a method for displaying an endoscope image in one or more embodiments.
  • FIG. 3 is another schematic flow chart of an endoscope image display method in one or more embodiments.
  • FIG. 4 is another schematic flow chart of an endoscope image display method in one or more embodiments.
  • FIG. 5 is another flowchart of an endoscope image display method in one or more embodiments.
  • FIG. 6 is another flowchart of an endoscope image display method in one or more embodiments.
  • FIG. 7 is another flowchart of an endoscopic image display method in one or more embodiments.
  • FIG. 8 is another flowchart of an endoscopic image display method in one or more embodiments.
  • FIG. 9 is a flowchart of an endoscopic image display involved in one or more embodiments.
  • FIG. 10 is a flow chart of contour extraction involved in one or more embodiments.
  • FIG. 11 is a tissue image and a white light image with a target tissue outline involved in one or more embodiments.
  • FIG. 12 is an endoscope image display device involved in one or more embodiments.
  • the main solution of the embodiment of the present application includes: obtaining the original white light image of the endoscope and the marked tissue image; extracting the tissue contour of the tissue image to obtain the target tissue contour; fusing the target tissue contour with the original white light image to obtain a white light image with the target tissue contour.
  • the target tissue position can be marked in the endoscope imaging display mode while displaying the tissue information of the marked position, thereby improving the surgical efficiency and reducing the surgical risk.
  • the embodiments of the present application take into account that the related technical solutions based on the white light mode can only display tissue information, but cannot display the location of the tumor lesion marked by fluorescence; the fluorescence mode can only display the black and white fluorescence image marked with the location of the tumor lesion, but cannot display tissue information; the traditional fusion mode superimposes and fuses the fluorescence image onto the white light image, so the doctor can observe fluorescently marked lesions in the white light mode, but because the superposition of the fluorescence image covers the original color and structure of tissues such as the epidermis and blood vessels, doctors must constantly switch display modes during the operation to avoid removing important blood vessels and normal tissues, which affects the efficiency of the operation and increases the risk of the operation.
  • an embodiment of the present application proposes a solution that can extract the lesion contour in the tissue image and fuse it into the original white light image, so as to achieve marking the target tissue position in the endoscopic imaging display mode while displaying the tissue information of the marked position, thereby improving surgical efficiency and reducing surgical risks.
  • Figure 1 is a functional module diagram of a terminal device to which an endoscopic image display device belongs in one or more embodiments.
  • the endoscopic image display device can be a device independent of the terminal device and capable of data processing, which can be carried on the terminal device in the form of hardware or software.
  • the terminal device may be a medical endoscope, or a fixed terminal device with an endoscope, a camera system or a server.
  • the terminal device to which the endoscopic image display apparatus belongs includes at least an output module 110 , a processor 120 , a memory 130 and a communication module 140 .
  • the memory 130 stores an operating system and an endoscope image display program.
  • the endoscope image display device can obtain the original white light image of the endoscope and the marked tissue image, extract the tissue contour of the tissue image, obtain the target tissue contour, fuse the target tissue contour with the original white light image for display, and obtain a white light image with the target tissue contour and store it in the memory 130.
  • the output module 110 can be a display screen, a speaker, etc.
  • the communication module 140 can include a WIFI module, a mobile communication module, a Bluetooth module, etc., and communicate with an external device or server through the communication module 140.
  • the target tissue outline is fused with the original white light image and displayed to obtain a white light image with the target tissue outline.
  • the contour of the binary image is extracted based on an edge finding algorithm to obtain the contour of the target tissue.
  • the contour of the target tissue is obtained by performing contour extraction on the smoothed binary image based on an edge finding algorithm.
  • the enhanced target tissue contour is fused with the original white light image to obtain the white light image with the target tissue contour.
  • a coefficient enhancement matrix is calculated by a coefficient enhancement algorithm
  • a nonlinear adjustment coefficient is calculated by a nonlinear adjustment algorithm
  • the enhanced target tissue contour is calculated based on the nonlinear adjustment coefficient, the channel contour pixel set, the tissue contour pixel set, and a contour enhancement algorithm.
  • the enhanced target tissue contour is fused with the new white light image to obtain the white light image with the target tissue contour.
  • the target tissue contour is fused with the standard white light image and displayed to obtain a white light image with the target tissue contour.
  • This embodiment obtains the original white light image of the endoscope and the marked tissue image; extracts the tissue contour of the tissue image to obtain the target tissue contour; and fuses the target tissue contour with the original white light image to obtain a white light image with the target tissue contour.
  • the marked tissue image can be a tissue image with the lesion position marked, so that the lesion position is marked in the endoscopic imaging display mode, and the tissue information of the marked position is displayed at the same time, thereby improving the surgical efficiency and reducing the surgical risk.
  • FIG. 2 is a flow chart of an endoscopic image display method in one or more embodiments.
  • An embodiment of the present application provides an endoscopic image display method, the method comprising:
  • Step S10 acquiring the original white light image of the endoscope and the marked tissue image
  • the system involved in the method of this embodiment may include a medical endoscope and an image processing device, wherein the image processing device may be integrated into the medical endoscope or carried on an image processing terminal device or server.
  • this embodiment adopts a contour and image fusion display method. Specifically, the original white light image with tissue information and the tissue image with the target tissue marked are first obtained, and then the contour of the target tissue in the tissue image is extracted to obtain the target tissue contour, and finally the target tissue contour is fused to the original white light image to obtain a white light image with the target tissue contour.
  • the target tissue can be a lesion tissue.
  • a tissue image with the lesion tissue marked can be first obtained, the lesion outline can be extracted from the tissue image, and then the lesion outline can be fused to the original white light image to obtain a white light image with the lesion outline, so as to achieve marking the lesion position while displaying the tissue information of the marked position, thereby improving surgical efficiency and reducing surgical risks.
  • a white light endoscope can be used to obtain original white light images with tissue information based on the visible light spectrum of 400-700nm.
  • the original white light image allows observation of tissue information inside the human body, such as mucosal color, mucosal state and vascular texture, which is very helpful for providing clear images of the inside of the human body during surgery.
  • the tissue image includes one or more of the following images: a fluorescent image, a near-infrared image, an ultraviolet image, and of course other tissue images may also be included.
  • the fluorescent image can be obtained through a fluorescent endoscope, specifically by injecting a fluorescent developer into the patient's body, and then using a fluorescent camera to capture the fluorescent signal to achieve the marking of the lesion tissue, thereby obtaining a tissue image with the lesion location marked, which plays a key role in accurate intraoperative positioning and reducing surgical risks.
  • the acquisition of fluorescent images and the acquisition of white light images can be carried out simultaneously, and only the hardware of the fluorescent endoscope and the white light endoscope need to be integrated.
  • the marked tissue images also have limitations: they only provide black-and-white images that distinguish the locations of normal tissue and lesion tissue, and because these black-and-white images cover the original colors and structures of the epidermis, blood vessels and other tissues, they cannot display tissue information.
  • both the original white light image and the tissue image have their own drawbacks. Even in the traditional fusion mode, that is, the fluorescent image is superimposed on the original white light image, these drawbacks cannot be avoided.
  • the doctor needs to obtain different effective information from multiple images respectively, which requires the doctor to frequently switch images during surgery, which brings great inconvenience, reduces surgical efficiency, and increases surgical risks.
  • this embodiment proposes to obtain the original white light image and the tissue image, and then perform further contour extraction and fusion processing to achieve the simultaneous acquisition of effective information of the two images, and avoid the drawbacks of the original white light image and the tissue image, so as to realize the marking of the lesion position while displaying the tissue information of the marked position, improve surgical efficiency, and reduce surgical risks.
  • Step S30 extracting tissue contours from the tissue image to obtain target tissue contours
  • this embodiment proposes to obtain the original white light image and tissue image, and then perform further contour extraction and fusion processing to achieve the simultaneous acquisition of effective information of the two images, and avoid the disadvantages of the original white light image and tissue image.
  • the contour of the target tissue in the tissue image is extracted by first obtaining the position and shape information of the target tissue from the tissue image, and then obtaining the contour of the target tissue.
  • the position and shape of the target tissue is expressed by using the contour of the target tissue that occupies fewer pixels, leaving more pixels to express the original color and structure of the epidermis, blood vessels and other tissues, thereby obtaining effective information such as the position and shape of the target tissue in the tissue image, while avoiding the disadvantage that the tissue image covers the original color and structure of the epidermis, blood vessels and other tissues.
  • the tissue image can be segmented into a target tissue region and a non-target tissue region by image segmentation, and then the edges of the target tissue region and the non-target tissue region can be obtained by an edge detection algorithm to obtain the target tissue contour.
  • the target tissue may be a lesion tissue
  • the non-target tissue may be a normal tissue.
  • in tissue images such as fluorescence images, lesion tissue and normal tissue present different gray levels, and the lesion tissue is usually brighter than the normal tissue.
  • the lesion tissue and the normal tissue can be represented as a white area and a black area, respectively, thereby segmenting the lesion tissue area and the normal tissue area.
  • neural networks and other methods can also be used to automatically identify tissue images and segment lesion tissue areas and normal tissue areas, thereby improving analysis efficiency and accuracy.
  • the edge detection algorithm can be used to detect the edge between the white area (lesion tissue) and the black area (normal tissue) using the fluorescent image as an example, and the pixel set of the edge in the tissue image can be extracted to obtain the lesion tissue contour, that is, the target tissue contour, so that the position of the target tissue can be marked by replacing the white area with the target tissue contour. Because the target tissue contour occupies fewer pixels in the image, it does not affect the doctor's observation of the original color and structure of the epidermis, blood vessels and other tissues of the lesion tissue.
  • This embodiment obtains the target tissue contour by performing tissue contour extraction on the tissue image, and can express the position and shape of the lesion tissue in the image with the target tissue contour occupying fewer pixels, thereby avoiding the disadvantage that the tissue image covers the original color and structure of the epidermis, blood vessels and other tissues, and can effectively help doctors analyze tissue images and diagnose diseases.
  • Step S50 merging the target tissue contour with the original white light image for display, to obtain a white light image with the target tissue contour.
  • the target tissue contour can be fused with the original white light image to achieve the effect of information fusion.
  • the method of displaying the target tissue contour on the original white light image in the embodiment of the present application is different from some traditional technologies that identify the target tissue therein and display the target tissue contour by performing image analysis or spectral analysis on the white light image.
  • the embodiment of the present application does not need to obtain the target tissue contour through complex calculation analysis and sample analysis capabilities, but only needs to obtain the target tissue contour based on, for example, the fluorescent image of a fluorescent endoscope, and the processing efficiency and accuracy are improved.
  • the target tissue contour can be fused to the original white light image through image synthesis or image fusion technology to obtain a white light image with the target tissue contour.
  • the target tissue contour occupies relatively few pixels in the image, there are more pixels in the white light image with the target tissue contour to express tissue information, such as the original color and structure of the epidermis, blood vessels and other tissues of the target tissue. Therefore, this image display method can express the position and shape of the target tissue while not covering the color and structure of the target tissue. Thereby, the target tissue and its surrounding tissues can be displayed more comprehensively and clearly, which is more conducive to the doctor's identification of the target tissue.
  • the endoscopic image display method proposed in the embodiment of the present application obtains the original white light image of the endoscope and the marked tissue image; extracts the tissue contour of the tissue image to obtain the target tissue contour; and fuses the target tissue contour with the original white light image to obtain a white light image with the target tissue contour.
  • in this way, the target tissue position can be marked in the endoscopic imaging display mode while the tissue information of the marked position is displayed, thereby improving surgical efficiency and reducing surgical risks.
  • the target tissue may be a lesion tissue, and specifically, the lesion outline in the tissue image may be extracted and fused to the original white light image, thereby marking the lesion position in the endoscopic imaging display mode while displaying the tissue information of the marked position, thereby improving surgical efficiency and reducing surgical risks.
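  • The three steps above can be illustrated with a short Python sketch that thresholds the marked tissue image, takes the pixels where the binary result changes as the contour, and draws that contour in green on the white-light frame. This is only a minimal sketch under simplifying assumptions (a fixed global threshold, a gradient-magnitude edge test, a plain green overlay); the function and parameter names are illustrative and not taken from the present application.

```python
import numpy as np

def overlay_target_contour(white_rgb: np.ndarray, tissue_gray: np.ndarray,
                           gray_thresh: int = 128, edge_thresh: float = 0.5) -> np.ndarray:
    """Overlay a contour extracted from `tissue_gray` (HxW uint8) onto `white_rgb` (HxWx3 uint8)."""
    # Step S31 analogue: binarize the marked tissue image into target / non-target areas.
    binary = (tissue_gray.astype(np.float32) > gray_thresh).astype(np.float32)
    # Step S33 analogue: pixels where the binary image changes are taken as the contour.
    gy, gx = np.gradient(binary)
    contour = np.hypot(gx, gy) > edge_thresh
    # Step S50 analogue: draw the contour in green on a copy of the white-light frame.
    fused = white_rgb.copy()
    fused[contour] = (0, 255, 0)
    return fused

if __name__ == "__main__":
    # Synthetic example: a bright disc in the tissue image stands in for a marked lesion.
    yy, xx = np.mgrid[0:256, 0:256]
    tissue = (((xx - 128) ** 2 + (yy - 128) ** 2) < 50 ** 2).astype(np.uint8) * 200
    white = np.full((256, 256, 3), 120, dtype=np.uint8)
    print(overlay_target_contour(white, tissue).shape)
```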
  • FIG. 3 is another flowchart of an endoscopic image display method in one or more embodiments.
  • in one or more embodiments, the step S30 of extracting the tissue contour from the tissue image to obtain the target tissue contour is refined, wherein step S30 may include:
  • Step S31 processing the tissue image by a binary method to obtain a binary image
  • the tissue images can be processed by binarization to obtain a binary image.
  • Binarization is a method of image segmentation. Image segmentation simplifies or changes the representation of an image to make it easier to analyze.
  • the working principle of binarization is to set the pixels in the image to two categories: black and white. Pixels with grayscale greater than a preset threshold are set to white, and vice versa. After completing the above judgment for all pixels in the image, a binary image consisting of black and white can be obtained.
  • a threshold calculation method can be used to calculate the tissue image to obtain a grayscale threshold, and then the grayscale of each pixel in the tissue image and the grayscale threshold are compared to segment the image into a target tissue area and a non-target tissue area. For pixels whose grayscale is greater than the grayscale threshold, they correspond to pixels in the target tissue area, and for pixels whose grayscale is less than or equal to the grayscale threshold, they correspond to pixels in the non-target tissue area.
  • the above operation is performed on all pixels in the tissue image to segment the image into a target tissue area and a non-target tissue area, thereby determining a binary image.
  • the binarization can be expressed as F(x, y) = 1 if f(x, y) > T, and F(x, y) = 0 otherwise, where T is the grayscale threshold, F(x, y) is the binary image result, (x, y) is the pixel coordinate of the image, 1 denotes the target tissue area, 0 denotes the non-target tissue area, and f(x, y) is the grayscale of the pixel point of the tissue image.
  • the binary image consists only of the target tissue area and the non-target tissue area.
  • the edge of the target tissue area and the non-target tissue area is the target tissue contour, which provides a basis for further extraction of the target tissue contour.
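  • A minimal sketch of this binarization step is shown below. The present application does not fix a particular threshold calculation method in this passage, so the sketch uses Otsu's method as an assumed example of computing the grayscale threshold T; the function and variable names are illustrative.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Assumed example of a grayscale-threshold calculation (Otsu's method); `gray` is uint8."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    cum_prob = np.cumsum(prob)
    cum_mean = np.cumsum(prob * np.arange(256))
    global_mean = cum_mean[-1]
    best_t, best_var = 0, -1.0
    for t in range(1, 255):
        w0, w1 = cum_prob[t], 1.0 - cum_prob[t]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mean[t] / w0
        mu1 = (global_mean - cum_mean[t]) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(tissue_gray: np.ndarray) -> np.ndarray:
    """F(x, y) = 1 where f(x, y) > T (target tissue area), 0 otherwise."""
    t = otsu_threshold(tissue_gray)
    return (tissue_gray > t).astype(np.uint8)
```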
  • Step S33 performing contour extraction on the binary image based on an edge finding algorithm to obtain the target tissue contour.
  • the edge search algorithm can be used to extract the target tissue contour that occupies fewer image pixels to replace the target tissue area that occupies more image pixels, thereby expressing the position and shape of the target tissue, so that more pixels can be used to express the original tissue information.
  • the edge search algorithm is an algorithm that finds a set of pixels with contrast changes in surrounding grayscale intensity to determine the edge.
  • the grayscale intensity around a pixel is compared with a preset threshold through specific rules. If the grayscale intensity around a pixel is greater than the preset threshold, the pixel belongs to a set of surrounding pixels with contrasting grayscale changes, and the set is the edge pixel set, thereby obtaining an edge.
  • the target tissue contour can be extracted from the binary image by using the Sobel edge detection method, wherein the Sobel operator includes two sets of 3x3 matrices, namely, the X-direction matrix and the Y-direction matrix.
  • the specific Sobel contour extraction operators are as follows: the X-direction matrix is [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]] and the Y-direction matrix is [[-1, -2, -1], [0, 0, 0], [1, 2, 1]].
  • first, find a pixel point in the target tissue area of the binary image and search its 8 neighboring pixels; input the grayscales of these 8 neighboring pixels into the X-direction matrix calculation of the Sobel operator and into the Y-direction matrix calculation, to obtain the X-direction gradient value and the Y-direction gradient value of the pixel point.
  • Add the square of the X-direction gradient value of the pixel point and the square of the Y-direction gradient value of the pixel point and then take the square root to obtain the total gradient of the pixel point, which reflects the grayscale contrast of the pixels around the pixel point. Then compare the total gradient of the pixel point with the preset edge threshold.
  • if the total gradient of the pixel point is greater than the preset edge threshold, the pixel point is determined as a pixel of the target tissue contour and added to the pixel set of the target tissue contour. Repeat the above operation for the pixel points in the target tissue area in the binary image to obtain the pixel set of the target tissue contour, thereby determining the target tissue contour.
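  • The Sobel-based edge search described above can be sketched as follows: the X- and Y-direction gradients are computed from the 3x3 neighbourhood of each pixel, their magnitude sqrt(Gx^2 + Gy^2) is compared with an edge threshold, and pixels above the threshold inside the target tissue area are collected into the contour pixel set. The vectorized correlation and the threshold value are implementation choices, not requirements of the present application.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=np.float32)

def sobel_contour(binary: np.ndarray, edge_thresh: float = 1.0) -> np.ndarray:
    """Return a boolean mask of target-tissue contour pixels from a 0/1 binary image.

    The gradient magnitude sqrt(Gx^2 + Gy^2) is compared with `edge_thresh`
    (an assumed value); pixels above it are kept as contour pixels.
    """
    img = binary.astype(np.float32)
    padded = np.pad(img, 1, mode="edge")
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    # Correlate the 3x3 Sobel kernels with the 8-neighbourhood of every pixel.
    for dy in range(3):
        for dx in range(3):
            window = padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
            gx += SOBEL_X[dy, dx] * window
            gy += SOBEL_Y[dy, dx] * window
    magnitude = np.hypot(gx, gy)
    # Keep only contour pixels that lie inside the target tissue area, as described above.
    return (magnitude > edge_thresh) & (binary > 0)
```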
  • the endoscopic image display method proposed in the embodiment of the present application processes the tissue image by a binary method to obtain a binary image; and extracts the contour of the binary image based on an edge search algorithm to obtain the target tissue contour.
  • the tissue image is divided into a target tissue area and a non-target tissue area by a binary method to obtain a binary image, and then the target tissue contour is extracted by a Sobel edge search method, thereby achieving the effect of expressing the position and shape of the target tissue in the image with a target tissue contour with fewer pixels, avoiding the disadvantage that the tissue image covers the original color and structure of the epidermis, blood vessels and other tissues, and can effectively help doctors analyze tissue images and diagnose diseases.
  • in one or more embodiments, after the step S31 of processing the tissue image by a binary method to obtain a binary image, the following step is further included:
  • Step S32 performing smoothing processing on the binary image to obtain a smoothed binary image
  • Step S33 extracting the contour of the binary image based on an edge finding algorithm, and obtaining the contour of the target tissue comprises:
  • Step S331 performing contour extraction on the smoothed binary image based on an edge finding algorithm to obtain the target tissue contour.
  • the binary image is smoothed. Smoothing refers to blurring the pixels in the image to reduce noise and irregular areas in the image.
  • some common image filters, such as Gaussian filters or median filters, can be used to smooth the image to obtain a smoothed binary image.
  • a shape-based opening operation of a circular structure element can be used to perform an erosion and then dilation operation on the binary image.
  • the edge between the segmented target tissue area and the non-target tissue area can be made smoother without increasing the extra area, and potential noise can be removed to obtain a smooth binary image with clearer edges.
  • the specific formula of the shape-based opening operation is X ∘ S = (X ⊖ S) ⊕ S, that is, the image is first eroded by the structural element and the result is then dilated by the same structural element, where X is a single-channel image, S is a structural element, and x is the pixel point through which the structural element passes.
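  • A sketch of this opening operation, assuming SciPy is available, is given below: the binary image is eroded and then dilated with a circular (disk-shaped) structural element. The structuring-element radius is an assumed parameter.

```python
import numpy as np
from scipy import ndimage

def disk(radius: int) -> np.ndarray:
    """Circular (disk-shaped) structural element S."""
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    return (xx ** 2 + yy ** 2) <= radius ** 2

def open_smooth(binary: np.ndarray, radius: int = 3) -> np.ndarray:
    """Opening X ∘ S = (X ⊖ S) ⊕ S: erosion followed by dilation of the binary image."""
    s = disk(radius)
    eroded = ndimage.binary_erosion(binary.astype(bool), structure=s)
    opened = ndimage.binary_dilation(eroded, structure=s)
    return opened.astype(np.uint8)
```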
  • contour extraction may be performed on the smoothed binary image to obtain a target tissue contour with a clearer and smoother contour.
  • the endoscopic image display method proposed in the embodiment of the present application obtains a smoothed binary image by smoothing the binary image; and extracts the contour of the smoothed binary image based on an edge search algorithm to obtain the contour of the target tissue.
  • the present application specifically smoothes the binary image to effectively eliminate noise and irregular areas in the image, making the edges of the target tissue area and the non-target tissue area smoother and clearer, thereby optimizing the contour of the target tissue, which can help doctors analyze images and diagnose diseases more clearly and accurately.
  • in one or more embodiments, after the step S30 of extracting the tissue contour from the tissue image to obtain the target tissue contour, the following steps are further included:
  • Step S40 performing enhancement processing on the target tissue contour to obtain an enhanced target tissue contour
  • Step S50 fusing the target tissue contour with the original white light image for display, and obtaining the white light image with the target tissue contour comprises:
  • Step S51 fusing the enhanced target tissue contour with the original white light image to obtain the white light image with the target tissue contour.
  • when the target tissue contour is fused to the original white light image, it may be mixed with the internal tissue information of the human body in the original white light image, resulting in an unclear target tissue contour.
  • therefore, the target tissue contour is enhanced, which can be achieved by optimizing the hue, brightness and saturation of the pixels of the target tissue contour. For example, if the background of internal human tissue is dark, the target tissue outline can be transformed through enhancement processing into a set of pixels with medium brightness; or if the background is reddish, the target tissue outline can be transformed into a set of green pixels. Different target tissue outlines can be generated through enhancement processing according to different environments, so that the enhanced target tissue outline can be clearly displayed against the background of various internal human tissues.
  • the enhanced target tissue contour can be fused to the original white light image through image synthesis or image fusion technology, so that the doctor can see the enhanced target tissue contour more clearly, which is convenient for the doctor to quickly identify the target tissue, thereby effectively improving the efficiency of the operation.
  • the endoscopic image display method proposed in the embodiment of the present application performs enhancement processing on the target tissue contour to obtain the enhanced target tissue contour; the enhanced target tissue contour is fused with the original white light image for display to obtain the white light image with the target tissue contour.
  • the hue, saturation and brightness of the pixels of the target tissue contour are enhanced so that the enhanced target tissue contour can be clearly presented against the background of various internal tissues of the human body.
  • the doctor can clearly see the enhanced target tissue contour, which is convenient for the doctor to quickly identify the target tissue, thereby effectively improving the efficiency of the operation.
  • in one or more embodiments, the step S40 of performing enhancement processing on the target tissue contour to obtain the enhanced target tissue contour is refined, wherein step S40 may include:
  • Step S41 obtaining a contour area of the target tissue based on the target tissue contour
  • when the target tissue contour is fused to the original white light image, it may be mixed with the internal tissue information of the human body in the original white light image, resulting in an unclear target tissue contour.
  • the target tissue contour needs to be enhanced. Specifically, the hue, saturation and brightness of the target tissue contour can be changed to make the target tissue contour more recognizable against the background of the internal tissue of the human body.
  • a pixel point set of the target tissue contour, that is, the contour area, is obtained.
  • Step S42 calculating the channel pixels of the target color of the original white light image in the contour area to obtain a channel contour pixel set
  • Step S43 calculating the pixels of the tissue image in the contour area to obtain a tissue contour pixel set
  • the target color is the color presented after the target tissue contour is enhanced.
  • the target color may be green. Based on the contour area, all pixel points of the green channel of the original white light image in the contour area are calculated, that is, the channel contour pixel set.
  • all pixel points in the contour area of the tissue image ie, a tissue contour pixel set, are calculated.
  • specifically, C_I_g(x, y) = I_g(x, y) and C_FI(x, y) = FI(x, y) for (x, y) ∈ C, where C is the contour area; (x, y) is the pixel coordinate in the image; C_I_g(x, y) is the channel contour pixel set; C_FI(x, y) is the tissue contour pixel set; I_g(x, y) is the green channel pixel set of the original white light image; and FI(x, y) is the tissue image pixel set.
  • Step S44 obtaining contour features through contour feature calculation according to the channel contour pixel set and the tissue contour pixel set;
  • the absolute value of the difference between the tissue contour pixel set and the channel contour pixel set is calculated to obtain the contour feature.
  • the specific formula for contour feature calculation is D(x, y) = |C_FI(x, y) − C_I_g(x, y)|, where D(x, y) is the contour feature.
  • Step S45 calculating a coefficient enhancement matrix according to the contour features through a coefficient enhancement algorithm
  • in the coefficient enhancement algorithm, max(D(x, y)) denotes the maximum value of the matrix D(x, y), and P(x, y) is the resulting coefficient enhancement matrix.
  • the coefficient enhancement matrix can reflect the strength characteristics of the target tissue contour signal.
  • Step S46 calculating and obtaining a nonlinear adjustment coefficient through a nonlinear adjustment algorithm according to the coefficient enhancement matrix
  • the weight gain coefficient used in the nonlinear adjustment algorithm takes values in [0, +∞); W(x, y) is the nonlinear adjustment coefficient.
  • Step S47 calculating the enhanced target tissue contour according to the nonlinear adjustment coefficient, the channel contour pixel set and the tissue contour pixel set, and a contour enhancement algorithm.
  • W is the nonlinear adjustment coefficient
  • C fusion is the target tissue contour after enhancement processing.
  • the target tissue contour after enhancement processing can be displayed in green and at maximum brightness without losing information.
  • the endoscopic image display method proposed in the embodiment of the present application obtains the contour area of the target tissue based on the contour of the target tissue; calculates the channel pixels of the target color of the original white light image in the contour area to obtain the channel contour pixel set; calculates the pixels of the tissue image in the contour area to obtain the tissue contour pixel set; obtains the contour feature through the contour feature operation according to the channel contour pixel set and the tissue contour pixel set; calculates the coefficient enhancement matrix according to the contour feature through the coefficient enhancement algorithm; calculates the nonlinear adjustment coefficient according to the coefficient enhancement matrix through the nonlinear adjustment algorithm; and calculates the enhanced target tissue contour according to the nonlinear adjustment coefficient, the channel contour pixel set, the tissue contour pixel set, and the contour enhancement algorithm.
  • the channel contour pixel set and the tissue contour pixel set are obtained through the target tissue contour, and the nonlinear adjustment parameters that can display the target tissue contour at the maximum brightness without losing the amount of information are obtained through the contour feature operation, the coefficient enhancement algorithm and the nonlinear adjustment algorithm.
  • the channel contour pixel set, the tissue contour pixel set and the nonlinear adjustment parameter are combined to obtain the green target tissue contour displayed at the maximum brightness, that is, the enhanced target tissue contour. This makes the outline of the target tissue clearer, making it easier for doctors to analyze images and diagnose diseases.
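  • The enhancement steps S41–S47 can be sketched as follows. Because the exact expressions for the coefficient enhancement matrix, the nonlinear adjustment coefficient and the contour enhancement are not reproduced in this text, the sketch substitutes plausible stand-in formulas that are merely consistent with the quantities defined above (D normalized by its maximum, a power-law adjustment driven by a weight gain coefficient, and a push of the contour toward maximum green brightness); these formulas are assumptions for illustration, not the claimed algorithm.

```python
import numpy as np

def enhance_contour(contour_mask: np.ndarray,
                    white_rgb: np.ndarray,
                    tissue_gray: np.ndarray,
                    gamma: float = 2.0) -> np.ndarray:
    """Return enhanced green-channel values C_fusion on the contour area C.

    The formulas for P, W and C_fusion below are illustrative assumptions that
    follow the quantities defined in the text (D = |C_FI - C_Ig|, a coefficient
    enhancement matrix normalized by max(D), a nonlinear adjustment with a
    weight-gain coefficient gamma in [0, inf), and a contour pushed toward
    maximum green brightness); they are not the patent's exact equations.
    """
    c = contour_mask.astype(bool)                          # contour area C
    i_g = white_rgb[..., 1].astype(np.float32)             # green channel of the white-light image
    fi = tissue_gray.astype(np.float32)                    # tissue (e.g. fluorescence) image

    c_ig = np.where(c, i_g, 0.0)                           # channel contour pixel set C_Ig
    c_fi = np.where(c, fi, 0.0)                            # tissue contour pixel set C_FI

    d = np.abs(c_fi - c_ig)                                # contour feature D(x, y)
    p = d / max(float(d.max()), 1e-6)                      # assumed coefficient enhancement matrix
    w = p ** (1.0 / (1.0 + gamma))                         # assumed nonlinear adjustment coefficient
    c_fusion = np.where(c, c_ig + w * (255.0 - c_ig), 0.0) # assumed contour enhancement
    return np.clip(c_fusion, 0, 255).astype(np.uint8)
```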
  • in one or more embodiments, after the step S10 of acquiring the original white light image of the endoscope and the marked tissue image, the following step is further included:
  • Step S48 deleting the pixels of the original white light image in the contour area to obtain a new white light image
  • when the target tissue contour is fused into the original white light image, the pixels belonging to the contour area in the original white light image are superimposed with the pixels of the target tissue contour, which may affect the presentation effect of the fused target tissue contour. Therefore, the pixels belonging to the contour area in the original white light image are deleted to obtain a new white light image, so that the target tissue contour is clearly visualized when it is fused to the new white light image.
  • where I_b(x, y), I_r(x, y) and I_g(x, y) are the blue, red and green channel pixel sets of the original white light image, C is the contour area, and I_n is the new white light image obtained after the pixels of the contour area are deleted from each channel.
  • Step S51 fusing the enhanced target tissue contour with the original white light image to obtain the white light image with the target tissue contour, comprises:
  • Step S511 merging the enhanced target tissue contour with the new white light image to obtain the white light image with the target tissue contour.
  • where C_fusion is the target tissue contour after enhancement processing, I_n is the new white light image, and I_f is the white light image with the target tissue contour.
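  • A sketch of steps S48 and S511 is shown below, assuming that "deleting" the original pixels in the contour area means zeroing all three channels there (giving the new white light image I_n) and that fusion writes the enhanced contour C_fusion into the green channel of those same pixels; both assumptions are illustrative only.

```python
import numpy as np

def fuse_contour(white_rgb: np.ndarray,
                 contour_mask: np.ndarray,
                 c_fusion: np.ndarray) -> np.ndarray:
    """Build I_f from I_n and C_fusion (see the assumptions in the lead-in)."""
    c = contour_mask.astype(bool)
    i_n = white_rgb.copy()
    i_n[c] = 0                       # new white-light image I_n: contour-area pixels removed
    i_f = i_n
    i_f[c, 1] = c_fusion[c]          # enhanced contour fused into the green channel
    return i_f
```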
  • the endoscopic image display method proposed in the embodiment of the present application obtains a new white light image by deleting the pixels of the original white light image in the contour area; and the target tissue contour after the enhancement processing is fused with the new white light image to obtain the white light image with the target tissue contour.
  • the new white light image is obtained by deleting the pixels of the original white light image in the contour area, and then the target tissue contour after the enhancement processing is fused to the new white light image to obtain the white light image with the target tissue contour.
  • the fused target tissue contour is clearer, which is convenient for doctors to analyze images and diagnose diseases.
  • in one or more embodiments, after the step S10 of acquiring the original white light image of the endoscope and the marked tissue image, the following steps are further included:
  • Step S20 registering the original white light image and the tissue image to obtain a standard white light image and a standard tissue image
  • Step S30 extracting tissue contours from the tissue image to obtain target tissue contours includes:
  • Step S311 extracting tissue contours from the standard tissue image to obtain target tissue contours
  • Step S50 fusing the target tissue contour with the original white light image to obtain a white light image with the target tissue contour, comprises:
  • Step S5111 the target tissue contour is fused with the standard white light image and displayed to obtain a white light image with the target tissue contour.
  • because the original white light image and the tissue image come from different acquisition methods and may be captured at different times and from different shooting angles, there is a certain misalignment error between the two images.
  • the original white light image and the tissue image need to be registered.
  • registration is a method of mapping one image to another image through a spatial transformation, so that the points in the two images corresponding to the same position in space are matched one by one.
  • the original white light image and the tissue image are registered so that points in the two images corresponding to the same position in space are matched one by one, achieving consistency in the specifications of the two images and thus obtaining a standard white light image and a standard tissue image.
  • the high bit depth image is converted to the specifications of the low bit depth image so that the color depths of the two images are consistent.
  • a high bit depth image and a low bit depth image can be determined from the standard white light image and the standard tissue image, and the determined high bit depth image is converted to the specifications of the low bit depth image so that the color depths of the standard white light image and the standard tissue image are consistent.
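  • The bit-depth alignment can be illustrated as follows: the image with the higher bit depth is rescaled into the value range of the lower-bit-depth image so that the two color depths match. The 16-bit-to-8-bit case in the example is only an assumption.

```python
import numpy as np

def match_bit_depth(high_depth: np.ndarray, low_depth: np.ndarray) -> np.ndarray:
    """Convert `high_depth` to the value range of `low_depth` (e.g. 16-bit -> 8-bit).

    Assumes both images store unsigned integers; the scaling factor follows
    from the ratio of the two representable ranges.
    """
    hi_max = np.iinfo(high_depth.dtype).max
    lo_max = np.iinfo(low_depth.dtype).max
    scaled = high_depth.astype(np.float64) * (lo_max / hi_max)
    return scaled.astype(low_depth.dtype)

# Example: a 16-bit fluorescence frame matched to an 8-bit white-light frame.
tissue_16bit = (np.random.rand(4, 4) * 65535).astype(np.uint16)
white_8bit = np.zeros((4, 4), dtype=np.uint8)
tissue_8bit = match_bit_depth(tissue_16bit, white_8bit)
```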
  • tissue contour extraction is performed on the standard tissue image to obtain the target tissue contour.
  • the target tissue contour can be fused with the standard white light image through image fusion technology to obtain a white light image with the target tissue contour.
  • the target tissue contour may be enhanced first, and then the enhanced target tissue contour may be fused with the standard white light image through image fusion technology to obtain a white light image with the target tissue contour.
  • the endoscopic image display method proposed in the embodiment of the present application obtains a standard white light image and a standard tissue image by registering the original white light image and the tissue image; extracts the tissue contour of the standard tissue image to obtain the target tissue contour; and fuses the target tissue contour with the standard white light image for display to obtain a white light image with the target tissue contour.
  • the original white light image and the tissue image are registered so that the original white light image and the tissue image are aligned in space and the error is reduced.
  • the tissue contour of the standardized tissue image is extracted to obtain the target tissue contour, and then the target tissue contour is fused with the standardized white light image for display to obtain a white light image with the target tissue contour.
  • an endoscopic imaging flowchart involved in an embodiment of the present application may be shown in FIG. 9.
  • a contour extraction flow chart involved in an embodiment of the present application may be shown in FIG. 10.
  • a tissue image and the white light image with the target tissue outline in an embodiment of the present application are shown in FIG. 11.
  • the embodiment of the present application further provides an endoscopic image display device; as shown in FIG. 12, the endoscopic image display device includes:
  • An image acquisition module 1201 is used to acquire the original white light image of the endoscope and the marked tissue image
  • a feature extraction module 1202 is used to extract the tissue contour of the tissue image to obtain the target tissue contour
  • the image processing module 1203 is used to fuse the target tissue outline with the original white light image for display, so as to obtain a white light image with the target tissue outline.
  • the feature extraction module 1202 is specifically used to:
  • the contour of the binary image is extracted based on an edge finding algorithm to obtain the contour of the target tissue.
  • the feature extraction module 1202 is specifically used to:
  • the contour of the target tissue is obtained by performing contour extraction on the smoothed binary image based on an edge finding algorithm.
  • the feature extraction module 1202 is specifically used to:
  • the tissue image is segmented into a target tissue area and a non-target tissue area;
  • a binary image is obtained according to the target tissue area and the non-target tissue area.
  • the feature extraction module 1202 is specifically used to:
  • for each pixel point in the tissue image, if the grayscale of the pixel point is greater than the grayscale threshold, the pixel point is determined as a pixel point of the target tissue area;
  • otherwise, the pixel point is determined as a pixel point of the non-target tissue area;
  • the target tissue area and the non-target tissue area in the tissue image are determined according to the pixel points of the target tissue area and the pixel points of the non-target tissue area.
  • the endoscopic image display device is further used for:
  • the step of fusing the target tissue contour with the original white light image to obtain a white light image with the target tissue contour comprises:
  • the enhanced target tissue contour is fused with the original white light image to obtain the white light image with the target tissue contour.
  • the endoscopic image display device is further used for:
  • a coefficient enhancement matrix is calculated by a coefficient enhancement algorithm
  • a nonlinear adjustment coefficient is calculated by a nonlinear adjustment algorithm
  • the enhanced target tissue contour is calculated based on the nonlinear adjustment coefficient, the channel contour pixel set, the tissue contour pixel set, and a contour enhancement algorithm.
  • the endoscopic image display device is further used for:
  • the step of fusing the enhanced target tissue contour with the original white light image to obtain the white light image with the target tissue contour comprises:
  • the enhanced target tissue contour is fused with the new white light image to obtain the white light image with the target tissue contour.
  • the device is further used for:
  • the step of extracting tissue contours from the tissue image to obtain the target tissue contour comprises:
  • the step of fusing the target tissue contour with the original white light image to obtain a white light image with the target tissue contour comprises:
  • the target tissue contour is fused with the standard white light image and displayed to obtain a white light image with the target tissue contour.
  • Each module in the above-mentioned endoscopic image display device can be implemented in whole or in part by software, hardware or a combination thereof.
  • the network interface can be an Ethernet card or a wireless network card, etc.
  • each of the above modules can be embedded in or independent of a processor in a server in the form of hardware, or can be stored in a memory in the server in the form of software, so that the processor can call and execute the operations corresponding to the above modules.
  • a component can be, but is not limited to, a process running on a processor, a processor, an object, an executable code, a thread of execution, a program, and/or a computer.
  • an application running on a server and a server can be a component.
  • One or more components can reside in a process and/or a thread of execution, and a component can be located in a computer and/or distributed between two or more computers.
  • an embodiment of the present application also proposes a terminal device, which includes a memory, a processor, and an endoscopic image display program stored in the memory and executable on the processor, and when the endoscopic image display program is executed by the processor, the steps of the endoscopic image display method as described above are implemented.
  • since the endoscopic image display program, when executed by the processor, adopts all the technical solutions of all the aforementioned embodiments, it has at least all the beneficial effects brought by those technical solutions, which will not be described one by one here.
  • an embodiment of the present application further proposes a computer-readable storage medium, on which an endoscopic image display program is stored.
  • the endoscopic image display program is executed by a processor, the steps of the endoscopic image display method as described above are implemented.
  • since the endoscopic image display program, when executed by the processor, adopts all the technical solutions of all the aforementioned embodiments, it has at least all the beneficial effects brought by those technical solutions, which will not be described one by one here.
  • The endoscope image display method, device, terminal device and storage medium proposed in the embodiments of the present application obtain the original white light image of the endoscope and the marked tissue image, extract the tissue contour from the tissue image to obtain the target tissue contour, and fuse the target tissue contour with the original white light image to obtain a white light image with the target tissue contour; a minimal illustrative sketch of this pipeline is given after this list.
  • In this way, the lesion position can be marked in the endoscope imaging display mode while the tissue information at the marked position is still displayed, thereby improving surgical efficiency and reducing surgical risk.
  • The technical solution of the present application can be embodied in the form of a software product, which is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes a number of instructions for enabling a terminal device (which can be a mobile phone, computer, server, controlled terminal, or network device, etc.) to execute the method of each embodiment of the present application.
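As an illustration of the pipeline summarized above (acquire a white light image and a marked tissue image, extract the target tissue contour, enhance it, and fuse it with the white light image), the following is a minimal Python/OpenCV sketch. The threshold value, the power-law stand-in for the nonlinear adjustment coefficient, the alpha-blend contour enhancement, the green overlay colour, and the file names are all assumptions for illustration; they are not the specific algorithms claimed in this application.

```python
# Minimal sketch (illustrative only): extract the target tissue contour from the
# marked fluorescence image, weight it with a hypothetical nonlinear adjustment
# coefficient, and fuse the enhanced contour onto the original white light image.
import cv2
import numpy as np


def extract_target_tissue_contour(tissue_gray: np.ndarray, thresh: int = 60) -> np.ndarray:
    """Binarise the marked tissue image and keep only the contour pixels (binary mask)."""
    _, mask = cv2.threshold(tissue_gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour_mask = np.zeros_like(tissue_gray)
    cv2.drawContours(contour_mask, contours, -1, 255, thickness=2)
    return contour_mask


def nonlinear_adjustment_coefficient(tissue_gray: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Hypothetical per-pixel coefficient: brighter fluorescence -> stronger enhancement."""
    return np.power(tissue_gray.astype(np.float32) / 255.0, gamma)


def enhance_contour(contour_mask: np.ndarray, tissue_gray: np.ndarray) -> np.ndarray:
    """Assumed contour-enhancement rule: scale contour pixels by the nonlinear coefficient."""
    coeff = nonlinear_adjustment_coefficient(tissue_gray)
    return (contour_mask.astype(np.float32) / 255.0) * coeff  # weights in [0, 1]


def fuse_contour_with_white_light(white_light_bgr: np.ndarray, enhanced: np.ndarray,
                                  color=(0, 255, 0)) -> np.ndarray:
    """Alpha-blend the enhanced contour onto the white light image; non-contour tissue stays untouched."""
    fused = white_light_bgr.astype(np.float32)
    for c in range(3):
        fused[..., c] = (1.0 - enhanced) * fused[..., c] + enhanced * color[c]
    return np.clip(fused, 0, 255).astype(np.uint8)


if __name__ == "__main__":
    white_light = cv2.imread("white_light.png")                      # assumed file name
    tissue = cv2.imread("fluorescence.png", cv2.IMREAD_GRAYSCALE)    # assumed file name
    if white_light is None or tissue is None:
        raise FileNotFoundError("example input images not found")
    contour_mask = extract_target_tissue_contour(tissue)
    enhanced = enhance_contour(contour_mask, tissue)
    cv2.imwrite("white_light_with_contour.png",
                fuse_contour_with_white_light(white_light, enhanced))
```

Because only the contour pixels are blended, the colour and structure of the surrounding tissue in the white light image remain visible, which is the behaviour the method aims for; in practice the coefficient and the enhancement rule would follow the claimed nonlinear adjustment and contour enhancement algorithms rather than the stand-ins used here.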

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Biomedical Technology (AREA)
  • Medical Informatics (AREA)
  • Optics & Photonics (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Biophysics (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Molecular Biology (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Image Analysis (AREA)
  • Endoscopes (AREA)

Abstract

Disclosed is an endoscope image display method, comprising the steps of: acquiring an original white light image of an endoscope and a marked tissue image thereof; performing tissue contour extraction on the tissue image to obtain a target tissue contour; and performing fusion display on the target tissue contour and the original white light image to obtain a white light image with the target tissue contour.
PCT/CN2024/094009 2023-05-19 2024-05-17 Procédé et appareil d'affichage d'image d'endoscope, dispositif terminal et support de stockage Pending WO2024240089A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310577664.3A CN116784770A (zh) 2023-05-19 2023-05-19 内窥镜图像显示方法、装置、终端设备以及存储介质
CN202310577664.3 2023-05-19

Publications (1)

Publication Number Publication Date
WO2024240089A1 true WO2024240089A1 (fr) 2024-11-28

Family

ID=88046042

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/094009 Pending WO2024240089A1 (fr) 2023-05-19 2024-05-17 Procédé et appareil d'affichage d'image d'endoscope, dispositif terminal et support de stockage

Country Status (2)

Country Link
CN (1) CN116784770A (fr)
WO (1) WO2024240089A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116784770A (zh) * 2023-05-19 2023-09-22 重庆西山科技股份有限公司 内窥镜图像显示方法、装置、终端设备以及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680498A (zh) * 2015-03-24 2015-06-03 江南大学 一种基于改进梯度向量流模型的医学图像分割方法
CN111223069A (zh) * 2020-01-14 2020-06-02 天津工业大学 一种图像融合方法及系统
CN111513660A (zh) * 2020-04-28 2020-08-11 深圳开立生物医疗科技股份有限公司 一种应用于内窥镜的图像处理方法、装置及相关设备
CN113208567A (zh) * 2021-06-07 2021-08-06 上海微创医疗机器人(集团)股份有限公司 多光谱成像系统、成像方法和存储介质
CN114298980A (zh) * 2021-12-09 2022-04-08 杭州海康慧影科技有限公司 一种图像处理方法、装置及设备
CN115330651A (zh) * 2022-08-13 2022-11-11 浙江大学 一种内窥镜荧光与可见光图像融合方法
CN116784770A (zh) * 2023-05-19 2023-09-22 重庆西山科技股份有限公司 内窥镜图像显示方法、装置、终端设备以及存储介质

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2843334B2 (ja) * 1988-08-24 1999-01-06 株式会社日立メディコ 臓器領域抽出方法
JP6516772B2 (ja) * 2015-01-23 2019-05-22 オリンパス株式会社 画像処理装置、画像処理装置の作動方法および画像処理プログラム
CN114863498B (zh) * 2022-05-27 2025-02-07 武汉理工大学 一种基于AGC和Frangi的手部静脉红外图像的增强与分割方法
CN115631195B (zh) * 2022-12-20 2023-03-21 新光维医疗科技(苏州)股份有限公司 血管轮廓提取方法、血管轮廓提取装置及内窥镜系统

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104680498A (zh) * 2015-03-24 2015-06-03 江南大学 一种基于改进梯度向量流模型的医学图像分割方法
CN111223069A (zh) * 2020-01-14 2020-06-02 天津工业大学 一种图像融合方法及系统
CN111513660A (zh) * 2020-04-28 2020-08-11 深圳开立生物医疗科技股份有限公司 一种应用于内窥镜的图像处理方法、装置及相关设备
CN113208567A (zh) * 2021-06-07 2021-08-06 上海微创医疗机器人(集团)股份有限公司 多光谱成像系统、成像方法和存储介质
WO2022257946A1 (fr) * 2021-06-07 2022-12-15 上海微觅医疗器械有限公司 Système et procédé d'imagerie multispectrale et support d'enregistrement
CN114298980A (zh) * 2021-12-09 2022-04-08 杭州海康慧影科技有限公司 一种图像处理方法、装置及设备
CN115330651A (zh) * 2022-08-13 2022-11-11 浙江大学 一种内窥镜荧光与可见光图像融合方法
CN116784770A (zh) * 2023-05-19 2023-09-22 重庆西山科技股份有限公司 内窥镜图像显示方法、装置、终端设备以及存储介质

Also Published As

Publication number Publication date
CN116784770A (zh) 2023-09-22

Similar Documents

Publication Publication Date Title
CN114445316B (zh) 一种内窥镜荧光与可见光图像融合方法
EP2188779B1 (fr) Procédé d'extraction graphique d'une zone de langue reposant sur une analyse graphique et géométrique
US8131054B2 (en) Computerized image analysis for acetic acid induced cervical intraepithelial neoplasia
US20040264749A1 (en) Boundary finding in dermatological examination
Kolar et al. Hybrid retinal image registration using phase correlation
JP5622461B2 (ja) 画像処理装置、画像処理方法、および画像処理プログラム
WO2023103467A1 (fr) Procédé, appareil et dispositif de traitement d'images
CN108961334B (zh) 一种基于图像配准的视网膜血管壁厚度测量方法
CN114494092A (zh) 一种可见光图像与荧光图像融合方法及系统
WO2020038312A1 (fr) Dispositif et procédé de détection de bord de corps de langue à canaux multiples et support de stockage
JP6704933B2 (ja) 画像処理装置、画像処理方法およびプログラム
WO2024240089A1 (fr) Procédé et appareil d'affichage d'image d'endoscope, dispositif terminal et support de stockage
CN120339093B (zh) 双荧光与可见光图像融合方法和装置、电子设备以及介质
US20240065540A1 (en) Apparatus and method for detecting cervical cancer
Mukku et al. A specular reflection removal technique in cervigrams
CN108629780B (zh) 基于颜色分解和阈值技术的舌图像分割方法
CN114266817B (zh) 一种荧光深度图像合成方法、装置、电子设备及存储介质
JP2012213518A (ja) 画像処理装置、画像処理方法、及び画像処理プログラム
Obukhova et al. Learning from multiple modalities of imaging data for cancer detection/diagnosis
CN111784664B (zh) 肿瘤淋巴结分布图谱生成方法
CN116091476A (zh) 医学图像处理方法、装置、计算机设备及存储介质
Pardo et al. Automated skin lesion segmentation with kernel density estimation
Mu et al. FCF-CSM: A Fuzzy Clustering Framework Based on Chromaticity Statistical Model for Automatic Segmentation of Port Wine Stains
CN120876484B (zh) 一种基于rgb图像和光谱数据融合的舌象分析方法和系统
CN120746865B (zh) 一种多通道内窥镜图像特征增强方法、设备和介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 24810331

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE