
CN119090764B - High-definition medical image processing method - Google Patents


Info

Publication number
CN119090764B
CN119090764B (application CN202411089205.1A)
Authority
CN
China
Prior art keywords
image
lesion area
edge
pixel points
index
Prior art date
Legal status
Active
Application number
CN202411089205.1A
Other languages
Chinese (zh)
Other versions
CN119090764A (en)
Inventor
王小刚
Current Assignee
Chengdu Shengan Medical Technology Co ltd
Original Assignee
Chengdu Shengan Medical Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Shengan Medical Technology Co ltd filed Critical Chengdu Shengan Medical Technology Co ltd
Priority to CN202411089205.1A
Publication of CN119090764A
Application granted
Publication of CN119090764B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/10Image enhancement or restoration using non-spatial domain filtering
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/40Image enhancement or restoration using histogram techniques
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/136Segmentation; Edge detection involving thresholding
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20056Discrete and fast Fourier transform, [DFT, FFT]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Apparatus For Radiation Diagnosis (AREA)
  • Image Analysis (AREA)

Abstract


The present invention relates to the technical field of image processing, and in particular to a high-definition medical image processing method. The method denoises an ileocecal image with a guided filtering algorithm and derives a histogram of the ileocecal image, from which a potential secretion index is calculated for every pixel. By setting a segmentation threshold, pixels whose potential secretion index exceeds the threshold are extracted and connected, highlighting the ileocecal lesion area. An edge detection algorithm then extracts the edge features of the lesion-area image and the resolution of the edge pixels is obtained, after which Fourier-transform analysis yields a blur degree coefficient for the edge pixels. An enhancement coefficient for the lesion-area image is derived from the resolution and blur degree coefficient of the edge pixels, and the ileocecal lesion-area image is enhanced by adjusting this enhancement coefficient.

Description

High-definition medical image processing method
Technical Field
The invention relates to the technical field of image processing, in particular to a high-definition medical image processing method.
Background
Image processing is an important technology. With its continuing progress, and in particular the development of deep learning algorithms, tiny and early-stage lesions can be detected and identified more accurately, improving the accuracy of disease diagnosis. Detailed analysis of a patient's medical images allows a personalized treatment plan to be formulated for each patient, improving treatment outcomes, and analyzing large volumes of medical image data can uncover potential disease patterns and risk factors, enabling early prediction and prevention of disease. Traditional high-definition medical image processing methods, however, process images in a single, untargeted way: they do not locate the lesion area by analyzing features in the image, extract multiple features from the lesion area, or derive an enhancement coefficient for the lesion-area image from those features so as to enhance it.
Disclosure of Invention
The present invention is directed to a method for processing high-definition medical images, which solves the problems set forth in the background art.
In order to achieve the above object, the present invention provides a high definition medical image processing method, comprising the steps of:
S1, acquiring a medical image of the ileocecal region with an image acquisition device and recording it as an ileocecal image, denoising the ileocecal image with a guided filtering algorithm, extracting the pixel values of the denoised ileocecal image, and equalizing those pixel values to obtain an ileocecal image histogram;
S2, calculating a potential secretion index for each pixel in the ileocecal image from the ileocecal histogram, setting a segmentation threshold, and comparing and classifying the potential secretion index of each pixel against the segmentation threshold with a threshold segmentation method to obtain the pixels exceeding the threshold, which form a suspected-lesion pixel set;
S3, taking the pixels in the suspected-lesion pixel set as lesion-area pixels, connecting them with an edge detection algorithm to form a lesion-area image, extracting the edge features of the lesion-area image, and obtaining the resolution of the edge pixels from those edge features;
S4, analyzing the frequency components of the lesion-area image with the Fourier transform to obtain a blur index for the lesion-area image, and deriving a blur degree coefficient for the edge pixels from that index;
S5, obtaining an enhancement coefficient for the lesion-area image from the resolution and blur degree coefficient of the edge pixels, and enhancing the ileocecal lesion-area image by adjusting that enhancement coefficient.
As a further improvement of the present technical solution, the denoising of the ileocecal image with a guided filtering algorithm in S1 specifically includes:
randomly selecting one image from the ileocecal images as the guide image, denoted A, taking the ileocecal image to be denoised as the input image, denoted B, setting the guided-filter window radius to r and the regularization parameter to α, establishing a window D_e centered on each pixel c of the image to be denoised, and computing the guided-filter output F_c within the window D_e as:
F_c = a_e*A_c + b_e
where a_e and b_e are obtained by linear regression, A_c is the pixel value of the guide image at c, and F_c is the ileocecal image after guided-filter denoising.
As a further improvement of the present solution, calculating the potential secretion index of the ileocecal region from the ileocecal histogram in S2 specifically includes:
letting the intensity values of the denoised ileocecal image F_c(x, y), of size M×N, range over [0, L−1], where L is the total number of gray levels in the ileocecal image, and computing the intensity histogram H(k) as:
H(k) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} δ(F_c(x, y), k), 0 ≤ k ≤ L−1
where F_c(x, y) is the gray value of the image F_c at coordinates (x, y) and δ(·, ·) is the Kronecker function, defined as δ(a, b) = 1 if a = b and δ(a, b) = 0 otherwise;
extracting the skewness and kurtosis of the intensity histogram H(k), establishing a linear model based on them, and calculating the potential secretion index γ of each pixel as:
γ = ω_1*PD_{H(k)} + ω_2*FD_{H(k)}
where ω_1 and ω_2 are weight coefficients, obtained from experimental data, that weight the potential secretion index γ of each pixel, PD is the skewness of the intensity histogram H(k), and FD is its kurtosis.
As a further improvement of the present technical solution, comparing and classifying the potential secretion index of each pixel in the ileocecal image against the segmentation threshold in S2 is specifically as follows:
a secretion index threshold is set, the potential secretion index of the lesion being higher than that of the surrounding region; the potential secretion index of each pixel in the ileocecal image is compared with this threshold, the pixels whose potential secretion index exceeds the segmentation threshold are screened out, and those pixels form the suspected-lesion pixel set.
As a further improvement of the present technical solution, connecting the pixels of the lesion area in S3 to form the lesion-area image specifically includes:
using an edge detection algorithm to calculate the gradient magnitude and direction of each pixel in the lesion area, applying non-maximum suppression along the gradient direction, setting high and low thresholds, dividing the gradient magnitudes into strong and weak edges according to these thresholds, tracking the strong-edge pixels, classifying weak-edge pixels adjacent to strong-edge pixels as edge pixels, and connecting the edge pixels to form the image of the lesion area.
As a further improvement of the present technical solution, obtaining the resolution of the edge pixels from the edge features of the lesion-area image in S3 specifically includes:
extracting the coordinates (x_i, y_i) of all edge pixels in the lesion-area image, where i is the index of the edge pixel, setting the actual distance represented by each pixel in the horizontal direction to Δx and in the vertical direction to Δy, and converting the edge-pixel coordinates from pixel units to actual spatial units:
X_i = x_i*Δx
Y_i = y_i*Δy
The edge features are analyzed by computing the distances between edge pixels; the Euclidean distance between edge pixels (X_i, Y_i) and (X_j, Y_j) is:
d_ij = sqrt((X_i − X_j)² + (Y_i − Y_j)²)
and the resolution of the edge pixels is obtained from the minimum distance between edge pixels:
δ = min(d_ij)
where min(d_ij) is the minimum distance over all pairs of edge pixels.
As a further improvement of the present technical solution, analyzing the frequency components of the lesion-area image with the Fourier transform in S4 to obtain its blur index specifically includes:
converting the lesion-area image to a grayscale image, smoothing it with Gaussian filtering, denoting the result h(x', y'), and applying the two-dimensional fast Fourier transform to it:
f(u, v) = F{h(x', y')}
where f(u, v) is the frequency-domain representation and (u, v) are frequency coordinates; the spectral amplitude S(u, v), which represents the strength of each frequency component (u, v) in the frequency domain and reflects the energy contribution of those frequencies in the original image, is computed from f(u, v) as:
S(u, v) = |f(u, v)|
From the spectral amplitude, the blur index is defined as the energy ratio of the high-frequency components to the low-frequency components:
Q = Σ_{(u,v)∈W} S(u, v)² / Σ_{(u,v)∈G} S(u, v)²
where W is the index set of high-frequency components, G is the index set of low-frequency components, and the high-frequency index set carries the detail, edge and noise information of the image. The blur degree coefficient of the edge pixels is defined as R = 1 − Q; a value of R close to 1 indicates a sharp pixel, otherwise the pixel is blurred.
As a further improvement of the present technical solution, obtaining the enhancement coefficient of the lesion-area image in S5 from the resolution and blur degree coefficient of the edge pixels specifically includes:
setting an enhancement coefficient T that is inversely related to the blur degree coefficient R of the edge pixels and directly proportional to their resolution, with the expression:
T = t*(1 − R)*δ
where t is a proportionality constant controlling the degree of enhancement. The enhancement coefficient T is used to adjust the pixel values of the lesion-area image: taking the pixel value P(x_i, y_i) of the lesion-area image at position (x_i, y_i), the enhanced component is the product of the enhancement coefficient T and the difference between that pixel value and the background pixel value P'(x_i, y_i); adding this component back to the original pixel value P(x_i, y_i) gives the enhanced pixel value P_ZQ(x_i, y_i):
P_ZQ(x_i, y_i) = P(x_i, y_i) + T*[P(x_i, y_i) − P'(x_i, y_i)]
The detail and brightness of the image are enhanced by increasing its contrast with the background, thereby enhancing the ileocecal lesion-area image.
Another object of the present invention is to provide a system for implementing the high-definition medical image processing method, comprising:
an acquisition processing unit for acquiring a medical image of the ileocecal region and preprocessing the ileocecal image with a guided filtering algorithm to obtain an ileocecal image histogram;
a lesion analysis unit comprising a calculation analysis module and a lesion generation module;
the calculation analysis module is used to calculate the potential secretion index of each pixel from the ileocecal image histogram, set a secretion index threshold, and perform comparison and classification with a threshold segmentation method to form a lesion pixel set;
the lesion generation module connects the pixels of the lesion area with an edge detection algorithm to form a lesion-area image, and the resolution of the edge pixels is obtained from the edge features of the lesion-area image;
a fuzzy evaluation unit analyzes the frequency components of the lesion-area image with the Fourier transform to obtain the blur index of the lesion-area image, and derives the blur degree coefficient of the edge pixels from that index;
an enhancement determination unit obtains the enhancement coefficient of the lesion-area image from the resolution and blur degree coefficient of the edge pixels, and enhances the ileocecal lesion-area image by adjusting that enhancement coefficient.
Compared with the prior art, the invention has the beneficial effects that:
In the high-definition medical image processing method, the ileocecal image is denoised with a guided filtering algorithm to obtain an ileocecal image histogram; the edge information of the image is well preserved, so the histogram remains clearly defined after denoising. A threshold segmentation method then compares and classifies the potential secretion index of each pixel in the ileocecal image against the segmentation threshold to obtain the pixels exceeding the threshold, which reduces the amount of data in subsequent processing and improves processing efficiency. Obtaining the resolution of the edge pixels from the edge features of the lesion-area image allows the boundary of the lesion area to be determined more accurately, which helps in assessing lesion size. The frequency components of the lesion-area image are analyzed by Fourier transform to obtain its blur index, and the blur degree coefficient of the edge pixels is derived from that index, giving a comprehensive evaluation of the overall blur of the lesion area rather than of isolated local pixels. Finally, the enhancement coefficient of the lesion-area image is obtained from the resolution and blur degree coefficient of the edge pixels, and the lesion-area image is enhanced.
Drawings
FIG. 1 is an overall workflow diagram of the present invention;
FIG. 2 is a schematic diagram of the overall structure of the present invention;
The meaning of each reference sign in the figure is:
1. acquisition processing unit; 2. lesion analysis unit; 21. calculation analysis module; 22. lesion generation module; 3. fuzzy evaluation unit; 4. enhancement determination unit.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Referring to fig. 1-2, the present embodiment provides a high-definition medical image processing method, which includes the following steps:
S1, acquiring a medical image of the ileocecal region with an image acquisition device and recording it as an ileocecal image, denoising the ileocecal image with a guided filtering algorithm, extracting the pixel values of the denoised ileocecal image, and equalizing those pixel values to obtain an ileocecal image histogram; this improves the visibility and detail of the ileocecal image, enhances its contrast, and makes hidden structures in the image clearer and more visible;
In S1, the denoising of the ileocecal image with a guided filtering algorithm specifically includes the following steps:
randomly selecting one image from the ileocecal images as the guide image, denoted A, taking the ileocecal image to be denoised as the input image, denoted B, setting the guided-filter window radius to r and the regularization parameter to α, establishing a window D_e centered on each pixel c of the image to be denoised, and computing the guided-filter output F_c within the window D_e as:
F_c = a_e*A_c + b_e
where a_e and b_e are obtained by linear regression, A_c is the pixel value of the guide image at c, and F_c is the ileocecal image after guided-filter denoising. The regression coefficients are computed from the statistics within the window:
a_e = ((1/|D_e|)*Σ_{i∈D_e} A_i*B_i − A'*B') / (σ_A² + α)
b_e = B' − a_e*A'
where |D_e| is the number of pixels in the window D_e centered on pixel c, used to average the quantities computed in the window into representative statistics; A' is the mean of the guide image A in the window, reflecting its average brightness or intensity level there; B' is the mean of the input image B in the window; σ_A² is the variance of the guide image A in the window, measuring the spread of its pixel values; a_e is the coefficient computed from the window statistics that determines the linear relationship between the output image and the guide image; b_e is the second coefficient, which together with a_e determines the final value of the output image; and α is the regularization parameter that prevents instability from a near-zero denominator while controlling the filtering strength to avoid over-smoothing or over-fitting.
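For reference, the following is a minimal single-channel Python sketch of this denoising step. It uses OpenCV's box filter for the window means and the common formulation that also averages the per-window coefficients a_e, b_e over overlapping windows; the radius and regularization values are illustrative, not values fixed by the method.

import cv2
import numpy as np

def guided_filter(guide, src, radius=8, alpha=1e-3):
    """Edge-preserving guided filter: F_c = a_e*A_c + b_e per window D_e.

    guide (A) and src (B) are float images scaled to [0, 1]; radius is the
    window radius r and alpha the regularization parameter.
    """
    ksize = (2 * radius + 1, 2 * radius + 1)
    box = lambda img: cv2.boxFilter(img, -1, ksize)     # normalized window mean

    mean_A = box(guide)                     # A': mean of the guide image in the window
    mean_B = box(src)                       # B': mean of the input image in the window
    corr_AB = box(guide * src)
    var_A = box(guide * guide) - mean_A ** 2            # variance of the guide image in the window

    a = (corr_AB - mean_A * mean_B) / (var_A + alpha)   # regression slope a_e
    b = mean_B - a * mean_A                             # intercept b_e = B' - a_e*A'

    # Average the coefficients of all windows covering each pixel, then apply
    # the linear model to the guide image to obtain the denoised output F_c.
    return box(a) * guide + box(b)

# Example usage: self-guided denoising of a grayscale ileocecal image.
# img = cv2.imread("ileocecal.png", cv2.IMREAD_GRAYSCALE).astype(np.float64) / 255.0
# denoised = guided_filter(img, img)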
S2, calculating the potential secretion index of each pixel in the ileocecal image from the ileocecal histogram, setting a segmentation threshold, and comparing and classifying the potential secretion index of each pixel against the segmentation threshold with a threshold segmentation method to obtain the pixels exceeding the threshold, which form a suspected-lesion pixel set;
In S2, the potential secretion index of the ileocecal region is calculated from the ileocecal histogram as follows:
assume the intensity values of the denoised ileocecal image F_c(x, y), of size M×N, range over [0, L−1], where L is the total number of gray levels in the ileocecal image; the intensity histogram H(k) is then:
H(k) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} δ(F_c(x, y), k), 0 ≤ k ≤ L−1
where F_c(x, y) is the gray value of the image F_c at coordinates (x, y) and δ(·, ·) is the Kronecker function, defined as δ(a, b) = 1 if a = b and δ(a, b) = 0 otherwise.
Because ileocecal secretions usually differ markedly in intensity from the surrounding tissue, analyzing the histogram makes it possible to determine which intensity values correspond to potential secretions. The skewness and kurtosis of the intensity histogram H(k) are therefore extracted, a linear model is built on them, and the potential secretion index γ of each pixel is calculated as:
γ = ω_1*PD_{H(k)} + ω_2*FD_{H(k)}
where ω_1 and ω_2 are weight coefficients, obtained from experimental data, that weight the potential secretion index γ of each pixel, PD is the skewness of the intensity histogram H(k), and FD is its kurtosis. Calculating the potential secretion index of each pixel from the intensity histogram provides a solid basis for the subsequent medical image analysis and improves the accuracy and reliability of diagnosis.
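A compact Python illustration of this step follows, under the assumption that the statistics are taken over the global histogram (a per-pixel variant would evaluate the same quantities over a sliding local window); the weights w1 and w2 are placeholders for the experimentally fitted ω_1 and ω_2.

import numpy as np
from scipy.stats import skew, kurtosis

def secretion_index(gray, w1=0.6, w2=0.4, levels=256):
    """gamma = w1*PD + w2*FD, with PD/FD the skewness/kurtosis of the
    intensity histogram H(k) of the denoised grayscale image.

    gray is an image with values in 0..levels-1; w1 and w2 stand in for the
    experimentally determined weights omega_1 and omega_2.
    """
    hist, _ = np.histogram(gray, bins=levels, range=(0, levels))  # H(k), k = 0..L-1
    pd = skew(hist.astype(np.float64))        # PD: skewness of H(k)
    fd = kurtosis(hist.astype(np.float64))    # FD: kurtosis of H(k)
    return w1 * pd + w2 * fd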
In S2, the potential secretion index of each pixel in the ileocecal image is compared with the segmentation threshold and classified as follows:
when the ileocecal region of the human intestine becomes diseased, the body protects itself by producing a large amount of secretion; this secretion diffuses through the intestine around the diseased site, producing a difference in potential secretion index, so the index at the diseased site is higher than in the surrounding area. A secretion index threshold is therefore set, the potential secretion index of each pixel in the ileocecal image is compared with it, the pixels whose potential secretion index exceeds the segmentation threshold are screened out, and those pixels form the suspected-lesion pixel set. This makes it easier to identify and isolate the potential ileocecal lesion area accurately and improves the efficiency of the medical image analysis.
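As a sketch of the thresholding step (assuming a per-pixel index map has been computed, for example with a sliding-window version of the function above), SciPy's connected-component labelling can group the retained pixels:

import numpy as np
from scipy import ndimage

def suspected_lesion_set(index_map, threshold):
    """Keep pixels whose potential-secretion index exceeds the segmentation
    threshold and group neighbouring ones into connected suspected-lesion blobs."""
    mask = index_map > threshold              # suspected-lesion pixel set
    labels, n_blobs = ndimage.label(mask)     # connect adjacent lesion pixels
    return mask, labels, n_blobs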
S3, extracting pixel points in the suspected lesion pixel point set as pixel points of a lesion area, connecting the pixel points of the lesion area by utilizing an edge detection algorithm to form a lesion area image, extracting edge features of the lesion area image, and acquiring the resolution of the edge pixel points according to the edge features of the lesion area image;
In S3, the pixels of the lesion area are connected to form the lesion-area image, which specifically includes the following steps:
using an edge detection algorithm to calculate the gradient magnitude and direction of each pixel in the lesion area, applying non-maximum suppression along the gradient direction, setting high and low thresholds, dividing the gradient magnitudes into strong and weak edges according to these thresholds, tracking the strong-edge pixels, classifying weak-edge pixels adjacent to strong-edge pixels as edge pixels, and connecting the edge pixels to form the image of the lesion area.
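This gradient, non-maximum-suppression and double-threshold procedure matches the classic Canny detector, so OpenCV's implementation can serve as a stand-in sketch; the threshold values here are illustrative and would be tuned on real ileocecal images.

import cv2
import numpy as np

def lesion_edges(lesion_gray, low=50, high=150):
    """Canny edge detection on an 8-bit grayscale lesion image: gradient
    magnitude/direction, non-maximum suppression and hysteresis tracking
    of strong and weak edges."""
    edges = cv2.Canny(lesion_gray, low, high)     # binary edge map of the lesion area
    ys, xs = np.nonzero(edges)                    # row/column indices of edge points
    return edges, np.stack([xs, ys], axis=1)      # (x_i, y_i) pairs, one row per edge pixel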
In S3, the resolution of the edge pixels is obtained from the edge features of the lesion-area image, specifically as follows:
the edge pixels mark the boundary between the lesion area and the background in the image, and acquiring their coordinates allows the specific boundary of the suspected lesion to be located and delineated accurately. The coordinates (x_i, y_i) of all edge pixels in the lesion-area image are therefore extracted, where i is the index of the edge pixel. In medical image analysis, measurements in actual spatial units give a more intuitive and understandable result, since clinicians assess the size and position of a lesion in real units, so the pixel units are converted to actual spatial units to make the results easier to interpret. Letting Δx be the actual distance represented by each pixel in the horizontal direction and Δy the actual distance represented by each pixel in the vertical direction, the edge-pixel coordinates are converted from pixel units to actual spatial units:
X_i = x_i*Δx
Y_i = y_i*Δy
The edge features are analyzed by computing the distances between edge pixels; the Euclidean distance between edge pixels (X_i, Y_i) and (X_j, Y_j) is:
d_ij = sqrt((X_i − X_j)² + (Y_i − Y_j)²)
The resolution of the edge pixels, which determines the degree of detail discernible in the image, is obtained from the minimum distance between edge pixels:
δ = min(d_ij)
where min(d_ij) is the minimum distance over all pairs of edge pixels.
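A direct Python reading of this computation, assuming an example physical spacing of 0.1 mm per pixel in each direction (the true Δx and Δy come from the acquisition device) and taking δ = min(d_ij):

import numpy as np
from scipy.spatial.distance import pdist

def edge_resolution(edge_xy, dx=0.1, dy=0.1):
    """delta = min(d_ij): minimum pairwise Euclidean distance between edge
    pixels after converting (x_i, y_i) to physical coordinates (X_i, Y_i).

    edge_xy is an (n, 2) array of (x, y) pixel coordinates; dx and dy are the
    physical sizes of one pixel (placeholder values here)."""
    pts = edge_xy.astype(np.float64) * np.array([dx, dy])   # X_i = x_i*dx, Y_i = y_i*dy
    d = pdist(pts)                                          # all pairwise distances d_ij
    return float(d.min()) if d.size else 0.0                # delta = min(d_ij)

For very large edge sets, the exhaustive pairwise computation can be replaced by a KD-tree nearest-neighbour query, which scales better than the quadratic pdist call.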
In S4, the frequency components of the lesion-area image are analyzed with the Fourier transform to obtain the blur index of the lesion-area image, and the blur degree coefficient of the edge pixels is derived from that index.
In S4, the frequency components of the lesion-area image are analyzed with the Fourier transform to obtain its blur index, specifically as follows:
different frequency components of an image correspond to different features: high-frequency components generally represent edges and details, while low-frequency components represent coarse shapes and background. The Fourier transform therefore gives a clearer view of the image's frequency characteristics, and the blur of the lesion area can be quantified by analyzing them, since blurred images generally lose high-frequency information; extracting a blur index in the frequency domain thus helps characterize the nature of the lesion. The lesion-area image is converted to a grayscale image, smoothed with Gaussian filtering, denoted h(x', y'), and the two-dimensional fast Fourier transform is applied to it:
f(u, v) = F{h(x', y')}
where f(u, v) is the frequency-domain representation and (u, v) are frequency coordinates; the spectral amplitude S(u, v), which represents the strength of each frequency component (u, v) in the frequency domain and reflects the energy contribution of those frequencies in the original image, is computed from f(u, v) as:
S(u, v) = |f(u, v)|
From the spectral amplitude, the blur index is defined as the energy ratio of the high-frequency components to the low-frequency components:
Q = Σ_{(u,v)∈W} S(u, v)² / Σ_{(u,v)∈G} S(u, v)²
where W is the index set of high-frequency components, G is the index set of low-frequency components, and the high-frequency index set carries the detail, edge and noise information of the image;
the blur degree coefficient of the edge pixels is defined as R = 1 − Q; a value of R close to 1 indicates a sharp pixel, otherwise the pixel is blurred.
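A Python sketch of this frequency-domain analysis follows; the radial cutoff used to split the spectrum into the low-frequency set G and the high-frequency set W is an assumption, since the text does not fix it.

import numpy as np

def blur_degree_coefficient(lesion_gray, cutoff=0.25):
    """Compute the blur index Q = high-frequency energy / low-frequency energy
    from the 2-D FFT spectrum S(u, v) = |f(u, v)| and return R = 1 - Q."""
    f = np.fft.fftshift(np.fft.fft2(lesion_gray.astype(np.float64)))
    s = np.abs(f)                                   # spectral amplitude S(u, v)
    h, w = s.shape
    v = (np.arange(h) - h // 2)[:, None]            # vertical frequency index
    u = (np.arange(w) - w // 2)[None, :]            # horizontal frequency index
    radius = np.sqrt((u / (w / 2)) ** 2 + (v / (h / 2)) ** 2)   # normalized radius
    low = radius <= cutoff                          # index set G (low frequencies)
    high = ~low                                     # index set W (high frequencies)
    q = (s[high] ** 2).sum() / ((s[low] ** 2).sum() + 1e-12)    # blur index Q
    return 1.0 - q                                  # blur degree coefficient R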
S5, obtaining the enhancement coefficient of the lesion-area image from the resolution and blur degree coefficient of the edge pixels, and enhancing the ileocecal lesion-area image by adjusting that enhancement coefficient.
In S5, the enhancement coefficient of the lesion-area image is obtained from the resolution and blur degree coefficient of the edge pixels, specifically as follows:
an enhancement coefficient T is set that is inversely related to the blur degree coefficient R of the edge pixels and directly proportional to their resolution, with the expression:
T = t*(1 − R)*δ
where t is a proportionality constant controlling the degree of enhancement. The enhancement coefficient T is used to adjust the pixel values of the lesion-area image: taking the pixel value P(x_i, y_i) of the lesion-area image at position (x_i, y_i), the enhanced component is the product of the enhancement coefficient T and the difference between that pixel value and the background pixel value P'(x_i, y_i), where the background pixel value is obtained by neighborhood averaging, an established technique not detailed here; adding this component back to the original pixel value P(x_i, y_i) gives the enhanced pixel value P_ZQ(x_i, y_i):
P_ZQ(x_i, y_i) = P(x_i, y_i) + T*[P(x_i, y_i) − P'(x_i, y_i)]
The detail and brightness of the image are enhanced by increasing its contrast with the background, so that the lesion area is highlighted and the ileocecal lesion-area image is enhanced.
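A sketch of the enhancement step in Python, taking the background P' as a local box-filter mean (one plausible reading of the neighborhood averaging mentioned above); the constant t and the window size are illustrative.

import cv2
import numpy as np

def enhance_lesion(image_gray, lesion_mask, R, delta, t=1.0, bg_ksize=15):
    """P_ZQ = P + T*(P - P'), with T = t*(1 - R)*delta, applied inside the
    lesion mask only; P' is a neighborhood (box-filter) mean of the image."""
    img = image_gray.astype(np.float64)
    T = t * (1.0 - R) * delta                                   # enhancement coefficient
    background = cv2.boxFilter(img, -1, (bg_ksize, bg_ksize))   # P'(x_i, y_i)
    enhanced = img + T * (img - background)                     # P_ZQ(x_i, y_i)
    out = img.copy()
    out[lesion_mask] = enhanced[lesion_mask]                    # enhance only the lesion area
    return np.clip(out, 0, 255).astype(np.uint8)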
According to the invention, the ileocecal image is denoised with a guided filtering algorithm, which effectively preserves the edges and details of the image; the pixel values of the denoised ileocecal image are extracted and equalized to obtain the ileocecal image histogram, enhancing the contrast. The potential secretion index of each pixel in the ileocecal image is calculated from the histogram, the per-pixel indexes are compared and classified with a threshold segmentation method to form a suspected-lesion pixel set, and the lesion-area pixels are connected to form the lesion-area image. The edge features of the lesion-area image are extracted with an edge detection method to obtain the resolution of the edge pixels, the blur index of the lesion-area image is obtained by Fourier analysis, and the blur degree coefficient of the edge pixels is calculated from that index. The enhancement coefficient of the lesion-area image is then obtained from the resolution and blur degree coefficient of the edge pixels, and the lesion-area image is enhanced by adjusting that enhancement coefficient.
Example 2
The second object of the present invention is to provide a system for implementing any of the high-definition medical image processing methods described above, which comprises:
the acquisition processing unit 1, used to acquire a medical image of the ileocecal region and preprocess the ileocecal image with a guided filtering algorithm to obtain an ileocecal image histogram;
the lesion analysis unit 2, which includes a calculation analysis module 21 and a lesion generation module 22;
the calculation analysis module 21 is configured to calculate the potential secretion index of each pixel from the ileocecal image histogram, set a secretion index threshold, and perform comparison and classification with a threshold segmentation method to form a lesion pixel set;
the lesion generation module 22 connects the pixels of the lesion area with an edge detection algorithm to form a lesion-area image, and obtains the resolution of the edge pixels from the edge features of the lesion-area image;
the fuzzy evaluation unit 3 analyzes the frequency components of the lesion-area image with the Fourier transform to obtain the blur index of the lesion-area image, and derives the blur degree coefficient of the edge pixels from that index;
the enhancement determination unit 4 obtains the enhancement coefficient of the lesion-area image from the resolution and blur degree coefficient of the edge pixels, and enhances the ileocecal lesion-area image by adjusting that enhancement coefficient.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the above-described embodiments, and that the above-described embodiments and descriptions are only preferred embodiments of the present invention, and are not intended to limit the invention, and that various changes and modifications may be made therein without departing from the spirit and scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.

Claims (8)

1. A high-definition medical image processing method is characterized by comprising the following steps:
S1, acquiring a medical image of the ileocecal region with an image acquisition device and recording it as an ileocecal image, denoising the ileocecal image with a guided filtering algorithm, extracting the pixel values of the denoised ileocecal image, and equalizing those pixel values to obtain an ileocecal image histogram;
S2, calculating a potential secretion index for each pixel in the ileocecal image from the ileocecal histogram, setting a segmentation threshold, and comparing and classifying the potential secretion index of each pixel against the segmentation threshold with a threshold segmentation method to obtain the pixels exceeding the threshold, which form a suspected-lesion pixel set;
S3, taking the pixels in the suspected-lesion pixel set as lesion-area pixels, connecting them with an edge detection algorithm to form a lesion-area image, extracting the edge features of the lesion-area image, and obtaining the resolution of the edge pixels from those edge features;
S4, analyzing the frequency components of the lesion-area image with the Fourier transform to obtain a blur index for the lesion-area image, and deriving a blur degree coefficient for the edge pixels from that index, where analyzing the frequency components of the lesion-area image with the Fourier transform to obtain its blur index specifically includes:
converting the lesion-area image to a grayscale image, smoothing it with Gaussian filtering, denoting the result h(x', y'), and applying the two-dimensional fast Fourier transform to it:
f(u, v) = F{h(x', y')}
where f(u, v) is the frequency-domain representation and (u, v) are frequency coordinates; the spectral amplitude S(u, v), which represents the strength of each frequency component (u, v) in the frequency domain and reflects the energy contribution of those frequencies in the original image, is computed from f(u, v) as:
S(u, v) = |f(u, v)|
From the spectral amplitude, the blur index is defined as the energy ratio of the high-frequency components to the low-frequency components:
Q = Σ_{(u,v)∈W} S(u, v)² / Σ_{(u,v)∈G} S(u, v)²
where W is the index set of high-frequency components, G is the index set of low-frequency components, and the high-frequency index set carries the detail, edge and noise information of the image; the blur degree coefficient of the edge pixels is defined as R = 1 − Q, where a value of R close to 1 indicates a sharp pixel and otherwise the pixel is blurred;
S5, obtaining an enhancement coefficient for the lesion-area image from the resolution and blur degree coefficient of the edge pixels, and enhancing the ileocecal lesion-area image by adjusting that enhancement coefficient.
2. The method for processing high-definition medical images according to claim 1, wherein the denoising of the ileocecal image with a guided filtering algorithm in S1 comprises the following steps:
randomly selecting one image from the ileocecal images as the guide image, denoted A, taking the ileocecal image to be denoised as the input image, denoted B, setting the guided-filter window radius to r and the regularization parameter to α, establishing a window D_e centered on each pixel c of the image to be denoised, and computing the guided-filter output F_c within the window D_e as:
F_c = a_e*A_c + b_e
where a_e and b_e are obtained by linear regression, A_c is the pixel value of the guide image at c, and F_c is the ileocecal image after guided-filter denoising.
3. The method for processing high-definition medical images according to claim 2, wherein calculating the potential secretion index of the ileocecal region from the ileocecal histogram in S2 comprises the following steps:
letting the intensity values of the denoised ileocecal image F_c(x, y), of size M×N, range over [0, L−1], where L is the total number of gray levels in the ileocecal image, and computing the intensity histogram H(k) as:
H(k) = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} δ(F_c(x, y), k), 0 ≤ k ≤ L−1
where F_c(x, y) is the gray value of the image F_c at coordinates (x, y), k is a discrete value within the range of ileocecal image intensity values, and δ(·, ·) is the Kronecker function, defined as δ(a, b) = 1 if a = b and δ(a, b) = 0 otherwise;
extracting the skewness and kurtosis of the intensity histogram H(k), establishing a linear model based on them, and calculating the potential secretion index γ of each pixel as:
γ = ω_1*PD_{H(k)} + ω_2*FD_{H(k)}
where ω_1 and ω_2 are weight coefficients, obtained from experimental data, that weight the potential secretion index γ of each pixel, PD is the skewness of the intensity histogram H(k), and FD is its kurtosis.
4. The method for processing high-definition medical images according to claim 3, wherein the potential secretion index of each pixel in the ileocecal image is compared with the segmentation threshold in S2 as follows:
a secretion index threshold is set, the potential secretion index of the lesion being higher than that of the surrounding region; the potential secretion index of each pixel in the ileocecal image is compared with this threshold, the pixels whose potential secretion index exceeds the segmentation threshold are screened out, and those pixels form the suspected-lesion pixel set.
5. The method for processing high-definition medical images according to claim 1, wherein connecting the pixels of the lesion area in S3 to form the lesion-area image comprises the following steps:
using an edge detection algorithm to calculate the gradient magnitude and direction of each pixel in the lesion area, applying non-maximum suppression along the gradient direction, setting high and low thresholds, dividing the gradient magnitudes into strong and weak edges according to these thresholds, tracking the strong-edge pixels, classifying weak-edge pixels adjacent to strong-edge pixels as edge pixels, and connecting the edge pixels to form the image of the lesion area.
6. The method for processing high-definition medical images according to claim 1, wherein obtaining the resolution of the edge pixels from the edge features of the lesion-area image in S3 comprises the following steps:
extracting the coordinates (x_i, y_i) of all edge pixels in the lesion-area image, where i is the index of the edge pixel, setting the actual distance represented by each pixel in the horizontal direction to Δx and in the vertical direction to Δy, and converting the edge-pixel coordinates from pixel units to actual spatial units:
X_i = x_i*Δx
Y_i = y_i*Δy
The edge features are analyzed by computing the distances between edge pixels; the Euclidean distance between edge pixels (X_i, Y_i) and (X_j, Y_j) is:
d_ij = sqrt((X_i − X_j)² + (Y_i − Y_j)²)
and the resolution of the edge pixels is obtained from the minimum distance between edge pixels:
δ = min(d_ij)
where min(d_ij) is the minimum distance over all pairs of edge pixels.
7. The method for processing high-definition medical images according to claim 1, wherein the enhancement coefficient of the lesion-area image is obtained in S5 from the resolution and blur degree coefficient of the edge pixels, specifically comprising:
setting an enhancement coefficient T that is inversely related to the blur degree coefficient R of the edge pixels and directly proportional to their resolution, with the expression:
T = t*(1 − R)*δ
where t is a proportionality constant controlling the degree of enhancement. The enhancement coefficient T is used to adjust the pixel values of the lesion-area image: taking the pixel value P(x_i, y_i) of the lesion-area image at position (x_i, y_i), the enhanced component is the product of the enhancement coefficient T and the difference between that pixel value and the background pixel value P'(x_i, y_i); adding this component back to the original pixel value P(x_i, y_i) gives the enhanced pixel value P_ZQ(x_i, y_i):
P_ZQ(x_i, y_i) = P(x_i, y_i) + T*[P(x_i, y_i) − P'(x_i, y_i)]
The detail and brightness of the image are enhanced by increasing its contrast with the background, thereby enhancing the ileocecal lesion-area image.
8. A system for implementing the high-definition medical image processing method according to any one of claims 1-7, comprising:
the acquisition processing unit (1), used to acquire a medical image of the ileocecal region and preprocess the ileocecal image with a guided filtering algorithm to obtain an ileocecal image histogram;
the lesion analysis unit (2), comprising a calculation analysis module (21) and a lesion generation module (22);
the calculation analysis module (21) is used to calculate the potential secretion index of each pixel from the ileocecal image histogram, set a secretion index threshold, and perform comparison and classification with a threshold segmentation method to form a lesion pixel set;
the lesion generation module (22) connects the pixels of the lesion area with an edge detection algorithm to form a lesion-area image, and obtains the resolution of the edge pixels from the edge features of the lesion-area image;
the fuzzy evaluation unit (3) analyzes the frequency components of the lesion-area image with the Fourier transform to obtain the blur index of the lesion-area image, and derives the blur degree coefficient of the edge pixels from that index;
the enhancement determination unit (4) obtains the enhancement coefficient of the lesion-area image from the resolution and blur degree coefficient of the edge pixels, and enhances the ileocecal lesion-area image by adjusting that enhancement coefficient.
CN202411089205.1A 2024-08-09 2024-08-09 High-definition medical image processing method Active CN119090764B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411089205.1A CN119090764B (en) 2024-08-09 2024-08-09 High-definition medical image processing method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411089205.1A CN119090764B (en) 2024-08-09 2024-08-09 High-definition medical image processing method

Publications (2)

Publication Number Publication Date
CN119090764A CN119090764A (en) 2024-12-06
CN119090764B true CN119090764B (en) 2025-06-10

Family

ID=93659322

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411089205.1A Active CN119090764B (en) 2024-08-09 2024-08-09 High-definition medical image processing method

Country Status (1)

Country Link
CN (1) CN119090764B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509104A (en) * 2011-09-30 2012-06-20 北京航空航天大学 Confidence map-based method for distinguishing and detecting virtual object of augmented reality scene
WO2022037642A1 (en) * 2020-08-19 2022-02-24 南京图格医疗科技有限公司 Method for detecting and classifying lesion area in clinical image

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2214101A1 (en) * 1995-03-03 1996-09-12 Ulrich Bick Method and system for the detection of lesions in medical images
US7995825B2 (en) * 2001-04-05 2011-08-09 Mayo Foundation For Medical Education Histogram segmentation of FLAIR images
CN111340829B (en) * 2020-02-10 2023-02-28 上海海洋大学 An improved neural network segmentation model construction method for DME edema region
CN112861994B (en) * 2021-03-12 2023-04-28 中国科学院自动化研究所 Gastric seal ring cell cancer image intelligent classification system based on Unet transfer learning
CN115100304B (en) * 2022-04-24 2024-04-19 江苏中勤通信科技有限公司 Nuclear magnetic resonance image enhancement method based on image processing
CN116758069B (en) * 2023-08-17 2023-11-14 济南宝林信息技术有限公司 Medical image enhancement method for intestinal endoscope
CN118262875A (en) * 2024-04-11 2024-06-28 南昌大学第二附属医院 Medical image diagnosis and contrast film reading method

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509104A (en) * 2011-09-30 2012-06-20 北京航空航天大学 Confidence map-based method for distinguishing and detecting virtual object of augmented reality scene
WO2022037642A1 (en) * 2020-08-19 2022-02-24 南京图格医疗科技有限公司 Method for detecting and classifying lesion area in clinical image

Also Published As

Publication number Publication date
CN119090764A (en) 2024-12-06

Similar Documents

Publication Publication Date Title
US8958625B1 (en) Spiculated malignant mass detection and classification in a radiographic image
Esmaeili et al. Automatic detection of exudates and optic disk in retinal images using curvelet transform
CN116630762B (en) Multi-mode medical image fusion method based on deep learning
JP2012512672A (en) Method and system for automatically detecting lesions in medical images
Chatterjee et al. Dermatological expert system implementing the ABCD rule of dermoscopy for skin disease identification
US20120099771A1 (en) Computer aided detection of architectural distortion in mammography
CN104794708A (en) Atherosclerosis plaque composition dividing method based on multi-feature learning
CN119559446A (en) Method and system for identifying spinal deformity based on image recognition
CN118967442A (en) Training method and reconstruction method of super-resolution generation model for glaucoma fundus images
Nagaraj et al. Carotid wall segmentation in longitudinal ultrasound images using structured random forest
CN119090764B (en) High-definition medical image processing method
Taouil et al. A new automatic approach for edge detection of skin lesion images
Taouil et al. Automatic segmentation and classification of skin lesion images
Nagpal et al. Performance analysis of diabetic retinopathy using diverse image enhancement techniques
CN119600341A (en) Computer vision-based craniocerebral CT image tumor recognition method and system
WO2025061646A1 (en) Method of bioimage analysis for the detection of color abnormalities through color distribution assessment
US20250225657A1 (en) Methods of processing optical images and applications thereof
Joda et al. Digital mammogram enhancement based on automatic histogram clipping
Thanaraj et al. Automatic boundary detection and severity assessment of mitral regurgitation
CN120047418B (en) A method and system for grading and identifying thyroid nodules
CN113940704A (en) Thyroid-based muscle and fascia detection device
CN113658193A (en) A tumor segmentation method in liver CT images based on information fusion
CN119169014B (en) A method, device and terminal equipment for mapping cardiac function parameters
Triyani et al. Malignant Detection of Breast Nodules On BIRADS-Based Ultrasound Images Margin, Orientation, And Posterior
CN118822953A (en) A tumor image analysis and processing method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant