Disclosure of Invention
The present invention is directed to a high-definition medical image processing method that solves the problems set forth in the background art.
To achieve the above object, the present invention provides a high-definition medical image processing method comprising the steps of:
S1, acquiring a medical image of the ileocecal region with an image acquisition device and marking it as an ileocecal image, performing noise reduction on the ileocecal image with a guided filtering algorithm, extracting the pixel values of the noise-reduced ileocecal image, and equalizing those pixel values to obtain an ileocecal image histogram;
S2, calculating a potential secretion index for each pixel point in the ileocecal image according to the ileocecal histogram, setting a segmentation threshold, and comparing the potential secretion index of each pixel point against the segmentation threshold with a threshold segmentation method, the pixel points exceeding the segmentation threshold forming a suspected-lesion pixel point set;
S3, taking the pixel points in the suspected-lesion pixel point set as the pixel points of the lesion area, connecting the pixel points of the lesion area with an edge detection algorithm to form a lesion area image, extracting the edge features of the lesion area image, and obtaining the resolution of the edge pixel points from those edge features;
S4, analyzing the frequency components of the lesion area image with a Fourier transform to obtain a blur index of the lesion area image, and obtaining a blur coefficient of the edge pixel points from the blur index;
S5, obtaining an enhancement coefficient of the lesion area image from the resolution and the blur coefficient of the edge pixel points, and enhancing the lesion area image of the ileocecal region by adjusting the enhancement coefficient.
As a further improvement of the present technical solution, the noise reduction of the ileocecal image with the guided filtering algorithm in S1 specifically includes:
selecting one ileocecal image as the guide image and marking it A, taking the ileocecal image to be denoised as the input image and marking it B, setting the guided-filtering window radius to r and the regularization parameter to α, establishing a window D_e centered on each pixel point c of the input image, and calculating the guided-filter output F_c within the window D_e as:
F_c = a_e * A_c + b_e
where a_e and b_e are obtained by linear regression within the window, A_c is the pixel value of the guide image at pixel c, and F_c is the ileocecal image after guided-filtering noise reduction.
As a further improvement of the present solution, calculating the potential secretion index of each pixel point in the ileocecal image from the ileocecal histogram in S2 specifically includes:
setting the intensity range of the denoised ileocecal image F_c(x, y) to [0, L-1], with image size M×N and L the total number of intensity levels in the ileocecal image; the intensity histogram H(k) is calculated as:
H(k) = (1/(M*N)) * Σ_x Σ_y δ(F_c(x, y), k)
where F_c(x, y) is the gray value of the image F_c at coordinates (x, y) and δ is the Kronecker function, with 0 ≤ k ≤ L-1, defined as:
δ(a, b) = 1 if a = b, and 0 otherwise;
extracting the skewness and kurtosis of the intensity histogram H(k), establishing a linear model based on them, and calculating the potential secretion index γ of each pixel point as:
γ = ω1 * PD_H(k) + ω2 * FD_H(k)
where ω1 and ω2 are weight coefficients, obtained from experimental data, that control the potential secretion index γ of each pixel point; PD_H(k) is the skewness of the intensity histogram H(k) and FD_H(k) is its kurtosis.
As a further improvement of the present technical solution, comparing and classifying the potential secretion index of each pixel point in the ileocecal image against the segmentation threshold in S2 is specifically as follows:
setting the segmentation threshold on the secretion index; since the potential secretion index of a lesion is higher than that of the surrounding region, the potential secretion index of each pixel point in the ileocecal image is compared against this threshold, the pixel points whose index exceeds the threshold are screened out, and these pixel points form the suspected-lesion pixel point set.
As a further improvement of the present technical solution, connecting the pixel points of the lesion area in S3 to form the lesion area image specifically includes:
calculating the gradient magnitude and gradient direction of each pixel point in the lesion area with an edge detection algorithm, performing non-maximum suppression along the gradient direction, setting high and low thresholds, classifying the gradient magnitudes into strong and weak edges according to these thresholds, tracking the strong-edge pixel points, reclassifying weak-edge pixel points adjacent to strong-edge pixel points as edge pixel points, and connecting the edge pixel points to form the lesion area image.
As a further improvement of the present technical solution, obtaining the resolution of the edge pixel points from the edge features of the lesion area image in S3 specifically includes:
extracting the coordinates (x_i, y_i) of all edge pixel points in the lesion area image, with i the index of the edge pixel, setting the actual distance represented by each pixel to Δx in the horizontal direction and Δy in the vertical direction, and converting the edge pixel coordinates from pixel units to actual spatial units:
X_i = x_i * Δx
Y_i = y_i * Δy
the edge features are then analyzed through the distances between edge pixels; the Euclidean distance between edge pixels (X_i, Y_i) and (X_j, Y_j) is:
d_ij = sqrt((X_i - X_j)^2 + (Y_i - Y_j)^2)
and the resolution of the edge pixel points is obtained from the minimum distance between them:
δ = min(d_ij)
where min(d_ij) is the minimum distance over all pairs of edge pixels.
As a further improvement of the present technical solution, analyzing the frequency components of the lesion area image with the Fourier transform in S4 to obtain its blur index specifically includes:
converting the lesion area image to a gray image, smoothing the gray image with Gaussian filtering and defining it as h(x', y'), then applying a two-dimensional fast Fourier transform to it:
f(u, v) = F{h(x', y')}
where f(u, v) is the frequency-domain representation and (u, v) are the frequency coordinates; the spectral amplitude S(u, v), which represents the variation strength of each frequency component (u, v) in the frequency domain and reflects the energy contribution of these frequencies in the original image, is calculated from f(u, v) as:
S(u, v) = |f(u, v)|
from the spectral amplitude, the blur index Q is defined as the energy ratio of the high-frequency components to the low-frequency components:
Q = Σ_{(u,v)∈W} S(u, v)^2 / Σ_{(u,v)∈G} S(u, v)^2
where W is the index set of the high-frequency components, which carry the detail, edge, and noise information of the image, and G is the index set of the low-frequency components; the blur coefficient of the edge pixel points is defined as R = 1 - Q: if the value of R is close to 1 the pixel points are sharp, otherwise they are blurred.
As a further improvement of the present technical solution, obtaining the enhancement coefficient of the lesion area image from the resolution and the blur coefficient of the edge pixel points in S5 specifically includes:
setting an enhancement coefficient T that decreases with the blur coefficient R of the edge pixel points and is directly proportional to their resolution δ:
T = t * (1 - R) * δ
where t is a proportionality constant controlling the degree of enhancement; the enhancement coefficient T is used to adjust the pixel values of the lesion area image: taking the pixel value P(x_i, y_i) of the lesion area image at position (x_i, y_i), the enhanced component is obtained as the product of the enhancement coefficient T and the difference between this pixel value and the background pixel value P'(x_i, y_i), and adding this component back to the original pixel value gives the enhanced pixel value P_ZQ(x_i, y_i):
P_ZQ(x_i, y_i) = P(x_i, y_i) + T * [P(x_i, y_i) - P'(x_i, y_i)]
Increasing the contrast with the background enhances the detail and brightness of the image, thereby realizing the enhancement of the lesion area image of the ileocecal region.
Another object of the present invention is to provide a system implementing the high-definition medical image processing method, comprising:
an acquisition processing unit for acquiring a medical image of the ileocecal region and preprocessing the ileocecal image with the guided filtering algorithm to obtain an ileocecal image histogram;
a lesion analysis unit comprising a calculation analysis module and a lesion generation module;
the calculation analysis module being used to calculate the potential secretion index of each pixel point from the ileocecal image histogram, set the segmentation threshold, and compare and classify the indexes with the threshold segmentation method to form the suspected-lesion pixel point set;
the lesion generation module connecting the pixel points of the lesion area with the edge detection algorithm to form the lesion area image, the resolution of the edge pixel points being obtained from the edge features of the lesion area image;
a blur evaluation unit analyzing the frequency components of the lesion area image with the Fourier transform to obtain the blur index of the lesion area image, and obtaining the blur coefficient of the edge pixel points from the blur index;
an enhancement determination unit obtaining the enhancement coefficient of the lesion area image from the resolution and the blur coefficient of the edge pixel points, and enhancing the lesion area image of the ileocecal region by adjusting the enhancement coefficient.
Compared with the prior art, the invention has the beneficial effects that:
In this high-definition medical image processing method, denoising the ileocecal image with the guided filtering algorithm preserves the edge information of the image well, so the ileocecal image histogram is still presented clearly after noise reduction. The threshold segmentation method then compares the potential secretion index of each pixel point in the ileocecal image with the segmentation threshold and retains the pixel points exceeding it, which reduces the data volume of subsequent processing and improves processing efficiency. Obtaining the resolution of the edge pixel points from the edge features of the lesion area image allows the boundary of the lesion area to be determined more accurately, which facilitates accurate evaluation of the lesion size. Analyzing the frequency components of the lesion area image with the Fourier transform yields the blur index of the lesion area image, from which the blur coefficient of the edge pixel points is obtained, so that the overall blur characteristics of the lesion area are evaluated comprehensively rather than being limited to local pixels. Finally, the enhancement coefficient of the lesion area image is obtained from the resolution and the blur coefficient of the edge pixel points, realizing the enhancement of the lesion area image.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Example 1
Referring to figs. 1-2, the present embodiment provides a high-definition medical image processing method comprising the following steps:
S1, acquiring a medical image of the ileocecal region with an image acquisition device and marking it as an ileocecal image, performing noise reduction on the ileocecal image with the guided filtering algorithm, extracting the pixel values of the noise-reduced ileocecal image, and equalizing those pixel values to obtain an ileocecal image histogram, thereby improving the visibility and detail of the ileocecal image, enhancing its contrast, and making hidden structures in the ileocecal image clearer and more visible;
in S1, the noise reduction of the ileocecal image with the guided filtering algorithm specifically includes the following:
one ileocecal image is selected as the guide image and marked A, the ileocecal image to be denoised is taken as the input image and marked B, the guided-filtering window radius is set to r and the regularization parameter to α, a window D_e is established centered on each pixel point c of the input image, and the guided-filter output F_c within the window D_e is calculated as:
F_c = a_e * A_c + b_e
where a_e and b_e are obtained by linear regression within the window, A_c is the pixel value of the guide image at pixel c, and F_c is the ileocecal image after guided-filtering noise reduction; the coefficients are computed from the window statistics as:
a_e = ((1/|D_e|) * Σ_{c∈D_e} A_c * B_c - A' * B') / (σ_A^2 + α)
b_e = B' - a_e * A'
where |D_e| is the number of pixels in the window D_e centered on pixel c, used to average the quantities computed in the window into representative statistics; A' is the mean of the guide image A in the window D_e, reflecting the average brightness or intensity level of the guide image there; B' is the mean of the input image B in the window D_e; σ_A^2 is the variance of the guide image A in the window D_e, measuring the spread or distribution width of its pixel values; a_e is the coefficient, calculated from the window statistics, that determines the linear relation between the output image and the guide image; b_e is the second coefficient, also calculated from the window statistics, that together with a_e determines the final value of the output image; and α is the regularization parameter that prevents instability from a near-zero denominator while controlling the filtering strength to avoid over-smoothing or over-fitting.
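For illustration only, the guided-filtering step above can be sketched in Python; this is a minimal sketch, not the claimed implementation, and the box-filter window size, the synthetic test image, and the self-guided setup (guide = input) are assumptions introduced here:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(A, B, r=4, alpha=1e-3):
    """Guided-filter denoising sketch: A is the guide image, B the noisy
    input, r the window radius, alpha the regularization parameter
    (names follow the text; the default values are illustrative)."""
    size = 2 * r + 1
    mean_A = uniform_filter(A, size)        # A': window mean of the guide
    mean_B = uniform_filter(B, size)        # B': window mean of the input
    cov_AB = uniform_filter(A * B, size) - mean_A * mean_B  # windowed covariance
    var_A = uniform_filter(A * A, size) - mean_A * mean_A   # windowed variance
    a = cov_AB / (var_A + alpha)            # linear-regression coefficient a_e
    b = mean_B - a * mean_A                 # b_e = B' - a_e * A'
    # Average the coefficients over all windows covering each pixel,
    # then form the output F_c = a * A_c + b.
    return uniform_filter(a, size) * A + uniform_filter(b, size)

rng = np.random.default_rng(0)
clean = rng.random((64, 64))
noisy = clean + 0.1 * rng.standard_normal((64, 64))
out = guided_filter(noisy, noisy)           # self-guided filtering
print(out.shape)
```

With alpha large the output approaches the window mean (strong smoothing); with alpha small it follows the guide image closely, which is how the regularization parameter trades noise removal against edge preservation.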
S2, calculating a potential secretion index for each pixel point in the ileocecal image according to the ileocecal histogram, setting a segmentation threshold, and comparing the potential secretion index of each pixel point against the segmentation threshold with a threshold segmentation method, the pixel points exceeding the segmentation threshold forming a suspected-lesion pixel point set;
in S2, calculating the potential secretion index from the ileocecal histogram includes the following:
the intensity range of the denoised ileocecal image F_c(x, y) is set to [0, L-1], with image size M×N and L the total number of intensity levels in the ileocecal image; the intensity histogram H(k) is then calculated as:
H(k) = (1/(M*N)) * Σ_x Σ_y δ(F_c(x, y), k)
where F_c(x, y) is the gray value of the image F_c at coordinates (x, y) and δ is the Kronecker function, with 0 ≤ k ≤ L-1, defined as:
δ(a, b) = 1 if a = b, and 0 otherwise;
because secretions at the ileocecal region usually differ markedly in intensity from the surrounding tissue, the intensity of potential secretions in the image can be contrasted with that of the surrounding tissue, and analyzing the histogram determines which intensity values represent potential secretions; the skewness and kurtosis of the intensity histogram H(k) are therefore extracted, a linear model is built on them, and the potential secretion index γ of each pixel point is calculated as:
γ = ω1 * PD_H(k) + ω2 * FD_H(k)
where ω1 and ω2 are weight coefficients, obtained from experimental data, that control the potential secretion index γ of each pixel point; PD_H(k) is the skewness of the intensity histogram H(k) and FD_H(k) is its kurtosis; calculating the potential secretion index of each pixel point from the intensity histogram provides a solid basis for subsequent medical image analysis, improving the accuracy and reliability of diagnosis.
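The histogram, its skewness and kurtosis, and the index γ can be sketched as follows; this is a non-authoritative sketch in which the weights w1 and w2 (the text's ω1, ω2) and the evaluation of one index per image window, rather than truly per pixel, are assumptions:

```python
import numpy as np

def secretion_index(img, L=256, w1=0.6, w2=0.4):
    """Sketch of the potential-secretion index: builds the intensity
    histogram H(k) = (1/(M*N)) * sum_xy delta(F_c(x,y), k), then
    combines the histogram's skewness (PD) and kurtosis (FD) linearly.
    w1, w2 are illustrative; the text obtains them from experiments."""
    M, N = img.shape
    H = np.bincount(img.ravel(), minlength=L) / (M * N)  # normalized H(k)
    k = np.arange(L)
    mu = (k * H).sum()                                   # histogram mean
    sigma = np.sqrt(((k - mu) ** 2 * H).sum())           # histogram std
    PD = (((k - mu) / sigma) ** 3 * H).sum()             # skewness of H(k)
    FD = (((k - mu) / sigma) ** 4 * H).sum()             # kurtosis of H(k)
    return w1 * PD + w2 * FD                             # gamma

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(32, 32))
gamma = secretion_index(img)
print(gamma)
```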
In S2, the comparison and classification of the potential secretion index of each pixel point in the ileocecal image against the segmentation threshold is specifically as follows:
when a lesion develops at the ileocecal part of the human intestinal tract, the body protects itself by producing a large amount of secretion; this secretion diffuses through the intestinal tract from the lesion site, producing a difference in potential secretion index, with the lesion site showing a higher index than the surrounding region. A segmentation threshold on the secretion index is therefore set, the potential secretion index of each pixel point in the ileocecal image is compared with it, the pixel points whose index exceeds the threshold are screened out, and these pixel points form the suspected-lesion pixel point set, so that potential ileocecal lesion areas are identified and screened out accurately and conveniently, improving the efficiency of medical image analysis.
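A minimal sketch of this comparison step; the index map and the threshold value here are synthetic, purely for illustration:

```python
import numpy as np

# Synthetic per-pixel potential-secretion index map; in the method this
# would come from the histogram-based index gamma of S2.
rng = np.random.default_rng(2)
gamma_map = rng.random((16, 16))
threshold = 0.8                       # illustrative segmentation threshold

mask = gamma_map > threshold          # pixels exceeding the threshold
suspected = np.argwhere(mask)         # suspected-lesion pixel point set (row, col)
print(len(suspected))
```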
S3, taking the pixel points in the suspected-lesion pixel point set as the pixel points of the lesion area, connecting the pixel points of the lesion area with an edge detection algorithm to form a lesion area image, extracting the edge features of the lesion area image, and obtaining the resolution of the edge pixel points from those edge features;
in S3, connecting the pixel points of the lesion area to form the lesion area image specifically includes:
calculating the gradient magnitude and gradient direction of each pixel point in the lesion area with an edge detection algorithm, performing non-maximum suppression along the gradient direction, setting high and low thresholds, classifying the gradient magnitudes into strong and weak edges according to these thresholds, tracking the strong-edge pixel points, reclassifying weak-edge pixel points adjacent to strong-edge pixel points as edge pixel points, and connecting the edge pixel points to form the lesion area image.
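The gradient, non-maximum-suppression, and high/low-threshold sequence just described is essentially a Canny-style edge detector. A simplified sketch follows; the Sobel operator, the threshold fractions, and the connected-component form of hysteresis tracking are assumptions of this sketch, not details fixed by the text:

```python
import numpy as np
from scipy import ndimage

def edge_map(img, low=0.1, high=0.3):
    """Canny-style sketch: gradient magnitude/direction, coarse
    non-maximum suppression, then hysteresis that keeps weak edges
    only when connected to strong ones."""
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    mag = np.hypot(gx, gy)
    mag /= mag.max() + 1e-12
    # Quantize the gradient direction to 0/45/90/135 degrees and keep
    # pixels that are not smaller than both neighbours along it.
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    q = (np.round(ang / 45.0).astype(int) % 4) * 45
    nms = np.zeros_like(mag)
    for d, (dy, dx) in {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}.items():
        fwd = np.roll(np.roll(mag, -dy, axis=0), -dx, axis=1)
        bwd = np.roll(np.roll(mag, dy, axis=0), dx, axis=1)
        keep = (q == d) & (mag >= fwd) & (mag >= bwd)
        nms[keep] = mag[keep]
    strong = nms >= high
    weak = (nms >= low) & ~strong
    # Hysteresis: keep connected components containing a strong pixel.
    labels, n = ndimage.label(strong | weak)
    edge = np.zeros_like(strong)
    for lab in range(1, n + 1):
        comp = labels == lab
        if (comp & strong).any():
            edge |= comp
    return edge

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                 # synthetic bright lesion region
edges = edge_map(img)
print(edges.sum())
```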
In S3, the resolution of the edge pixel points is obtained from the edge features of the lesion area image, specifically as follows:
the edge pixel points represent the boundary between the lesion area and the background in the image, and acquiring their coordinates accurately locates and delimits the boundary of the suspected lesion; the coordinates (x_i, y_i) of all edge pixel points in the lesion area image are therefore extracted, with i the index of the edge pixel. In medical image analysis, measurements in actual spatial units give more intuitive and understandable results, since doctors evaluate the size and position of a lesion in actual units; the pixel units are therefore converted to actual spatial units so that the results are clearer to interpret. With the actual distance represented by each pixel set to Δx in the horizontal direction and Δy in the vertical direction, the edge pixel coordinates are converted from pixel units to actual spatial units:
X_i = x_i * Δx
Y_i = y_i * Δy
the edge features are then analyzed through the distances between edge pixels; the Euclidean distance between edge pixels (X_i, Y_i) and (X_j, Y_j) is:
d_ij = sqrt((X_i - X_j)^2 + (Y_i - Y_j)^2)
the resolution of the edge pixel points is obtained from the minimum distance between them, which determines the level of detail discernible in the image:
δ = min(d_ij)
where min(d_ij) is the minimum distance over all pairs of edge pixels.
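This resolution step can be sketched directly; the pixel spacings dx, dy (physical units per pixel), the sample coordinates, and the final form delta = min(d_ij) are assumptions of the sketch:

```python
import numpy as np

def edge_resolution(coords, dx=0.1, dy=0.1):
    """Sketch of the S3 resolution step: convert edge-pixel coordinates
    to physical units (X_i = x_i*dx, Y_i = y_i*dy), compute all pairwise
    Euclidean distances d_ij, and take the minimum as the resolution."""
    pts = coords * np.array([dx, dy])      # pixel units -> physical units
    diff = pts[:, None, :] - pts[None, :, :]
    d = np.sqrt((diff ** 2).sum(-1))       # full d_ij matrix
    d[np.diag_indices_from(d)] = np.inf    # ignore zero self-distances
    return d.min()                         # delta = min(d_ij)

coords = np.array([[0, 0], [0, 3], [4, 0], [8, 8]], dtype=float)
delta = edge_resolution(coords)
print(delta)                               # closest pair is 3 px apart -> 0.3
```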
S4, analyzing the frequency components of the lesion area image with a Fourier transform to obtain a blur index of the lesion area image, and obtaining a blur coefficient of the edge pixel points from the blur index.
In S4, the frequency components of the lesion area image are analyzed with the Fourier transform to obtain its blur index, specifically as follows:
different frequency components of an image correspond to different features: the high-frequency components generally represent edges and details, while the low-frequency components generally represent coarse shapes and the background. The Fourier transform makes these frequency characteristics easier to understand, and analyzing the frequency components allows the blur of a lesion area to be quantified, since blurred images generally lose high-frequency information; a blur index can therefore be extracted through frequency-domain analysis to assist in understanding the nature of the lesion. The lesion area image is converted to a gray image, smoothed with Gaussian filtering and defined as h(x', y'), and a two-dimensional fast Fourier transform is applied to it:
f(u, v) = F{h(x', y')}
where f(u, v) is the frequency-domain representation and (u, v) are the frequency coordinates; the spectral amplitude S(u, v), which represents the variation strength of each frequency component (u, v) in the frequency domain and reflects the energy contribution of these frequencies in the original image, is calculated from f(u, v) as:
S(u, v) = |f(u, v)|
from the spectral amplitude, the blur index Q is defined as the energy ratio of the high-frequency components to the low-frequency components:
Q = Σ_{(u,v)∈W} S(u, v)^2 / Σ_{(u,v)∈G} S(u, v)^2
where W is the index set of the high-frequency components, which carry the detail, edge, and noise information of the image, and G is the index set of the low-frequency components;
the blur coefficient of the edge pixel points is defined as R = 1 - Q: if the value of R is close to 1 the pixel points are sharp, otherwise they are blurred.
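The frequency-domain steps of S4 can be sketched as follows; the circular split of the spectrum at a fraction of the Nyquist radius is an assumption of this sketch, since the text only specifies that W and G are the high- and low-frequency index sets:

```python
import numpy as np

def blur_coefficient(gray, radius_frac=0.25):
    """Sketch of S4: 2-D FFT of the (already smoothed) gray image,
    spectral amplitude S = |f|, blur index Q as the high/low-frequency
    energy ratio, and blur coefficient R = 1 - Q."""
    f = np.fft.fftshift(np.fft.fft2(gray))
    S = np.abs(f)                               # spectral amplitude S(u, v)
    h, w = S.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)        # radial frequency distance
    cutoff = radius_frac * min(h, w) / 2        # assumed low/high split
    low = r <= cutoff                           # index set G
    high = ~low                                 # index set W
    Q = (S[high] ** 2).sum() / ((S[low] ** 2).sum() + 1e-12)
    return 1.0 - Q                              # blur coefficient R

rng = np.random.default_rng(3)
gray = rng.random((64, 64))
R = blur_coefficient(gray)
print(R)
```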
S5, obtaining the enhancement coefficient of the lesion area image from the resolution and the blur coefficient of the edge pixel points, and enhancing the lesion area image of the ileocecal region by adjusting the enhancement coefficient.
In S5, the enhancement coefficient of the lesion area image is obtained from the resolution and the blur coefficient of the edge pixel points, specifically as follows:
an enhancement coefficient T is set that decreases with the blur coefficient R of the edge pixel points and is directly proportional to their resolution δ:
T = t * (1 - R) * δ
where t is a proportionality constant controlling the degree of enhancement; the enhancement coefficient T is used to adjust the pixel values of the lesion area image: taking the pixel value P(x_i, y_i) of the lesion area image at position (x_i, y_i), the enhanced component is obtained as the product of the enhancement coefficient T and the difference between this pixel value and the background pixel value P'(x_i, y_i), the background pixel value being obtained by a neighborhood averaging method, which is prior art and is not described in detail here; adding the enhanced component back to the original pixel value gives the enhanced pixel value P_ZQ(x_i, y_i):
P_ZQ(x_i, y_i) = P(x_i, y_i) + T * [P(x_i, y_i) - P'(x_i, y_i)]
increasing the contrast with the background enhances the detail and brightness of the image, highlighting the lesion area and realizing the enhancement of the lesion area image of the ileocecal region.
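The S5 update can be sketched directly; the 3×3 mean used for the background P' (one possible form of the neighborhood averaging method) and the constants t, R, δ are illustrative assumptions:

```python
import numpy as np

def enhance(P, t=0.5, R=0.3, delta=1.0):
    """Sketch of S5: enhancement coefficient T = t*(1-R)*delta, a 3x3
    neighborhood-mean background estimate P', and the update
    P_ZQ = P + T*(P - P')."""
    T = t * (1.0 - R) * delta                 # enhancement coefficient
    pad = np.pad(P, 1, mode="edge")           # replicate borders for the mean
    P_bg = sum(pad[i:i + P.shape[0], j:j + P.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return P + T * (P - P_bg)                 # enhanced pixel values P_ZQ

rng = np.random.default_rng(4)
P = rng.random((16, 16))
P_zq = enhance(P)
print(P_zq.shape)
```

Note that a constant image is left unchanged by this update, since P equals its neighborhood mean there; only local contrast against the background is amplified.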
According to the invention, the ileocecal image is denoised with the guided filtering algorithm, which effectively preserves the edges and details of the image; the pixel values of the denoised ileocecal image are extracted and equalized to obtain the ileocecal image histogram with enhanced contrast. The potential secretion index of each pixel point in the ileocecal image is calculated from this histogram and compared and classified with the threshold segmentation method to form the suspected-lesion pixel point set, the pixel points of the lesion area are connected to form the lesion area image, and the edge features of the lesion area image are extracted with the edge detection method to obtain the resolution of the edge pixel points. The blur index of the lesion area image is obtained by Fourier-transform analysis, the blur coefficient of the edge pixel points is calculated from it, the enhancement coefficient of the lesion area image is obtained from the resolution and the blur coefficient of the edge pixel points, and the enhancement of the lesion area image is realized by adjusting this enhancement coefficient.
Example 2
The second object of the present invention is to provide a system implementing any of the high-definition medical image processing methods described above, which comprises:
an acquisition processing unit 1 for acquiring a medical image of the ileocecal region and preprocessing the ileocecal image with the guided filtering algorithm to obtain an ileocecal image histogram;
a lesion analysis unit 2 comprising a calculation analysis module 21 and a lesion generation module 22;
the calculation analysis module 21 being configured to calculate the potential secretion index of each pixel point from the ileocecal image histogram, set the segmentation threshold, and compare and classify the indexes with the threshold segmentation method to form the suspected-lesion pixel point set;
the lesion generation module 22 connecting the pixel points of the lesion area with the edge detection algorithm to form the lesion area image, and obtaining the resolution of the edge pixel points from the edge features of the lesion area image;
a blur evaluation unit 3 analyzing the frequency components of the lesion area image with the Fourier transform to obtain the blur index of the lesion area image, and obtaining the blur coefficient of the edge pixel points from the blur index;
an enhancement determination unit 4 obtaining the enhancement coefficient of the lesion area image from the resolution and the blur coefficient of the edge pixel points, and enhancing the lesion area image of the ileocecal region by adjusting the enhancement coefficient.
The foregoing has shown and described the basic principles, principal features and advantages of the invention. It will be understood by those skilled in the art that the present invention is not limited to the above-described embodiments, and that the above-described embodiments and descriptions are only preferred embodiments of the present invention, and are not intended to limit the invention, and that various changes and modifications may be made therein without departing from the spirit and scope of the invention as claimed. The scope of the invention is defined by the appended claims and equivalents thereof.