Detailed Description
In order to make the objects, technical solutions, and advantages of the present application more apparent, exemplary embodiments according to the present application will be described in detail below with reference to the accompanying drawings. It should be apparent that the described embodiments are only some, rather than all, of the embodiments of the present application, and it should be understood that the present application is not limited by the example embodiments described herein. All other embodiments obtained by a person skilled in the art, based on the embodiments described in the present application and without inventive effort, shall fall within the scope of the present application.
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of the present application. It will be apparent, however, to one skilled in the art that the application may be practiced without one or more of these details. In other instances, well-known features have not been described in detail in order to avoid obscuring the application.
It should be understood that the present application may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the application to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. As used herein, the term "and/or" includes any and all combinations of the associated listed items.
To provide a thorough understanding of the present application, detailed structures are presented in the following description to illustrate the technical solutions proposed herein. Alternative embodiments of the application are described in detail below; however, the application may have other implementations beyond these detailed descriptions.
Next, an analysis method 100 of DR imaging according to an embodiment of the present application is described with reference to Fig. 1. As shown in Fig. 1, the analysis method 100 of DR imaging may include the following steps:
In step S110, acquiring a digital radiography (DR) image of a target object by a DR imaging apparatus, the DR image including at least one of a DR original image and an image obtained by processing the DR original image;
In step S120, extracting image features from the DR image, the image features including at least one of a gray entropy feature, a texture feature, a noise feature, a gradient feature, and a divergence feature;
In step S130, acquiring device parameter characteristics according to device parameters of the DR imaging device, and/or acquiring object parameter characteristics according to object parameters of the target object;
In step S140, a DR feature index is determined according to the image feature, the device parameter feature and/or the object parameter feature, where the DR feature index is used to represent a comprehensive level between a radiation dose of X-rays to which the target object is subjected and an image quality of the DR image.
According to the DR imaging analysis method 100 of the embodiment of the application, image features and parameter features are extracted from the DR image, and a DR feature index that objectively represents the comprehensive level between the radiation dose of X-rays received by the target object and the image quality of the DR image is obtained from the extracted features, thereby providing an objective judgment basis for an operator and guiding in-hospital quality control, dose-reduced imaging, and the like. The DR feature index may also be referred to as a DR characteristic index, a DR objectification index, or any other suitable designation.
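The four steps S110 to S140 can be sketched end-to-end as follows; the specific stand-in features, normalizations, and equal weights are illustrative assumptions, not the method's prescribed combination rule:

```python
import numpy as np

def compute_dr_feature_index(image, device_params, object_params, weights=None):
    """Sketch of steps S110-S140: combine image features with parameter
    features into a single DR feature index via a weighted sum.
    Feature choices and weights here are illustrative assumptions."""
    # S120: a minimal image-feature vector (mean and std as stand-ins)
    image_features = [float(image.mean()), float(image.std())]
    # S130: parameter features taken directly from the supplied dicts,
    # normalized by assumed full-scale values
    param_features = [device_params.get("kv", 0.0) / 150.0,
                      device_params.get("ma", 0.0) / 500.0,
                      object_params.get("weight_kg", 0.0) / 100.0]
    features = np.array(image_features + param_features)
    if weights is None:
        weights = np.ones_like(features) / features.size
    # S140: the index as a normalized weighted combination
    return float(np.dot(weights, features))

img = np.full((4, 4), 0.5)
idx = compute_dr_feature_index(img, {"kv": 75.0, "ma": 250.0}, {"weight_kg": 70.0})
```

In a real system the combination rule would be replaced by the target model described later in this section.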
In step S110, the DR image may be acquired locally from the DR imaging apparatus, or may be acquired from another external apparatus, which is not particularly limited herein. The DR image may be acquired locally in real time, or may be a DR image acquired in non-real time and stored in the local DR imaging device.
In one possible implementation, the specific process of acquiring the DR image locally is to control the DR imaging apparatus to emit X-rays toward and through a target tissue site of the target object, receive the X-rays after they pass through the target tissue site, and process the received X-rays to acquire the digital radiography DR image. The target tissue site may be a tissue site of a human or other animal body to be examined, such as the head, abdomen, etc.
The DR image of the embodiment of the present application may be at least one of a DR original image and an image obtained by processing the DR original image. In the DR imaging process, the X-rays emitted by the X-ray generator are attenuated to different degrees after passing through the human body and are received by the detector. Illustratively, the upper layer of the detector is an X-ray conversion medium with photoconductive properties that converts X-rays into an electrical signal: the positive and negative charges produced move along an electric field in the form of a current under a bias voltage and are collected by an array of detection units, each detection unit corresponding to a pixel of the DR image. The digital signal, i.e., the DR original image, is obtained by amplifying the electrical signal, performing analog-to-digital conversion, and the like. The DR original image is a gray-scale image that has not undergone post-processing such as contrast or brightness adjustment. The image obtained by processing the DR original image may be an image obtained by applying a certain transformation, or post-processing such as contrast or brightness adjustment; that is, it may be regarded as an image obtained by subjecting the DR original image to any post-processing, which is not particularly limited herein.
In step S120, image features including at least one of a gray entropy feature, a texture feature, a noise feature, a gradient feature, and a divergence feature are extracted from the DR image. That is, the image features extracted in step S120 may be all of these features or any one or more of them. The gray entropy, texture, noise, gradient, and divergence features respectively reflect the gray, texture, noise, gradient, and divergence information of the DR image, and such multi-dimensional image features are beneficial to obtaining a more accurate DR feature index.
In one embodiment, the gray entropy feature is the information content obtained by statistically screening out redundant gray-level sources in the DR image. According to the transfer characteristics by which X-ray quanta are converted to digital signals by the detector of the DR system, each individual gray level in the DR image may be regarded as an information source, and the DR imaging process may be regarded as a process of transferring information through these gray-level sources. Among all gray levels, some levels contain no gray values and are therefore regarded as redundant gray-level sources. Because the extraction process of the gray entropy feature in the embodiment of the present application screens out redundant gray-level sources, more effective image information can be extracted.
Illustratively, gray entropy features may be extracted from a DR image in the following manner:
First, a first probability statistical distribution pH(i) of each gray-level source in the DR image is obtained, where i is an image gray value of the DR image. Then, according to the magnitude of the first probability statistical distribution, the first probability statistical distribution pNH(i) of the non-redundant gray-level sources is determined from pH(i). Illustratively, if pH(i) is greater than 0, the gray level i is determined to be a non-redundant gray-level source and retained in pNH(i), whereas if pH(i) is equal to 0, the gray level i is determined to be a redundant gray-level source and screened out, namely:

pNH(i) = pH(i), for all i with pH(i) > 0    formula (1)
After obtaining the first probability statistical distribution pNH(i) of the non-redundant gray-level sources, a second probability statistical distribution pH'(i) of each non-redundant gray-level source is obtained as the ratio of pNH(i) to the sum ΣpNH(i) of the first probability statistical distributions of all non-redundant gray-level sources, namely:
pH'(i) = pNH(i) / ΣpNH(i)    formula (2)
Finally, entropy is calculated from the second probability statistical distribution pH'(i) of the non-redundant gray-level sources to obtain the gray entropy feature H(Image). The entropy calculation includes, but is not limited to, taking the natural logarithm ln (base e) or a logarithm with another base, and is expressed as:

H(Image) = −Σ pH'(i) · ln pH'(i)    formula (3)
In this way, a gray entropy feature from which redundant gray-level sources have been screened out is obtained.
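A minimal sketch of formulas (1)–(3), assuming an 8-bit gray-scale image; the function name and level count are illustrative:

```python
import numpy as np

def gray_entropy(image, levels=256):
    """Gray entropy feature per formulas (1)-(3): drop redundant (empty)
    gray-level sources, renormalize, then compute entropy with ln."""
    hist = np.bincount(image.ravel(), minlength=levels)
    pH = hist / hist.sum()            # first probability distribution pH(i)
    pNH = pH[pH > 0]                  # formula (1): keep non-redundant sources
    pH_prime = pNH / pNH.sum()        # formula (2): renormalized distribution
    return float(-np.sum(pH_prime * np.log(pH_prime)))  # formula (3)

# Example: a tiny 8-bit image with two equally likely gray levels
img = np.array([[0, 255], [255, 0]], dtype=np.uint8)
H = gray_entropy(img)   # two equiprobable sources give entropy ln(2)
```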
The texture features represent the spatial variation of different gray values in the DR image and can reflect abstract characteristics of the variation patterns of human tissue. Illustratively, a statistical method may be employed to extract texture features of the DR image, yielding statistical characteristics of texture regions based on the gray-scale properties of pixels and their neighborhoods. The statistical method for extracting texture features is mainly described below, but texture features may also be extracted by geometric methods, model-based methods, or other suitable methods.
When a statistical method is used to extract texture features from a DR image, a texture feature description matrix is first extracted from the gray values of each pixel in the DR image, and then at least one two-dimensional component of the texture feature description matrix is extracted as at least one texture feature.
In one embodiment, the texture feature description matrix describes how gray values change with the distance and direction between different pixels in the DR image. Let the DR image be Image = f(x, y); the texture feature description matrix P(i, j) is:
P(i, j) = #{((x1, y1), (x2, y2)) ∈ M × N | f(x1, y1) = i, f(x2, y2) = j}    formula (4)
where #(X) denotes the number of elements in the set X. Assuming the distance between two points (x1, y1) and (x2, y2) in the image is k, the texture feature description matrix in different directions l can be expanded to P(i, j, k, l). By counting the changes of gray values over different distances and directions, the expanded texture feature description matrix provides information in more dimensions. For example, in order to calculate actual feature values at different angles, super-resolution interpolation may be performed on the DR image in each direction to obtain sub-pixel gray values at the new target positions, and the texture feature description matrices in different directions may be obtained from these sub-pixel gray values.
After the texture feature description matrix is obtained, at least one two-dimensional component of the matrix is extracted as a texture feature, the texture features reflecting the key characteristics of the matrix. Illustratively, the texture features comprise at least one of:
a first texture feature P1, whose value reflects the sharpness of the DR image and the groove depth of the texture: the deeper the texture grooves, the larger the P1 value; the shallower the grooves, the smaller the P1 value;
a second texture feature P2, which measures the similarity of the value distribution in the texture feature description matrix along the parallel and normal directions in the DR image: the larger the P2 value, the greater the similarity of the image gray levels in different directions;
a third texture feature P3, which measures the uniformity of the value distribution in the texture feature description matrix and of the gray-level change distribution in the DR image: the P3 value reflects the uniformity of the image gray-level distribution and the coarseness of the texture, and the larger the P3 value, the more stable the texture variation of the DR image;
and a fourth texture feature P4, which measures the value distribution in the texture feature description matrix and the local variation of the DR image texture: the larger the P4 value, the stronger the regularity of the texture.
The texture features extracted from the texture feature description matrix are not limited to the above four, and in other embodiments, other two-dimensional components of the texture feature description matrix may be extracted as texture features.
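The pair counting of formula (4) can be sketched as follows; the two statistics computed from P (a contrast-like and an energy-like component) are common gray-level co-occurrence components used here as stand-ins for P1–P4, whose exact definitions the text does not give:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    """Texture feature description matrix P(i, j) per formula (4): count
    pixel pairs at offset (dx, dy) whose gray values are i and j."""
    P = np.zeros((levels, levels), dtype=np.float64)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            x2, y2 = x + dx, y + dy
            if 0 <= x2 < w and 0 <= y2 < h:
                P[image[y, x], image[y2, x2]] += 1
    return P

def texture_features(P):
    """Illustrative two-dimensional components of P (assumed stand-ins
    for the features P1-P4 described in the text)."""
    Pn = P / P.sum()
    i, j = np.indices(Pn.shape)
    contrast = float(np.sum((i - j) ** 2 * Pn))   # groove depth / sharpness
    energy = float(np.sum(Pn ** 2))               # uniformity of distribution
    return contrast, energy

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]], dtype=np.int64)
P = glcm(img, dx=1, dy=0)
contrast, energy = texture_features(P)
```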
The image noise mainly comprises X-ray quantum noise. The distribution of X-ray quanta follows a Poisson distribution whose variance is proportional to the average number of detected quanta. Based on this characteristic of X-ray quantum noise, the degree of fluctuation of the X-ray quanta can be counted from the DR image as the noise feature; that is, the noise feature reflects the degree of fluctuation of the X-ray quanta in the DR image.
In one embodiment, extracting the noise feature from the DR image comprises: filtering the DR image to obtain a high-frequency image, where the high-frequency image obtained from the DR image mainly contains noise information; computing the root mean square over a neighborhood of each pixel in the high-frequency image to obtain a noise distribution image; and counting the noise value distribution in the noise distribution image to obtain the noise feature.
As one implementation, filtering the DR image to obtain the high-frequency image comprises: performing low-frequency filtering on the DR image I to extract its low-frequency components, obtaining a low-frequency image I1; and obtaining the high-frequency image I2 from the difference between the DR image and the low-frequency image. In other implementations, the DR image I may also be directly subjected to high-frequency filtering to obtain the high-frequency image I2.
Illustratively, the DR image I may be subjected to Gaussian low-pass filtering to obtain the low-frequency image I1. Since the image is a two-dimensional signal, a two-dimensional Gaussian function is used for the low-pass filtering, with the two-dimensional Gaussian filter kernel:

G(x, y) = (1 / (2πσ²)) · exp(−(x² + y²) / (2σ²))    formula (5)
The high-frequency image I2 can be obtained by computing the difference between the image I and the low-frequency image I1, i.e., I2 = I − I1. The noise distribution image I3 is then obtained by extracting the effective information in the high-frequency image I2. In one embodiment, the noise distribution image I3 is formed by computing the root mean square over the neighborhood of each pixel in the high-frequency image, where the root mean square may also be replaced by the L1 norm or another approximate or equivalent measure, for example:

I3(i, j) = sqrt( (1/|N|) · Σ_{(l,k)∈N(i,j)} I2(l, k)² )    formula (6)

where I3(i, j) is the value of the noise distribution image I3 at pixel (i, j), I2(l, k) is the value of the high-frequency image I2 at pixel (l, k), and N(i, j) is the neighborhood of pixel (i, j) containing |N| pixels.
After the noise distribution image I3 is acquired, the noise value distribution is counted to obtain the noise feature. For example, the noise value interval with the highest distribution probability in the noise distribution image may be determined, and the noise value of that interval may be used as the noise feature.
Specifically, the pixel values of the noise distribution image I3 are first divided into M intervals, each with a pixel value width d. A histogram vector h of length M is initialized, where each component h(i) represents the number of pixel values falling in the i-th interval. For a pixel (i, j) in the noise distribution image I3, the interval corresponding to its pixel value is R[I3(i, j)]. Each pixel of the noise distribution image I3 is traversed and h(R[I3(i, j)]) is accumulated to obtain h. After h is obtained, the main peak is max(h), with corresponding interval index R0 = argmax(h), and the noise feature is computed as R0 × d.
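The full noise-feature pipeline (low-pass subtraction, neighborhood root mean square, histogram main peak) might be sketched as follows, assuming SciPy's `gaussian_filter` and `uniform_filter` as the filtering primitives; the values of σ, the window size, and the bin count are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def noise_feature(image, sigma=2.0, win=3, bins=64):
    """Noise feature sketch: high-pass via Gaussian low-pass subtraction,
    neighborhood RMS to form the noise distribution image I3, then the
    noise value at the histogram main peak, R0 * d."""
    I1 = gaussian_filter(image.astype(np.float64), sigma)  # low-frequency image
    I2 = image - I1                                        # high-frequency image
    I3 = np.sqrt(uniform_filter(I2 ** 2, size=win))        # neighborhood RMS
    d = float(I3.max()) / bins                             # bin width d
    h, _ = np.histogram(I3, bins=bins, range=(0.0, float(I3.max())))
    R0 = int(np.argmax(h))                                 # main-peak index
    return R0 * d                                          # noise feature

rng = np.random.default_rng(0)
img = 100.0 + rng.normal(0.0, 5.0, size=(128, 128))  # flat field plus noise
nf = noise_feature(img)
```

For the flat field with Gaussian noise of standard deviation 5 above, the returned peak value lands near the residual noise level, a few units below 5 because the low-pass subtraction removes part of the noise power.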
In some embodiments, the image features further comprise gradient features. Because the absorption coefficients of X-rays differ between different tissues of the human body, regions corresponding to different tissues form in the DR image, with boundaries of varying sharpness between them. The sharpness of these boundaries can be quantified by extracting gradient features; that is, the gradient features characterize the sharpness of the boundaries between different tissues in the DR image. In one embodiment, extracting gradient features from the DR image includes determining the region distributions of different tissues in the DR image and obtaining the sharpness of the boundaries between these regions as the gradient features.
Illustratively, the gradient features may be calculated as follows:

Grad(x, y) = sqrt( (Image(x+1, y) − Image(x, y))² + (Image(x, y+1) − Image(x, y))² )    formula (7)
wherein Grad (x, y) is used to characterize the gradient magnitude at coordinates (x, y), and Image (x, y) is used to characterize the pixel value at coordinates (x, y).
In some embodiments, the image features further include a divergence feature. In the DR image, the boundaries between different tissues are not strictly sharp; rather, they exhibit a degree of transition. The intensity and/or trend consistency of this transition can be quantified by extracting the divergence feature; that is, the divergence feature characterizes the transition intensity and/or trend consistency of the boundaries between different tissues in the DR image. In one embodiment, extracting the divergence feature from the DR image comprises determining the region distributions of different tissues in the DR image and obtaining the transition intensity and/or trend consistency of the boundaries between these regions as the divergence feature.
Illustratively, the divergence feature may be calculated as follows:

Diver(x, y) = arctan( (Image(x, y+1) − Image(x, y)) / (Image(x+1, y) − Image(x, y)) )    formula (8)
where Diver(x, y) characterizes the magnitude of the divergence at coordinates (x, y), Image(x, y) characterizes the pixel value at coordinates (x, y), and arctan denotes the arctangent function.
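A sketch of the gradient and divergence maps using forward finite differences; computing the divergence as the local gradient direction via the arctangent is an assumption about the formula the text references, and `arctan2` is used instead of a raw quotient to avoid division by zero:

```python
import numpy as np

def gradient_map(image):
    """Grad(x, y): magnitude of the forward differences of the image."""
    gx = np.diff(image, axis=1)[:-1, :]   # Image(x+1, y) - Image(x, y)
    gy = np.diff(image, axis=0)[:, :-1]   # Image(x, y+1) - Image(x, y)
    return np.sqrt(gx ** 2 + gy ** 2)

def divergence_map(image, eps=1e-12):
    """Diver(x, y) sketch: local gradient direction via the arctangent,
    a stand-in for the trend-consistency measure described in the text."""
    gx = np.diff(image, axis=1)[:-1, :]
    gy = np.diff(image, axis=0)[:, :-1]
    return np.arctan2(gy, gx + eps)

# A vertical step edge: strong horizontal gradient, zero vertical gradient
img = np.zeros((5, 5)); img[:, 3:] = 10.0
grad = gradient_map(img)
diver = divergence_map(img)
```

On the step-edge example, the gradient map peaks along the edge while the direction map is uniformly horizontal, illustrating high boundary sharpness with consistent transition trend.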
In some embodiments, in addition to at least one of the above image features, other image features may be extracted from the DR image for obtaining the DR feature index, for example, gray gradient, gray average, variance, pixel value information, signal-to-noise ratio, contrast-to-noise ratio, and the like, which may be chosen according to the practical situation and are not limited herein.
In step S130, device parameter characteristics are acquired according to device parameters of the DR imaging device, and/or object parameter characteristics are acquired according to object parameters of the target object. The device parameter features and object parameter features may be collectively referred to as parameter features.
The device parameter features may be obtained from device parameters that affect the image quality of the DR image or relate to the X-ray radiation dose received by the target object. The device parameter features may be acquired from the parameter setting values of the DR imaging device or from parameter detection values. After the device parameters are obtained, they may be processed according to a specific operation rule to obtain the device parameter features.
In one embodiment, the device parameter features include a tube voltage parameter feature and/or a tube current parameter feature, the tube voltage and tube current being the primary factors affecting the exposure dose. Specifically, the X-ray generator of the DR imaging device mainly comprises a high-voltage generator and a bulb tube. The high-voltage generator supplies the tube voltage to the bulb tube; electrons generated by the filament in the bulb tube are accelerated under the tube voltage and bombard the target material to generate X-rays, and the electrons received by the target material form the tube current. Generally, the tube voltage controls the energy and quality of the X-ray beam generated by the bulb tube, which in turn determines the penetration of the X-rays; the intensity of the X-rays determines the quantity of rays reaching the detector and thus the contrast of the DR image. The tube current controls the number of X-ray quanta, which determines the upper limit of the amount of X-rays that may reach the detector during DR imaging and thus the density of the DR image. If the tube voltage and tube current are too high, too many X-rays penetrate the human body and image noise increases; if they are too low, too few X-rays penetrate the human body and image details cannot be displayed clearly. The tube voltage and tube current parameter features can reflect whether the tube voltage and tube current are suitable, and are ultimately embodied in the DR feature index.
When acquiring the tube voltage parameter characteristics, the tube voltage parameter characteristics can be acquired according to the tube voltage setting value of a console of the DR imaging equipment, and also can be acquired according to the tube voltage receiving value of a high-voltage generator of the DR imaging equipment. Similarly, when acquiring the tube current parameter characteristics, the tube current parameter characteristics may be acquired from the tube current setting value of the console of the DR imaging apparatus, or may be acquired from the tube current reception value of the high voltage generator of the DR imaging apparatus.
The device parameter features may also include an exposure time parameter feature. Specifically, the high voltage from the high-voltage generator drives the bulb tube to generate X-rays and enter the exposure process, during which the detector receives the X-rays and converts them into electrical signals; the exposure time determines whether the X-ray dose reaches the required level. Illustratively, the exposure time parameter feature may be acquired from an exposure time setting value of a console of the DR imaging apparatus, from an exposure time reception value of the high-voltage generator of the DR imaging apparatus, or from a feedback value of an automatic exposure control device of the DR imaging apparatus.
The device parameter characteristics may also include shot body position and/or body shape parameter characteristics. The tube voltage, tube current, exposure parameters, etc. applied to different photographing positions/body types are different. For example, the photographing body position and/or body type parameter characteristics may be acquired from body position and/or body type setting values of a console of the DR imaging apparatus, or from a visualized image obtained by an image pickup device of the DR imaging apparatus.
The device parameter features may also include a photographing distance parameter feature. The photographing distance generally refers to the source-to-image distance (SID), i.e., the distance between the X-ray generator and the detector; the SID can affect the size of the irradiation field. For example, the photographing distance parameter feature may be acquired from a photographing distance setting value of a console of the DR imaging apparatus, or from a photographing distance measured by a first measuring device of the DR imaging apparatus, where the first measuring device may be an additional measuring device on the beam limiter of the DR imaging apparatus.
The device parameter features may also include an irradiation field parameter feature. The irradiation field is the exposure area of the body surface irradiated by the X-rays, and its size is closely related to the X-ray radiation dose and the quality of the DR image. The irradiation field mainly depends on the beam limiter and the photographing distance: the beam limiter is an optical device that controls the irradiation field of the X-rays emitted by the bulb tube by focusing, thereby reducing the projection range of the X-rays while still meeting the imaging requirements and avoiding unnecessary radiation dose. Generally, the irradiation field should be reduced as much as possible while still covering the photographed region, to improve DR image quality and reduce the radiation dose to the target object. Illustratively, the irradiation field parameter feature may be obtained from an irradiation field setting value of a console of the DR imaging apparatus, or, because the irradiation field depends on the beam limiter and the photographing distance, from the photographing distance parameter feature and the opening size fed back by the beam limiter in combination with a preset inference rule.
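As a hypothetical example of such a preset inference rule, the irradiation field at the detector could be projected from the beam-limiter opening by similar triangles; the function name, all distances, and the geometry itself are illustrative assumptions:

```python
def irradiation_field_size(opening_mm, sid_mm, source_to_collimator_mm):
    """Hypothetical inference rule: project the beam-limiter opening to
    the detector plane by similar triangles (assumed geometry)."""
    scale = sid_mm / source_to_collimator_mm
    return opening_mm * scale

# A 100 mm opening, 200 mm from the source, projected to SID = 1000 mm
field = irradiation_field_size(100.0, 1000.0, 200.0)
```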
The grid can eliminate stray radiation and improve the sharpness of the DR image; therefore, in some embodiments, the device parameter features may also include grid parameter features derived from the grid parameters. In particular, grid parameter features may be obtained from grid setting values of a console of the DR imaging apparatus or from grid parameter feedback values. Grid parameters include the thickness of the grid, the width of the bars, the gap width, the grid ratio, etc.
In addition to device parameter features, parameter features may also include object parameter features. Because of the differences of attenuation coefficients and thicknesses among human tissues, when X-rays penetrate different tissues of a human body, the attenuation degrees are different, and different images can be obtained after imaging treatment. Accordingly, corresponding object parameter characteristics may be obtained from object parameters of the target object related to tissue attenuation coefficients or thickness.
The object parameter features include, for example, a height parameter feature and/or a weight parameter feature. When the DR imaging device is provided with a camera device, the height and/or weight parameter features may be obtained from the visualized image captured by the camera device; specifically, image recognition may be performed on the visualized image to obtain the height and/or weight parameters, and thus the corresponding parameter features. Alternatively, the height and/or weight parameter features may be obtained from input body position and/or body type setting values. In addition, they may be obtained from height and/or weight parameters measured by a second measuring device of the DR imaging device; specifically, the second measuring device comprises a height measuring device for measuring the height parameter, or a weight measuring device for measuring the weight parameter.
The object parameter features may also include, for example, an age parameter feature and/or a gender parameter feature, where the age parameter feature is determined from the age parameter and the gender parameter feature from the gender parameter. The age and/or gender parameter features may be obtained from the visualized image captured by the camera device of the DR imaging equipment; specifically, image recognition may be performed on the visualized image to obtain the age and/or gender parameters and thus the corresponding parameter features. Alternatively, the age and/or gender parameter features may be obtained from input body position and/or body type setting values, or from age and/or gender information entered through an input means of the DR imaging apparatus.
Referring to Fig. 2, the step of extracting the image features and the step of acquiring the parameter features may influence or optimize each other based on preset rules. That is, the extraction of the image features may be adjusted according to the device parameter features and/or the object parameter features, or the acquisition of the device and/or object parameter features may be adjusted according to the image features. For example, the statistical scale or direction of the texture features may be adjusted to different extents depending on the photographing body position and/or body type parameter features among the device parameter features. Specifically, for a larger body position/body type, the texture can be considered rich, and the statistical scale or direction of the texture features may be enlarged. Other parameters may be mutually adjusted in a similar manner according to the association between the device/object parameters and the image features, which is not described in detail herein. Through this mutual influence and optimization, the image features and parameter features better represent the image and parameter information, so that the DR feature index better reflects the feature information content of the DR image.
In step S140, a DR feature index reflecting the feature information content of the DR image is determined according to the image features and the device parameter features and/or the object parameter features.
In one embodiment, with continued reference to FIG. 2, the image features and the device parameter features and/or object parameter features may be input into a target model, and the DR feature index output by the target model is obtained. The target model can be a trained model or a model that has not yet been trained. The target model can be trained offline, and the training method mainly comprises the following steps:
First, a DR image set including a plurality of sample DR images, i.e., DR images serving as training samples, is acquired. An image feature set may be derived from the DR image set, the image feature set comprising sample image features extracted from the plurality of sample DR images. Illustratively, at least one of a gray entropy feature, a texture feature, a noise feature, a gradient feature, and a divergence feature is extracted for each sample DR image in the DR image set. The sample image features extracted from the plurality of sample DR images collectively constitute the image feature set, and each sample DR image corresponds to at least one sample image feature in the image feature set. The method of extracting sample image features from a sample DR image may refer to the method of extracting image features from the DR image in step S120.
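A few of the listed scalar features can be sketched with plain NumPy. The exact formulas below (histogram-based gray entropy, mean gradient magnitude, Laplacian-residual noise estimate) are illustrative implementations; the embodiment does not prescribe them.

```python
import numpy as np

def extract_image_features(img):
    """Extract illustrative scalar features from a 2-D DR image array."""
    img = np.asarray(img, dtype=np.float64)
    # Gray entropy: Shannon entropy of the normalized gray-level histogram
    hist, _ = np.histogram(img, bins=256)
    p = hist / hist.sum()
    p = p[p > 0]
    gray_entropy = -np.sum(p * np.log2(p))
    # Gradient feature: mean gradient magnitude over the image
    gy, gx = np.gradient(img)
    gradient_mean = np.mean(np.hypot(gx, gy))
    # Noise feature: std of a discrete Laplacian (high-frequency residual)
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    noise_std = lap.std()
    return {"gray_entropy": gray_entropy,
            "gradient_mean": gradient_mean,
            "noise_std": noise_std}
```

Applying the same function to every sample DR image yields the image feature set described above.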
A parameter feature set can be obtained according to the device parameters and/or object parameters corresponding to each sample DR image, the parameter feature set comprising device parameter features and/or object parameter features. For example, the device parameter feature and/or the object parameter feature are acquired for each sample DR image in the DR image set, and the plurality of sample parameter features thus acquired collectively form the parameter feature set, with each sample DR image corresponding to at least one device parameter feature and/or object parameter feature in the parameter feature set. The method for acquiring the device parameter feature and/or the object parameter feature for a sample DR image may refer to the method for acquiring them for the DR image in step S130.
A score set may also be derived from the DR image set, the score set including scores obtained by evaluating the plurality of sample DR images. For example, if the sample DR images in the DR image set are DR original images, image processing may be performed on them to obtain a plurality of DR diagnostic images, i.e., visual images used by a doctor for diagnosis; the plurality of DR diagnostic images form a DR diagnostic image set, and the score set is obtained from this DR diagnostic image set, including scores obtained by evaluating the plurality of DR diagnostic images. In some embodiments, a clinical expert may evaluate each DR diagnostic image in the DR diagnostic image set according to its quality and give an expert score: the higher the score of a DR diagnostic image, the more basic feature information content is contained in its corresponding DR original image, and conversely, the lower the score, the less basic feature information content is contained. The expert scores of the DR diagnostic images constitute an expert score set. Alternatively, if the DR images in the DR image set are already visual images used for diagnosis by a doctor, the score set may be obtained directly from the DR image set.
Then, the initial model is trained by taking the acquired image feature set, parameter feature set, and score set as the training sample set, so as to obtain the target model. The target model can be a traditional machine learning model or a deep learning model, including a neural network, a support vector machine, linear discriminant analysis, and the like, and the training method includes model training methods such as linear regression and gradient descent. For example, an optimal mapping function from image features to scores may be learned such that the error between the DR feature index obtained by mapping the image features and the actually calibrated expert score is minimized. The optimal mapping function is then applied to the image features acquired in step S120 and the parameter features acquired in step S130, so as to obtain a prediction of the DR feature index closest to the expert score.
In one embodiment, when the image feature set, the parameter feature set, and the score set are used for classification-regression training, the classification-regression formula is:

f(x) = Σ_{i=1}^{m} α_i · k(x, x_i) + b

where x is the feature vector, {x_i}, i = 1, ..., m, are the support vectors, {α_i} are the weighting coefficients, b is a bias, and k is a kernel function. Each component of the feature vector x is obtained through a linear transfer function:

x_j = w_j · (x̂_j − s_j)

where x̂ is the input feature, and w and s are the scaling and translation parameter vectors, respectively. The kernel function k may employ, for example, a radial basis function kernel k(x, x_i) = exp(−‖x − x_i‖² / σ²) or a polynomial kernel k(x, x_i) = (xᵀx_i + 1)^d.
It should be noted that the embodiment of the present invention does not limit the linear transfer function, kernel function or parameters used in the regression training method.
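The prediction function of the form f(x) = Σ α_i·k(x, x_i) + b, together with the linear transfer and an RBF kernel, can be sketched in NumPy. Note the hedges: regularized least squares stands in here for a full SVM solver, and sigma, w, s, and the ridge value are illustrative choices, consistent with the statement above that these are not limited.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    """k(x, y) = exp(-||x - y||^2 / sigma^2), computed pairwise."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / sigma ** 2)

def linear_transfer(X_raw, w, s):
    """Per-component scaling and translation of the input features."""
    return (X_raw - s) * w

def fit_kernel_regression(X, scores, sigma=1.0, ridge=1e-6):
    """Fit alpha and b so that f(x) = sum_i alpha_i k(x, x_i) + b
    approximates the expert scores (regularized least squares as a
    simple stand-in for SVM training)."""
    K = rbf_kernel(X, X, sigma)
    A = np.hstack([K, np.ones((len(X), 1))])  # last column models the bias b
    theta = np.linalg.solve(A.T @ A + ridge * np.eye(len(X) + 1), A.T @ scores)
    return theta[:-1], theta[-1]  # alpha, b

def predict(x, X_support, alpha, b, sigma=1.0):
    """Evaluate f(x) = sum_i alpha_i k(x, x_i) + b for one feature vector."""
    return rbf_kernel(x[None, :], X_support, sigma)[0] @ alpha + b
```

With a near-zero ridge and one coefficient per training sample, the fit essentially interpolates the expert scores; a practical deployment would substitute a proper SVM solver and cross-validated kernel parameters.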
After the trained target model is obtained in the model training stage, in practical application, at least one of the gray entropy feature, texture feature, noise feature, gradient feature, and divergence feature extracted in step S120, together with at least one of the device parameter feature and the object parameter feature obtained in step S130, is input into the trained target model to obtain a DR feature index representing the comprehensive level between the radiation dose of X-rays received by the target object and the image quality of the DR image; the DR feature index can be displayed by a display device or otherwise output. It is understood that the comprehensive level here refers to the combined level of the image quality of the DR image and the radiation dose of the X-rays received. When the user considers that the image quality is relatively good and meets the requirement, the DR feature index can be reduced so as to reduce the radiation dose of the received X-rays; when the image quality is relatively poor, the DR feature index can be increased so as to improve the image quality. Factors affecting the DR feature index include the image features, the device parameter features, and/or the object parameter features described herein, and the DR feature index is output by the trained model.
In some embodiments, after the DR feature index is obtained, prompt information for guiding adjustment of the emission dose of the X-rays may further be output according to the DR feature index. Since the DR feature index characterizes the comprehensive level between the radiation dose of X-rays received by the target object and the image quality of the DR image, whether the received radiation dose is suitable can be determined according to the DR feature index; when it is determined to be unsuitable, prompt information for guiding adjustment of the emission dose is output, so as to guide the user to increase or decrease the emission dose of the X-rays.
In some embodiments, prompt information indicating whether the DR feature index satisfies the requirement may further be output according to the relationship between the value of the DR feature index and a preset threshold. Specifically, the value of the DR feature index may be compared with the preset threshold; when the value falls outside the preset threshold range, it is determined that the DR feature index does not satisfy the requirement, and prompt information is output in the form of text, graphics, voice, etc., so as to prompt the user to adjust the imaging parameters in time.
In some embodiments, after the DR feature index is obtained, the image quality of the DR image may further be evaluated according to the DR feature index, and the evaluation result may be output. For example, in order to reflect the image quality of the DR image more intuitively, classification or grading may be performed based on the DR feature index to obtain a qualitative quality evaluation result or a semi-quantitative quality grading result, prompting the user as to the quality of the DR image.
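The threshold comparison and the qualitative grading described in the preceding paragraphs could be sketched as follows; the threshold range and the grade cut-offs are illustrative assumptions, as the embodiment leaves the preset threshold unspecified.

```python
def evaluate_dr_index(index, threshold_range=(0.4, 0.9)):
    """Compare the DR feature index with a preset threshold range and
    return a qualitative grade plus an optional prompt message.

    The range (0.4, 0.9) and the grade cut-offs are illustrative only.
    """
    lo, hi = threshold_range
    prompt = None
    if not (lo <= index <= hi):
        # Index outside the preset range: prompt the user to adjust parameters
        prompt = "DR feature index out of range: please adjust imaging parameters"
    # Semi-quantitative grading of image quality from the index value
    if index >= 0.8:
        grade = "excellent"
    elif index >= 0.6:
        grade = "good"
    elif index >= lo:
        grade = "acceptable"
    else:
        grade = "poor"
    return grade, prompt
```

The returned prompt could then be rendered as text, graphics, or voice by the display or output device.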
Based on the above description, the analysis method for DR imaging according to the embodiments of the present application obtains, according to the image features of the DR image and the device parameter features and/or the object parameter features, a DR feature index capable of objectively reflecting the comprehensive level between the radiation dose of X-rays received by the target object and the image quality of the DR image, thereby providing an objective judgment basis for the operator.
Another aspect of the embodiments of the present invention provides a DR imaging apparatus. Referring to FIG. 3, the DR imaging apparatus 300 comprises an X-ray generator 310, a detector 320, a processor 330, and a display 340, wherein the processor 330 is communicatively coupled to the X-ray generator 310, the detector 320, and the display 340. The X-ray generator 310 is configured to generate X-rays, emit the X-rays toward a target tissue site of a target subject, and control the X-rays to pass through the target tissue site; the detector 320 is configured to receive the X-rays after they pass through the target tissue site; the processor 330 is configured to process the X-rays after they pass through the target tissue site to obtain a digital radiography (DR) image; and the display 340 is configured to display the DR image.
Illustratively, the X-ray generator 310 comprises a bulb tube for emitting X-rays, and the detector 320 comprises a flat panel detector, which specifically comprises an upper layer of X-ray conversion medium and a lower layer of detection unit arrays. The X-ray conversion medium converts the X-rays into positive and negative charges, which move in the form of electric currents along the electric field under a bias voltage, are collected by the array of detection units, and are converted into electrical signals. The detector 320 includes, but is not limited to, amorphous silicon, amorphous selenium, CCD, etc. When acquiring images with the DR imaging apparatus 300, the positions of the X-ray generator 310 and the detector 320 need to be fixed in a designated manner, so that the X-rays emitted from the X-ray generator 310 irradiate the detector 320 after being transmitted through the target tissue, and the detector 320 can thus acquire the X-rays after they pass through the target tissue site to obtain a DR image.
The processor 330 is configured to process the X-rays after they pass through the target tissue site to obtain a digital radiography (DR) image. The processor 330 is further configured to perform the analysis method 100 of DR imaging described above to obtain a DR feature index reflecting the comprehensive level between the radiation dose of X-rays received by the target object and the image quality of the DR image, and the display 340 is configured to display the DR feature index. Specific details of the analysis method 100 of DR imaging can be found above. The DR imaging apparatus 300 provided by the embodiments of the present application obtains, according to the image features of the DR image and the device parameter features and/or the object parameter features, a DR feature index capable of objectively reflecting the comprehensive level between the radiation dose of X-rays received by the target object and the image quality of the DR image, thereby providing an objective judgment basis for the operator.
Referring to FIG. 4, an embodiment of the present invention further provides an electronic device 400, which may be used to implement the above-described analysis method 100 for DR imaging. The electronic device 400 may be a DR imaging device, for example, a flat panel detector, an automatic exposure controller, a mobile DR device, or a fixed DR device, etc., and may also be another computer or terminal device, for example, a mobile phone, a computer, a palmtop computer, etc., which is not specifically limited herein.
The electronic device 400 includes a memory 410 and a processor 420. The memory 410 stores a computer program to be executed by the processor 420, and the computer program, when executed by the processor 420, performs the steps of the analysis method 100 for DR imaging; details thereof are described above and are not repeated herein.
Wherein the processor 420 may be implemented in software, hardware, firmware, or any combination thereof, may use circuitry, single or multiple application specific integrated circuits, single or multiple general purpose integrated circuits, single or multiple microprocessors, single or multiple programmable logic devices, or any combination of the foregoing circuits and/or devices, or other suitable circuits or devices, and the processor 420 may control other components in the electronic device 400 to perform the desired functions. Memory 410 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory and/or cache memory, etc. The non-volatile memory may include, for example, read-only memory, hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer readable storage medium that can be executed by the processor 420 to implement the DR image analysis method and/or other various desired functions of the present invention. Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer readable storage medium.
Furthermore, according to an embodiment of the present invention, there is also provided a computer storage medium on which program instructions are stored, which program instructions, when executed by a computer or processor, are adapted to carry out the respective steps of the analysis method of DR imaging of any embodiment of the present invention. In some embodiments, the computer storage medium is a non-volatile computer readable storage medium that may include a storage program area that may store an operating system, application programs required for at least one function, data created during execution of program instructions, and the like. Further, the non-volatile computer-readable storage medium may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the non-transitory computer readable storage medium optionally includes memory remotely located relative to the processor.
By way of example, the computer storage media may comprise, for example, a hard disk of a personal computer, a memory component of a tablet computer, a read-only memory (ROM), an erasable programmable read-only memory (EPROM), a portable compact disc read-only memory (CD-ROM), a USB memory, a memory card of a smart phone, or any combination of the foregoing. The computer-readable storage medium may be any combination of one or more computer-readable storage media.
Furthermore, according to an embodiment of the present invention, there is also provided a computer program, which may be stored on a cloud or local storage medium. Which when executed by a computer or processor is adapted to carry out the respective steps of the analysis method of DR imaging of an embodiment of the invention.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the above illustrative embodiments are merely illustrative and are not intended to limit the scope of the present application thereto. Various changes and modifications may be made therein by one of ordinary skill in the art without departing from the scope and spirit of the application. All such changes and modifications are intended to be included within the scope of the present application as set forth in the appended claims.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the several embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described device embodiments are merely illustrative, e.g., the division of the elements is merely a logical functional division, and there may be additional divisions when actually implemented, e.g., multiple elements or components may be combined or integrated into another device, or some features may be omitted or not performed.
In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the application may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in order to streamline the application and aid in understanding one or more of the various inventive aspects, various features of the application are sometimes grouped together in a single embodiment, figure, or description thereof in the description of exemplary embodiments of the application. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed application requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this application.
It will be understood by those skilled in the art that all of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or units of any method or apparatus so disclosed, may be combined in any combination, except combinations where the features are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features but not others included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
Various component embodiments of the application may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that some or all of the functions of some of the modules in an analysis device according to embodiments of the present application may be implemented in practice using a microprocessor or digital signal processor (DSP). The present application can also be implemented as an apparatus program (e.g., a computer program and a computer program product) for performing a portion or all of the methods described herein. Such a program embodying the present application may be stored on a computer-readable medium, or may take the form of one or more signals. Such signals may be downloaded from an internet website, provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not denote any order; these words may be interpreted as names.
The foregoing description is merely illustrative of specific embodiments of the present application, and the scope of the present application is not limited thereto; any person skilled in the art can readily conceive of variations or substitutions within the scope of the present application. The protection scope of the application is subject to the protection scope of the claims.