
HK1129551A - Method and system of computer-aided quantitative and qualitative analysis of medical images - Google Patents


Info

Publication number
HK1129551A
Authority
HK
Hong Kong
Prior art keywords
image data
features
evaluation
medical image
lesion
Prior art date
Application number
HK09107041.5A
Other languages
Chinese (zh)
Inventor
杰弗里.柯林斯
弗雷德里克.拉赫曼
卡伦.萨加特尔扬
桑德拉.斯特普尔顿
Original Assignee
美的派特恩公司
Application filed by 美的派特恩公司
Publication of HK1129551A


Description

Method and system for computer-aided qualitative and quantitative analysis of medical images
Technical Field
The present invention relates generally to the field of computer-aided analysis of medical images and detection of suspicious abnormalities. In particular, the present invention relates to a method and system for processing medical images obtained from multiple modalities, including analysis of kinetic and morphological features and automatic detection of abnormalities in medical images from multiple modalities.
Background
Magnetic Resonance Imaging (MRI) has emerged as a powerful tool for imaging breast abnormalities. In general, MRI provides better characterization of breast lesions (lesions) than traditional imaging modalities due to the rich soft tissue contrast, thin cross-section, and multi-planar capability.
Traditionally, lesion morphology is analyzed and classified to distinguish benign lesions from possibly cancerous tumors. For example, the American College of Radiology (ACR) has developed over the years a lexicon of features (a data dictionary system) that is used together with the Breast Imaging Reporting and Data System (BI-RADS). The BI-RADS MRI lexicon indicates that the following morphological features may be associated with benign lesions:
shape: round, oval, or lobular
margin: smooth
mass enhancement: homogeneous, non-enhancing, or with non-enhancing internal septations
On the other hand, the BI-RADS MRI lexicon indicates that the following features may indicate a likelihood of malignancy:
shape: irregular
margin: spiculated
mass enhancement: heterogeneous, rim-enhancing, or with ductal extension
Recently, increased attention has been focused on contrast-enhanced MRI of breast lesions. Before or during the examination, a contrast agent is injected into a vein in the patient's arm. Typically, a gadolinium-based contrast agent (e.g., Gd-DTPA) is used. The use of contrast agents tends to provide greater contrast between normal and abnormal tissue. The contrast enhancement stems from the fact that the growth and metabolic potential of a tumor is directly related to the extent of surrounding vascular proliferation. A tumor growing beyond a few millimeters in diameter must form blood vessels that provide the oxygen and nutrients necessary for survival. These new vessels proliferate in a disorganized pattern and are of poor quality, so they leak and cause blood to pool around the tumor. Analysis of the signal from the diffusible contrast agent aids in the detection and characterization of suspected abnormalities in the breast.
Quantitative studies of signal intensity over time (kinetic curves), that is, of enhancement and how the enhancement level changes over time (e.g., uptake and washout behavior), suggest that malignant lesions tend to enhance rapidly, reaching peak enhancement between one and three minutes after injection. Benign lesions enhance more slowly, with peak enhancement occurring only after several minutes.
The shape of the kinetic curve can also be a good indicator of whether a lesion is malignant. It has been found that kinetic curves describing benign lesions tend to be straight or slightly curved (type I). In the curved variant, the time-signal intensity continues to increase, but growth slows and the curve flattens in the late post-contrast period (due to saturation effects). Kinetic curves suggestive or indicative of malignancy, by contrast, show a plateau or a washout segment. The plateau type (type II) shows an initial rise followed by a sharp cutoff of enhancement, with the signal intensity leveling off in the middle and late post-contrast periods. The washout type (type III) shows an initial rise followed by a sharp cutoff of enhancement, with the signal intensity decreasing (washing out) in the middle post-contrast period (2-3 minutes after contrast injection).
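The three curve types described above lend themselves to a simple rule-based classification. The following Python sketch is illustrative only: the 10% late-phase change threshold and the 2-minute split between early and late phases are assumed values, not parameters taken from this patent.

```python
# Hypothetical sketch of the type I/II/III kinetic-curve classification.
# Thresholds (10% late-phase change, 2-minute split) are illustrative
# assumptions, not values from the patent.

def classify_kinetic_curve(times_min, intensities):
    """Classify a time-signal intensity curve as type I, II, or III."""
    # Late post-contrast phase: samples at or after the 2-minute mark.
    late = [s for t, s in zip(times_min, intensities) if t >= 2.0]
    peak = max(intensities)
    late_change = (late[-1] - late[0]) / peak  # relative late-phase change
    if late_change > 0.10:
        return "I"    # persistent enhancement: benign-leaning
    if late_change < -0.10:
        return "III"  # washout: suspicious for malignancy
    return "II"       # plateau: indeterminate/suspicious

# Example: early rapid rise followed by washout -> type III
times = [0.0, 1.0, 2.0, 3.0, 4.0]
signal = [100, 260, 300, 270, 240]
print(classify_kinetic_curve(times, signal))  # prints "III"
```

In practice such a classifier would operate on motion-corrected, baseline-normalized signal curves; the sketch assumes clean sampled values.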
However, while contrast-enhanced MRI methods have achieved high levels of sensitivity (94% to 100%), they provide only a limited level of specificity (40% to 95%). Here, sensitivity refers to true positive detection, while specificity refers to false positive reduction. The low level of specificity results from the enhancement not only of malignant lesions but also of benign ones, leading to many unnecessary biopsies. Thus, the mere occurrence of enhancement cannot be used to distinguish benign lesions from malignant lesions.
Benign lesions are considered to result from distortions of normal processes. For example, fibrocystic lesions are the most common benign disorder (40% to 50%), fibroadenomas are the most common tumors in young women, and intraductal papillomas are less dangerous lesions. Other benign lesions include radial scars (sclerosing lesions), which are star-shaped lesions mimicking cancer, phyllodes tumors, and ductal hyperplasia.
Contrast MRI studies of the breast have demonstrated enhancement not only of malignant lesions but also of many benign lesions, including fibroadenomas, fibrocystic changes, and radial scars. Moreover, some malignant lesions, such as invasive ductal carcinoma (IDC), invasive lobular carcinoma (ILC), or ductal carcinoma in situ (DCIS), do not enhance rapidly, yet the lesion morphology indicates the presence of a malignant tumor. It is believed that the mere presence of contrast enhancement cannot be used to distinguish benign lesions from malignant ones.
Recently, attention has turned to magnetic resonance spectroscopy ("MRS"), a newer technique for cancer diagnosis. MRS is a particular type of magnetic resonance detection that provides chemical information by determining the concentration or intensity of various marker chemicals, such as choline, in a suspected tumor. The amount or concentration of a marker chemical is believed to provide information about the disease process in the area under examination.
Generally, the signal obtained by MRS does not produce a scanned image; instead, spectral information about various chemicals is generated. More recently, spectral data have become available from well-localized regions, allowing the biochemical information obtained by MRS to be evaluated in relation to a localized area. However, correlating spectral data with scan images is often a difficult task in a clinical setting.
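As an illustration of the kind of marker-chemical comparison described here, the sketch below computes a choline-to-creatine peak-intensity ratio for a localized voxel. The dictionary layout, the peak values, and the reduction of the spectrum to a single ratio are simplifying assumptions for the example, not details from the patent.

```python
# Illustrative sketch: relative intensity of two marker chemicals in a
# localized MRS voxel. Peak values are invented for the example.

def choline_creatine_ratio(spectrum):
    """Return the choline/creatine peak-intensity ratio from a dict of
    marker-chemical peak intensities (arbitrary units)."""
    return spectrum["choline"] / spectrum["creatine"]

voxel = {"choline": 4.2, "creatine": 2.0, "lipid": 9.5}
ratio = choline_creatine_ratio(voxel)
# An elevated choline signal relative to creatine is the sort of finding
# described as relevant to characterizing a suspected tumor.
print(round(ratio, 2))  # prints 2.1
```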
The foregoing presents a problem: to develop systems and methods for analyzing medical images that distinguish malignant from benign lesions and that fit clinical needs. It is an object of the present invention to mitigate or obviate at least one of the above disadvantages.
Disclosure of Invention
The present invention combines qualitative and quantitative features to achieve optimal discrimination of suspected abnormalities, such as imaged breast lesions. Images and data from multiple modalities are processed and analyzed to extract quantitative and qualitative information. The quantitative information can include kinetic information and biochemical information. Kinetic features can be extracted from a time series of image data, such as MRI image data. Biochemical information can be extracted from spectral analysis of MRS data. Morphological features can be extracted from MRI images, ultrasound images, x-ray images, or images of other modalities. Computer applications are provided for extracting quantitative and qualitative features from medical images and data, and for combining the results of the quantitative and qualitative analyses to produce a composite result. The analysis of time-course kinetics may be performed before or after the evaluation of lesion morphology in post-contrast images. Optionally, the results of the first analysis performed are evaluated before proceeding to the next. In such cases, if the results of the first analysis (e.g., kinetic analysis) are clearly indicative, the next analysis (e.g., morphological analysis) is not performed. If the results of the analysis from one mode (e.g., kinetics) are inconclusive or suggest a benign lesion, further analysis (e.g., morphology) is performed.
In one aspect of the invention, a method of analyzing a plurality of medical image data of a region in an anatomy and detecting an abnormality in the region is provided. At least one set of the plurality of medical image data contains temporal information responsive to the administration of a contrast agent. The method comprises the following steps: obtaining the plurality of medical image data; identifying, from the plurality of medical image data, a set of data points representing a possible lesion in the region; extracting features associated with the set of data points from the plurality of medical image data, the features including at least two of a set of morphological features, a set of kinetic features of the temporal information, and a set of biochemical features; calculating an initial diagnostic evaluation of the possible lesion from the at least two sets of features; and providing the initial evaluation to the user for evaluation. The evaluation is calculated by incorporating the at least two sets of features in the evaluation process.
In a feature of this aspect of the invention, the method comprises the further steps of: receiving from a user corrections to the at least two sets of features; calculating a revised evaluation; and providing the revised evaluation to the user for further evaluation. The revised evaluation is calculated by incorporating the corrections in the calculation.
In another feature of this aspect of the invention, the kinetic features are extracted from a contrast variation curve corresponding to time-dependent local contrast variation in a subset of the set of data points. In another feature, the kinetic features include a classification of the contrast variation curve into one of continuously enhancing, plateau, and washout types.
In another feature of this aspect of the invention, the biochemical features are extracted from a spectral analysis of MRS data for a subset of the set of data points. In another feature, the biochemical features include at least a concentration distribution of a marker chemical or the relative intensities of two or more marker chemicals obtained from the spectral analysis.
In another aspect, the present invention provides a system for analyzing a plurality of medical image data of a region in an anatomy. At least one set of the plurality of medical image data contains temporal information responsive to the administration of a contrast agent. The system comprises: an image data module for retrieving the plurality of medical image data; a morphology module for identifying possible lesions in the medical image data and extracting and classifying morphological features associated with the possible lesions; a kinetics module; a spectral analysis module; an integrated decision engine; and a graphical user interface for displaying at least a portion of the plurality of medical image data and an initial diagnostic evaluation for user evaluation and modification. The kinetics module extracts kinetic features of the temporal information associated with the possible lesion from the plurality of medical image data, the spectral analysis module extracts biochemical features associated with one or more marker chemicals from the plurality of medical image data, and the integrated decision engine receives the morphological features from the morphology module, the kinetic features from the kinetics module, and the biochemical features from the spectral analysis module, and computes an initial diagnostic evaluation of the possible lesion from the morphological, kinetic, and biochemical features.
In features of this aspect of the invention, the system further comprises: a morphology decision engine for deriving a morphological evaluation from the morphological features; a kinetics decision engine for deriving a kinetic evaluation from the kinetic features; and a spectral analysis decision engine for deriving a spectral evaluation from the biochemical features. The integrated decision engine correlates and combines the morphological, kinetic, and spectral evaluations in its calculation.
In another feature of this aspect of the present invention, the system further includes an annotation module for receiving, via the graphical user interface, a modification to at least one of the morphological, kinetic, and biochemical features. Once received, the modification is provided to the integrated decision engine, which in turn computes a revised diagnostic evaluation.
In another feature of this aspect of the invention, the system further comprises: a risk profile module for retrieving patient risk information from a database; and a patient history module for retrieving patient history information. The calculated evaluation incorporates the patient risk information and patient history information.
In another aspect of the present invention, a method of acquiring and analyzing MRS medical image data from a region in a patient's anatomy is provided. The method comprises the following steps: obtaining a plurality of medical image data of the region; identifying, from the plurality of medical image data, a set of data points representing a possible lesion in the region; extracting features associated with the possible lesion from the plurality of medical image data; calculating an initial diagnostic evaluation of the possible lesion from the features; and, once the initial diagnostic evaluation meets a preselected criterion, completing the steps of: acquiring MRS medical image data from a candidate region including the possible lesion; extracting biochemical features from the MRS medical image data; recalculating a comprehensive evaluation of the possible lesion with the biochemical features incorporated; and providing the comprehensive evaluation to the user for evaluation and correction.
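The conditional MRS acquisition described in this aspect can be sketched as follows. The suspicion scale, the 0.5 criterion, and the weighting used in the recalculation are illustrative assumptions, not values specified by the patent.

```python
# Hedged sketch of the conditional MRS-acquisition workflow: MRS data are
# only acquired and folded into the assessment when the initial
# image-based evaluation meets a preselected criterion.

def assess_with_conditional_mrs(initial_score, acquire_mrs, mrs_weight=0.4,
                                criterion=0.5):
    """initial_score: suspicion in [0, 1] from imaging features.
    acquire_mrs: callable simulating MRS acquisition plus biochemical scoring."""
    if initial_score < criterion:
        return initial_score            # criterion not met: no MRS acquired
    mrs_score = acquire_mrs()           # acquire MRS over the candidate region
    # Recompute a comprehensive assessment combining both sources.
    return (1 - mrs_weight) * initial_score + mrs_weight * mrs_score

# Suspicious initial evaluation triggers the MRS step.
print(round(assess_with_conditional_mrs(0.7, lambda: 0.9), 2))  # prints 0.78
```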
In another aspect of the invention, a system for analyzing medical image data of a region of an anatomy is provided, the medical image data being acquired from a plurality of modalities. The system comprises: an image data module for receiving the medical image data; a plurality of image processing modules; a plurality of modality decision engines; an integrated decision engine that combines the modality evaluations and calculates an initial diagnostic evaluation of a possible lesion from them; and a graphical user interface for displaying at least a portion of the medical image data and the initial diagnostic evaluation for user evaluation and modification. Each of the plurality of image processing modules identifies a possible lesion in the medical image data and extracts and classifies a set of modality features associated with the possible lesion. The set of modality features associated with a modality is submitted to the corresponding modality decision engine, which calculates a modality evaluation of the possible lesion. The modality evaluations computed by the modality decision engines are combined by the integrated decision engine in its calculation.
In other aspects, the invention provides various combinations and sub-combinations of the various aspects described above.
Drawings
The foregoing and other aspects of the invention are explained in more detail, for purposes of illustration and not of limitation, with reference to the accompanying drawings, wherein:
FIG. 1 is a schematic diagram illustrating a computer-aided detection (CAD) system;
FIG. 2 is a block diagram of the primary functional components of a CAD application of the CAD system shown in FIG. 1;
FIG. 3 is a flowchart showing process steps for qualitative and quantitative analysis of medical image data by the CAD application shown in FIG. 2;
FIG. 3A illustrates another process for MRS data and ultrasound image analysis by a CAD application;
FIG. 3B is a flow chart illustrating an alternative process performed by the CAD application shown in FIG. 2, employing results from one modality as input to another modality;
FIG. 4 illustrates in detail a portion of the process shown in FIG. 3;
FIG. 5 schematically illustrates a time sequence of medical images and corresponding contrast variation curves;
FIG. 6 illustrates the general behavior of the contrast variation curve that can be expected;
FIG. 7 is a flow chart illustrating a portion of the process shown in FIG. 3 for constructing the contrast profiles shown in FIGS. 5 and 6;
FIG. 8 is a flow chart illustrating a portion of the process shown in FIG. 3 for generating integrated results combining morphological and kinetic features;
FIG. 9 schematically illustrates an exemplary screen display providing a user with a side-by-side comparison of analyzed images of two modalities and integrated results; and
FIG. 10 illustrates a process modified from that shown in FIG. 3 for processing images of the same modality taken at different times.
Detailed Description
The present invention relates generally to the field of computer-aided analysis of medical images and detection of suspicious abnormalities. In particular, the invention relates to a method and system for processing medical images obtained from multiple modalities, including analysis of kinetic and morphological features.
The present invention combines data from multiple modalities, including kinetic (quantitative), morphological (qualitative), and biochemical (quantitative) information, to achieve the best possible discrimination of imaged suspected abnormalities, such as imaged breast lesions. Morphological features of a lesion are generally those related to its size, shape, signal distribution within the mass, or boundary characteristics. They include features such as whether the lesion is a mass with a round, oval, or lobular shape; a mass with smooth, irregular, or spiculated boundaries; or a mass with homogeneous, rim (peripheral), or ductal enhancement. Morphological features can be extracted from MRI, ultrasound, or x-ray images, or other morphological image data. Kinetic features relate to the temporal signal behavior of the imaged lesion in a time series of images or image data. For MRI data, kinetic features generally refer to, but are not limited to, the time-dependent contrast enhancement of a region within the scanned anatomical volume after administration of a contrast agent. The kinetic curve can be type I (continuously increasing), type II (plateau), or type III (washout). Biochemical information can be obtained by analyzing MRS data, i.e., spectral information, to determine the presence and relative concentrations of marker chemicals (e.g., choline, creatine, or 31P) within a single voxel or within several voxels. This information is considered relevant to cancer diagnosis. A computer application is provided for extracting morphological, kinetic, and biochemical information from medical image data and for combining the results of qualitative and quantitative analyses of medical image data from multiple modalities into a composite result.
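As a small illustration of a morphological shape feature of the kind listed above, the sketch below computes circularity, a standard measure that distinguishes round masses from irregular or spiculated ones. Its use here as a stand-in for the patent's shape classification is an assumption.

```python
# Minimal sketch of one morphological shape feature: circularity,
# 4*pi*area/perimeter^2, computed from a segmented lesion's area and
# perimeter. A perfect circle scores 1.0; irregular or spiculated
# boundaries score lower.
import math

def circularity(area, perimeter):
    """1.0 for a perfect circle; lower for irregular shapes."""
    return 4.0 * math.pi * area / (perimeter ** 2)

# A circle of radius 10: area pi*100, perimeter 2*pi*10 -> circularity 1.0
print(round(circularity(math.pi * 100, 2 * math.pi * 10), 3))  # prints 1.0
```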
While a diagnostic evaluation can be derived from the results of kinetic, morphological, or biochemical (i.e., spectroscopic) analysis of single-modality image data alone, combining results from multiple modalities tends to improve the confidence level of the evaluation: a comprehensive evaluation is generally derived from a larger data set and thus tends to be more statistically reliable. For example, the analysis of time-course kinetics can be performed before or after the assessment of lesion morphology in post-contrast images. Alternatively, the results of the first analysis performed can be evaluated before proceeding to the next. If the results of the first analysis (e.g., kinetic analysis) are clearly indicative, the next analysis (e.g., morphological or spectroscopic analysis) may not be necessary. On the other hand, if the results from one mode (e.g., kinetics) are ambiguous or suggest a benign lesion, further analysis (e.g., morphology) may be worthwhile. The results from one analysis may also be used as input to the analysis of another mode. For example, the results of the kinetic analysis typically include an identification of the lesion, which may be used to facilitate the segmentation step of the morphological analysis.
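The staged, result-gated ordering of analyses described above can be sketched as follows. The result labels and the stop-on-clearly-indicative rule are illustrative assumptions about how such gating might be coded.

```python
# Sketch of staged analysis: run the first analysis (e.g., kinetics) and
# only fall through to the next (e.g., morphology, then spectroscopy)
# when the earlier result is inconclusive or benign-leaning.

def staged_assessment(analyses):
    """analyses: ordered (name, callable) pairs; each callable returns
    'malignant', 'benign', or 'inconclusive'."""
    result = "inconclusive"
    for name, analysis in analyses:
        result = analysis()
        if result == "malignant":  # clearly indicative: skip later stages
            return name, result
    return name, result            # last stage's verdict stands

# Kinetics is ambiguous, so morphology runs and is clearly indicative.
stage, verdict = staged_assessment([
    ("kinetic", lambda: "inconclusive"),
    ("morphology", lambda: "malignant"),
    ("spectroscopy", lambda: "benign"),
])
print(stage, verdict)  # prints "morphology malignant"
```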
Fig. 1 shows a computer-aided detection (CAD) system 100. The CAD system 100 processes and analyzes images and data obtained from multiple modalities, performing kinetic, morphological, and spectral analyses to provide diagnostic evaluations based on the extracted kinetic, morphological, and spectral features. CAD system 100 has a medical imaging device 102, which a user operates to acquire medical images and data by scanning or imaging a patient. Different imaging modalities may be configured for CAD system 100. For example, the medical image may be an ultrasound image, an x-ray image, an MRI image, a computed tomography (CT) image, a positron emission tomography (PET) image, a PET/CT image, a nuclear medicine image, MRS data, or any image or data from a suitable acquisition device.
Image data acquired by the medical imaging device 102 is provided to the computer 104 for processing. Although only a single computer is shown in FIG. 1, the computer 104 may be any general-purpose or special-purpose computer. It may also be an embedded system, such as part of an image acquisition system that includes the medical imaging device 102.
A computer program 106, i.e., the application software implementing the CAD system functions, is stored on computer 104. CAD application 106 has a number of components, with one dedicated component for each modality. For example, ultrasound subsystem 108 corresponds to the ultrasound modality and is dedicated to retrieving, processing, and analyzing ultrasound image data. Similarly, the CT subsystem 110 is dedicated to processing and analyzing CT image data. Corresponding to MRI image data, there is an MRI subsystem 112; corresponding to MRS spectral data, there is an MRS subsystem 113.
CAD application 106 has an integrated decision engine 114. The integrated decision engine 114 receives as its input the results from these modalities, i.e., from the ultrasound subsystem 108, the CT subsystem 110, the MRI subsystem 112, and the MRS subsystem 113, and computes an integrated evaluation combining the results from each of these modalities. CAD application 106 can use rules built into the application or stored in database 116 for making integrated decisions. These rules may be derived from sample images containing benign and malignant lesions, constructed from statistical models, or established by any other suitable methodology.
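One simple way such a decision engine might combine per-modality results with rules is a weighted score, sketched below. The weights, the [0, 1] suspicion scale, and the modality names are invented for illustration, since the patent leaves the source of the rules open (sample-derived, statistical model, or otherwise).

```python
# Hypothetical sketch of combining per-modality evaluations into one
# integrated evaluation via a weighted average of suspicion scores.

def integrated_score(modality_scores, weights):
    """Weighted combination of per-modality suspicion scores in [0, 1].
    Only modalities present in modality_scores contribute."""
    total_weight = sum(weights[m] for m in modality_scores)
    return sum(modality_scores[m] * weights[m]
               for m in modality_scores) / total_weight

scores = {"mri_kinetic": 0.8, "mri_morphology": 0.6, "mrs": 0.7}
weights = {"mri_kinetic": 2.0, "mri_morphology": 1.0,
           "mrs": 1.0, "ultrasound": 1.0}
print(round(integrated_score(scores, weights), 3))  # prints 0.725
```

A rule table or statistical classifier could replace the weighted average without changing the surrounding interface.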
A workstation 118 is provided. The user interface 120 provided by the workstation 118 permits a user of the system 100 to view medical images, manipulate them, and interact with the system to process them. The user interface 120 includes a display 122, which may be a display screen, an image projector, or any other suitable display device capable of visually presenting medical images and graphical and textual content to a user.
Workstation 118 displays the image data and the results generated by CAD application 106 to a user to facilitate diagnostic analysis of the images. For example, images from each modality and features extracted from those images may be displayed to the user, side by side on the same display, to make it more convenient for the user to make a diagnosis. Lesions identified in these medical images and the extracted features may also be highlighted. Additionally, any results automatically detected by the system may be presented in a format consistent with medical standards. The preliminary evaluations automatically calculated by the system may also be displayed to the user for confirmation or correction.
The user interface 120 also includes an input device 124 to enable a user to interact with the system and identify to the system specific regions of interest in the displayed medical images. The input device 124 may include a keyboard, for example, for any text input by the user. A speech recognition module may provide speech-to-text transcription so that the user can verbally enter a textual description of the imaged object, enter other text without having to type it, or issue program commands. The input device may also comprise a mouse or another pointing device for the user to identify a particular pixel or region of a medical image to the system. The display 122 and input device 124 may be physically incorporated into a single hardware unit, such as a touch screen capable of displaying graphical and textual output and receiving user input. The user interface 120 may also include a remote user interface, such as a remote terminal or web browser 126, for sharing with other radiologists or physicians over a telecommunications network 128. The telecommunications network 128 may be implemented using a direct cable connection, a local area network (LAN), or the Internet. The remote user interface allows a physician to remotely review the images taken from the patient by an operator and to correct in real time the results automatically generated by the system 100. The physician can make a diagnosis through the remote user interface whether in a room or at a workstation 118 next to the medical imaging device 102 or in an office thousands of kilometers away.
The system 100 also includes a number of output peripherals 130 so that a user can reproduce or record the results of an analysis session or other output of the system. For example, the output peripherals may include a film-based or paper-based printer 132. A film-based printer may be used to transfer medical images, whether original or processed, to film for users of more traditional display devices requiring filmed images. A paper-based printer may be used to generate hardcopy reports for sharing with other physicians or for archival purposes. In addition, the output peripherals 130 may include a DICOM-compatible device 134 for converting or storing processed results, i.e., composite images produced by the system along with associated reports.
System 100 has access to an image archiving server 136. Image archiving server 136 may be part of system 100, or it may be provided by an external provider, such as a hospital information system. Image archiving server 136 has a server database 138 for storing archived images 140. When CAD application 106 requests archived images 140 from image archiving server 136, the server retrieves the requested images from server database 138 and transmits them to CAD application 106. The archived images are images previously acquired by a medical imaging device. They may be images of any supported modality, such as MRI, CT, or PET. The archived image data can also be images combining different modalities, such as digital tomosynthesis image data. The archived images 140 need not be of the same modality as the medical imaging device 102 currently connected to the computer 104. For example, the computer may be connected to an ultrasound imaging device, while image archiving server 136 contains images previously acquired from a CT or MRI imaging device. Moreover, although only one image archiving server 136 is shown in FIG. 1, there may be multiple image archiving servers connected to computer 104. In addition, an image archiving server 136 need not have only one database; it may have access to multiple databases, which may be physically located at different sites.
Data associated with or generated by the system is typically stored with the archived images 140. For example, an archived image may be stored with annotations made on the image by a physician during a previous analysis, or with diagnostic data. Preferably, image archiving server 136 supports archiving DICOM-compatible images, as well as images in other formats such as JPEG and BITMAP. Annotations, comments, and the results of all image processing can be archived as part of a DICOM-compliant file. Audit information, such as a user ID, the date or time stamp of the processed image, and user additions or modifications to the detected features, can also be recorded whenever a processed image is archived.
FIG. 2 is a block diagram of the primary functional components of CAD application 106 in accordance with one embodiment. As shown in fig. 2, CAD application 106 has an image data module 202, a processing module 204, and a modality decision engine 206 for retrieving and analyzing image data. As described in detail below, the image data module 202 retrieves image data from the medical imaging device 102 or the image archiving server 136 and pre-processes it to extract images or other data for further processing. The images retrieved and pre-processed by the image data module 202 are submitted to the processing module 204, which extracts information related to the diagnosed disease from the pre-processed image data. For example, this module may identify suspected lesions in the images and extract from the images those features, associated with the suspected lesions, that are deemed relevant to diagnosing the disease. The modality decision engine 206 classifies a lesion based on the extracted information and calculates an evaluation of the lesion from it. Such an evaluation can be calculated, for example, based on a pre-established set of rules or using a pre-selected algorithm.
CAD application 106 is modular, with each of image data module 202, processing module 204, and modality decision engine 206 having components for the supported modalities. For example, modality decision engine 206 has as its ultrasound component an ultrasound decision engine 208, as its MRS component an MRS decision engine (not shown), and as its MRI components an MRI morphology decision engine 210 and an MRI kinetics decision engine 212. When image or scan data acquired from a particular modality are processed by CAD application 106, they are handled by the corresponding modality components of image data module 202, processing module 204, and modality decision engine 206. The modality-specific components of the image data module 202, the processing module 204, and the modality decision engine 206 form a subsystem for that modality. For example, the ultrasound components of the image data module 202, the processing module 204, and the modality decision engine 206 form the ultrasound subsystem 108. To process an image or data of another modality, a corresponding set of components is added to each of image data module 202, processing module 204, and modality decision engine 206 without altering the overall architecture of CAD application 106. Each modality requires its own components because, in general, image data obtained from one modality has certain unique aspects not found in other modalities. For example, certain ultrasound image features, such as echo patterns, are not typically present in an x-ray image. Similarly, spectral processing is generally specific to the MRS modality.
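The per-modality subsystem structure described above can be sketched as a small registry, so that adding a modality means registering one more subsystem rather than changing the overall structure. The class and function names here are assumptions for illustration, not identifiers from the patent.

```python
# Sketch of the modular per-modality architecture: each modality supplies
# its own data-handling, feature-extraction, and decision roles, and is
# registered without changing the overall pipeline.

class ModalitySubsystem:
    def __init__(self, name, preprocess, extract_features, evaluate):
        self.name = name
        self.preprocess = preprocess              # image data module role
        self.extract_features = extract_features  # processing module role
        self.evaluate = evaluate                  # modality decision engine role

    def run(self, raw_data):
        return self.evaluate(self.extract_features(self.preprocess(raw_data)))

SUBSYSTEMS = {}

def register(subsystem):
    SUBSYSTEMS[subsystem.name] = subsystem

# Toy ultrasound subsystem: normalization, a trivial "feature", a trivial rule.
register(ModalitySubsystem(
    "ultrasound",
    preprocess=lambda d: [x / max(d) for x in d],
    extract_features=lambda d: {"n_samples": len(d)},
    evaluate=lambda f: "ok" if f["n_samples"] > 0 else "empty",
))
print(SUBSYSTEMS["ultrasound"].run([1, 2, 3]))  # prints "ok"
```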
Fig. 2 shows that the MRI modality has two components in each of the processing module 204 and the modality decision engine 206: one for processing and extracting morphological features related to lesions imaged in an MRI scan, and the other for processing and extracting kinetic, i.e., temporal, features related to a time series of MRI scans.
CAD application 106 has a comprehensive decision engine 114. The comprehensive decision engine 114 combines all results obtained from each modality, along with patient data, to compute a composite score for lesions identified by the individual modalities. Patient data may include, for example, the patient's risk of illness or the patient's medical history, or both. A risk of illness module 214 is also provided. The risk of illness module 214 extracts risk of illness information from the database 116, processes it and submits the results to the comprehensive decision engine 114. Risk of illness information may include the presence of a particular gene, for example a breast cancer susceptibility gene such as BRCA-1. A patient history module 216 is also provided. The patient history module 216 extracts information related to the patient's medical history, processes the medical history information and provides the processed information to the comprehensive decision engine 114. Patient history may include a family history of breast cancer and the diagnosis and treatment of previous cancers. The patient history information may also include information relating to images of the same lesion taken during a previous clinical consultation, e.g., several months earlier. The patient history module 216 can use the information about previously taken images and direct the image data module 202 to retrieve them for comparison with the currently processed image.
The comprehensive decision engine 114 has a plurality of individual components. These components include a classification module 218, a lesion type module 220, a lesion extent module 222, and a stage assessment module 224. The same lesion can generally be seen in multiple modalities. Each of the modules 218, 220, 222, 224 may include components for processing image data from each modality. A composite image can be generated and displayed to show results from the multiple modalities. For example, the results of the MRS modality can be overlaid onto and displayed with an image from one of the imaging modalities. The comprehensive decision engine 114 correlates the analysis results of the lesions observed in the images from the multiple modalities, including biochemical information about the chemical composition of the tumor obtained by multi-voxel or single-voxel MRS analysis, to produce a comprehensive result.
For example, in one embodiment, classification module 218 combines the results from all modalities to provide a possible classification of the lesion. For example, local morphological features detected by all modalities, such as local short spiculation, local branch topography and local duct extension, can be combined and compared to a predefined list of features to classify the lesion as ACR BI-RADS category 5 or ACR BI-RADS category 4a. Similarly, the lesion type module 220 combines the results from all modalities to derive a likely type of lesion, such as DCIS or CA. The lesion extent module 222 combines the results from all modalities to estimate the size and outline the geometry of the lesion. The stage assessment module 224 takes as input the results from all modalities and the overall classification, type and extent, as well as the patient risk and patient history information, to calculate or generate a suggested assessment of the stage of the lesion. The combined results, including the classification, type and extent of the lesion and the suggested diagnostic assessment of the lesion stage, are displayed to the user via the user interface 120.
It should be understood that other embodiments are possible. For example, a system may have an ultrasound subsystem for processing ultrasound images, i.e., a classification module, a lesion type module, a lesion extent module, and a stage assessment module dedicated to ultrasound images. The same system may have an MRI subsystem with its own classification module, lesion type module, lesion extent module and stage assessment module for processing MRI images, and further subsystems for other modalities. The comprehensive decision engine then combines the results of each modality subsystem to generate a comprehensive result. Embodiments that provide multi-modal processing but group the modules differently are also possible, as long as all necessary processing, such as the determination of classification, lesion type and lesion extent, is provided for all modalities, and the comprehensive result is computed by combining the results from all modalities.
The comprehensive result is submitted to the user for confirmation or correction. For example, the user can correct automatically detected features in an image from one of the multiple modalities. It will be appreciated that any modification of the features detected in one modality may affect the detection results for the lesion at the modality level and may further alter the overall result. The user may also directly modify the comprehensive result automatically generated by the comprehensive decision engine 114. Whatever modification is made by the user, its results are passed back to the processing module 204, the modality decision engine 206, or the comprehensive decision engine 114, as the case may be. The corrected comprehensive results, including the revised suggested assessment of the lesion stage, are recalculated and presented to the user for further correction or confirmation. Once confirmed, a report is automatically generated summarizing the results of the analysis and evaluation of the medical images.
In operation, the user directs CAD application 106 to retrieve medical images or data generated by an imaging acquisition device, or to retrieve previously scanned and archived images or data from image archive server 136 for processing and analysis. The user may issue the instructions from, for example, a user interface 120 provided by the workstation 118, or a remote user interface, such as a web browser 126. Fig. 3 shows, in flow chart format, a process 300 that CAD application 106 undergoes to analyze and process images contained in the image data and generate a composite evaluation.
Fig. 3 shows three parallel sub-processes, namely a patient profile data retrieval sub-process 302, an ultrasound sub-process 304 and an MRI sub-process 306. These sub-processes are shown as parallel processes. They are not necessarily performed in parallel in time, but are independent of each other, and can be performed in any time order relative to each other, provided that their results are available before the final step, in which a comprehensive evaluation is calculated (step 308). For example, patient data relating to a patient's risk of illness or history may be retrieved before, after, or during the ultrasound imaging procedure. However, it should be understood that in the implementation of process 300, results from one modality can often serve as input (or at least part of the input) to another modality. For example, if the MRI subprocess 306 is first applied to an MRI data set, the lesion centroid can be identified in the analysis of signal enhancement within a concentrated region or volume. The lesion centroids so identified can serve as starting points for the segmentation process of the MRI morphological procedure. Although sub-processes corresponding to two modalities are shown, sub-processes corresponding to other modalities, such as the CT modality, may be added. These other modalities are not shown in fig. 3 as they follow steps similar to the ultrasound modality or the MRI modality.
Referring to FIG. 3, each of the three sub-processes is now described. The patient data retrieval sub-process 302 begins with the risk of illness module 214 retrieving risk of illness data for the patient from database 116 (step 310). CAD application 106 may have direct access to the database, as shown in fig. 1, or it may be necessary to request the information from an externally maintained database, such as through a hospital's information system. Next, at step 312, the patient history module 216 retrieves patient medical history information from the database 116 in which the patient risk data is maintained or from other externally maintained databases. The risk of illness information and patient history information are submitted to the comprehensive decision engine 114 for use in step 308 to calculate a comprehensive assessment, as will be described below.
The ultrasound sub-process 304 begins with obtaining ultrasound image data, step 314. The ultrasound image data may be obtained from the medical imaging device 102. Alternatively, CAD application 106, i.e., its image data module 202, may request the ultrasound image data from the image archiving server 136. Generally, the obtained ultrasound image data contains information other than the medical images themselves. At this step, a single image is also extracted from the image data. The extracted image is submitted to the processing module 204.
At step 316, the ultrasound component of the processing module 204 processes the image. At this step, the processing module 204 extracts and identifies the anatomical, tissue, morphological and ultrasound image features associated with the object of interest in each single image. The object of interest may be defined by the boundaries of an abnormal region, such as a lesion. The ultrasound decision engine 208 analyzes these features to provide classification, lesion type identification, and lesion assessment at step 318. Optionally, the extracted and identified features are displayed to the user for confirmation or correction at a display and confirmation step 320.
Fig. 4 shows in detail the substeps of processing the morphological features in an ultrasound image by CAD application 106. The ultrasound image can be a 2-dimensional image of a region or a 3-dimensional image of a volume. The image processing step 316 begins with the step of selecting a region of interest ("ROI"), step 402. The ROI is a region in the anatomy that may contain an abnormal object such as a lesion. The ROI may be 2-dimensional when processing 2-dimensional images, or 3-dimensional (also referred to as a "VOI", or "volume of interest") when processing imaged volumes. The ROI may be identified in any suitable manner. For example, the user can manually identify the ROI on the displayed image through the user interface 120. CAD application 106 can extract an ROI that has been identified from another source, such as an ROI identified in a prior examination and stored in a database. Alternatively, CAD application 106 may perform morphological analysis on the image to identify the ROI and recommend it to the user. In one embodiment, the user selects and identifies an ROI to the system by first selecting a segmentation "seed point", i.e., a starting point in the region of interest. The user may select the segmentation seed point, for example, by using a pointing device and selecting a point in the central region of the suspicious lesion. The ROI is then defined by dragging the pointer from the seed point to form a circle around the seed point. The circle limits the area in which the segmentation algorithm is run. When the ROI is large enough to enclose the entire suspicious lesion, the user releases the pointing device.
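The seed-point ROI selection described above can be sketched as follows: the user clicks a seed inside the suspicious region and drags outward, and the released pointer position fixes a circle whose interior becomes the ROI. The coordinates and grid size below are invented for illustration.

```python
# Illustrative seed-point ROI selection: the drag distance from the seed to
# the pointer-release position defines the circle's radius, and the ROI is
# the set of pixels inside that circle.

import math

def circular_roi(seed, release, width, height):
    """Return the pixel coordinates inside the circle centred on `seed`
    whose radius is the seed-to-release drag distance."""
    r = math.dist(seed, release)
    return [(x, y)
            for x in range(width)
            for y in range(height)
            if math.dist((x, y), seed) <= r]

# Seed at (5, 5); pointer released at (5, 8), i.e., a radius-3 circle.
roi = circular_roi(seed=(5, 5), release=(5, 8), width=12, height=12)
# The seed itself is always inside the ROI.
assert (5, 5) in roi
```

The segmentation algorithm of step 404 would then run only over the pixels in `roi`.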
Once the ROI is identified, it is segmented to delineate the boundary of the suspicious lesion at a segmentation step 404. After ROI segmentation, a pattern recognition operation (step 406) is applied to the segmented ROI to identify and extract morphological features from the ROI. Structural features in the ROI are also identified and analyzed during the pattern recognition step 406. They are classified based on their morphology and tissue pattern or characteristics. Local morphological features such as local spiculation, local branch topography, local duct extension, and local lobulation are identified and indexed. In addition, pixels in the ROI are scanned to identify ultrasound image features such as echo patterns. The local morphological features are combined with a set of ultrasound image features predefined by criteria such as the ACR BI-RADS library to generate a list of the features so identified. In the pattern recognition step, the processing module 204 may also analyze the image to identify features such as clusters and contrasting pixels in the segmented ROI, or introduce domain knowledge, such as information about pixels surrounding the ROI, to better identify specific local features.
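As an illustration of the segmentation step, a simple region-growing scheme is sketched below: starting from the seed point, neighbouring pixels are absorbed while their intensity stays within a tolerance of the seed intensity, delineating a candidate lesion region. The intensity tolerance and 4-connectivity are assumptions of this sketch, not details taken from the patent.

```python
# Illustrative region-growing segmentation over an ROI, starting from a
# user-selected seed point. Tolerance and connectivity are assumptions.

def grow_region(image, seed, tol=30):
    """image: dict (x, y) -> intensity; returns the connected set of
    pixels whose intensity is within `tol` of the seed's intensity."""
    target = image[seed]
    region, frontier = set(), [seed]
    while frontier:
        x, y = frontier.pop()
        if (x, y) in region:
            continue
        region.add((x, y))
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in image and nb not in region and abs(image[nb] - target) <= tol:
                frontier.append(nb)
    return region

# Toy 5x5 "image": a bright 3x3 lesion (intensity 200) on a dark background.
img = {(x, y): (200 if 1 <= x <= 3 and 1 <= y <= 3 else 20)
       for x in range(5) for y in range(5)}
lesion = grow_region(img, seed=(2, 2))
# lesion contains exactly the nine bright pixels.
```

Pattern recognition (step 406) would then operate on the delineated region's shape and texture.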
Next, in a feature extraction step (step 408), the processing module 204 extracts from these locally identified patterns certain special features that are considered relevant for cancer diagnosis, i.e., for distinguishing benign from malignant lesions. Some of these features may include shape, orientation, angular margins, lesion boundaries, and calcification. These features may also include those unique to a particular detection technique. For example, for an ultrasound image, these features may include echo patterns and posterior acoustic features.
Next, at a classification step 410, the features and characteristics extracted and identified during the image processing step 316 (sub-steps 402-408) are combined and analyzed. Conveniently, the extracted and identified characteristics or features generally correspond to a predefined set of features. Such a predefined set of characteristics and properties is typically developed by medical specialists in connection with the diagnosis of a disease, such as cancer. A description of these features is generally provided along with their definitions. One such set of predefined features and definitions is the BI-RADS library. At this step, the extracted and identified features are compared to the feature set of the BI-RADS library to assign a statistical probability that each such feature is present in the lesion being analyzed.
Next, at step 412, an assessment of the lesion is calculated. Rules or algorithms may be developed for calculating the assessment. For example, an assessment may be calculated from the classification and likelihood of the features identified and classified according to the BI-RADS library. In one embodiment, a large set of medical images is processed first. Pattern recognition and feature extraction operations are applied to each image in the set. The identified features are classified and indexed according to the protocols and definitions of BI-RADS. The images in the set have known diagnoses, for example based on biopsy results. From the results of the image processing and the known diagnoses, statistical patterns can be developed that correlate the extracted feature sets with the statistical likelihood of a diagnosis. A set of rules for calculating the assessment can be extracted from these patterns and then applied to the results of an analyzed image to generate an assessment. It should be understood that the calculation of the assessment is not limited to the use of statistical patterns. The assessment may also be calculated using a Support Vector Machine (SVM) method, or may be generated by an AI engine employing more complex methods such as neural networks. Whatever method is used, an assessment is calculated at this step from the identified, extracted and classified features.
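A minimal sketch of such a rule-based assessment is given below. Each BI-RADS-style feature carries a weight standing in for the statistical likelihood learned from a diagnosed image set, and the weighted evidence is mapped to a suggested category. All feature names, weights, and thresholds are invented for illustration; as noted above, an SVM or neural-network classifier could serve instead.

```python
# Hypothetical rule-based assessment: weights and thresholds are invented
# placeholders for values that would be derived from a diagnosed image set.

SUSPICION_WEIGHTS = {
    "spiculated_margin": 0.35,
    "irregular_shape": 0.25,
    "posterior_shadowing": 0.20,
    "circumscribed_margin": -0.30,   # a benign-leaning feature lowers the score
}

def suggested_assessment(found_features):
    """Map the weighted evidence from detected features to a category."""
    score = sum(SUSPICION_WEIGHTS.get(f, 0.0) for f in found_features)
    if score >= 0.5:
        return "BI-RADS 5 (highly suggestive of malignancy)"
    if score >= 0.2:
        return "BI-RADS 4 (suspicious)"
    return "BI-RADS 3 (probably benign)"

assessment = suggested_assessment({"spiculated_margin", "irregular_shape"})
```

In the system described here, the suggested assessment would then be displayed for user confirmation or correction.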
Methods and systems related to extracting morphological features from medical images and providing suggested assessments of suspicious lesions based on the extracted and classified morphological features are described in more detail in co-pending, commonly owned U.S. application serial No. 60/686,397, filed on June 2, 2005, which is incorporated by reference in its entirety.
Returning to FIG. 3, the steps of the MRI subprocess 306 are now described in detail. As shown in FIG. 3, the MRI subprocess 306 begins with a step 322 of obtaining MRI image data. The MRI image data may be provided by the MRI medical imaging device 102 or may be retrieved from the image archive server 136. In one embodiment, the MRI image data is acquired in a plurality of MRI scans, forming a time series of MRI image data. From these series of MRI scans, temporal information associated with suspected abnormalities, such as suspicious lesions, can be extracted in the kinetic analysis.
Generally, medical images are formed by the differentiation a medical imaging device makes between particular types of tissue. Improving the contrast between tissue types helps provide better image quality. Administering a contrast enhancer to a patient can selectively affect the imaging characteristics of certain tissue types and enhance the contrast between normal tissue and tumor tissue, thereby enhancing the contrast of the imaged lesion. Gadolinium-based contrast agents (e.g., Gd-DTPA) are one type of contrast enhancer commonly used for MRI images. Typically, benign and malignant lesions exhibit different temporal contrast enhancement behavior after administration of a contrast agent. A series of MRI scans at regular time intervals, e.g., every 2 minutes, can be performed on the patient after injection of the contrast enhancer to capture this temporal contrast enhancement behavior. The series of MRI scans thus comprises a time series of MRI data. One diagnostic technique is to analyze the contrast change curve constructed from the time series of MRI data. Various kinetic features associated with a model or diagnostic methodology are extracted from the contrast change curve for further analysis.
Fig. 5 schematically illustrates one such time series. Only three images of the time series are shown schematically in fig. 5, although more images will generally be used. A first window 502 illustrates a pre-contrast scan image 504, which shows the lesion imaged prior to contrast enhancement. The lesion shows visible structure, but with little detail and without its true extent. A second window displays a contrast enhanced image 508. This image shows the imaged lesion in more detail due to the contrast enhancement. It also shows the actual extent of the lesion due to the enhanced contrast between the diseased tissue and the surrounding normal tissue. A third window 510 schematically shows a time delayed image 512. The lesion remains more visible than in the pre-contrast scan image 504 due to residual contrast enhancement; however, it is less visible and shows less detail than the contrast enhanced image 508.
A window showing the contrast variation curve 514 is also illustrated in fig. 5. The contrast variation curve 514 shows the change in contrast over time after the contrast agent has been administered. The curve generally shows an initial enhancement of contrast, followed by a decrease in contrast enhancement, as seen in the MRI images 504, 508, 512 of the time series.
It is believed that time-varying features in general, i.e., the dynamics of the MRI image data, and the characteristics of the contrast variation curve in particular, can play a useful auxiliary role in cancer diagnosis. Relevant kinetic features are generally those global or local measures that can be derived from the contrast curve and are considered important descriptors for diagnosis or for statistical models. One such kinetic feature is simply the shape of the contrast curve. A display similar to that shown in fig. 5 may be presented to the user. CAD application 106 may analyze the contrast curve 514 and provide an assessment of the imaged object, i.e., the suspicious lesion, to assist the user in making a diagnosis.
Fig. 6 shows the general behavior that can be expected of a contrast variation curve. The contrast change curve is generally formed by an ascending segment 602, a transition point 604 and a time-delay portion 606. Advantageously, the contrast curve shown in fig. 6 is normalized, i.e., it shows only the relative enhancement of contrast. The normalized curve shows the rate of increase (or decrease) in the percentage of contrast enhancement. This tends to reduce inter-patient variation.
The initial effect of the contrast enhancer appears as a rapid increase in contrast, i.e., a steep ascending segment 602; the steeper the curve, the more rapid the enhancement. This initial increase is generally associated with an increased level of contrast agent in the vasculature associated with the lesion. After the initial rapid increase, the rate of increase falls off and the curve generally exhibits one of three types of behavior, depending on the type of lesion; the transition point 604 on the contrast change curve marks this change. In the first type, contrast enhancement continues to increase, though more slowly; such continuous enhancement 608 is generally considered indicative of a benign lesion. In the second type, a plateau or steady state 610, the contrast abruptly stops increasing after the initial sharp rise and remains substantially constant through the middle and late post-contrast period. In the third type, washout, the contrast stops increasing after the initial sharp rise and then decreases during the middle and late post-contrast period, creating a descending segment 612; in this case the transition point 604 corresponds to peak enhancement. The presence of a plateau 610 or descending segment 612 is considered an indication of tumor angiogenesis and vascular permeability. It is generally believed that the growth and metabolic potential of a tumor may be directly related to the extent of peripheral angiogenesis. Thus, analyzing the contrast variation curve 514 may provide additional indications to distinguish benign from malignant lesions.
Following the extraction of the single image data of the MRI scan at step 322, the MRI subprocess 306 splits into two branches. One branch is similar to the processing of morphological features in a single ultrasound image described in connection with fig. 4, with the following steps: the image is processed (step 324), the lesion is analyzed and evaluated (step 326), and the results are optionally displayed to the user for confirmation and correction (step 328). These steps are generally the same as described in connection with the ultrasound subprocess 304 and will not be described in further detail here.
It should be noted, however, that since the MRI data may contain a time series of multiple scans, the step of processing the images (step 324) can introduce temporal information into the morphological analysis. To illustrate this, consider a pre-contrast scan and a post-contrast scan. Subtracting the voxel values of the pre-contrast scan from the corresponding voxel values of the post-contrast scan advantageously emphasizes the enhancing regions within the scanned volume, i.e., regions that may correspond to structures in a suspicious lesion. It should be understood that mathematical operations other than subtraction can also be implemented. Furthermore, a series of mathematical or logical operations may be applied to (or between, if logical operations are employed) appropriate ones of the scans, including the multiple post-contrast scans, to assist in the morphological analysis.
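The pre/post-contrast subtraction described above can be written in a few lines; plain nested lists stand in here for real DICOM slices.

```python
# Voxel-wise subtraction of a pre-contrast slice from a post-contrast slice:
# the difference image emphasizes enhancing regions. Toy data for illustration.

def subtract_scans(post, pre):
    """Voxel-wise difference of two equally sized 2-D slices."""
    return [[p - q for p, q in zip(row_post, row_pre)]
            for row_post, row_pre in zip(post, pre)]

pre  = [[10, 10], [10, 10]]
post = [[12, 60], [11, 55]]   # two voxels enhance strongly
diff = subtract_scans(post, pre)
# diff -> [[2, 50], [1, 45]]: the enhancing voxels stand out against background.
```

Other mathematical or logical operations mentioned above would replace the subtraction in the inner expression.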
Another branch of the MRI subprocess 306 includes the following steps: the kinetic data is extracted and processed (step 330), the lesions are classified and an assessment is calculated based on the extracted kinetic features (step 332), and the results are optionally displayed to the user for confirmation and modification (step 334). These steps are described in greater detail below with reference to FIGS. 5-8.
MRI image data generally corresponds to a three-dimensional region or volume, represented by data points (or "voxels") arranged on a 3-dimensional coordinate grid. The 3-D volume represented by an MRI scan can be processed as a single volume in a 3-D process. Alternatively, such a 3-dimensional scan can be organized into a stack of planar "slices", and the user can choose to process the stacked slices in a series of 2-D processes.
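The reorganization of a voxel volume into a stack of planar slices can be sketched as follows; the dimensions are illustrative.

```python
# Reshape a flat voxel list (z-major order) into a stack of 2-D slices,
# so that each slice can be handled by a 2-D process.

def as_slice_stack(voxels, nz, ny, nx):
    """Return nz slices, each an ny x nx grid, from a flat voxel list."""
    assert len(voxels) == nz * ny * nx
    it = iter(voxels)
    return [[[next(it) for _ in range(nx)] for _ in range(ny)]
            for _ in range(nz)]

volume = list(range(24))            # 2 slices of 3 x 4 voxels
stack = as_slice_stack(volume, nz=2, ny=3, nx=4)
# stack[1][0][0] is the first voxel of the second slice, i.e. 12.
```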
FIG. 7 is a flow chart detailing the dynamics branch of the MRI subprocess 306 for constructing a contrast variation curve. These steps correspond to steps 322 and 330 shown in fig. 3. The first step, step 702, is to acquire MRI data from the medical imaging device 102 or from the image archiving server 136. Image data acquired from a scan at a first initial time (prior to administration of the contrast enhancer) is first extracted (step 704).
Advantageously, the results from the morphological branch of the MRI sub-process 306 or from the morphological analysis of the ultrasound sub-process 304 can be reused here. The same lesion identified during the morphological analysis can be selected for kinetic analysis (step 706). If no morphological analysis has been performed and no ROI has been identified for the MRI scan, the ROI may be identified manually by the user or from the time series. For example, a time series of MRI scans can be processed to identify voxels that show a marked increase in signal intensity over the time period. Time-delay behavior (e.g., steady state or descending) may also be analyzed. Voxels that show enhanced contrast and exhibit the expected time-delay behavior may lie within a mass corresponding to a lesion. An ROI encompassing these voxels may be automatically selected. The clusters of voxels can be analyzed to separate different lesions from each other or to group together different structural components belonging to the same lesion. The ROI can be defined to encompass all voxels that may belong to one lesion.
Next, at step 708, morphological operations, including segmentation and pattern recognition, are applied to the ROI to delineate the mass containing the lesion and to identify structures in the lesion. Here too, the results produced by the morphological branch of the MRI subprocess can be reused. Moreover, as described below, the clustering of voxels may already provide a good segmentation if the ROI was identified from an analysis of time-dependent contrast enhancement. Next, at step 710, the contrast of the identified morphological features, i.e., the signal intensity of the lesion relative to surrounding structures, is evaluated. In one embodiment, the signal intensities of all voxels in the identified mass are summed to provide an estimate of the suspicious lesion's contrast value. However, other ways of representing contrast enhancement can be used. For example, in a mode that accounts for edge enhancement, the total signal intensity may be the sum over voxels located along the lesion boundary. Voxels corresponding to some other structure may be summed when another diagnostic methodology or mode is implemented. In other words, contrast values can be summed over any particular subset of lesion voxels, depending on the diagnostic methodology or mode performed or supported by the CAD application.
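The contrast estimate of step 710 can be sketched as a sum of voxel intensities over a chosen subset of the lesion, either the whole lesion or only its boundary voxels, depending on the mode. The toy intensities and the hypothetical boundary subset below are invented for illustration.

```python
# One contrast value per time point: sum the signal intensity over a chosen
# subset of lesion voxels. Toy 2x2 "lesion" for illustration.

def lesion_contrast(intensities, voxel_subset):
    """Sum signal intensity over a chosen subset of lesion voxels."""
    return sum(intensities[v] for v in voxel_subset)

intensities = {(0, 0): 40, (0, 1): 55, (1, 0): 50, (1, 1): 90}
whole_lesion = {(0, 0), (0, 1), (1, 0), (1, 1)}
boundary     = {(0, 0), (0, 1), (1, 0)}     # hypothetical edge voxels

total = lesion_contrast(intensities, whole_lesion)   # 235
edge  = lesion_contrast(intensities, boundary)       # 145, edge-enhancement mode
```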
After evaluating the contrast level of the first, pre-contrast image, the process continues with the extraction of the MRI data of the next scan in the time series, i.e., the process returns to the image extraction step, step 704. After the image extraction step, steps 706-710 are repeated for the first post-contrast scan. The same lesion that has already been identified is reused here to provide the starting point for ROI identification, and the ROI enclosing its voxels can also be reused. After ROI identification at step 706, morphological operations are performed to identify and delineate the mass containing the lesion at step 708. Next, the contrast between the lesion and its surrounding tissue in the first post-contrast scan is calculated at step 710. These steps are repeated until all MRI scans in the time series have been processed (step 712). In a final step 714, the lesion contrast values calculated from the series of images are normalized against the initial contrast value, and the contrast variation curve 514 is constructed.
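The curve construction of step 714 can be sketched as follows: each scan's lesion contrast value is normalized against the pre-contrast baseline so that the curve shows relative (percentage) enhancement, which tends to reduce inter-patient variation. The input values are invented for illustration.

```python
# Build a normalized contrast variation curve from per-scan lesion contrast
# values; the first value is the pre-contrast baseline.

def relative_enhancement(contrast_values):
    """Return the percentage enhancement of each time point over baseline."""
    baseline = contrast_values[0]
    return [100.0 * (c - baseline) / baseline for c in contrast_values]

# Lesion contrast summed per scan in the time series (baseline first).
curve = relative_enhancement([200, 380, 420, 400, 370])
# curve -> [0.0, 90.0, 110.0, 100.0, 85.0]: rapid rise, then washout.
```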
Returning to fig. 3, once the contrast variation curve is constructed, a quantitative analysis of the contrast variation curve 514 is performed to extract the temporal, i.e., kinetic, features from the time series of images and provide a classification of the lesion (step 332). The quantitative analysis of the contrast change curve 514 generally includes: shape analysis and classification of the kinetic curve, i.e., whether the time-delay portion 606 is a continuous enhancement 608, a plateau 610, or a descending segment 612; the level of enhancement at the transition point 604; the time to reach the transition point 604 and the slope, i.e., the initial rate of increase, of the ascending segment 602; and the behavior during the post-contrast period, i.e., the presence or absence of a descending segment 612 and its rate of decrease. Potential lesions can be classified based on these kinetic features. In one embodiment, the lesion is simply assigned a value of 0 if continuous enhancement is observed, 1 if a plateau is observed, and 2 if a descending segment is observed, where 0 suggests a benign lesion and 2 suggests a malignant lesion. More complex classification schemes can be implemented by considering other features such as the slope of the ascending segment, the peak of the curve, or the rate of the descending segment. Such schemes can generally be constructed using statistical models, similar to those described earlier in connection with ultrasound images.
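The simple 0/1/2 kinetic scoring described above can be written directly; the tolerance used to distinguish a plateau from a rise or a fall is an assumption of this sketch.

```python
# Score the time-delay portion of a normalized enhancement curve:
# 0 = continued enhancement (leans benign), 1 = plateau, 2 = washout
# (descending segment, leans malignant). The plateau tolerance is assumed.

def kinetic_score(delayed_phase, plateau_tol=5.0):
    """delayed_phase: enhancement percentages after the transition point."""
    change = delayed_phase[-1] - delayed_phase[0]
    if change > plateau_tol:
        return 0          # continued enhancement
    if change < -plateau_tol:
        return 2          # washout / descending segment
    return 1              # plateau

assert kinetic_score([100, 110, 125]) == 0   # keeps rising
assert kinetic_score([100, 101, 99]) == 1    # plateau
assert kinetic_score([110, 95, 80]) == 2     # descending segment
```

A richer scheme would also weigh the ascending slope and the peak enhancement, as the text notes.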
Referring to FIG. 3, as a final step, the results from each of the parallel sub-processes are submitted to the comprehensive decision engine 114 for comprehensive evaluation (step 308). The comprehensive decision engine 114 correlates the features identified and extracted for the lesion across all modalities. The results from all modalities are also combined to provide a comprehensive estimate of the extent of the lesion, to classify the lesion, and to grade it, i.e., to provide a staged assessment of the lesion according to a predetermined grading scheme.
As previously described, CAD application 106 is modular. Although fig. 3 shows a flowchart for two modalities, ultrasound and MRI, other modalities, i.e., other sub-processes, can readily be added to CAD application 106. Either the ultrasound or the MRI modality may also be replaced with another modality. For example, FIG. 3A shows an alternative embodiment that performs an MRS modality. In fig. 3A, MRS sub-process 340 replaces MRI sub-process 306, while ultrasound sub-process 304 is substantially the same as described with reference to fig. 3 and is therefore not described further here.
Referring to FIG. 3A, the MRS subprocess 340 begins with obtaining MRS data, step 342. It should be understood that the MRS data may be obtained directly from the MRS device 102, e.g., from a procedure executed based on results from other modalities. Alternatively, the MRS data may be retrieved from the image archiving server 136.
Generally, MRS data corresponds to a number of MRS assays. Each MRS assay result may be a single spectrum corresponding to spectral data acquired from the chemicals in a single voxel. MRS assay results may also correspond to spectroscopic data from chemicals in multiple voxels, such as data obtained from 2D CSI or 3D CSI examinations. In either a 2D CSI or 3D CSI examination, each measurement corresponds to a spectrum of chemicals from a plurality of voxels, each of which may have a volume of, for example, 1 cm³ to 1.5 cm³. The assay results are extracted from the MRS data for further analysis at step 344.
In the next step, the intensity or concentration of a marker chemical is identified and calculated in the spectral analysis (step 346). For example, the spectrum of choline can be isolated or identified from the spectroscopic data. The peak at the characteristic frequency of choline is identified and measured, and then converted into an absolute measure of the concentration of choline in the voxel, or into a relative intensity or concentration with respect to other chemicals in the voxel. If biochemical information from multiple marker chemicals is required, the spectral data can be further processed to isolate or identify the contribution from each remaining marker chemical. Their concentrations or relative intensities can also be calculated from their respective spectral data.
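The peak quantification just described can be sketched in a few lines. This is a minimal, hedged illustration: the function name, integration window, and toy spectrum below are assumptions for demonstration, not details taken from the system itself.

```python
def quantify_marker(spectrum, freqs, center, half_width, reference_area=None):
    """Integrate the spectral peak around a marker chemical's characteristic
    frequency (e.g. choline near 3.2 ppm) and return its intensity.

    If reference_area is given, the relative intensity with respect to that
    reference chemical is returned; otherwise the raw peak area is returned,
    which could be converted to an absolute concentration via calibration."""
    area = sum(s for s, f in zip(spectrum, freqs)
               if center - half_width <= f <= center + half_width)
    if reference_area:
        return area / reference_area
    return area

# Toy spectrum with a peak at 3.2 ppm
freqs = [3.0, 3.1, 3.2, 3.3, 3.4]
spectrum = [0.1, 0.5, 2.0, 0.5, 0.1]
print(quantify_marker(spectrum, freqs, center=3.2, half_width=0.15))  # 3.0
```

In practice the peak would be fitted rather than naively integrated, but the structure — isolate a window around the characteristic frequency, integrate, then normalize against a reference — is the same.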
In the next step, the results of the spectral analysis (step 346), i.e., the concentration or relative intensity of the marker chemical in each voxel, are displayed. These results can be displayed numerically for each measurement. The results can also be plotted as iso-concentration contours to show the distribution of the concentration or intensity of the marker chemical more visually. Advantageously, this concentration or intensity distribution can also be converted into a false-color image and overlaid on the MRI image.
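A false-color overlay of the kind mentioned can be sketched as a simple per-pixel blend; a real implementation would likely use a proper colormap library, so the red-tint scheme and value ranges here are illustrative assumptions.

```python
def false_color_overlay(mri, conc, conc_max, alpha=0.5):
    """Blend a marker-concentration map onto a grayscale MRI slice as RGB.

    High concentration pushes a voxel toward red; voxels with zero
    concentration keep the underlying gray value. Inputs are nested lists;
    gray values lie in [0, 1] and concentrations are normalized by conc_max."""
    out = []
    for mri_row, conc_row in zip(mri, conc):
        row = []
        for gray, c in zip(mri_row, conc_row):
            w = alpha * min(c / conc_max, 1.0) if conc_max > 0 else 0.0
            # weighted blend: red channel raised toward 1, others dimmed
            row.append(((1 - w) * gray + w, (1 - w) * gray, (1 - w) * gray))
        out.append(row)
    return out

slice_gray = [[0.5, 0.5]]
choline = [[0.0, 2.0]]          # arbitrary units
print(false_color_overlay(slice_gray, choline, conc_max=2.0))
# [[(0.5, 0.5, 0.5), (0.75, 0.25, 0.25)]]
```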
It should be appreciated that although MRS sub-process 340 is described herein as being performed independently of ultrasound sub-process 304, ultrasound sub-process 304 can advantageously be performed first. The results from the morphological analysis, and in particular the segmentation process, can help identify a set of voxels, or a centroid, likely to represent a lesion. An envelope can be generated that encloses the volume or centroid. Thereafter, only MRS data corresponding to voxels contained within the envelope need be analyzed. As another example, it may often be the case that analysis of image data from one modality, such as ultrasound or MRI, identifies one or more regions suspected of cancer, for example based on an initial evaluation of data from that modality alone. These results, however, may not be conclusive. Rather than performing the MRS procedure over all anatomical regions, or over the same regions as the other modalities, it may be performed over a smaller region or regions that encompass only the suspicious lesions identified by the other modalities. This helps to improve efficiency. Similarly, preliminary results from MRI or MRS analysis may also provide starting points for data acquisition and analysis in other modalities.
Generally, multimodal systems, such as system 100, can utilize results from one modality and utilize those results as input to improve efficiency. Since the same lesions can generally be observed in multiple modalities, the results of morphological analysis performed in one modality can often be used directly in another modality. Fig. 3B illustrates in flow chart form an alternative process 350 for MRI dynamics analysis using the results of the morphology analysis as input.
As will be recalled, in morphological analysis the ROI is identified first, and a segmentation process is then performed to identify the boundary that may separate a tumor from normal tissue. In a 3-dimensional segmentation process, the boundary is an envelope that encloses a volume, or centroid, that may correspond to the tumor. As a first step of the process 350, the segmentation results are obtained, for example, from an ultrasound module (step 352). Next, an envelope enclosing the voxels or centroid is generated (step 354). All voxels contained within the envelope are then analyzed to extract kinetic features.
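The envelope generation of step 354 can be illustrated with the simplest possible envelope, an axis-aligned bounding box around the segmented voxels; an actual system might use a convex hull or a smoothed surface instead, so this is only a sketch.

```python
def bounding_envelope(voxels):
    """Axis-aligned bounding box enclosing a set of (x, y, z) voxel indices,
    returned as ((xmin, xmax), (ymin, ymax), (zmin, zmax))."""
    xs, ys, zs = zip(*voxels)
    return (min(xs), max(xs)), (min(ys), max(ys)), (min(zs), max(zs))

def inside(voxel, envelope):
    """True if the voxel lies within the envelope (inclusive bounds)."""
    return all(lo <= v <= hi for v, (lo, hi) in zip(voxel, envelope))

# Segmentation result from the ultrasound module (step 352), then the
# envelope of step 354; only voxels inside it are analyzed further.
segmented = [(4, 5, 2), (5, 5, 2), (4, 6, 3)]
env = bounding_envelope(segmented)
print(env)                      # ((4, 5), (5, 6), (2, 3))
print(inside((4, 5, 3), env))   # True
print(inside((9, 5, 3), env))   # False
```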
The steps of obtaining MRI data and separating it into individual scans acquired at different times T0, T1, T2, and so on are the same as described earlier. To extract the kinetic features, images scanned at different times, such as T0 and T1, or T1 and T2, are compared. This can be accomplished, for example, by first subtracting the image acquired at T0 from the image acquired at T1. Voxels with positive values then represent voxels of increased contrast, while voxels with negative values represent voxels of reduced contrast. Since the envelope has already been determined from the morphological analysis, only voxels within the envelope need to be processed to extract kinetic information. Subsequent images or scans in the time series are similarly processed to extract kinetic information (step 356). Limiting the kinetic processing to those voxels contained within the envelope eliminates the need to identify voxels corresponding to tumors in a separate pass, for example by identifying those voxels that exhibit an initial uptake followed by a plateau or decline. It also avoids the need to process voxels outside the envelope. In this way, the efficiency and accuracy of the multimodal system 100 may be improved.
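The subtraction of successive scans and the classification of the resulting contrast-change behaviour can be sketched as follows; the voxel representation, the tolerance value, and the curve labels (persistent/plateau/washout, the conventional DCE-MRI curve types) are illustrative assumptions.

```python
def subtract_scans(later, earlier, envelope_voxels):
    """Subtract an earlier scan from a later one, restricted to voxels in
    the envelope; positive values indicate increased contrast uptake."""
    return {v: later[v] - earlier[v] for v in envelope_voxels}

def classify_kinetics(curve, tol=0.1):
    """Classify the late phase of a contrast-vs-time curve relative to the
    initial uptake: still rising (persistent), flat (plateau), or falling
    (washout)."""
    baseline, early, last = curve[0], curve[1], curve[-1]
    rise = early - baseline
    if rise <= 0:
        return "non-enhancing"
    late = (last - early) / rise      # relative change after initial uptake
    if late > tol:
        return "persistent"
    if late < -tol:
        return "washout"
    return "plateau"

t0 = {(0, 0, 0): 5.0}
t1 = {(0, 0, 0): 8.0}
print(subtract_scans(t1, t0, [(0, 0, 0)]))  # {(0, 0, 0): 3.0}
print(classify_kinetics([0, 10, 7]))        # washout
```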
Fig. 8 is a flow chart showing the steps performed by the integrated decision engine 114 in the composite scoring process. At step 802, the integrated decision engine 114 first correlates all features provided by all modality decision engines. For example, the lesion shapes determined by each modality can be correlated at this step. The classification module 218, the lesion type module 220, and the lesion extent module 222 of the integrated decision engine 114 each combine the results from all modalities to classify the lesion at step 804, to determine the lesion type at a type determination step 806, and to estimate the size of the lesion at an extent determination step 808.
The integrated decision engine 114 also scores the lesions by combining the results from all modalities (step 810), i.e., computes a diagnostic evaluation. Since the integrated diagnostic evaluation is generated based on the results of more than one modality, the confidence level in the evaluation will generally be increased. A rule-based process may be followed to calculate a composite rating. For example, a score point may be assigned to a feature that generally indicates malignancy in each modality. By summing the score points obtained from all modalities, an overall score can be obtained. Based on the final total score, a graded assessment can be assigned to the lesion. Generally, a hierarchical evaluation based on features observed in one modality confirmed by features observed in another modality helps to improve confidence in the evaluation.
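The rule-based summation described above can be sketched as follows; the feature names, point values, and grade thresholds are invented for illustration, since the specific scoring table is not given here.

```python
# Score points per malignancy-indicating feature, per modality (illustrative).
FEATURE_POINTS = {
    "mri":        {"washout_curve": 3, "rim_enhancement": 2, "spiculated_margin": 2},
    "ultrasound": {"irregular_shape": 2, "posterior_shadowing": 1},
    "mrs":        {"elevated_choline": 3},
}

def composite_score(findings):
    """Sum the score points for the features observed in each modality."""
    return sum(FEATURE_POINTS[modality].get(feature, 0)
               for modality, features in findings.items()
               for feature in features)

def graded_assessment(score):
    """Map the total score to a graded assessment."""
    if score >= 6:
        return "highly suspicious"
    if score >= 3:
        return "suspicious"
    return "probably benign"

findings = {"mri": ["washout_curve", "spiculated_margin"],
            "ultrasound": ["irregular_shape"]}
total = composite_score(findings)           # 3 + 2 + 2 = 7
print(total, graded_assessment(total))      # 7 highly suspicious
```

Note how a finding in one modality (the ultrasound shape) confirms the MRI findings and pushes the total over the threshold, which is the confirmation effect the text describes.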
For example, in one embodiment, the following scoring scheme is employed when the integrated decision engine 114 calculates an integrated rank assessment based solely on results from MRI image data analysis:
Similarly, statistical models may be constructed for image data obtained from multiple modalities, much as they are constructed for a single modality. For example, from a database of images obtained from multiple modalities with known biopsy results, rules can be constructed, with statistical probabilities assigned to the outcomes, that relate the presence of features seen in each of the multiple modalities to the possible grade (stage) of the tumor. This set of rules can be applied to the results from all modalities, and to the lesion types, lesion extents and classifications generated by the integrated decision engine, to compute an integrated rating for the lesion. Likewise, combining scores or ratings allows a larger set of inputs, which tends to make the results more credible. As a general rule, evaluations computed from more independent data tend to be more statistically reliable. Moreover, an evaluation calculated from the analysis of image data of a single modality may be strongly affected by missing data points, for example when an important descriptor contributing to the basis functions of the statistical model cannot be calculated. With results from multiple modalities, results from another modality can provide the required (and otherwise missing) information, thus increasing confidence in the calculated assessment.
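One simple way to build such a statistical combination is a naive-Bayes-style update, in which each observed feature contributes a likelihood ratio estimated from the biopsy-verified database; the conditional-independence assumption and the numeric ratios below are illustrative, not values from any actual database.

```python
def combine_evidence(prior, likelihood_ratios):
    """Combine per-modality evidence for malignancy: multiply the prior
    odds by each observed feature's likelihood ratio (assuming naive
    conditional independence), then convert the posterior odds back to a
    probability."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Prior of 10%; one MRI feature (LR 4.0) and one ultrasound feature (LR 2.0)
# observed. A missing modality simply contributes no ratio, which is how a
# multimodal model tolerates missing data points.
p = combine_evidence(0.10, [4.0, 2.0])
print(round(p, 3))  # 0.471
```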
However, it should be understood that while results from multiple modalities are beneficial for improving reliability and confidence in the assessment, there are situations where the results of one modality's analysis may be sufficient. If the results from one modality are clearly indicative, analysis by other modalities may optionally be skipped in order to improve performance. For example, if the MRI kinetic analysis finds that the lesion is plainly cancerous, the MRI morphology analysis, or the morphology analysis of other modalities, may optionally be skipped or suspended unless the user specifically requests the morphology analysis. Likewise, if the morphological analysis clearly supports a conclusion of cancer, the MRI kinetic analysis may be skipped or deferred. The results provided by the integrated decision engine will then be the same as those provided by the particular modality decision engine in question.
The results of the integration engine are presented to the user for confirmation or correction (step 812). Images from each modality, with the extracted features superimposed on the image, may be displayed to the user. The features identified by CAD application 106 can be annotated. The contrast curve 514 may also be displayed to the user at the same time. The identified results may be pre-populated in a standard report format, following a format built from standards such as the BI-RADS MRI dictionary or any other suitable standard. One such possible display is shown in FIG. 9, which shows a first image 902 obtained from a first modality, a second image 904 obtained from a second modality, and a report 906 containing the composite results, pre-populated with the extracted features and the composite rating.
A user, such as a physician or radiologist, may confirm the results calculated by CAD application 106, or may correct any results of the automatic detection and evaluation. An annotation module (not shown) may be provided for receiving input from a user. The user may modify or annotate the displayed results through the user interface. For example, the user may reclassify the lesion, may override the classification generated by CAD application 106, or may revise the rating calculated by the CAD application. The user may also reclassify morphological or kinetic features extracted by CAD application 106. CAD application 106 then recalculates as needed to generate a revised composite decision and evaluation.
Once the results are confirmed by the user, a report can be generated (step 814). The generated report is similar to the report generated for each individual modality, except that the results are the composite decision and evaluation. Generally, the report content is based by default on the data available in the processed images. In other words, data reflected in results such as those shown in FIG. 9 is also reflected in the report. The report includes the detected and classified MRI, ultrasound or CT features, as the case may be, the calculated and confirmed assessments, and any annotations, comments and user corrections. The original medical images and the processed copies are also included. Finally, the report contains the conclusions drawn from the images and the radiologist's evaluations, preferably in a format consistent with the relevant ACR BI-RADS classification.
The report may include identification and audit information for tracking and audit purposes. The identification and audit information may include a unique report identifier, a serial number, a date or time stamp (i.e., the time and date of the study or report), a patient identification number, a study identification number, a user ID, user additions or modifications to the detected features, and the like. Conveniently, a key module may be provided to facilitate digital signing of the report by the radiologist. Digital signatures can be included and recorded for each archived case to provide improved auditing of reports and to guard against unrecorded revisions to the report.
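Digital signing of an archived report can be sketched with a keyed hash over the canonical report body; a real deployment would more likely use asymmetric (certificate-based) signatures tied to the radiologist's identity, so the HMAC scheme and field names here are assumptions for illustration.

```python
import datetime
import hashlib
import hmac
import json

def sign_report(report, key):
    """Attach a timestamp and a keyed hash over the canonical report body,
    so any later modification of the archived report can be detected."""
    signed = dict(report)
    signed["timestamp"] = datetime.datetime.now(datetime.timezone.utc).isoformat()
    body = json.dumps(signed, sort_keys=True).encode()
    signed["signature"] = hmac.new(key, body, hashlib.sha256).hexdigest()
    return signed

def verify_report(signed, key):
    """Recompute the hash over everything except the signature field."""
    body = {k: v for k, v in signed.items() if k != "signature"}
    expected = hmac.new(key, json.dumps(body, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])

report = sign_report({"patient_id": "P123", "assessment": "suspicious"},
                     key=b"archive-key")
print(verify_report(report, b"archive-key"))  # True
report["assessment"] = "benign"               # tampering is detected
print(verify_report(report, b"archive-key"))  # False
```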
Preferably, the report is archived as a DICOM Secondary Capture. Annotations, comments, image processing results such as lesion boundaries, and diagnostic results are archived as part of a DICOM-compatible file. The user can also store a PDF version of the report locally, for example in a patient case directory, which is advantageous for future reference. If cases of the kind described for these combinations are already present in the patient profile, a new case is added to the profile.
CAD application 106 is not limited to analyzing image data taken during a single imaging session or clinical examination. Often, it is necessary to image the patient at intervals of several months. This may be necessary after surgery or treatment of the cancer, as part of a periodic examination, or as part of a follow-up clinical examination. Images from the same modality taken during different diagnostic examinations may need to be analyzed and compared to each other. For example, it may be necessary to determine whether a detected benign lesion becomes malignant. Alternatively, it may be necessary to determine whether a malignant lesion has become smaller or stopped growing after treatment.
Fig. 10 shows a process for handling images from the same modality acquired at different times; it is a modification of the process shown in fig. 3. Fig. 10 likewise shows three parallel sub-processes: a patient data retrieval sub-process 302, a first morphology sub-process 1004, and a second morphology sub-process 1006. In this modified process, the first and second morphology sub-processes 1004, 1006 are substantially identical, with one exception. At the beginning of sub-process 1004, the image data from the first visit (examination) is retrieved (step 1014). At the beginning of sub-process 1006, the image data from the second visit is retrieved (step 1020). The processing of the image data (step 1016) and the classification and evaluation of lesions (step 1018) are the same for both sub-processes 1004, 1006, and are substantially the same as described in relation to the ultrasound sub-process 304. The sub-processes 1004, 1006 are therefore not described in detail here.
In a final step 1008, the integrated decision engine 114 combines the image data from the first visit with the image data from the second visit to compute an integrated assessment of the lesion. Since the image data are acquired at different times, the same lesion, even if visible in both data sets, is likely to be at different stages, and the image patterns seen in the images acquired during the two visits will need to be matched. In correlating the lesion, the integrated decision engine 114 will need to take the time difference into account. A temporal projection of the lesion's development as seen in the first visit's image data may be necessary. Once the features of the lesion in the two image data sets are correlated, the combined assessment can be computed as before. It should be understood, however, that a different model or a different set of criteria may be required for correlating features identified in lesions imaged at different times. The results for each individual image data set can also be presented to a user, such as a radiologist, in a side-by-side comparison. The comparison can include results such as the type and extent of the lesion and its classification. Such a comparison may help a physician assess the progression of a lesion or the effectiveness of a treatment.
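The side-by-side comparison of two visits can be sketched as a small structural diff of the per-visit results; the field names and the measurements below are hypothetical.

```python
def lesion_change(first, second):
    """Compare a lesion's measurements from two visits side by side.
    Each measurement is a dict with 'extent_mm' (longest dimension) and
    'classification'."""
    delta = second["extent_mm"] - first["extent_mm"]
    if delta > 0:
        trend = "grown"
    elif delta < 0:
        trend = "shrunk"
    else:
        trend = "stable"
    return {
        "extent_change_mm": delta,
        "trend": trend,
        "reclassified": first["classification"] != second["classification"],
    }

first_visit = {"extent_mm": 12.0, "classification": "benign"}
second_visit = {"extent_mm": 15.5, "classification": "suspicious"}
print(lesion_change(first_visit, second_visit))
# {'extent_change_mm': 3.5, 'trend': 'grown', 'reclassified': True}
```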
Various embodiments of the present invention have now been described in detail. Those skilled in the art will appreciate that numerous modifications, adaptations, and variations may be made to these specific embodiments without departing from the scope of the invention. Since changes and/or additions may be made to the above-described best mode without departing from the nature, spirit or scope of the invention, the invention is not to be limited to those details but only by the appended claims.

Claims (30)

1. A method of analyzing a plurality of medical image data of a region in an anatomy and detecting abnormalities in said region, at least one set of said plurality of medical image data containing temporal information in response to administration of a contrast enhancer, said method comprising the steps of:
obtaining the plurality of medical image data;
identifying a set of data points from the plurality of medical image data representing possible lesions in the region;
extracting features associated with the set of data points from the plurality of medical image data, the features including at least two of a set of morphological features, a set of kinetic features of the temporal information, and a set of biochemical features;
calculating an initial diagnostic evaluation of the likely lesion from the at least two sets of features; and
providing the initial evaluation to the user for evaluation.
2. The method of claim 1, further comprising the step of:
receiving from the user a revision to the at least two sets of features;
calculating a revised evaluation, the revised evaluation being further calculated in conjunction with the revision; and
providing the revised evaluation to the user for further evaluation.
3. The method of claim 1, wherein the kinetic features are extracted from a contrast variation curve corresponding to time-dependent local contrast variations in a subset of the set of data points.
4. A method as claimed in claim 3, wherein said kinetic features comprise a classification of said contrast variation curve into one of a continuous enhancement type, a plateau type and a washout type.
5. The method of claim 3, wherein the step of extracting the kinetic features comprises identifying an envelope surrounding the possible lesion, and wherein the contrast change curve is extracted by identifying the time-dependent local contrast changes in data points enclosed by the envelope.
6. The method of claim 3, further comprising the step of obtaining a plurality of rules relating a set of morphological features, a set of kinetic features of said temporal information and a set of spectral features to possible diagnoses, said initial evaluation and said revised evaluation being calculated from said plurality of rules and said at least two sets of features.
7. The method of claim 1 or claim 2, wherein the plurality of medical image data includes medical image data acquired from at least two modalities, the calculating and recalculating further comprising the step of:
for each of the at least two modalities, performing a calculation based on features extracted from the medical image data of that modality, the modality decisions being correlated in the calculating and recalculating.
8. The method of claim 1, wherein the biochemical features are extracted from an MRS spectral analysis of a subset of the set of data points.
9. The method of claim 8, wherein the biochemical features include at least a concentration profile of a marker chemical.
10. The method of claim 8, wherein the biochemical characteristic comprises at least the relative intensities of two or more marker chemicals obtained from spectroscopic analysis.
11. The method of claim 1, wherein the plurality of medical image data includes at least one image data set selected from the group of X-ray image data, ultrasound image data, MRI image data, MRS data, CT image data, PET/CT image data, digital tomosynthesis image data, and nuclide image data.
12. The method of claim 1, further comprising the step of retrieving patient risk information, wherein the calculation of the evaluation incorporates the patient risk information.
13. A system for analyzing a plurality of medical image data of a region in an anatomy, at least one set of said plurality of medical image data containing temporal information in response to administration of a contrast enhancer, said system comprising:
an image data module for retrieving the plurality of medical image data;
a morphology module for identifying possible lesions in the medical image data and extracting and classifying morphological features associated with the possible lesions;
a dynamics module that extracts kinetic features of the temporal information related to the possible lesion from the plurality of medical image data;
a spectral analysis module that extracts biochemical features related to one or more marker chemicals from the plurality of medical image data;
a comprehensive decision engine that receives the extracted and classified morphological features from the morphology module, the kinetic features of the temporal information extracted by the dynamics module, and the biochemical features from the spectral analysis module, and calculates an initial diagnostic evaluation of the possible lesion, combining the morphological features, the kinetic features and the biochemical features in the calculation of the evaluation; and
a graphical user interface for displaying at least a portion of the plurality of medical image data and the initial diagnostic evaluation for user evaluation and modification.
14. The system of claim 13, further comprising:
a morphological decision engine for deriving a morphological assessment from the extracted and classified morphological features;
a dynamics decision engine for deriving a dynamics evaluation from the extracted dynamics features; and
a spectral analysis decision engine for deriving a spectral assessment from the biochemical features, wherein the comprehensive decision engine correlates and combines in its calculations the morphological assessment, the kinetic assessment, and the spectral assessment.
15. The system of claim 13, further comprising an annotation module for receiving, via the graphical user interface, a revision of at least one of the extracted and classified morphological features, the kinetic features, and the biochemical features, wherein the comprehensive decision engine recalculates a revised diagnostic evaluation after receiving the revision.
16. The system of claim 13, wherein the dynamics module comprises a curve construction module for constructing a contrast variation curve corresponding to time-dependent local contrast variations in a subset of the set of data points; and a kinetic analysis module for extracting the kinetic features from the contrast change curve.
17. The system of claim 13, wherein the image data module is configured to retrieve medical image data of multiple modalities, and the comprehensive decision engine comprises a module for receiving morphological and kinetic features extracted from the medical image data of each of the multiple modalities.
18. The system of claim 17, wherein the multiple modalities include X-ray image data, ultrasound image data, MRI image data, MRS data, CT image data, PET/CT image data, digital tomosynthesis image data, and nuclide image data.
19. The system of claim 16, wherein the dynamics analysis module is configured to classify the contrast variation curve into one of a continuous enhancement type, a plateau type and a washout type, and wherein the kinetic features comprise the curve type.
20. The system of claim 13, further comprising:
a patient risk module for retrieving patient risk information from a database; and
a patient history module for retrieving patient history information;
wherein the calculation of said evaluation incorporates said patient risk information and said patient history information.
21. A method of acquiring and analyzing MRS medical image data from a region in a patient's anatomy, said method comprising the steps of:
obtaining a plurality of medical image data of the region;
identifying a set of data points from the plurality of medical image data representing possible lesions in the region;
extracting features related to the possible lesion from the plurality of medical image data;
calculating an initial diagnostic evaluation of the possible lesion from the features; and
once the initial diagnostic evaluation meets a preselected criterion, performing the steps of:
acquiring MRS medical image data from a selected region comprising the possible lesion;
extracting biochemical features from the MRS medical image data;
recalculating the evaluation of said possible lesion as a comprehensive evaluation, further combining said biochemical features in said recalculation; and
providing the comprehensive evaluation to the user for evaluation and revision.
22. The method of claim 21, wherein at least a portion of the plurality of medical image data contains temporal information responsive to the administration of a contrast enhancer to the patient, and the features comprise at least one of a set of morphological features and a set of kinetic features of the temporal information.
23. The method of claim 22, further comprising the step of obtaining a plurality of rules relating said set of morphological features, said set of kinetic features of the temporal information and a set of spectral features to possible diagnoses, said initial evaluation and said recalculated evaluation being calculated from said plurality of rules.
24. The method of claim 23, further comprising the step of:
receiving from the user a revision to at least one of the set of morphological features, the set of kinetic features, and the set of spectral features;
calculating a revised evaluation, said revised evaluation being further calculated in conjunction with said revision; and
providing the revised evaluation to the user for further evaluation and revision.
25. A system for analyzing medical image data of a region in an anatomy, the medical image data being acquired from a plurality of modalities, the system comprising:
an image data module for receiving the medical image data;
a plurality of image processing modules, each for processing image data acquired from one of the plurality of modalities, each of the modules identifying possible lesions in the medical image data and extracting and classifying a set of modality features associated with the possible lesions;
A plurality of modality decision engines, each of the modality decision engines calculating a modality evaluation of the possible lesion for a modality of the plurality of modalities from the set of modality features associated with the modality;
a comprehensive decision engine that combines the modality assessments and calculates an initial diagnostic assessment of the possible lesion from the modality assessments; and
a graphical user interface for displaying at least a portion of the medical image data and the initial diagnostic evaluation for user evaluation and modification.
26. The system of claim 25, wherein at least a portion of the medical image data contains temporal information in response to administration of a contrast enhancer to a patient, and the set of modal features includes at least one of a set of morphological features and a set of kinetic features of the temporal information.
27. The system of claim 25, wherein at least a portion of the medical image data contains spectral information obtained from MRS data acquisition and the set of modal features includes at least biochemical features of one or more marker chemicals.
28. The system of claim 27, wherein the biochemical characteristic includes at least a concentration profile of the one or more marker chemicals.
29. The system of any of claims 25-28, wherein in identifying the likely lesion, one of the plurality of image processing modules receives input from at least one of another image processing module of the plurality of image processing modules and another modality decision engine of the plurality of modality decision engines.
30. The system of claim 29, wherein the input is a reference for a set of lesion data points corresponding to the possible lesion.
HK09107041.5A 2005-11-23 2006-11-23 Method and system of computer-aided quantitative and qualitative analysis of medical images HK1129551A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US60/738,999 2005-11-23

Publications (1)

Publication Number Publication Date
HK1129551A true HK1129551A (en) 2009-12-04
