
WO2010063010A2 - System and method for texture visualization and image analysis to differentiate malignant and benign lesions - Google Patents


Info

Publication number
WO2010063010A2
Authority
WO
WIPO (PCT)
Prior art keywords
image
objects
interest
images
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/US2009/066022
Other languages
English (en)
Other versions
WO2010063010A3 (fr)
Inventor
Thomas Ramsay
Eugene Ramsay
Gerard Felteau
Oleksandr Andrushchenko
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guardian Technologies International Inc
Original Assignee
Guardian Technologies International Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guardian Technologies International Inc filed Critical Guardian Technologies International Inc
Publication of WO2010063010A2
Publication of WO2010063010A3
Anticipated expiration
Legal status: Ceased

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G06T 7/40 Analysis of texture
    • G06T 7/41 Analysis of texture based on statistical description of texture
    • G06T 7/44 Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10072 Tomographic images
    • G06T 2207/10116 X-ray image
    • G06T 2207/10132 Ultrasound image
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30068 Mammography; Breast
    • G06T 2207/30096 Tumor; Lesion

Definitions

  • This invention relates to image analysis and, more specifically, to a system and method for the analysis and visualization of normal and abnormal tissues, objects and structures in digital images generated by medical image sources.
  • Texture analysis is an often-used method for determining tissue characteristics in medical images.
  • The texture of a region describes the pattern of spatial variation of grey tones in a neighborhood that is small compared to the region. If the intensity values of an image are thought of as elevations, then texture is a measure of surface roughness.
  • A large body of literature exists for texture analysis of medical images such as ultrasound, magnetic resonance imaging (MRI), computed tomography (CT), fluorescence microscopy, light microscopy, and other digital images.
  • Texture analysis techniques can be classified into three groups: (1) statistical technologies, based on region histograms and their moments (measurement of features such as coarseness and contrast); (2) spectral technologies, based on the autocorrelation function or power spectrum of a region (detection of texture periodicity, orientation, etc.); and (3) structural technologies, based on pattern primitives (placement rules are used to describe the texture).
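
As a concrete illustration of the first (statistical) group, the sketch below computes first-order histogram statistics for a grayscale region with NumPy. This is a generic rendering of the technique, not code from the patent; the function name and feature set are assumptions.

    import numpy as np

    def first_order_texture_features(region):
        # First-order (histogram-based) texture statistics for an 8-bit
        # grayscale region; a generic sketch, not the patent's own code.
        x = region.astype(np.float64).ravel()
        mean, std = x.mean(), x.std()
        skew = np.mean(((x - mean) / (std + 1e-12)) ** 3)      # asymmetry
        kurt = np.mean(((x - mean) / (std + 1e-12)) ** 4) - 3  # peakedness
        counts, _ = np.histogram(x, bins=256, range=(0, 256))
        p = counts / counts.sum()
        p = p[p > 0]
        entropy = -np.sum(p * np.log2(p))                      # randomness
        return {"mean": mean, "std": std, "skewness": skew,
                "kurtosis": kurt, "entropy": entropy}
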
  • Figure 1 is a flow chart of a known image processing method, in which the Laws Method is used.
  • In step 10, an original digital image is input into a computing system for analysis.
  • In step 12, the contrast of the image is adjusted using the image's histogram, i.e., the tonal distribution in the digital image. This step is often used to expand the differences between two similar tonal values in the image.
  • In step 14, texture analysis using Laws' filters convolves the original scalar image with a small convolution mask that enhances image spots, edges, or high-frequency components; a sketch of this filtering appears after this list. The masks are frequently 3 x 3 pixels in size.
  • Mean smoothing is applied in order to reduce noise and/or to prepare images for segmentation.
  • Filters such as median, Gaussian, pyramidal, and cone filters are examples of such filtering.
  • Thresholding is used to separate foreground objects from their background. Luminance values below or above a certain value may be removed.
  • Morphology measurements may be made if the isolated regions of interest can be mapped against a template of known shapes.
  • Contour mapping is employed when definite edges can be determined and isolated that define the outside boundaries of areas of interest.
  • Segmented image masks isolate, and often remap, the pixels from the regions of interest that were isolated in step 20.
  • The pixels that form only the boundary of a region of interest are mapped so they can be used for feature extraction and classification.
  • The pixels of the original image are compared mathematically with the segmented regions of interest.
  • The output image comprises the pixels of the segmented regions of interest.
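
The Laws filtering of step 14 can be sketched as follows, using the standard 3-tap Laws kernels (level, edge, spot) combined into 3 x 3 masks by outer products. This is a conventional rendering of Laws' technique under stated assumptions, not the patent's implementation.

    import numpy as np
    from scipy.signal import convolve2d

    # Standard 3-tap Laws kernels: Level, Edge, Spot.
    L3 = np.array([1.0, 2.0, 1.0])
    E3 = np.array([-1.0, 0.0, 1.0])
    S3 = np.array([-1.0, 2.0, -1.0])

    def laws_energy_maps(image):
        # Convolve a grayscale image with the nine 3x3 Laws masks formed
        # by outer products, returning a texture-energy map per mask.
        img = image.astype(np.float64)
        img -= img.mean()  # remove mean illumination before filtering
        kernels = {"L3": L3, "E3": E3, "S3": S3}
        maps = {}
        for na, a in kernels.items():
            for nb, b in kernels.items():
                mask = np.outer(a, b)  # e.g. E3L3 enhances edge content
                resp = convolve2d(img, mask, mode="same", boundary="symm")
                maps[na + nb] = np.abs(resp)  # local texture energy
        return maps
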
  • An object of the invention is to solve at least the above problems and/or disadvantages and to provide at least the advantages described hereinafter.
  • An object of the present invention is to provide a system capable of detecting medical objects of interest in image data with a high degree of confidence and accuracy.
  • Another object of the present invention is to provide a system and method that does not directly rely on predetermined knowledge of an object's shape, volume, texture or density to be able to locate and identify a specific object or object type in an image.
  • Another object of the present invention is to provide a system and method of identifying objects of interest in medical image data that is effective at analyzing images in both two- and three-dimensional representational space using either pixels or voxels.
  • Another object of the present invention is to provide a system and method of distinguishing a class of known objects from objects of similar color and texture, whether or not they have been previously explicitly observed by the system.
  • Another object of the present invention is to provide a system and method of identifying objects of interest in medical image data that works with image object types that are very difficult to distinguish or classify, such as: (i) apparently random data; (ii) unstructured data; and (iii) different object types in original images.
  • Another object of the present invention is to provide a system and method of identifying objects of interest in medical image data that can cause either convergence or divergence (clusterization) of explicit or implicit image object characteristics that can be useful in creating discriminating features/characteristics.
  • Another object of the present invention is to provide a system and method of identifying objects of interest in medical image data that can preserve object self-similarity during transformations.
  • Another object of the present invention is to provide a system and method of identifying objects of interest in medical image data that is stable and repeatable in its behavior.
  • A method of identifying a malignant region in a medical image is provided, comprising: receiving a medical image; applying at least one non-linear transformation to the medical image to segment and differentiate regions of interest; and determining whether a region of interest represents a malignant region or a benign region.
  • FIG. 1 is a flowchart of a related art image processing method
  • Fig. 2 is a bifurcation diagram
  • Fig. 3A is an original x-ray image of a pipe containing explosives
  • Fig. 3B is the image of Fig. 3A after application of the image transformation divergence process of the present invention
  • FIG. 4 is a block diagram of a system for identifying an object of interest in image data, in accordance with one embodiment of the present invention.
  • Figs. 5A-6C are transfer functions applied to the pixel color of the image, in accordance with the present invention.
  • Fig. 7A is an input x-ray image of a suitcase, in accordance with the present invention
  • Fig. 7B is the x-ray image of Fig. 7A after application of the image transformation divergence process of the present invention
  • FIG. 8 is a block diagram of an image transformation divergence system and method, in accordance with one embodiment of the present invention.
  • Figs. 9A-9M are x-ray images of a suitcase at different stages in the image transformation recognition process of the present invention.
  • Fig. 9N is an example of a divergence transformation applied to an x-ray image during the image transformation divergence process of the present invention.
  • Fig. 10A is an original mammogram and an associated transform that outputs the original image unchanged;
  • Fig. 10B is a transform applied to the original image of Fig. 10A and the image output after application of the transform, in accordance with one embodiment of the present invention
  • Fig. 11 is a transform applied to the image output after the transform of Fig. 10B, and the resulting image, in accordance with one embodiment of the present invention
  • Fig. 12 is a transform applied to the image output after the transform of Fig. 11, and the resulting image, in accordance with one embodiment of the present invention
  • Figs. 13A and 13B are an original mammogram of a normal breast and the same mammogram after application of the image transformation divergence process of the present invention
  • Figs. 13C-13E are an original mammogram of a breast with a malignant growth and the same mammogram after application of the image transformation divergence process of the present invention.
  • FIGs. 14A-14C are an original mammogram of a dense breast with a benign growth and the same mammogram after application of the image transformation divergence process of the present invention
  • Figs. 14D-14F are an original mammogram of a dense breast with a malignant growth and the same mammogram after application of the image transformation divergence process of the present invention.
  • FIGs. 15A-15C are an original mammogram, taken with poor x-ray exposure, of a breast with a benign growth and the same mammogram after application of the image transformation divergence process of the present invention
  • Figs. 15D-15F are an original mammogram, taken with poor x-ray exposure, of a breast with a malignant growth and the same mammogram after application of the image transformation divergence process of the present invention.
  • Figs. 16A-16C are an original mammogram of a breast with a subtle benign growth and the same mammogram after application of the image transformation divergence process of the present invention
  • Figs. 16D-16F are an original mammogram of a breast with a subtle malignant growth and the same mammogram after application of the image transformation divergence process of the present invention
  • Fig. 17A is a mammogram of a breast with a malignant growth after application of the mammogram transformation divergence process of the present invention
  • FIG. 17B is a mammogram of a normal breast after application of the mammogram transformation divergence process of the present invention.
  • Fig. 18 is a transform applied to an ultrasound image, in accordance with one embodiment of the present invention.
  • FIGs. 19A-19C are an original ultrasound image of a breast with a benign growth and the same image after application of the image transformation divergence process of the present invention.
  • Figs. 19D-19F are an original ultrasound image of a breast with a malignant growth and the same image after application of the image transformation divergence process of the present invention.
  • FIGs. 20A-20C are an original ultrasound image of a breast with a benign growth and the same image after application of the image transformation divergence process of the present invention
  • Figs. 20D-20F are an original ultrasound image of a breast with a malignant growth and the same image after application of the image transformation divergence process of the present invention
  • Fig. 21 is a vector representation that reflects the vertical and non-vertical directions of growth found in cancerous lesions;
  • Fig. 22 shows mammograms with malignant and benign growths whose regions of interest have gone through the image transformation divergence process of the present invention, illustrating the number of unique colors present in the malignant and benign growths after the image transformation divergence process;
  • Fig. 23A shows images comparing growth of cancer in a culture in a petri dish, cancer in an image taken from a biopsy under a microscope, and cancer in a mammogram, all of which have undergone the image transformation divergence process of the present invention
  • Fig. 23B is a block diagram showing how ITD results from different imaging modalities can be combined and used to improve detection accuracy, in accordance with the present invention.
  • FIG. 24 is a flowchart of a method of creating a Support Vector Machine model, in accordance with one embodiment of the present invention.
  • FIG. 25 is a flowchart of a method of performing a Support Vector Machine operation, in accordance with one embodiment of the present invention.
  • Figs. 26A and 26B are x-ray images from a Smith Detection (Smith) x-ray baggage scanner and a Rapiscan x-ray baggage scanner, respectively;
  • Point operation is a mapping of a plurality of data from one space to another space which, for example, can be a point-to-point mapping from one coordinate system to a different coordinate system.
  • Such data can be represented, for example, by coordinates such as (x, y) and mapped to different coordinate values (x′, y′) of pixels in an image.
  • Z effective is the effective atomic number for a mixture/compound of elements. It is the atomic number of a hypothetical uniform material of a single element with an attenuation coefficient equal to the coefficient of the mixture/compound. Z effective can be a fractional number and depends not only on the content of the mixture/compound, but also on the energy spectrum of the x-rays.
  • Hyperspectral data is data that is obtained from a plurality of sensors at a plurality of wavelengths or energies.
  • A single pixel or hyperspectral datum can have hundreds or more values, one for each energy or wavelength.
  • Hyperspectral data can include one pixel, a plurality of pixels, or a segment of an image of pixels, etc., with said content.
  • Hyperspectral data can be treated in a manner analogous to the manner in which data resulting from a divergence transformation is treated throughout this application for systems and methods for threat or object recognition, identification, image normalization and all other processes and systems discussed herein.
  • A divergence transformation can be applied to hyperspectral data in order to extract information from the hyperspectral data that would not otherwise have been apparent.
  • Divergence transformations can be applied to a plurality of pixels at a single wavelength of hyperspectral data or multiple wavelengths of one or more pixels of hyperspectral data in order to observe information that would otherwise not have been apparent.
  • Nodal point: A point in an image transformation or series of image transformations where similar pixel values exhibit a significantly distinguishable change in value. Pixels are unitary values within a 2D or multi-dimensional space (in the multi-dimensional case, voxels).
  • Object: An object can be a person, place or thing.
  • An object of interest is a class or type of object such as explosives, guns, tumors, metals, knives, camouflage, etc.
  • An object of interest can also be a region with a particular type of rocks, vegetation, etc.
  • Threat: A threat is a type of object of interest that typically, but not necessarily, could be dangerous.
  • Image receiver: An image receiver can include a process, a processor, software, firmware and/or hardware that receives image data.
  • Image mapping unit: An image mapping unit can be a processor, a process, software, firmware and/or hardware that maps image data to predetermined coordinate systems or spaces.
  • Comparing unit: A comparing unit can be hardware, firmware, software, a process and/or processor that can compare data to determine whether there is a difference in the data.
  • Color space: A color space is a space in which data can be arranged or mapped.
  • One example is a space associated with red, green and blue (RGB). However, it can be associated with any number and types of colors or color representations in any number of dimensions.
  • HSI color space: A color space where data is arranged or mapped by Hue, Saturation and Intensity.
  • Predetermined color space: A predetermined color space is a space that is designed to represent data in a manner that is useful and that could, for example, cause information that may not have otherwise been apparent to present itself or become obtainable or more apparent.
  • RGB DNA: RGB DNA refers to a representation in a predetermined color space of most or all possible values of colors which can be produced from a given image source.
  • The values of colors again are not limited to visual colors, but are representations of values, energies, etc., that can be produced by the image system.
  • Signature: A signature is a representation of an object of interest or a feature of interest in a predetermined space and a predetermined color space. This applies to both hyperspectral data and/or image data.
  • Template: A template is part or all of an RGB DNA that corresponds to an image source, or to a feature or object of interest, for part or all of a mapping to a predetermined color space.
  • Modality: Any of the various types of equipment or probes used to acquire images. Radiography, CT, ultrasound and magnetic resonance imaging are examples of modalities in this context.
  • The analysis capabilities of the present invention can apply to a multiplicity of input devices created from different electromagnetic and sound emanating sources such as ultraviolet, visible light, infra-red, gamma particles, alpha particles, etc.
  • The present invention is based on and builds on the Signature Mapping and Image Transformation (IR) or, equivalently, Iterative Transformational Divergence (ITD) techniques described in U.S. Patent No. 7,496,218, U.S. Patent No. 7,492,937, and co-pending U.S. Patent Application No. 11/374,612, filed on March 14, 2006, all of which are incorporated by reference herein.
  • The ITD process can cause different yet almost identical objects in a single image to diverge in their measurable properties.
  • An aspect of the present invention is the discovery that objects in images, when subjected to special transformations, will exhibit radically different responses based on the pixel values of the imaged objects. Using the system and methods of the present invention, certain objects that appear almost indistinguishable from other objects to the eye or computer recognition systems, or are otherwise identical, generate radically different and significant differences that can be measured.
  • Another aspect of the present invention is the discovery that objects in images can be driven to a point of non-linearity by certain transformation functions.
  • The transformation functions can be applied singly or in a sequence, so that the behavior of the system progresses from one state through a series of changes to a point of rapid departure from stability called the "point of divergence."
  • FIG. 2 is an example of a bifurcation diagram illustrating iterative uses of divergence transforms, where each node represents an iteration or application of another divergence transform.
  • A single image is represented as a simple point on the left of the diagram.
  • There are several branches in the diagram (at lines A, B and C) as the line progresses from the original image representation on the left, indicating node points where bifurcation occurs ("points of bifurcation").
  • In this example, three divergence transforms were used in series at points A, B and C.
  • Each divergence transform results in a bifurcation of the image objects or data; a sketch of such a series of transforms follows.
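
The following sketch illustrates the idea with lookup-table point operations whose steep slopes act as nodal points: two nearly identical pixel values pass through a series of transforms and separate sharply. The sigmoid-style curves and parameter values are illustrative assumptions, not transforms disclosed in the patent.

    import numpy as np

    def steep_lut(center, gain):
        # A 256-entry transfer curve with a steep slope near `center`;
        # small input differences near the node become large output
        # differences (illustrative, not a patent-disclosed transform).
        x = np.arange(256, dtype=np.float64)
        return 255.0 / (1.0 + np.exp(-gain * (x - center)))

    def apply_series(values, luts):
        # Apply a series of point operations; each stage can act as a
        # bifurcation node for some range of pixel values.
        out = np.asarray(values, dtype=np.float64)
        for lut in luts:
            out = np.interp(out, np.arange(256), lut)
        return out

    pixels = np.array([127.0, 129.0])   # almost indistinguishable inputs
    series = [steep_lut(128, 0.8), steep_lut(90, 0.3), steep_lut(180, 0.3)]
    print(apply_series(pixels, series)) # the two values diverge sharply
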
  • Another aspect of the present invention is that one can apply the "principle of divergence" to the apparent stability of fixed points or pixels in an image and, by altering one or more parameter values, give rise to a set of new, distinct and clearly divergent image objects. Because each original object captured in an image responds uniquely at its point of divergence, the methods of the present invention can be used in an image recognition system to distinguish and measure objects. It is particularly useful in separating and identifying objects that have almost identical color, density and volume. The system and methods of the present invention provide at least the following advantages over prior image extraction methodologies:
  • The ITD process works with an apparently stable set of fixed points or pixels in an image and, by altering one or more parameter values, gives rise to a set of new, distinct, and clearly divergent image objects. Commonly used and understood transforms work within the domain where images maintain equilibrium.
  • The general ITD algorithmic approach provides for changes to be made in the transformations that are used, the sequence in which they are applied, and the number that are employed.
  • The ITD process generally incorporates one or more of the following steps as part of any image analysis and visualization application:
  • The segregation process is applied to the AOIs generated in the segmentation process, but may also be applied to any or all parts of the original image.
  • The ITD method segments the image into objects of interest, then applies different filter sequences to the same original pixels in the identified objects of interest using the process. In this way, the process is not limited to a linear sequence of filter processing.
  • An explosive inside a metal container can be located by first locating all containers, remapping the original pixel data with known coordinates in the image, and then examining the remapped original pixels in the identified object(s) in the image for threats with additional filter sequences.
  • Figure 3A shows an x-ray image of a pipe containing explosive Pyrodex. The Pyrodex in the image appears visually to be all white. After converting the grayscale image to an RGB color space and applying an ITD algorithm to the original image, the texture of the explosive material inside the pipe is now visible, as shown in Figure 3B, and has characteristics that can be effectively analyzed and classified by a processor.
  • The pixel values in the original image that have the highest value (are very bright) are isolated through segmentation (in this case through thresholding) as an area of interest (AOI), and then an ITD algorithm is applied to the AOI to differentiate the pixels inside of the AOI in the segregation process.
  • The combined color, texture, and pixel pattern signature in the processed image is unique for that combination of pipe and explosive material. It can be distinguished visually and mathematically from signatures of other materials in pipes.
  • The ITD methodologies of the present invention reveal signatures in radiographic image objects that were previously invisible to the human eye.
  • The application of specific non-linear functions to grey-scale or color radiographic images is the basis of ITD. Due to the Compton and photoelectric effects, objects in the image exhibit unique, invariant responses to the ITD algorithms based on their physical interactions with the electromagnetic beam.
  • By applying a combination of complementary functions in an iterative fashion, objects of very similar grey-scale or color content in the original image significantly diverge at a point of non-linearity. This divergence causes almost statistically equivalent objects in the original image to display significant density, color and pattern differences.
  • Different algorithms are used for distinguishing objects that exhibit different ranges of effective atomic numbers (Zeff). The algorithms are tuned to be optimal within certain fractional ranges of resultant electromagnetic Compton/photoelectric combinations.
  • The hypercube now contains spectral bands for each object that are the result of the object's response to each ITD iteration. This is quite similar to the creation of hyperspectral data that is collected by sensors from the reflectance of objects.
  • The hypercube data contains both spatial and spectral components that can be used for effective pattern classification rule generation; a sketch of assembling such a hypercube follows.
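
A minimal sketch of this idea, assuming each ITD iteration is a lookup-table point operation: the per-iteration responses are stacked as bands of an (H, W, n) hypercube, analogous to hyperspectral bands. The LUT representation is an assumption for illustration.

    import numpy as np

    def build_itd_hypercube(image, luts):
        # Stack the image's response to each ITD iteration as one
        # "spectral" band; `luts` is a list of 256-entry transfer
        # curves (an assumed representation of the iterations).
        bands = []
        current = image.astype(np.float64)
        for lut in luts:
            current = np.interp(current, np.arange(256), lut)
            bands.append(current.copy())   # one band per iteration
        return np.stack(bands, axis=-1)    # shape (H, W, n_iterations)
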
  • FIG. 4 is a block diagram of a system 100 for identifying an object of interest in image data, in accordance with one embodiment of the present invention.
  • The system 100 comprises an input channel 110 for inputting image data 120 from an image source (not shown) and an image analysis system 130.
  • The image analysis system 130 generates transformed image data utilizing ITD, in which the object of interest is distinguishable from other objects in the image data.
  • The object of interest can be any type of object.
  • The object of interest can be a medical object of interest, in which case the image data can be computed tomography (CT) image data, x-ray image data, or any other type of medical image data.
  • The object of interest can be a threat object, such as weapons, explosives, biological agents, etc., that may be hidden in luggage.
  • In this case, the image data is typically x-ray image data from luggage screening machines.
  • At least one divergence transformation, preferably a point operation, is preferably utilized in the image analysis system 130.
  • A point operation converts a single input image into a single output image. Each output pixel's value depends only on the value(s) of its corresponding pixel in the input image. Input pixel coordinates correlate to output pixel coordinates such that (x_i, y_i) → (x_o, y_o).
  • A point operation does not change the spatial relationships within an image. This is quite different from local operations, where the value of neighboring pixels determines the value of the output pixel.
  • Point operations can correlate both gray levels and individual color channels in images.
  • One example of a point operation is shown in the transfer function of Figure 5A.
  • In Fig. 5A, 8-bit (256 shades of gray) input levels are shown on the horizontal axis and output levels are shown on the vertical axis. If one were to apply the point operation of Fig. 5A to an input image, there would be a 1-to-1 correlation between the input and the output (transformed) image. Thus, input and output images would be the same.
  • Point operations are predictable in how they modify the histogram of an image. Point operations are typically used to optimize images by adjusting the contrast or brightness of an image. This process is known as contrast enhancing. They are typically used as a copying technique, except that the pixel values are modified according to the specified transfer function. Point operations are also typically used for photometric calibration, contrast enhancement, monitor display calibration, thresholding and clipping to limit the number of levels of gray in an image. The point operation is specified by the transformation function f and can be defined as B(x, y) = f[A(x, y)], where A is an input image and B is an output image.
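
A minimal sketch of a point operation implemented as a 256-entry lookup table, assuming an 8-bit grayscale image; the function names are illustrative.

    import numpy as np

    def point_operation(image, f):
        # B(x, y) = f[A(x, y)]: each output pixel depends only on its
        # corresponding input pixel, so spatial relationships are kept.
        lut = np.array([min(255, max(0, int(f(v)))) for v in range(256)],
                       dtype=np.uint8)
        return lut[image]   # vectorized lookup over a uint8 image

    img = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(np.uint8)
    same = point_operation(img, lambda v: v)        # identity, as in Fig. 5A
    neg = point_operation(img, lambda v: 255 - v)   # a simple linear point op
    assert (same == img).all()
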
  • The at least one divergence transformation used in the image analysis system 130 can comprise linear or non-linear point operations, or both.
  • Non-linear point operations are used for changing the brightness/contrast of a particular part of an image relative to the rest of the image. This can allow the midpoints of an image to be brightened or darkened while maintaining blacks and whites in the picture.
  • Figure 5B is a linear transfer function
  • Figures 5C-5E illustrate transformations of some non-linear point operations.
  • An aspect of the present invention is the discovery that the transfer function can be used to bring an image to a point where two initially close colors become radically different after the application of the transfer function. This typically requires a radical change in the output slope of the resultant transfer function, as in Figure 6A.
  • The present invention preferably utilizes radical luminance (grayscale), color channel, or combined luminance and color channel transfer functions to achieve image object differentiation for purposes of image analysis and pattern recognition of objects.
  • The placement of the nodal points in the transfer function(s) is one key parameter. An example of nodal point placements is shown in the transfer function illustrated in Figure 6B.
  • The nodal points in the transfer function used in the present invention are preferably placed so as to frequently create radical differences in color or luminance between image objects that otherwise are almost identical.
  • Figure 7A shows an input image
  • Figure 7B shows the changes made to the input image (the transformed image obtained) as a result of applying the transfer function of Fig. 6C.
  • The input image is an x-ray image of a suitcase taken by a luggage scanner.
  • The objects of interest are shoes 300 and a bar of explosives 310 on the left side of the suitcase.
  • Data points connecting the nodes can be calculated using several established methods.
  • A common method of mathematically calculating the data points between nodes is through the use of cubic splines, as sketched below.
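
For illustration, a sketch using SciPy's cubic-spline interpolation to turn a handful of nodal points into a full 256-entry transfer curve; the node values here are invented for demonstration and are not from the patent.

    import numpy as np
    from scipy.interpolate import CubicSpline

    # Hypothetical nodal points of a transfer function; the steep output
    # swings between nodes are what drive similar pixel values apart.
    nodes_in = np.array([0.0, 64.0, 128.0, 160.0, 255.0])
    nodes_out = np.array([0.0, 200.0, 30.0, 240.0, 255.0])

    spline = CubicSpline(nodes_in, nodes_out)
    lut = np.clip(spline(np.arange(256)), 0, 255)   # full transfer curve

    # Nearby inputs on the steep segment map far apart:
    print(lut[132], lut[150])
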
  • Additional imaging processes are preferably applied in the process of object recognition to accomplish specific tasks. Convolutions such as median and dilate algorithms cause neighboring pixels to behave in similar ways under the transfer function, and may be applied to assure the objects' integrity during the transformation process.
  • FIG. 8 is a block diagram of one preferred embodiment of the image analysis system 130 of Fig. 4, along with a flowchart of a method for identifying an object of interest in image data using the image analysis system 130.
  • The image analysis system 130 includes an image conditioner 2000 and a data analyzer 3000.
  • Figures 9A-9M are x-ray images of a suitcase at different stages in the image analysis process. These images are just one example of the types of images that can be analyzed with the present invention. Other types of images, e.g., medical images from x-ray machines or CT scanners, can also be analyzed with the system and methods of the present invention, as will be discussed in more detail below.
  • The method starts at step 400, where the image may optionally be normalized.
  • The normalization process preferably comprises the following processes: (1) referencing; (2) benchmarking; (3) conformity; and (4) correction. A sketch of the benchmarking and conformity steps follows this list.
  • The referencing process is used to get a reference image containing an object of interest for a given type of X-ray machine. This process consists of passing a container containing one or more objects of interest into a reference X-ray machine to get a reference image. The referencing process is preferably performed once for each X-ray machine model/type/manufacturer.
  • The benchmarking process is used to get a transfer function used to adjust the colors of the reference image taken by a given X-ray machine that is not the reference X-ray machine.
  • This process consists of passing a reference container into any given X-ray machine to get the image of this reference container, which is herein referred to as the "current image." Then, the current image obtained for this X-ray machine is compared with the reference image. The difference between the current image and the reference image is used to create a transfer function.
  • The benchmarking process determines the transfer function that maps all the colors of the current image color scheme (the "current color scheme") to the corresponding colors that are present in the reference color scheme of the reference image.
  • The transfer function applied to the current image transforms it into the reference image.
  • The conformity process is preferably used to correct the image color representation of any objects that pass through a given X-ray machine.
  • The conformity process corrects the machine's image color representation (color scheme) in such a way that the color scheme of a reference image will fit the reference color scheme of the reference container.
  • The conformity process preferably consists of applying the transfer function to each bag that passes into an X-ray machine to "normalize" the color output of the machine. This process is specific to every X-ray machine because of the machine's specific transfer function. Each time a container passes through the X-ray machine, the conformity process is preferably applied.
  • The correction process is preferably used to correct the images from the X-ray machine. It preferably minimizes image distortions and artifacts. X-ray machine manufacturers use detector topologies and algorithms that could have negative effects on the image geometry and colors. Geometric distortions, artifacts and color changes made by the manufacturer have negative impacts on images that are supposed to rigorously represent the physical aspects and nature of the objects that are passed through the machine.
  • The correction process is preferably the same for all X-ray machines of a given model/type/manufacturer.
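
One plausible concrete realization of the benchmarking and conformity steps (an assumption, since the patent does not specify the mapping) is per-channel histogram matching between the current and reference images of the same reference container.

    import numpy as np

    def benchmark_transfer_lut(current, reference):
        # Derive a 256-entry LUT mapping the current machine's tones to
        # the reference machine's tones by matching the cumulative
        # histograms of two images of the same reference container
        # (single channel, uint8). Histogram matching is an assumed
        # realization, not the patent's disclosed method.
        bins = np.arange(257)
        cur_cdf = np.cumsum(np.histogram(current, bins=bins)[0]) / current.size
        ref_cdf = np.cumsum(np.histogram(reference, bins=bins)[0]) / reference.size
        lut = np.interp(cur_cdf, ref_cdf, np.arange(256))
        return lut.astype(np.uint8)

    def conformity(image, lut):
        # Apply the stored transfer LUT to each scanned image.
        return lut[image]
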
  • In step 410, image processing is performed on the image.
  • Image processing techniques include, but are not limited to, ITD, spatial and spectral transformations, convolutions, histogram equalization and gamma adjustments, color replacement, band-pass filtering, image sharpening and blurring, region growing, hyperspectral image processing, color space conversion, etc.
  • ITD is used for the image processing step 410, and as such the image is segmented by applying a color-determining transform that specifically affects those objects that match certain color/density/effective atomic number characteristics.
  • Objects of interest are isolated and identified by their responses to the sequence of filters. Image segmentation is preferably performed using a series of sub-steps.
  • Figs. 9B-9H show the image after each segmentation sub-step. The resulting areas of green in Fig. 9G are analyzed to see if they meet a minimum size requirement. This removes the small green pixels. The remaining objects of interest are then re-mapped to a new white background, resulting in the image of Fig. 9H. Most of the background, organic substances, and metal objects are eliminated in this step, leaving the water bottle 500, fruit 510, peanut butter 520 and object of interest 530.
  • In step 420, features are extracted by the data analyzer 3000 by subjecting the original pixels of the areas of interest identified in step 410 to at least one feature extraction process. It is at this step that at least one divergence transformation is applied to the original pixels of the areas of interest identified in step 410.
  • Next, data conditioning is performed by the data analyzer 3000, in which the data is mathematically transformed to enhance its efficiency for the MLA to be applied at step 440.
  • Metadata is created, i.e., new metrics derived from the metrics created in the feature extraction step 420, such as the generation of hypercubes.
  • This metadata can consist of any feature that is derived from the initial features generated from the spatial domain. Metadata are frequently features of the spectral domain, Fourier space, RGB_DNA, and z-effective, among others.
  • Machine Learning Algorithms (MLAs) are capable of automatic pattern classification. Pattern classification techniques automatically determine extremely complex and reliable relationships between the image characteristics, also called features. These characteristics are used by the rules-base, which exploits the relationships to automatically detect objects in the images.
  • Machine learning algorithms are applied by the data analyzer 3000. The feature extraction process of step 420 is applied in order to represent the images with numbers. The MLAs applied at step 440 are responsible for generating the detection system that determines if an object of interest is present. In order to work properly, MLAs need structured data types, such as numbers and qualitative/categorical data, as inputs.
  • The feature extraction process is applied to transform the image or segments of an image into numbers. Each number is a metric that represents a characteristic of the image. Each image is associated with a collection of the metrics that represents it. The collection of the metrics related to an image is herein referred to as a vector. MLAs analyze the vectors of the metrics for all the images and find the metrics' relationships that make up a "rules-base."
  • The metrics created by the feature extraction process 420 to reflect the image content include, but are not limited to, mean, median, standard deviation, rotation cosine measures, kurtosis, skewness of colors, spectral histogram, co-occurrence measures, Gabor wavelet measures, unique color histograms, percent response, and arithmetic entropy measures; a sketch of assembling such a vector follows.
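
A minimal sketch of assembling such a metrics vector for one segmented RGB object; the particular metrics and thresholds are illustrative assumptions standing in for the full set named above.

    import numpy as np

    def metrics_vector(rgb_region):
        # Build one object's feature vector: per-channel means, a
        # unique-color count, and a yellow-to-red pixel ratio of the
        # kind used later for classification (thresholds are assumed).
        r = rgb_region[..., 0].astype(np.float64)
        g = rgb_region[..., 1].astype(np.float64)
        b = rgb_region[..., 2].astype(np.float64)
        n_unique = np.unique(rgb_region.reshape(-1, 3), axis=0).shape[0]
        red_px = np.sum((r > 150) & (g < 100) & (b < 100))
        yellow_px = np.sum((r > 150) & (g > 150) & (b < 100))
        ratio = yellow_px / (red_px + 1e-12)
        return np.array([r.mean(), g.mean(), b.mean(), n_unique, ratio])
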
  • The objects are classified by the data analyzer 3000 based upon the rules-base, which classifies images into objects of interest and objects not of interest according to the values of their metrics, which were extracted at step 420.
  • The object of interest 530 is measured in this process for its orange content.
  • The peanut butter jar 520 shows green as its primary value, and is therefore rejected.
  • The detected objects of interest 530 are thus distinguished from all other objects (non-detected objects 470). Steps 410-450 may be repeated as many times as desired on the non-detected objects 470 in an iterative fashion in order to improve the detection performance.
  • Determination of distinguishing features between objects of interest and other possible objects is done by the rules-base as a result of the analysis of the vectors of the metrics by the MLAs applied at step 440.
  • There are hundreds of different MLAs that can be used, including, but not limited to, decision trees, neural networks, support vector machines (SVMs) and regression.
  • The rules-base is therefore preferably encoded and accessed from an object-oriented scripting language, such as Threat Assessment Language (TAL).
  • A sample of TAL is shown below.
  • call show_msg("C4 Process 3a")
    call set_gray_threshold(255)
    call set_area_threshold(400)
    call color_replace_and(image_wrk,dont_care,dont_care,greater_than,0,0,45,255,255,255)
    call color_replace_and(image_wrk,less_than,dont_care,less_than,128,0,15,255,255,255)
    call apply_curve(image_wrk,purple_path)
    call color_replace_and(image_wrk,equals,equals,equals,65,65,65,255,255,255)
    call color_replace_and(image_wrk,equals,equals,equals,0
  • The rules defined above can now eliminate objects identified in process 1.
  • A second process that follows the logic rules will now create objects of new colors for the remaining objects of interest.
  • The vectors of metrics of the transformed objects of interest are examined. Multiple qualitative approaches may be used in the evaluation of the objects, such as prototype performance and figure of merit.
  • Metrics in the spatial domain, such as image amplitude (luminance, tri-stimulus value, spectral value) utilizing different degrees of freedom; the quantitative shape descriptors of a first-order histogram, such as standard deviation, mean, median, skewness, kurtosis, energy and entropy; % color for red, green, and blue; ratios between colors (total number of yellow pixels in the object/the total number of red pixels in the object); object symmetry; arithmetic encoders; and wavelet transforms, as well as other custom measurements, are some, but not all, of the possible measurements that can be used.
  • Additional metrics can be created by applying spectrally-based processes, such as Fourier transforms, to the previously modified objects of interest, or by analyzing eigenvalues produced by a Principal Components Analysis to reduce the dimension space of the vectors and remove outliers and non-representative data (metrics/images).
  • A color replacement technique is used to further emphasize tendencies of color changes. For example, objects that contain a value on the red channel > 100 can be remapped to a level of 255 red, so all bright red colors are made pure red. This is used to help identify metal objects that have varying densities. This can now help indicate the presence of a certain metal object regardless of its orientation in the image. It can also be correlated to geometric measurements using tools that determine boundaries and shapes. An example would be the correlation of the pixels with this red value with boundaries and centroid location. Other processes may additionally be used as well. A sketch of the remapping follows.
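
A minimal sketch of that red-channel remapping with NumPy; the threshold of 100 is the example value from the text, and the function name is illustrative.

    import numpy as np

    def replace_bright_red(rgb, threshold=100):
        # Remap every pixel whose red channel exceeds `threshold` to
        # pure red (255), collapsing varying-density metal responses
        # into one color for easier detection.
        out = rgb.copy()
        out[rgb[..., 0] > threshold, 0] = 255
        return out
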
  • The system and methods of the present invention are based on a methodology that is not restricted to a specific image type or imaging modality. It is capable of identifying and distinguishing a broad range of object types across a broad range of imaging applications. It works equally well in applications such as CT scans, MRI, PET scans, mammography, cancer cell detection, geographic information systems, and remote sensing. It can identify and distinguish metal objects as well.
  • The normalization process is applied, if necessary, by adjusting the black and white levels of the image.
  • The normalization process may be applied to either areas of interest or to the entire image. This allows the algorithm to perform consistently across images of varying exposure and density.
  • Fig. 10B provides a modified inverse to the values of the original image shown in Figure 10A.
  • The x values along the bottom are luminance input values, and the y axis represents luminance output values.
  • The gray-colored shadow is a histogram, or plot of the distribution, of the pixel values before any transform is applied.
  • The diagonal black line indicates that all input values of luminance for each of the red (R), green (G), and blue (B) channels on the x axis will have the identical value for output on the y axis. All values for that image will remain unchanged. It is equivalent to not applying any transform to the image.
  • The transform indicated in Figure 10B shows that the image will essentially be inverted.
  • A second non-linear transform, shown in Figure 11, remaps the resultant or output image from the previous step using the luminance, red and green channels.
  • The x values along the bottom are luminance input values, and the y axis represents luminance output values.
  • The input and output values can be for the combined luminance value, or values for each of the color channels of RGB.
  • The blue channel is modified only by the change in luminance.
  • The green and red channel pixel values are additionally modified by the changes indicated by the red and green curves.
  • The final transform, shown in Figure 12, remaps the values output from the transform of Figure 11 using the luminance and red channels.
  • The x values along the bottom are luminance input values, and the y axis represents luminance output values.
  • Figure 13A shows the original mammogram of a normal breast.
  • Figure 13B shows the result of processing the original image with the process described above.
  • Figure 13C is an original image that contains a cancerous lesion.
  • Figure 13D shows the result of processing the original image with the above process.
  • Figure 13E is a close up of the region of interest (ROI) where the tumor is located in the mammogram. A dark boundary can be observed in the close up image. This "signature" defines the characteristic boundary observed with cancerous lesions when processed with this algorithm.
  • Figures 14A-14F show the results of the process using mammography images of a dense breast.
  • Figure 14A shows the original mammogram of a breast with a benign growth, with a close up of the ROI shown in Figure 14B.
  • Figure 14C shows the result of processing the image with the process described above.
  • Figure 14D shows the original mammogram of the dense breast with a malignant growth, with a close-up of the ROI shown in Figure 14E.
  • Figure 14F shows the result of processing the image with the process described above.
  • Figures 15A-15F show the results of the process using mammography images of a breast taken with poor x-ray exposure.
  • Figures 15A-15C show a breast with a benign growth;
  • Figures 15D-15F show a breast with a malignant growth.
  • Figures 15C and 15F show the result of processing each image with the process described above.
  • Figure 15C shows that this benign mass also has a border; however, the signature of the interior of the mass is significantly different from those of cancerous lesions. The visualized differences can be easily observed and quantified for classification purposes.
  • Each of the segments of the mammogram images can be further analyzed using feature extraction to develop rules for classifying the objects. Consequently, the process provides both a visualization tool for human interpretation and optimal tissue differentiation of topological features for both human interpretation and computer-based analysis.
  • The response-based Signature Mapping/ITD process helps to define the extent of the lesions and marks the boundaries of their growth.
  • Structures within the lesion itself are also differentiated. This can be observed in Figures 15C and 15F.
  • Measurements such as co-occurrence energy, entropy and homogeneity, and relative statistics such as correlation and standard deviation, are then used in the process to mathematically classify the objects of interest (a sketch follows). For example, benign and cancerous lesions show significant differences in frequency information, boundaries, and color/grayscale gradients after being processed with Signature Mapping/ITD. After processing, masses in dense breast tissue that initially appeared to be uniform in texture can exhibit significantly different entropy, linearity, and homogeneity from the surrounding dense breast tissue or from benign masses in similarly dense breast tissue.
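
For illustration, a sketch of such co-occurrence measures using scikit-image's gray-level co-occurrence matrix utilities; the distances, angles, and the entropy formula are illustrative choices, not parameters from the patent.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def cooccurrence_features(region):
        # Co-occurrence (GLCM) measures for an 8-bit (uint8) region:
        # energy, homogeneity and correlation via graycoprops, plus an
        # entropy computed directly from the normalized matrix.
        glcm = graycomatrix(region, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        p = glcm[glcm > 0]
        return {
            "energy": float(graycoprops(glcm, "energy").mean()),
            "homogeneity": float(graycoprops(glcm, "homogeneity").mean()),
            "correlation": float(graycoprops(glcm, "correlation").mean()),
            "entropy": float(-np.sum(p * np.log2(p))),
        }
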
  • Figures 16A-16F show additional results of the process using mammography images with subtle signs of growths.
  • Figures 16A-16C show a subtle benign growth;
  • Figures 16D-16F show a subtle malignant growth.
  • Figures 16C and 16F show the result of processing each image with the process described above.
  • In Figs. 17A and 17B, the mammogram on the left indicates the presence of cancer.
  • Two white arrows in the processed image to its right show the distortions generated from compression of the normal breast tissue. This distortion is not visible in the original image on the left.
  • The two images in Figure 17B show an x-ray of a normal breast; there are no distortions indicated in the processed image on the far right.
  • The Signature Mapping/ITD process can be applied to all imaging modalities, such as sonograms from ultrasound imagers, and then the features from each of the modalities can be combined to obtain an even higher level of sensitivity for the detection of the disease.
  • This is illustrated in Figures 19A-19F and 20A-20F.
  • Figures 19A, 19D, 20A and 20D are the original images
  • Figures 19B, 19E, 20B and 20E are the regions of interest
  • Figures 19C, 19F, 20C and 20F are the processed images. Significant differences can be observed between the benign and malignant processed images, even though the differences between the original images are difficult to distinguish. Similar results to those described above for ultrasound images can be obtained by applying this methodology to breast MRIs.
  • Figure 21 is a vector representation that reflects the vertical and non-vertical directions of growth found in cancerous lesions. Benign lesions do not exhibit such growth patterns.
  • Patterns of cancerous lesions can be compared among growth of cancer in a culture in a petri dish (top row of images), patterns of cancer in an image taken from a biopsy under a microscope (two left images in the middle row), and the patterns of cancer in a mammogram (two right images in the middle row and bottom row of images).
  • The responses to the Signature Mapping/ITD algorithms indicate similar structures for the cancer.
  • A Multi-Modality Computer Aided Diagnosis (MM-CADx) system extracts features (e.g., texture, shape, size, etc.) from regions of interest (ROIs) and uses ROI classification modules to automatically generate diagnoses of tumors (benign vs. malignant) by integrating results from mammograms, sonograms, and MRI images for the same patient.
  • The MM-CADx system also includes a Lesion Feature Correlator and a Decision Fusion Center to generate correlated features and an optimal diagnosis from the different modalities, respectively, allowing radiologists to make more accurate and faster diagnoses.
  • A Feature Matching and Image Retrieval module of the MM-CADx system allows users to retrieve similar tumors from the reference library to examine the optimal features, and to review the diagnosis results to make a final diagnosis.
  • Original digital images from three imaging modalities, x-ray (mammograms), ultrasound and MRI, are processed by the Signature Mapping/ITD process using the methods that have been described above.
  • Images from each modality are processed with different Signature Mapping/ITD processes, where each process is optimized to differentiate tissues of interest based on the spatial and frequency characteristics for that modality.
  • Features are then extracted from the processed images/regions of interest from each modality and combined for analysis in the Lesion Feature Correlator 1000. While co-occurrence features might represent the highest probability of correctly classifying a lesion in a sonogram, color characteristics might be more relevant in a mammogram.
  • The relevant features from all modalities, and possibly their related probability density functions, are then combined and analyzed in the Decision Fusion Center 1100 using classifiers, such as Support Vector Machines, decision trees or neural networks.
  • Contextual imagery focuses not only on the segmented image, but on the entire image as well. Context often carries relevant and discriminative information that can determine whether an object of interest is present in the scene.
  • MLAs analyze the vectors of metrics taken from the images. The choice of metrics is important. Therefore, the feature extraction process preferably includes "data conditioning" to statistically improve the dataset analyzed by the MLA.
  • Image conditioning is preferably carried out as part of the data conditioning.
  • Image conditioning is one of the first steps performed by the image processing function. It initially consists of the removal of obvious or almost obvious objects that are not one of the objects of interest from the image.
  • By applying image processing functions to the image, some important observations can also be made. For example, some unobvious portions of the object of interest may be distinguished from other elements that are not part of the object of interest upon the application of certain types of image processing.
  • Image normalization is preferably the first process applied to the image. This consists of the removal of certain image characteristics, such as the artificial image enhancement (artifacts) that is sometimes applied by the system that created the image. Image normalization could also include removing image distortions created by the acquisition system, as well as removal of intentional and unintentional artifacts created by the software that constructed the image.
  • The SVM approach exhibits the following advantages: 1. It can be used with data that has a complicated structure, for which a simple separating hyperplane is not sufficient for classification purposes. A nonlinear separating surface between the classes can be drawn with the SVM technique.
  • 2. The separating surface is drawn by the SVM technique in an optimal way, maximizing the margin between the classes. In general, this provides a high probability that, with proper implementation, no other separating surface will provide better generalization performance within this framework.
  • 3. The SVM technique is robust to small perturbations and noise in data.
  • The SVM technique relies on the following stages: (1) mapping the initial feature vectors to a new feature space using a nonlinear transformation; and (2) constructing an optimal (maximum-margin) separating hyperplane in that new feature space.
  • FIG. 24 is a flowchart of a method of creating an SVM model, in accordance with one embodiment of the present invention.
  • the method starts at step 600, where a nonlinear transformation type and its parameters are chosen.
  • The transformation is performed by the use of specific "kernels", which are mathematical functions. Sigmoid, Gaussian or Polynomial kernels are preferably used; their standard definitions are sketched below.
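The three kernels have standard definitions; the parameter defaults below (degree, gamma, alpha and the constant terms) are tuning choices made for illustration, not values prescribed by the text.

    import numpy as np

    def polynomial_kernel(x, y, degree=3, c0=1.0):
        return (np.dot(x, y) + c0) ** degree

    def gaussian_kernel(x, y, gamma=0.5):
        return np.exp(-gamma * np.sum((x - y) ** 2))

    def sigmoid_kernel(x, y, alpha=0.01, c0=0.0):
        return np.tanh(alpha * np.dot(x, y) + c0)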
  • A quadratic programming optimization problem for the soft margin is then solved efficiently. This requires a proper choice of the optimization procedure parameters as well (see the sketch below).
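In practice, the kernel choice of step 600 and the soft-margin quadratic program can be exercised with an off-the-shelf solver; the sketch below uses scikit-learn's SVC, whose SMO-based solver handles the optimization internally. The parameter values and the X_conditioned and labels inputs are assumptions.

    from sklearn.svm import SVC

    # C sets the soft margin: a smaller C tolerates more training errors
    # in exchange for a wider margin; gamma parameterizes the Gaussian
    # kernel chosen in the previous step.
    model = SVC(kernel="rbf", C=10.0, gamma=0.5)
    model.fit(X_conditioned, labels)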
  • Figure 25 is a flowchart of a method of performing an SVM operation, in accordance with one embodiment of the present invention.
  • A feature generation technique is applied at step 700 to yield a vector of the generated features that is used for the analysis.
  • A specified kernel transformation is then applied to each pair formed by the analyzed vector and each support vector.
  • The resulting values are weighted according to their respective weight coefficients and summed together with the free term.
  • The result of this kernel computation is used to classify the image.
  • The image is classified as falling in a first class (e.g., a threat) if the final result is larger than or equal to zero, and is otherwise classified as belonging to a second class (e.g., non-threat); the sketch below writes this rule out.
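Written out directly, the rule above amounts to a weighted kernel sum thresholded at zero. In this sketch, weights stands for the per-support-vector coefficients and bias for the free term; the names, and the kernel callable, are illustrative.

    def svm_decide(x, support_vectors, weights, bias, kernel):
        # Kernel-transform the analyzed vector against every support
        # vector, weight the results, add the free term, and threshold
        # the final score at zero.
        score = sum(w * kernel(x, sv)
                    for w, sv in zip(weights, support_vectors)) + bias
        return 1 if score >= 0 else 2  # 1 = first class (threat), 2 = second class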
  • RGB-DNA is one of the image processing techniques that can be used in normalization step 400 and the image processing step 410 (Fig. 8).
  • RGB-DNA refers to a representation, in a predetermined color space, of most or all possible values of colors which can be produced from a given image source.
  • "Values of colors" is not limited to visual colors, but refers to representations of values, energies, etc., that can be produced by the imaging system. The use of RGB-DNA for image analysis will be described in detail in this section.
  • Equation (9) defines the integral density, and an associated ratio supplies the second coordinate; together they give the (P, C) coordinates used in the surfaces below.
  • The surface Z = Z(P, C) is a two-dimensional manifold in three-dimensional (P, C, Z) space, as shown in the plot of Figure 22.
  • The surface d = d(P, C) is a two-dimensional manifold in (P, C, d) space as well, as shown in the plots of Figures 23A and 23B, which are 2D and 3D views, respectively, of (P, C) space with Zeff(P, C) = const.
  • In R, G, B terms, the number of unique colors needed to maintain an acceptable visual quality of a dual-energy color image can be quite large, approaching at least the number of colors of a medium-class digital camera (roughly 1,500,000). Nevertheless, it was discovered that the number of unique colors in an average baggage color image is approximately 7,000 for a Smith HiScan 6040i baggage scanner and less than 100,000 for a Rapiscan 515 baggage scanner.
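Counting the unique colors of an image takes only a few lines; the file name below is a placeholder.

    import numpy as np
    from PIL import Image

    img = np.array(Image.open("baggage_scan.png").convert("RGB"))
    unique_colors = np.unique(img.reshape(-1, 3), axis=0)
    print(len(unique_colors))  # on the order of 7,000 for some scanner models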
  • An aspect of the present invention is the development of tools to visualize the set of unique colors as 3x2D projections to RG, GB and BR planes of the RGB cube, as shown in Figures 26A and 26B.
  • Figs. 26A and 26B are RGB-DNA 3x2D views for a Smith HiScan 6040i baggage scanner and a Rapiscan 515 baggage scanner, respectively.
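The three projections can be reproduced with a short matplotlib sketch, reusing the unique_colors array computed above.

    import matplotlib.pyplot as plt

    # One scatter per face of the RGB cube: RG, GB and BR.
    pairs = [(0, 1, "R", "G"), (1, 2, "G", "B"), (2, 0, "B", "R")]
    fig, axes = plt.subplots(1, 3, figsize=(12, 4))
    for ax, (i, j, xname, yname) in zip(axes, pairs):
        ax.scatter(unique_colors[:, i], unique_colors[:, j], s=1)
        ax.set_xlabel(xname)
        ax.set_ylabel(yname)
    plt.show()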
  • The phrase "RGB-DNA" was assigned to the discovered color schemes, where the term "DNA" was used because all images, at least those from scanners of a particular model, will inherit this unique set of RGB colors.
  • These tools can also be applied to medical imagers, such as medical x-ray imagers, ultrasound imagers and MRI imagers.
  • The image analysis system 130 can be implemented with a general purpose computer. However, it can also be implemented with a special purpose computer, a programmed microprocessor or microcontroller and peripheral integrated circuit elements, ASICs or other integrated circuits, hardwired electronic or logic circuits such as discrete element circuits, or programmable logic devices such as an FPGA, PLD, PLA, PAL or the like. In general, any device on which resides a finite state machine capable of executing code implementing the process steps of Fig. 7 can be used to implement the image analysis system 130.
  • Input channel 110 may be, include or interface to any one or more of, for instance, the Internet, an intranet, a PAN (Personal Area Network), a LAN (Local Area Network), a WAN (Wide Area Network) or a MAN (Metropolitan Area Network), a storage area network (SAN), a frame relay connection, an Advanced Intelligent Network (AIN) connection, a synchronous optical network (SONET) connection, a digital T1, T3, E1 or E3 line, a Digital Data Service (DDS) connection, a DSL (Digital Subscriber Line) connection, an Ethernet connection, an ISDN (Integrated Services Digital Network) line, a dial-up port such as a V.90 or V.34bis analog modem connection, a cable modem, an ATM (Asynchronous Transfer Mode) connection, or an FDDI (Fiber Distributed Data Interface) or CDDI (Copper Distributed Data Interface) connection.
  • Input channel 110 may furthermore be, include or interface to any one or more of a WAP (Wireless Application Protocol) link, a GPRS (General Packet Radio Service) link, a GSM (Global System for Mobile Communication) link, a CDMA (Code Division Multiple Access) or TDMA (Time Division Multiple Access) link such as a cellular phone channel, a GPS (Global Positioning System) link, a CDPD (Cellular Digital Packet Data) link, a RIM (Research in Motion, Limited) duplex paging type device, a Bluetooth radio link, or an IEEE 802.11-based radio frequency link.
  • Input channel 110 may yet further be, include or interface to any one or more of an RS-232 serial connection, an IEEE-1394 (Firewire) connection, a Fiber Channel connection, an IrDA (infrared) port, a SCSI (Small Computer Systems Interface) connection, a USB (Universal Serial Bus) connection or other wired or wireless, digital or analog interface or connection.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention concerns a system and method for analyzing and visualizing normal and abnormal tissues, objects and structures in digital images generated by medical image sources. The present invention uses the principles of iterative transformational divergence, in which objects in images will, when subjected to particular transformations, exhibit radically different responses according to the physical, chemical or numerical properties of the object or of its representation (such as images), combined with machine learning capabilities. With the system and methods of the present invention, certain objects, such as cancerous growths, which appear indistinguishable from other objects to the naked eye or to computerized recognition systems, or which are otherwise nearly identical, produce radically different and statistically significant differences in the image description systems (metrics) that can be readily measured.
PCT/US2009/066022 2008-11-26 2009-11-27 Système et procédé de visualisation de texture et d'analyse d'image pour différencier des lésions malignes et des lésions bénignes Ceased WO2010063010A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11802708P 2008-11-26 2008-11-26
US61/118,027 2008-11-26

Publications (2)

Publication Number Publication Date
WO2010063010A2 true WO2010063010A2 (fr) 2010-06-03
WO2010063010A3 WO2010063010A3 (fr) 2010-07-22

Family

ID=42226386

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2009/066022 Ceased WO2010063010A2 (fr) 2008-11-26 2009-11-27 Système et procédé de visualisation de texture et d'analyse d'image pour différencier des lésions malignes et des lésions bénignes

Country Status (1)

Country Link
WO (1) WO2010063010A2 (fr)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104105441A (zh) * 2012-02-13 2014-10-15 株式会社日立制作所 区域提取处理系统
WO2018073784A1 (fr) * 2016-10-20 2018-04-26 Optina Diagnostics, Inc. Procédé et système de détection d'une anomalie dans un tissu biologique
WO2019094857A1 (fr) * 2017-11-13 2019-05-16 The Trustees Of Columbia Univeristy In The City Of New York Système, procédé et support accessible par ordinateur pour déterminer un risque de cancer du sein
RU2734575C1 (ru) * 2020-04-17 2020-10-20 Общество с ограниченной ответственностью "АЙРИМ" (ООО "АЙРИМ") Способ и система идентификации новообразований на рентгеновских изображениях
CN112734723A (zh) * 2021-01-08 2021-04-30 温州医科大学 一种面向多源数据的乳腺肿瘤图像分类预测方法及装置
GB2591177A (en) * 2019-11-21 2021-07-21 Hsiao Ching Nien Method and apparatus of intelligent analysis for liver tumour

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070014452A1 (en) * 2003-12-01 2007-01-18 Mitta Suresh Method and system for image processing and assessment of a state of a heart
US20090324067A1 (en) * 2005-03-15 2009-12-31 Ramsay Thomas E System and method for identifying signatures for features of interest using predetermined color spaces

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104105441A (zh) * 2012-02-13 2014-10-15 株式会社日立制作所 区域提取处理系统
WO2018073784A1 (fr) * 2016-10-20 2018-04-26 Optina Diagnostics, Inc. Procédé et système de détection d'une anomalie dans un tissu biologique
US10964036B2 (en) 2016-10-20 2021-03-30 Optina Diagnostics, Inc. Method and system for detecting an anomaly within a biological tissue
US11769264B2 (en) 2016-10-20 2023-09-26 Optina Diagnostics Inc. Method and system for imaging a biological tissue
WO2019094857A1 (fr) * 2017-11-13 2019-05-16 The Trustees Of Columbia Univeristy In The City Of New York Système, procédé et support accessible par ordinateur pour déterminer un risque de cancer du sein
GB2591177A (en) * 2019-11-21 2021-07-21 Hsiao Ching Nien Method and apparatus of intelligent analysis for liver tumour
RU2734575C1 (ru) * 2020-04-17 2020-10-20 Общество с ограниченной ответственностью "АЙРИМ" (ООО "АЙРИМ") Способ и система идентификации новообразований на рентгеновских изображениях
CN112734723A (zh) * 2021-01-08 2021-04-30 温州医科大学 一种面向多源数据的乳腺肿瘤图像分类预测方法及装置
CN112734723B (zh) * 2021-01-08 2023-06-30 温州医科大学 一种面向多源数据的乳腺肿瘤图像分类预测方法及装置

Also Published As

Publication number Publication date
WO2010063010A3 (fr) 2010-07-22

Similar Documents

Publication Publication Date Title
US20100266179A1 (en) System and method for texture visualization and image analysis to differentiate between malignant and benign lesions
US7817833B2 (en) System and method for identifying feature of interest in hyperspectral data
US8045805B2 (en) Method for determining whether a feature of interest or an anomaly is present in an image
US7492937B2 (en) System and method for identifying objects of interest in image data
US7907762B2 (en) Method of creating a divergence transform for identifying a feature of interest in hyperspectral data
Mughal et al. A novel classification scheme to decline the mortality rate among women due to breast tumor
US10192099B2 (en) Systems and methods for automated screening and prognosis of cancer from whole-slide biopsy images
Tabesh et al. Multifeature prostate cancer diagnosis and Gleason grading of histological images
US6996549B2 (en) Computer-aided image analysis
EP1356421B1 (fr) Analyse d'images assistee par ordinateur
WO2008157843A1 (fr) Système et procédé pour la détection, la caractérisation, la visualisation et la classification d'objets dans des données d'image
US7840048B2 (en) System and method for determining whether there is an anomaly in data
CN102388305A (zh) 基于图像的风险分数-来自数字组织病理学的存活率和结果的预后预估
Biswas et al. Mammogram classification using gray-level co-occurrence matrix for diagnosis of breast cancer
US20060269135A1 (en) System and method for identifying objects of interest in image data
WO2010063010A2 (fr) Système et procédé de visualisation de texture et d'analyse d'image pour différencier des lésions malignes et des lésions bénignes
Nagarajan et al. Feature extraction based on empirical mode decomposition for automatic mass classification of mammogram images
Beheshti et al. Classification of abnormalities in mammograms by new asymmetric fractal features
Alam et al. A novel automated system to detect breast cancer from ultrasound images using deep fused features with super resolution
US20060269161A1 (en) Method of creating a divergence transform for a class of objects
Arymurthy Association technique based on classification for classifying microcalcification and mass in mammogram
Catarious Jr et al. A mammographic mass CAD system incorporating features from shape, fractal, and channelized Hotelling observer measurements: preliminary results
Saranyaraj et al. Region of Interest and Feature-based Analysis to Detect Breast Cancer from a Mammogram Image
Chaiyakhan et al. Feature selection techniques for breast cancer image classification with support vector machine
Sample Computer assisted screening of digital mammogram images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09829831

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09829831

Country of ref document: EP

Kind code of ref document: A2