
GB2416944A - Classifying voxels in a medical image - Google Patents

Classifying voxels in a medical image

Info

Publication number
GB2416944A
Authority
GB
United Kingdom
Prior art keywords
voxels
interest
data set
tissue type
medical image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
GB0417106A
Other versions
GB0417106D0 (en)
Inventor
Ian Poole
Andrew John Bissell
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Voxar Ltd
Original Assignee
Voxar Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Voxar Ltd
Priority to GB0417106A
Publication of GB0417106D0
Publication of GB2416944A
Legal status: Withdrawn

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30004 Biomedical image processing

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A computer automated method that applies supervised pattern recognition to classify whether voxels in a medical image data set correspond to a tissue type of interest is described. The method comprises a user identifying examples of voxels which correspond to the tissue type of interest and examples of voxels which do not. Characterizing parameters, such as voxel value, local averages and local standard deviations of voxel value are then computed for the identified example voxels. From these characterizing parameters, one or more distinguishing parameters are identified. The distinguishing parameters are those parameters having values which depend on whether or not the voxel with which they are associated corresponds to the tissue type of interest. The distinguishing parameters are then computed for other voxels in the medical image data set, and these voxels are classified on the basis of the value of their distinguishing parameters. The approach allows tissue types which differ only slightly to be distinguished according to a user's wishes.

Description

TITLE OF THE INVENTION
IMAGE PROCESSING
BACKGROUND OF THE INVENTION
The invention relates to methods of numerically processing a medical image data set.
A general desire when processing medical image data is to identify voxels corresponding to related tissue types. A common situation in which a user would like to do this is when displaying an image. In such cases it is known to associate particular signal values with particular colors and opacities (which are two specific examples of visualization parameters) to assist visualization. In this way, voxels which have similar signal values are displayed with similar colors and opacities. The color/opacity mapping is done when using data from a 3D data set (voxel data set) to compute a 2D data set (pixel data set) representing a 2D projection of the voxel data set for display on a computer screen or other conventional 2D display apparatus. This process is known as rendering. In some cases, groups of voxels having signal values falling within a particular range may be rendered as transparent. This is known as sculpting.
In addition to identifying groups of associated voxels to assist in displaying medical image data, a user may also wish to identify voxels belonging to a particular tissue type of interest for reasons other than setting visualization parameters, for example to determine the volume or some other attribute of the tissue type of interest without necessarily displaying the data.
When displaying data such as in medical imaging, the signal values comprising the data set do not usually correspond to what would normally be regarded as visual properties, such as color or intensity, but instead correspond to detected signal values from the measuring system used, such as computer-assisted tomography (CT) scanners, magnetic resonance (MR) scanners, ultrasound scanners and positron emission tomography (PET) systems. As an example, signal values from CT scanning will represent tissue opacity, i.e. X-ray attenuation. In order to improve the ease of interpretation of such images it is known to map different colors and opacities to different ranges of display value such that particular features, e.g. bone (which will generally have a relatively high opacity) can be more clearly distinguished from soft tissue (which will generally have a relatively low opacity).
When displaying a 2D projection of a 3D data set, in addition to attributing distinct ranges of color to voxels having particular signal value ranges, voxels within the 3D data set may also be selected for removal from the projected 2D image to reveal other more interesting features. The choice of which voxels are to be removed, or sculpted, from the projected image can also be based on the signal value associated with particular voxels. For example, those voxels having signal values which correspond to soft tissue can be sculpted, i.e. not rendered and therefore "invisible", thereby revealing those voxels having signal values corresponding to bone which would otherwise be visually obscured by the soft tissue.
The determination of the most appropriate color table (known in the art as a preset) to apply to an image derived from a particular 3D data set is not trivial and is dependent on many features of the 3D data set. For example, the details of a suitable color table will depend on the subject, what type of data is being represented, whether (and if so, how) the data are calibrated and what particular features of the 3D data set the user might wish to highlight, which will depend on the clinical application. It can therefore be a difficult and laborious task to produce a displayed image that is clinically useful. Furthermore, there is inevitably an element of user-subjectivity in manually defining a color table and this can create difficulties in comparing and interpreting images created by different users, or even supposedly similar images created by a single user. In addition, the user will generally base the choice of overall color table on a specific 2D projection of the 3D data set rather than on characteristics of the overall 3D data set. A color table chosen for application to one particular projected image will not necessarily be appropriate to another projection of the same 3D data set. A color table which is objectively based on characteristics of the 3D data set rather than a single projection would be preferred.
Accordingly, there is a need in the art for a method of identifying voxels in medical image data which are associated with related tissue types. Such a method would assist both in appropriately setting visualization presets for displaying medical image data and also with segmenting the image data for further analysis.
SUMMARY OF THE INVENTION
According to one aspect of the invention there is provided a method of numerically processing a medical image data set comprising voxels, the method comprising: receiving user input to positively and negatively select voxels that are and are not of a tissue type of interest; determining a distinguishing function that discriminates between the positively and negatively selected voxels on the basis of one or more characterizing parameters of the voxels; and classifying further voxels in the medical image data set on the basis of the distinguishing function. This method thus applies supervised pattern recognition to classify the voxels.
By receiving input in response to a user specifying both positive examples of voxels (i.e. those which do correspond to the tissue type of interest) and negative examples of voxels (i.e. those which do not correspond to the tissue type of interest), the method is able to objectively classify further voxels in the data set. Because of this, the method provides an easy and intuitive technique for allowing users to select regions of interest for further examination or removal from the data set.
The method may include presenting a representative (2D) image derived from the (3D) medical image data set to a user, such as a sagittal, coronal or transverse section view, whereby the user selects voxels by positioning a pointer at appropriate locations in the example image. An example voxel may then be taken to be a voxel whose coordinates in the medical image data set map to the location of the pointer in the example image. Alternatively, a single positioning of the pointer may select a number of example voxels, for example those in a region surrounding the voxel whose coordinates in the data set map to the location of the pointer in the example image. Selecting multiple voxels with a single positioning of the cursor allows for a more statistically significant sample of example voxels to be provided with little additional user input.
At least one of the one or more characterizing parameters of a voxel may be a function of surrounding voxels. For example, a local average, a local standard deviation, gradient magnitude, Laplacian, minimum value, maximum value or any other parameterization may be used. This allows voxels to be classified on the basis of characteristics of their surroundings, rather than simply on the basis of their voxel value. This means that similar tissue types can be classified more accurately than with conventional classification methods based on voxel value alone. This is because subtle differences in "texture" in the vicinity of a voxel can help to distinguish it from other voxels having otherwise similar voxel values. It is also noted that for some modalities such as MR there may be multiple voxel values, such as T1 and T2 in multi-spectral MR, which could each be used to define a separate characterizing parameter. These could be used in combination to set the distinguishing function.
Moreover, the user input may additionally include clinical information, such as specification of tissue type or anatomical feature, regarding either the positively or negatively selected voxels, or both. Following this user input, the distinguishing function can then be determined from the characterizing parameters having regard to the clinical information input by the user.
Once the voxels have been classified, an image of the data set may be rendered which takes account of the classification of voxels. The rendered image may then be displayed to the user. For example, the positively selected voxels may be tinted with a color in a monochrome gray scale rendering.
In some examples, a binary classification may be used whereby voxels are classified as either corresponding to the tissue type of interest or not corresponding to the tissue type of interest. In these cases, voxels classified as not corresponding to the tissue type of interest may be rendered as transparent or semi-transparent in a displayed image. The general practice of rendering features that are not of interest as semi-transparent is sometimes referred to as "dimming" in the art. Alternatively, voxels which are classified as corresponding to the tissue type of interest may be rendered as transparent, or voxels classified as corresponding to the tissue type of interest may be rendered in one range of displayable colors and voxels classified as not corresponding to the tissue type of interest may be rendered in another range of displayable colors.
An image may also be generated by rendering a volume data set representing the values of the distinguishing function for the voxels.
In other examples, rather than using a binary classification, voxels may be classified according to a calculated probability that they correspond to the tissue type of interest. In these cases, an image may be generated by rendering of a volume data set representing the probability that the voxels correspond to the tissue type of interest, rather than rendering based on voxel values themselves. For example, the probability can be mapped onto opacity of the rendered material instead of taking a threshold. Another approach would be to render as transparent any voxels having a probability of corresponding to the tissue type of interest of less than a certain value.
Where the classification provides an estimated probability for each voxel, the probabilities per voxel may themselves be considered as voxel values in a medical image data set which may be re-classified in a subsequent iteration of the method. This implements a form of relaxation labeling.
The invention further provides a computer program product bearing computer readable instructions for performing the method of the invention.
The invention also provides a computer apparatus loaded with computer readable instructions for performing the method of the invention.
According to another aspect of the invention there is provided an apparatus for numerically processing a medical image data set comprising voxels, the apparatus comprising: storage from which a medical image data set may be retrieved; a user input device configured to receive user input to positively and negatively select voxels that are and are not of a tissue type of interest; and a processor configured to determine a distinguishing function that discriminates between the positively and negatively selected voxels on the basis of one or more characterizing parameters of the voxels; and to classify further voxels in the medical image data set on the basis of the distinguishing function.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of the invention and to show how the same may be carried into effect reference is now made by way of example to the accompanying drawings in which:
Figure 1 shows a generic computer tomography scanner for generating a 3D data set;
Figure 2 shows a computer system for storing, processing and displaying medical image data;
Figure 3 shows a histogram of data values within a volume of interest (VOI) within a 3D data set;
Figure 4 schematically shows an example display of an image and associated section views which a user may employ to identify a tissue type of interest;
Figure 5 is a flow chart schematically showing a method for classifying whether voxels in a volume data set belong to a tissue type of interest according to an embodiment of the invention; and
Figures 6A-6D schematically show the distribution of a number of different characterizing parameters computed for example voxels identified by a user as belonging to different tissue types.
DETAILED DESCRIPTION
Figure 1 is a schematic perspective view of a generic CT scanner 2 for obtaining a 3D scan of a region of a patient 4. An anatomical feature of interest (in this case a head) is placed within a circular opening 6 of the CT scanner 2 and a series of X-ray exposures is taken. Raw image data is derived from the CT scanner and could comprise a collection of one hundred 2D 512*512 data subsets, for example.
These data subsets, each representing an X-ray image of the region of the patient being studied, are subject to image processing in accordance with known techniques to produce a 3D representation of the feature imaged such that various user-selected 2D projections of the 3D representation can be displayed (typically on a computer monitor). The techniques for generating such 3D representations of structures from collections of 2D data subsets are known and will not be described further herein.
Figure 2 schematically illustrates a general purpose computer 132 of the type that may be used to perform processing in accordance with the invention. The computer 132 includes a central processing unit 134, a read only memory 136, a random access memory 138, a hard disk drive 140, a display driver 142 and display 144 and a user input/output circuit 146 with a keyboard 148 and mouse 150 all connected via a common bus 152. The central processing unit 134 may execute program instructions stored within the ROM 136, the RAM 138 or the hard disk drive to carry out processing of signal values that may be stored within the RAM 138 or the hard disk drive 140. The program may be written in a wide variety of different programming languages. The computer program itself may be stored and distributed on a recording medium, such as a compact disc, or may be downloaded over a network link (not illustrated). The general purpose computer 132 when operating under control of an appropriate computer program effectively forms an apparatus for processing image data in accordance with embodiments of the invention. The general purpose computer 132 also performs methods according to embodiments of the invention and operates using a computer program product having appropriate code portions (logic) for controlling the processing.
Figure 3 is a histogram which schematically shows the binned frequency distribution F of an example set of voxels within a selected volume of interest (VOI) in a 3D data set as a function of signal value D. The signal values D may be in arbitrary units (typical for MR) or calibrated units (such as Hounsfield units (HU) that are used for CT and other types of X-ray imaging). The histogram shown in Figure 3 represents the voxels within a VOI from which a 2D projected image can be derived and the signal values represent X-ray attenuation calibrated in HUs.
The signal values D of the voxels within the selected VOI are distributed between a minimum value S and a maximum value E. Within this overall range of signal values, four distinct voxel value sub-ranges are evident. A first voxel value sub-range I has a narrow peak at relatively high signal values, a second voxel value sub-range II has a relatively broad peak, a third voxel value sub-range III has a shoulder on the lower signal value side of the second voxel value sub-range II and a fourth voxel value sub-range IV has a shoulder on the lower signal value side of the third voxel value sub-range III.
The different voxel value sub-ranges I-IV identified in the histogram are likely to relate to different tissue types in the 3D data set and so would benefit from being displayed with different color ranges. For example, one might reasonably infer that sub-range I of high X-ray attenuation corresponds to bone, sub-range II corresponds to blood, sub-range III corresponds to soft tissue and sub-range IV represents the background tissue type or air.
If distinct voxel value sub-ranges in the histogram can be identified by a numerical analysis, derived images can be shown with different tissue types clearly and consistently displayed without the need for user-driven post-display processing.
US 6,658,080 [1] describes examples of histogram-based numerical analysis schemes for identifying different tissue types in 3D image data.
The image data could take a variety of forms, but the invention is particularly well suited to embodiments in which the image data comprises a collection of 2D images resulting from CT scanning, MRI scanning, ultrasound scanning or PET that are combined to synthesize a 3D object using known techniques. The aided visualization of distinct features within such images can be of significant benefit in the interpretation of those images when they are subsequently projected into 2D representations along arbitrarily selected directions that allow a user to view the synthesized 3D object from any particular angle they choose.
In US 6,658,080, visualization parameter boundaries are determined automatically. However, some level of user input can assist in determining the most appropriate conditions for displaying an image. This is because once an automatic preset has been determined it may be desirable to make an assumption regarding what aspects of the data a user is interested in seeing in a displayed image. For example, in the histogram of CT data shown in Figure 3, four tissue types are identified. As previously noted, it might reasonably be inferred that sub-range I of high X-ray attenuation corresponds to bone, sub-range II corresponds to blood, sub-range III corresponds to soft tissue and sub-range IV represents the background tissue type or air. Once an automatic preset has been determined which identifies the four sub-ranges, an assumption might be made that the user does not wish to view the data corresponding to sub-range IV (background tissue type and air) and so voxels corresponding to this region will be rendered transparent. The displayed image will then show the bone, blood and soft tissue. However, in some situations a user may be interested in viewing bone and blood only with soft tissue rendered transparent. In other situations, the user might wish to view the background tissue and so it should not be rendered transparent. To address this, for example by way of an extension to the methods described in US 6,658,080, the user may be invited to identify in a displayed 2D image one or more examples of areas which are of interest and should be rendered visible, and one or more examples of areas which are not of interest and which should be rendered transparent. The user might identify such example areas by moving a cursor to appropriate parts of a displayed image and selecting the examples by "clicking" with a mouse, for example. Once the example areas have been identified, it is possible to determine which sub-ranges they fall within and so set appropriate display conditions for these sub-ranges (e.g. transparent or not transparent).
In addition to employing user supplied examples of tissue types which are and which are not of interest to assist in displaying images in conjunction with the automatic preset determination scheme described in US 6,658,080, such techniques can also be applied more generally to classify different tissue types in medical image volume data.
The technique can be particularly useful where different tissue types appear very similar in the data, for example because they have similar X-ray stopping powers for CT data. In the histogram shown in Figure 3, some of the sub-ranges may contain two subtly different tissue types, for example, sub-range I may include distinct regions of bone having subtly different densities from each other. Another example is identification of tumors in organs such as the liver or brain. It can be difficult to properly classify voxels in the volume data which correspond with these different tissue types due to the similarity in the signal values associated with them.
Figure 4 shows an example screen shot of a display 101 of a 2-D image generated from a volume (i.e. 3-D) data set. A main image 100 displays a 2-D image rendered from the volume data. The main image 100 shown in the figure includes a partial wire-frame cuboid to assist a user in interpreting the orientation of the image with respect to the original volume data, and some basic textual information, such as the date and time. The display 101 also contains a sagittal section view 102, a coronal section view 104, and a transverse section view 106 of the volume data to assist in diagnostic interpretation. A number of different tissue types, for example corresponding to bone and brain, are seen in the image. The top portion of the skull has been sculpted away (i.e. rendered transparent) so that the underlying brain can be seen.
A user viewing the display shown in Figure 4 may wish to sculpt away further material so that a particular tissue type of interest within the brain can be viewed. For example, the tissue type of interest might correspond to a feature the user has observed in one of the section views 102, 104, 106 displayed on the left of the display and wishes to examine further. In some cases, it can be difficult for a segmentation algorithm to properly separate voxels in the volume data which correspond to a region of interest (and so should be displayed) from other voxels which do not (and so should not be displayed, i.e. rendered transparent). If there are significant differences in the voxel values associated with voxels corresponding to different types of tissue, for example as seen for bone and soft tissue in a CT scan, it can be relatively easy to classify the voxels. However, in cases where there are more subtle differences between a tissue type of interest and surrounding tissue, segmentation algorithms can often fail to properly classify voxels corresponding to the different tissue types. If segmentation is performed on the basis of voxel values expected for voxels corresponding to the tissue type of interest, a carefully selected window of values needs to be defined. Voxels having values falling within the window are considered to correspond to the tissue type of interest; voxels having voxel values falling outside of the window are considered not to correspond to the tissue type of interest. However, it is not an easy task for a segmentation algorithm to select an appropriate window width and this is generally done through a user interactively adjusting window parameters until satisfied with the desired appearance of a displayed image. The inherent subjectivity of this approach means the displayed image is inevitably based on a user's preconceptions of how the image should appear because there is a lack of objective selection as to which voxels correspond to the tissue type of interest and which do not.
Furthermore, in some situations, for example in CT data where a tissue type of interest has an X-ray stopping power which is similar to that of surrounding tissue, the voxel values themselves may not discriminate strongly between different tissue types.
Figure 5 is a flow chart schematically showing a method of identifying voxels in a medical image data set which correspond to a tissue type of interest according to an embodiment of the invention. It will be assumed by way of example that the method is executed in response to a user having been presented with the display shown in Figure 4 and identifying in the sagittal section view 102 an anomalous region of brain which appears slightly different to surrounding tissue and which he wants to examine further.
In this example the method is performed by a suitably programmed general purpose computer, such as that shown in Figure 2. The computer may be a stand-alone machine or may form part of a network, for example, a Picture Archiving and Communication System (PACS) network.
In Step 111 of Figure 5, input is received from the user which identifies (selects) voxels corresponding to the tissue type of interest. With reference to Figure 2, this is conveniently performed by the user positioning a cursor ("pointer") displayed on the screen 144 displaying the image 101 over a pixel corresponding to the tissue type of interest in one of the section views 102, 104, 106, the cursor being positioned by manipulation of the mouse 150. However, other input means, such as a light-pen, graphics tablet or track ball, for example, may equally be used to point to the tissue type of interest. Since in this example the user initially noticed the region he wishes to examine further in the sagittal section view 102, it is assumed he positions the cursor over a pixel within the anomalous region in this view. If the region is also apparent in either of the other section views 104, 106, he may equally position the cursor over an appropriate pixel in those views. Once the cursor is positioned over a desired pixel, the user indicates his selection by pressing ("clicking") a button on the mouse 150.
Any other input means could equally be used. A voxel in the volume data corresponding to the selected pixel is then determined based on the plane of the section view within the volume data and the selected position within the section view.
Depending on the displayed resolution of the section view, the selected pixel might span a number of voxels in the volume data. In this example, the voxel in which the selected pixel is situated is taken as the identified voxel. In other cases all of the voxels within a region of a predetermined size and shape surrounding a central selected voxel might be considered as being identified as corresponding to the tissue type of interest. The user may identify any number of further voxels by clicking elsewhere in the sagittal or other section views. The user may change the particular displayed sagittal, coronal and/or transverse section views to allow for voxels identifying the tissue type of interest to be selected from anywhere within the volume data. Typically five or so voxels corresponding to the tissue type of interest might be identified, though fewer or more may be preferred. These voxels will be referred to as positively selected voxels and the process of identifying them will be referred to as making a positive selection.
It will be appreciated that other schemes for allowing a user to identify voxels can also be used. For example, rather than "click" on an individual pixel in one of the section views, a range of pixels could be identified by a user "clicking" twice to identify opposite corners of a rectangle, or a centre and circumference point of a circle, or by defining a shape in some other way. Voxels corresponding to pixels within the perimeter of the shape may then all be deemed to have been identified.
In Step 112, input is received from the user which identifies (selects) voxels not corresponding to the tissue type of interest. Step 112 may be performed in a manner which is similar to Step 111 described above, but in which the user positions the cursor over pixels in the sagittal, coronal and/or transverse sections which do not correspond to the tissue type of interest. The user may indicate his selection by "clicking" a different mouse button to that used to identify the positively selected voxels. Alternatively, the same mouse button might be used in parallel with the pressing of a key on the keyboard 148.
To allow subtly different tissue types to be distinguished, the user should identify voxels which are most similar to the tissue type of interest, but which he wants to exclude nonetheless. This is because voxels which differ more significantly from voxels corresponding to the tissue type of interest are easier to classify as not being of interest. In this case, where the tissue type of interest is an anomalous region of brain which appears slightly different from its surroundings in the sagittal section view 102, the user should identify voxels by selecting pixels in the area surrounding the anomalous region. However, if there are other regions which also appear similar to the tissue type of interest, but which are not necessarily in close proximity to it, the user may also identify some voxels corresponding to these regions. For example, five or so voxels not corresponding to the tissue type of interest might be identified.
However, as few as one or many more than five may also be chosen. For example, if there are a number of regions in the data appearing only slightly different from the tissue type of interest, the user may choose to identify a number of voxels in each of these regions. The voxels identified in Step 112 will be referred to as negatively selected voxels, and the process of identifying them will be referred to as making a negative selection.
In Step 113 one or more characterizing parameters are computed for each of the voxels selected in Steps 111 and 112. In this example implementation four characterizing parameters, namely voxel value V, a local average A, a local standard deviation σ and the maximum Sobel edge filter response S over all orientations, are determined for each voxel. In another embodiment, instead of maximum Sobel edge filter response, gradient magnitude is used. In this case the local average and standard deviation are computed for a 5x5x5 cube of voxels centered on the particular voxel at hand. However, other regions may also be used. For example, a smaller region may be considered for faster performance. Furthermore, the regions need not be three-dimensional: a 5x5 square of voxels, or other region, in an arbitrarily chosen or pre-determined plane may equally be used.
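By way of illustration, these characterizing parameters might be computed as in the following Python sketch, assuming NumPy and SciPy are available; all function and variable names are illustrative rather than taken from the patent, and the Sobel response is approximated using only the three axis-aligned orientations rather than all orientations.

    # Minimal sketch of Step 113, assuming a 3D NumPy array of voxel values.
    import numpy as np
    from scipy import ndimage

    def characterizing_parameters(volume, size=5):
        """Per-voxel parameters: value V, local average A, local standard
        deviation sigma, and an edge response S (axis-aligned Sobel maximum)."""
        v = volume.astype(np.float64)
        local_mean = ndimage.uniform_filter(v, size=size)       # A over a 5x5x5 cube
        local_sq_mean = ndimage.uniform_filter(v * v, size=size)
        # Local variance clipped at zero to guard against rounding error.
        local_std = np.sqrt(np.clip(local_sq_mean - local_mean**2, 0, None))
        # Maximum absolute Sobel response over the three axis directions only;
        # the patent's "all orientations" would require more filter passes.
        sobel_max = np.max(np.abs(np.stack(
            [ndimage.sobel(v, axis=a) for a in range(3)])), axis=0)
        return {"V": v, "A": local_mean, "sigma": local_std, "S": sobel_max}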
In Step 114 the distributions of the computed characterizing parameters are analyzed to determine which of them may be used to distinguish between the positively selected and the negatively selected voxels.
Figures 6A-6D show example distributions of voxel value V, local average A, local standard deviation σ and maximum Sobel edge filter response S respectively for five positively selected and five negatively selected voxels. In each case, the values for the positively selected voxels are marked by "plus" symbols above the horizontal line representing the range of values of the particular characterizing parameter at appropriate positions along the line. The values for the negatively selected voxels are similarly represented by "minus" symbols below the line.
It can be seen from Figure 6A that the voxel values V are similar and fall within roughly the same range for both the positively and negatively selected voxels.
This indicates that voxel value itself is not a good discriminator between the positively and negatively selected voxels in this case.
It can be seen from Figure 6B that the local averages A are also broadly similar for both the positively and negatively selected voxels. There appears to be a slight bias towards higher values of local average for positively selected voxels, but there is still a large degree of overlap.
However, it can be seen from Figure 6C that the computed local standard deviations are significantly different for the positively and negatively selected voxels. In particular, the regions surrounding the positively selected voxels tend to have significantly larger standard deviations than those surrounding the negatively selected voxels. This indicates that the positively selected voxels from the region of tissue type which the user wishes to examine further correspond to regions of greater granularity in the data. It is likely to be this greater degree of granularity which causes the region to appear to human visual perception to be slightly different to the surrounding regions in the section views.
It can be seen from Figure 6D that the computed maximum Sobel edge filter responses S are also different for the positively and negatively selected voxels, although to a lesser extent than the local standard deviations.
From these distributions of the computed characterizing parameters for the positively and negatively selected voxels, it is apparent that local standard deviation is a characterizing parameter which distinguishes well between positively and negatively selected voxels, and as such is considered to be a distinguishing parameter.
In this example implementation only one distinguishing parameter is sought and is chosen on the basis of it being the most able of the computed characterizing parameters to discriminate between the positively and negatively selected voxels. The ability of a given characterizing parameter to discriminate is referred to as its discrimination power and may be parameterized using conventional statistical analysis. In this example, this is done by separately calculating the average and the standard deviation of each characterizing parameter for the positively and the negatively selected voxels. The discriminating power of a given characterizing parameter is then taken to be the difference in the average for the positively and negatively selected voxels divided by the quadrature sum of their standard deviations.
The characterizing parameter having the greatest discriminating power is then taken to be the distinguishing parameter. As will be seen further below, in other examples multiple distinguishing parameters may be used, for example all characterizing parameters having a discriminating power greater than a certain level or a fixed number of characterizing parameters having the highest discriminating powers may be used.
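A minimal sketch of this discriminating power computation (the difference of the per-class averages divided by the quadrature sum of the per-class standard deviations), under the assumption that the example values of each characterizing parameter are collected into arrays; names are illustrative:

    import numpy as np

    def discriminating_power(pos_values, neg_values):
        # |mean difference| / sqrt(std_pos^2 + std_neg^2). Population
        # statistics are used here; with only five or so examples, sample
        # statistics (ddof=1) may be preferable.
        pos = np.asarray(pos_values, dtype=float)
        neg = np.asarray(neg_values, dtype=float)
        return abs(pos.mean() - neg.mean()) / np.sqrt(pos.var() + neg.var())

    # The characterizing parameter with the greatest power is then taken as
    # the distinguishing parameter, e.g.:
    # best = max(params, key=lambda p: discriminating_power(pos[p], neg[p]))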
In Step 115, the distinguishing parameter (i.e. local standard deviation in this case) is calculated for other voxels in the data. Although this may be done for all of the voxels, it may be more efficient to restrict the calculation to only a subset of voxels. For example, a conventional segmentation algorithm may first be applied to the data to identify which voxels belong to significantly different tissue types (e.g. bone or brain). Once this is done, the local standard deviation may then be calculated only for those voxels which have been classified by the conventional segmentation algorithm as corresponding to brain. This is because there would be no need to perform the computation for voxels which have already been distinguished from the tissue type of interest by the conventional segmentation algorithm.
Alternatively, the calculation may only be made for voxels in a VOI identified by the user.
In Step 116, the distinguishing parameter, i.e. the local standard deviation for the example characterizing parameter distributions seen in Figures 6A-6D, is used to classify each of the other voxels. This is performed in this example by defining a critical local standard deviation σc (marked in Figure 6C) between the average local standard deviation for the positively selected voxels and the average local standard deviation of the negatively selected voxels. If the local standard deviation computed in Step 115 for a particular voxel is greater than σc, the voxel is classified as belonging to the tissue type of interest. If the local standard deviation is less than σc, the voxel is classified as not belonging to the tissue type of interest.
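A small Python sketch of this binary classification step; placing σc at the midpoint between the two class means is an assumption, since the description only requires a value between the two averages, and the positively selected class is assumed to have the larger mean, as in Figure 6C:

    import numpy as np

    def classify_binary(dist_param_volume, pos_examples, neg_examples):
        # Critical value sigma_c placed midway between the class means.
        sigma_c = 0.5 * (np.mean(pos_examples) + np.mean(neg_examples))
        # Boolean mask over the volume: True = tissue type of interest.
        return dist_param_volume > sigma_c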
It will be appreciated that although in this particular example the computed value of one of the characterizing parameters (local standard deviation) is itself identified as being able to distinguish between the tissue type of interest and surrounding tissue, this is a special example of the more general case in which a distinguishing functional relationship between characterizing parameters is identified.
For example, for a particular tissue type of interest it might be found that the ratio of two different characterizing parameters has a greater discriminating power between positively and negatively selected voxels than either of the characterizing parameters themselves. A numerical example of how this can arise is if values generally between 2.5 and 3.5 (arbitrary units) are found for one characterizing parameter for both positively and negatively selected voxels and values generally between 5 and 7 (arbitrary units) are found for another characterizing parameter, again for both positively and negatively selected voxels. Because of this, neither characterizing parameter alone is able to discriminate properly between positively and negatively selected voxels. However, if for the tissue type of interest the second characterizing parameter is always close to twice the value of the first, whereas for the negatively selected voxels the two parameters are unrelated, a distinguishing function based on the ratio of the two parameters can be identified.
Depending on clinical application, additional requirements may be imposed on which voxels are to be considered to correspond to the tissue type of interest. For example, a requirement that the tissue type of interest forms a single volume may be made by applying a connectivity requirement. This would mean voxels which are not linked to the positively selected voxels by a chain of voxels classified as corresponding to the tissue type of interest will be classified as not corresponding to this tissue type, even if their distinguishing parameters are such that they would otherwise be considered to do so.
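Such a connectivity requirement might be sketched using connected-component labelling, assuming SciPy is available; the seed coordinates stand for the positions of the user's positively selected voxels, and the names are illustrative:

    import numpy as np
    from scipy import ndimage

    def apply_connectivity(mask, seed_coords):
        """Keep only voxels in `mask` connected to a positively selected seed."""
        labels, _ = ndimage.label(mask)               # connected components
        # Component labels containing a seed (0 is background, so drop it).
        seed_labels = {labels[tuple(c)] for c in seed_coords} - {0}
        return np.isin(labels, list(seed_labels))     # retain seeded components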
Once the voxels have been classified, the user may proceed to examine those corresponding to the tissue type of interest as desired. For example, the user may render an image showing only the tissue type of interest. In another example, the tissue type of interest may be shown in one color and other tissue types in other colors, that is to say the method shown in Figure 5 may be used as the basis of calculating presets. This could be realized when a monochrome image of the brain is displayed. Here the classification could be used to distinguish between white and gray matter in the brain. Based on the classification, the gray matter is displayed shaded in a semi-transparent blue color wash. In a further example, the selected object can be measured in some way, for example the volume is calculated. Another example is that the unclassified parts ("don't want" regions) are "dimmed", i.e. rendered semi-transparent.
In some examples, an image based on the distinguishing parameter itself (or a function thereof) may be rendered (e.g. using the distinguishing parameter as the imaged parameter in the rendering rather than voxel value). In the above described situation, rather than rendering an image based on voxel values in the image data set (i.e. X-ray stopping power for CT data), an image based on the local standard deviation for each of the voxels may be rendered instead. Ranges of color and/or opacity may be associated with different values of local standard deviation and an image rendered accordingly. Visualization presets for the rendered image may be calculated as described in US 6,658,080, for example. This approach can provide for a displayed image in which a user can easily distinguish the tissue type of interest from surrounding tissue because characteristics of the tissue type of interest which differentiate it from its surroundings are used as the basis for rendering the image.
Rather than display an image, the classification may be used in conjunction with conventional analysis techniques, for example to calculate the volume of the anomalous region corresponding to the tissue type of interest. It will of course be appreciated that in some cases a region of interest might be of interest merely because the user wishes to identify it so it can be discarded from subsequent display or analysis.
It is not necessary for the steps shown in Figure 5 to be performed in the order shown. For example, Step 111 and Step 112 could be reversed, or even intertwined.
That is to say, a user could identify some voxels which correspond to the tissue type of interest, then some voxels which do not correspond to the tissue type of interest, and then some more voxels corresponding to the tissue type of interest and so on (i.e. in effect cycle between Step 111 and Step 112).
Furthermore, the process may return to earlier steps during execution. For example, a user may be alerted at Step 114 if there are no characterizing parameters having a discriminating power above a predetermined level. In response to this, the user may choose to return to Step 111 and/or Step 112 to provide more examples.
Alternatively, in such a circumstance the user may instead indicate that additional characterizing parameters should be determined and their discriminating powers examined, or may simply choose to proceed with the classification nonetheless.
The method shown in Figure 5 may be modified in a number of ways. For example, rather than simply having a binary classification (i.e. classifying voxels as either corresponding to the tissue type of interest or not corresponding to the tissue type of interest) a probability classification may be used. Each voxel may be attributed a likelihood of corresponding to the same tissue type as the positively selected voxels on the basis of how much its distinguishing parameter differs from those of the negatively selected voxels. In this scheme, a voxel having a local standard deviation of σ1 shown in Figure 6C would be classified as having a greater probability of belonging to the population of voxels corresponding to the tissue type of interest than one having a local standard deviation of σ2.
Furthermore, more than one distinguishing parameter may be used for the classification. For example, if multiple parameters are identified in Step 114 as being capable of distinguishing between the positively and negatively selected voxels, these multiple distinguishing parameters may each then be computed for the other voxels in Step 115. The classification in Step 116 could then be based on a conventional multi-dimensional expectation maximization (EM) algorithm or other cluster recognition process which takes the distinguishing parameters computed for the positively and negatively selected voxels as seeds for defining the populations of voxels (i.e. the population of voxels corresponding to the tissue type of interest and the population of voxels not corresponding to the tissue type of interest). Example classification schemes when the distinguishing function has two or more characterizing parameters are multivariate Gaussian maximum likelihood and k-nearest neighbors (k-NN).
The EM algorithm provides the distributions for the positive and negative cases which then allows, for each voxel, a probability to be determined that the voxel is a member of the population exemplified by the positively selected voxels, that is to say a probability that the voxel corresponds to the tissue type of interest. The EM algorithm may also provide an estimate of the overall fraction of voxels which are members of the population exemplified by the positively selected voxels. This information allows an image of the tissue type of interest to be rendered from the volume data in a number of ways.
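One way such a probabilistic classification might be sketched is with the multivariate Gaussian maximum likelihood scheme mentioned above, fitting one Gaussian per class directly to the example voxels rather than running a full EM iteration; equal class priors are assumed, names are illustrative, and with only a handful of examples the covariance estimates may be singular (hence allow_singular):

    import numpy as np
    from scipy.stats import multivariate_normal

    def tissue_probability(features, pos_examples, neg_examples):
        """features: (n_voxels, n_params); *_examples: (n_examples, n_params).
        Returns the per-voxel probability of the tissue type of interest."""
        g_pos = multivariate_normal(pos_examples.mean(axis=0),
                                    np.cov(pos_examples, rowvar=False),
                                    allow_singular=True)
        g_neg = multivariate_normal(neg_examples.mean(axis=0),
                                    np.cov(neg_examples, rowvar=False),
                                    allow_singular=True)
        p_pos = g_pos.pdf(features)   # likelihood under the "of interest" class
        p_neg = g_neg.pdf(features)
        return p_pos / (p_pos + p_neg)  # posterior assuming equal priors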
One way is to render all voxels having a probability of corresponding to the tissue type of interest lower than a threshold level as transparent, and render the remaining voxels using conventional techniques based on their voxel values (e.g. opacity to X-rays for CT data). The threshold level may be selected arbitrarily, for example at 50%, or may be selected such that the total number of voxels falling above the threshold level corresponds to the overall fraction of voxels which are members of the population exemplified by the positively selected voxels predicted by the EM algorithm.
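The second threshold choice, matching the number of voxels kept to the EM-estimated population fraction, amounts to taking a quantile of the probability volume; a small sketch with illustrative names:

    import numpy as np

    def fraction_matched_threshold(prob_volume, est_fraction):
        # Threshold such that approximately est_fraction of the voxels
        # (the EM-estimated population fraction) fall above it.
        return np.quantile(prob_volume, 1.0 - est_fraction)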
Another way of generating an image showing the tissue type of interest would be to again render all voxels having a probability of corresponding to the tissue type of interest lower than a threshold level as transparent, but to then render the remaining voxels based on their probability of corresponding to the tissue type of interest, rather than their voxel values. This provides a form of probability image from which a user can immediately identify the likelihood of individual areas being correctly classified as corresponding to the tissue type of interest.
In either case, where an image based on rendering of probabilities is displayed, the user may be presented with the opportunity of manually altering the threshold level. This allows the user to determine an appropriate compromise between including too many false positives (i.e. voxels which do not correspond to the tissue type of interest) and excluding too many true positives (i.e. voxels which do correspond to the tissue type of interest).
It will be appreciated that in addition to the example characterizing parameters shown in Figures 6A-6D, there is a wide range of other parameters which may be used. For example, parameters based on local averages calculated over differently sized regions, parameters based on local gradients in voxel value, local spatial frequency components, and so on may all be used. It will also be appreciated that the choice of characterizing parameters to compute may depend on the type of data under study. For example, because MR data often show significant variations in sensitivity throughout a volume data set, absolute voxel value can be a poor indicator of tissue type in MR data. Because of this, characterizing parameters such as voxel value or local averages of voxel value might be excluded from use with MR data.
While the above description relates to a situation where a user is interested in further examining only a single tissue type, it will be understood that the method may equally be employed where a user wishes to identify multiple tissue types. This can be achieved by a user making positive selections for each of the different tissue types of interest in Step 111 shown in Figure 5. Depending on the characteristics of the different tissue types of interest, there may be a unique distinguishing feature identified in Step 114 that can be used to classify the voxels. However, in some cases it may be necessary to employ multiple distinguishing parameters with voxels classified on the basis of one or other of these. For example, if in addition to the positive selection of voxels corresponding to the anomalous region of brain discussed above, the user is also interested in further examination of a second anomalous region situated elsewhere in the brain, the user simply makes some positive selections of that region. If the second anomalous region is represented by voxels having voxel values which are generally higher than the negatively selected voxels, but having a similar local standard deviation, then, unlike the voxels in the first anomalous region, they cannot be classified on the basis of local standard deviation. This means in Step 114 both local standard deviation σ and voxel value V will be determined to be distinguishing parameters and both will be calculated in Step 115 for other voxels in the data. In Step 116, voxels may then be classified as corresponding to one of the tissue types of interest if either their local standard deviation is different to that of the negatively selected voxels (in which case they relate to the first anomalous region) or if their voxel value is different to that of the negatively selected voxels (in which case they relate to the second anomalous region).
The method may also be applied in an iterative manner. For example, following execution of the method shown in Figure 5 a probability image showing the classification of the voxels may be displayed to the user. The user may then decide to refine the classification by re-executing the method on the basis of the probability image. This is a form of relaxation labeling and allows for additional spatial information to be exploited in each subsequent iteration.
In some implementations of the method, the computation of the distinguishing features may include additional analysis techniques to assist in the proper classification of voxels. For example, partial volume effects might cause a boundary between two types of tissue which are not of interest to be wrongly classified. If this is a concern in a particular situation, techniques such as partial volume filtering as described in WO 02/084594 [2] may be employed when computing the distinguishing parameters.
In cases where a user considers that the classification has not performed adequately, for example should one of the negatively selected voxels be attributed a high probability of being a member of the population exemplified by the positively selected voxels, further segmentation analysis techniques may be applied. For example, conventional morphological segmentation algorithms may be applied to volume data representing the probability that each voxel corresponds to the tissue type of interest.
Additional user input may also be used to assist the classification process. In particular, the user input may additionally include clinical information, such as specification of tissue type or anatomical feature of interest. For example, the user input may adopt the paradigm "want that gray matter - don't want that white matter", or "want that liver - don't want that other (unspecified) tissue", or "want that (liver) tumor - don't want that healthy (liver) tissue", or "want that (unspecified) tissue - don't want that fat tissue". This user input can be done by appropriate pointer selection in combination with filling out a text label or selection from a drop down menu of options. Following this user input, the distinguishing function can then be determined from the characterizing parameters having regard to the clinical information input by the user. For example, if the positively selected voxels are indicated as belonging to a tumor, local standard deviation may be preferentially selected as the distinguishing function, since this will be sensitive to the enhanced granularity that is an attribute of tumors.
In some clinical studies multiple volume data sets of a single patient may be available, for example from different imaging modalities or from the same imaging modality but taken at different times. If the images can be appropriately registered with one another, it is possible to classify voxels in one of these volume data sets on the basis of positively and negatively selected voxels in another. Distinguishing parameters may even be based on an analysis of voxels in one data set yet be used to classify voxels in another data set. This can help because with more information made available, it is more likely that a good distinguishing parameter can be found.
It will be appreciated that although particular embodiments of the invention have been described, many modifications/additions and/or substitutions may be made within the spirit and scope of the present invention.
Thus, for example, although the described embodiments employ a computer program operating on a general purpose computer, for example a conventional computer workstation, in other embodiments special purpose hardware could be used.
For example, at least some of the functionality could be effected using special purpose circuits, for example a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC) or in the form of a graphics processing unit (GPU).
Also, multi-thread processing or parallel computing hardware could be used for at least some of the processing. For example, different threads or processing stages could be used to calculate respective characterizing parameters.
REFERENCES
[1] US 6,658,080 (Voxar Limited)
[2] WO 02/084594 (Voxar Limited)

Claims (28)

1. A method of numerically processing a medical image data set comprising voxels, the method comprising: (a) receiving user input to positively and negatively select voxels that are and are not of a tissue type of interest; (b) determining a distinguishing function that discriminates between the positively and negatively selected voxels on the basis of one or more characterizing parameters of the voxels; and (c) classifying further voxels in the medical image data set on the basis of the distinguishing function.
2. The method according to claim 1, further comprising presenting an example image representing the medical image data set to a user, wherein the user positions a pointer at locations in the example image to select corresponding voxels.
3. The method according to claim 2, wherein a selected voxel is taken to be a voxel whose coordinates in the data set map to the location of the pointer in the example image.
4. The method according to claim 2, wherein selected voxels are taken to be voxels in a region surrounding a voxel whose coordinates in the data set map to the location of the pointer in the example image.
5. The method of any of claims 1 to 4, further comprising rendering an image of the medical image data set, wherein the rendering takes account of the classification of voxels, and displaying the image to the user.
6. The method of any one of claims 1 to 4, further comprising rendering an image of the medical image data set, wherein the rendering is of a volume data set representing values of the distinguishing function.
7. The method of any of claims 1 to 6, wherein at least one of the one or more characterizing parameters is a function of surrounding voxels.
8. The method of any of claims 1 to 7, further comprising further classifying voxels on the basis of the morphology of their respective classifications in the medical image data set.
9. The method of any of claims 1 to 8, wherein the distinguishing function is determined by computing the characterizing parameters for the selected voxels and taking as the distinguishing function the value of at least one characterizing parameter whose value depends on whether its associated voxel has been positively or negatively selected.
10. The method of any of claims 1 to 9, wherein voxels are classified as either corresponding to the tissue type of interest or not corresponding to the tissue type of interest.
11. The method of claim 10, further comprising rendering an image of the medical image data set, wherein the rendering takes account of the classification of voxels, and displaying the image to a user.
12. The method of claim 11, wherein voxels classified as not corresponding to the tissue type of interest are rendered as transparent.
13. The method of claim 11, wherein voxels classified as corresponding to the tissue type of interest are rendered as transparent.
14. The method of claim 11, wherein voxels classified as corresponding to the tissue type of interest are rendered in one range of displayable colors and voxels classified as not corresponding to the tissue type of interest are rendered in another range of displayable colors.
15. The method of any of claims 1 to 9, wherein voxels are classified by associating with them a probability that they correspond to the tissue type of interest.
16. The method of claim 15, further comprising rendering an image of the medical image data set, wherein the rendering takes account of the classification of voxels, and displaying the image to a user.
17. The method of claim 16, wherein the rendering takes account of the classification by rendering a volume data set representing the probability that the voxels correspond to the tissue type of interest.
18. The method of claim 17, wherein voxels having a probability of corresponding to the tissue type of interest of less than a threshold level are rendered as transparent.
19. The method of claim 18, further comprising adjusting the threshold level and re-rendering the image.
20. The method of claim 17, wherein a pre-determined fraction of the voxels having the lowest probabilities of corresponding to the tissue type of interest are rendered as transparent.
21. The method of any of claims 1 to 20, wherein the user input includes clinical information regarding at least one of the positively and negatively selected voxels, and wherein the distinguishing function is determined from the characterizing parameters having regard to the clinical information.
22. The method of claim 21, wherein the clinical information specifies tissue type.
23. The method of any of claims 1 to 22, wherein the medical image data set comprises a data set representing the probability that the voxels correspond to a tissue type of interest determined in a previous iteration of the method.
24. A computer program product bearing computer readable instructions for performing the method of any of claims 1 to 23.
25. A computer apparatus loaded with computer readable instructions for performing the method of any of claims 1 to 23.
26. Apparatus for numerically processing a medical image data set comprising voxels, the apparatus comprising: (a) storage from which a medical image data set may be retrieved; (b) a user input device configured to receive user input to positively and negatively select voxels that are and are not of a tissue type of interest; and (c) a processor configured to determine a distinguishing function that discriminates between the positively and negatively selected voxels on the basis of one or more characterizing parameters of the voxels; and to classify further voxels in the medical image data set on the basis of the distinguishing function.
27. A method of numerically processing a medical image data set comprising voxels substantially as hereinbefore described with reference to Figures 1 to 6 of the accompanying drawings.
28. An apparatus for numerically processing a medical image data set comprising voxels substantially as hereinbefore described with reference to Figures 1 to 6 of the accompanying drawings.
GB0417106A 2004-07-30 2004-07-30 Classifying voxels in a medical image Withdrawn GB2416944A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB0417106A GB2416944A (en) 2004-07-30 2004-07-30 Classifying voxels in a medical image

Publications (2)

Publication Number Publication Date
GB0417106D0 GB0417106D0 (en) 2004-09-01
GB2416944A true GB2416944A (en) 2006-02-08

Family

ID=32947777

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0417106A Withdrawn GB2416944A (en) 2004-07-30 2004-07-30 Classifying voxels in a medical image

Country Status (1)

Country Link
GB (1) GB2416944A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB201913832D0 (en) * 2019-09-25 2019-11-06 Guys And St Thomas Hospital Nhs Found Trust Method and apparatus for navigation and display of 3d image data

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0492898A1 (en) * 1990-12-20 1992-07-01 General Electric Company Magnetic resonance imaging
US5412563A (en) * 1993-09-16 1995-05-02 General Electric Company Gradient image segmentation method

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10591424B2 (en) 2003-04-25 2020-03-17 Rapiscan Systems, Inc. X-ray tomographic inspection systems for the identification of specific target items
US8837669B2 (en) 2003-04-25 2014-09-16 Rapiscan Systems, Inc. X-ray scanning system
US8885794B2 (en) 2003-04-25 2014-11-11 Rapiscan Systems, Inc. X-ray tomographic inspection system for the identification of specific target items
US9020095B2 (en) 2003-04-25 2015-04-28 Rapiscan Systems, Inc. X-ray scanners
US11796711B2 (en) 2003-04-25 2023-10-24 Rapiscan Systems, Inc. Modular CT scanning system
US9113839B2 (en) 2003-04-25 2015-08-25 Rapiscan Systems, Inc. X-ray inspection system and method
US9442082B2 (en) 2003-04-25 2016-09-13 Rapiscan Systems, Inc. X-ray inspection system and method
US9618648B2 (en) 2003-04-25 2017-04-11 Rapiscan Systems, Inc. X-ray scanners
US10901112B2 (en) 2003-04-25 2021-01-26 Rapiscan Systems, Inc. X-ray scanning system with stationary x-ray sources
US9675306B2 (en) 2003-04-25 2017-06-13 Rapiscan Systems, Inc. X-ray scanning system
US10175381B2 (en) 2003-04-25 2019-01-08 Rapiscan Systems, Inc. X-ray scanners having source points with less than a predefined variation in brightness
US9048061B2 (en) 2005-12-16 2015-06-02 Rapiscan Systems, Inc. X-ray scanners and X-ray sources therefor
US10295483B2 (en) 2005-12-16 2019-05-21 Rapiscan Systems, Inc. Data collection, processing and storage systems for X-ray tomographic images
US9638646B2 (en) 2005-12-16 2017-05-02 Rapiscan Systems, Inc. X-ray scanners and X-ray sources therefor
US10976271B2 (en) 2005-12-16 2021-04-13 Rapiscan Systems, Inc. Stationary tomographic X-ray imaging systems for automatically sorting objects based on generated tomographic images
WO2013088144A1 (en) 2011-12-12 2013-06-20 University Of Stavanger Probability mapping for visualisation and analysis of biomedical images
US11547490B2 (en) 2016-12-08 2023-01-10 Intuitive Surgical Operations, Inc. Systems and methods for navigation in image-guided medical procedures
US20210304460A1 (en) * 2020-03-30 2021-09-30 Vieworks Co., Ltd. Method and apparatus for generating synthetic 2d image

Also Published As

Publication number Publication date
GB0417106D0 (en) 2004-09-01

Similar Documents

Publication Publication Date Title
US20050017972A1 (en) Displaying image data using automatic presets
CN112529834B (en) Spatial distribution of pathological image patterns in 3D image data
US7596267B2 (en) Image region segmentation system and method
US12062429B2 (en) Salient visual explanations of feature assessments by machine learning models
Roettger et al. Spatialized transfer functions.
EP2710958B1 (en) Method and system for intelligent qualitative and quantitative analysis of digital radiography softcopy reading
EP3035287B1 (en) Image processing apparatus, and image processing method
NL2003805C2 (en) Systems, apparatus and processes for automated medical image segmentation using a statistical model.
US8077948B2 (en) Method for editing 3D image segmentation maps
US8150120B2 (en) Method for determining a bounding surface for segmentation of an anatomical object of interest
US20080130968A1 (en) Apparatus and method for customized report viewer
US8611632B2 (en) Method of selecting and visualizing findings within medical images
EP1315125A2 (en) Method and system for lung disease detection
US20040175034A1 (en) Method for segmentation of digital images
US8705821B2 (en) Method and apparatus for multimodal visualization of volume data sets
CN114282588B (en) Provides classification explanations and generative functions
EP3989172A1 (en) Method for use in generating a computer-based visualization of 3d medical image data
CA2577547A1 (en) Method and system for discriminating image representations of classes of objects
CN102197413B (en) Analyze at least three-dimensional medical images
GB2416944A (en) Classifying voxels in a medical image
KR20020079742A (en) Convolution filtering of similarity data for visual display of enhanced image
Castellani et al. Visual MRI: Merging information visualization and non-parametric clustering techniques for MRI dataset analysis
Malu et al. An automated algorithm for lesion identification in dynamic contrast enhanced MRI
Kumar et al. Magnetic Resonance Imaging Digitization for Brain Abnormality Recognition
Anandhi et al. THE INVOLVEMENT OF MORPHOLOGICAL PROCESSING IN IMAGE PROCESSING FOR THE EXACT AND EARLY STAGE IDENTIFICATION OF LUNG CANCER USING MEDIAN FILTERS

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)
WAP Application withdrawn, taken to be withdrawn or refused ** after publication under section 16(1)