WO2011066366A1 - Methods, system and computer program products for diagnosing conditions using unique codes generated from a multidimensional image of a sample - Google Patents
- Publication number
- WO2011066366A1 (PCT/US2010/057975, US2010057975W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- sample
- unique
- code
- image
- digital code
- Prior art date
- Legal status
- Ceased
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0062—Arrangements for scanning
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/117—Identification of persons
- A61B5/1171—Identification of persons based on the shapes or appearances of their bodies or parts thereof
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0062—Arrangements for scanning
- A61B5/0066—Optical coherence imaging
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/117—Identification of persons
- A61B5/1171—Identification of persons based on the shapes or appearances of their bodies or parts thereof
- A61B5/1172—Identification of persons based on the shapes or appearances of their bodies or parts thereof using fingerprinting
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6801—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
- A61B5/6813—Specially adapted to be attached to a specific body part
- A61B5/6814—Head
- A61B5/6821—Eye
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/68—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient
- A61B5/6801—Arrangements of detecting, measuring or recording means, e.g. sensors, in relation to patient specially adapted to be attached to or worn on the body surface
- A61B5/6813—Specially adapted to be attached to a specific body part
- A61B5/6825—Hand
- A61B5/6826—Finger
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0062—Arrangements for scanning
- A61B5/0068—Confocal scanning
Definitions
- the present inventive concept relates generally to optical coherence tomography (OCT) and, more particularly, to biometric identification systems that use OCT.
- a very good biometric identifier may be one that is measured quickly and easily and is not easily susceptible to cosmetic modification or falsification. This, coupled with the work of Daugman and Downing (Proc. R. Soc. Lond. B 268, 1737-1740 (2001)), has led to the development of multiple commercially available iris recognition systems.
- iris recognition is a complete paradigm shift from traditional fingerprint recognition systems and, therefore, can be costly to implement and requires reconstructing databases of existing identification files. Furthermore, it may be possible to thwart iris identification using, for example, a patterned contact lens.
- United States Patent No. 5,751,835 to Topping et al. discusses the use of the tissue of the fingernail bed as a unique identifier. The tissue under a fingernail is composed of characteristic ridges that form during gestation and are unique to each individual. While the ridges may increase in size as the individual grows, the spacing between the ridges remains consistent over time.
- OCT: optical coherence tomography
- OCT allows for micrometer-scale, non-invasive imaging in transparent, translucent, and/or highly- scattering biological tissues.
- the depth ranging capability of OCT is generally based on low-coherence interferometry, in which light from a broadband source is split between illuminating the sample of interest and a reference path.
- the interference pattern of light reflected or backscattered from the sample and light from the reference delay contains information about the location and scattering amplitude of the scatterers in the sample.
- TDOCT: time-domain OCT
- A-scan: a map of the reflectivity of the sample versus depth
- B-scan: a two-dimensional map of reflectivity versus depth and lateral extent
- the lateral resolution of the B-scan is approximated by the confocal resolving power of the sample arm optical system, which is usually given by the size of the focused optical spot in the tissue.
- the first, generally termed spectral-domain or spectrometer-based OCT (SDOCT), uses a broadband light source and achieves spectral discrimination with a dispersive spectrometer in the detector arm.
- SDOCT: spectral-domain or spectrometer-based OCT
- SSOCT: swept-source OCT
- OFDI: optical frequency-domain imaging
- Both of these techniques may allow for a dramatic improvement in SNR of up to 15.0-20.0 dB over time-domain OCT, because they detect all of the backscattered power from the entire relevant sample depth in each measurement interval. This is in contrast to previous-generation time-domain OCT, where destructive interference is typically used to isolate the interferometric signal from only one depth at a time as the reference delay is scanned.
- Some embodiments of the present inventive concept provide methods of providing a diagnosis using a digital code associated with an image, the method including collecting a multidimensional image, the multidimensional image having at least two dimensions; extracting a two dimensional subset of the multidimensional image; reducing the multidimensional image to a first code that is unique to the multidimensional image based on the extracted two dimensional subset; comparing the first unique code associated with the subject to a library of reference codes, each of the reference codes in the library of reference codes being indicative of a class of objects; determining if the subject associated with the first unique code falls into at least one of the classes of objects associated with the reference codes based on a result of the comparison; and formulating a diagnostic decision based on whether the first unique code associated with the subject falls into at least one of the classes associated with the reference codes.
- determining if the subject associated with the first unique reference code falls into at least one of the classes may further include determining if the subject associated with the first unique reference code has changed classes over time and formulating the diagnostic decision may include formulating the diagnostic decision based on the change of class over time.
- determining if the subject associated with the first unique code falls into at least one of the classes of objects may include determining that the subject associated with the first unique code falls into at least two of the classes.
- the method may further include applying additional processing to determine which of the at least two classes more accurately identifies the subject associated with the first unique code.
- the classes associated with the reference code may identify at least one of a physical state and a physical object.
- the subject may include at least one of a fingernail bed, a fingerprint, an iris, a cornea, any human tissue, and a physical object.
- reducing may further include reducing the two dimensional subset to the first unique code based on a defined set of structural or functional information contained within the image.
- the method may further include storing the first unique code; and comparing the first unique code to a second unique code to establish a degree of equivalence of the first and second codes.
- the multidimensional image may include at least one of a volumetric image representative of time invariant information in a sample, slowly time-variant information in a sample, or time variant structural or functional information in a sample.
- reducing the two dimensional subset to the first unique code may include manipulating the multidimensional data to extract a region of interest; extracting the structural or the functional information using a filter; and translating the filtered information into the first unique code representative of the information content contained in the multidimensional data.
- extracting may include extracting using a filter configured to extract the structural or functional information.
- the filter may include at least one of a Gabor filter, a complex filter that consists of real and imaginary parts, a complex filter that consists of a spatial frequency component and a Gaussian window component and a complex filter that has at least three unique degrees of freedom including amplitude, at least one spatial frequency, and at least one Gaussian window standard deviation.
- the filter may be configured to operate on a two dimensional subset using at least one of a convolution in the native domain or a multiplication of the Fourier transforms of the filter and the two dimensional subset, and multiple filter scales in which one or more filter degrees of freedom are changed before combination with the image subset.
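To make the filter construction concrete, the degrees of freedom described above (amplitude, spatial frequency, and Gaussian window standard deviation) and the Fourier-domain combination with an image subset can be sketched in Python. The function names and parameter choices here are illustrative, not taken from the patent.

```python
import numpy as np

def complex_gabor_kernel(size, fx, fy, sigma_x, sigma_y, amplitude=1.0):
    """Complex Gabor filter: a spatial-frequency (carrier) term
    windowed by a Gaussian envelope.  fx, fy are spatial frequencies
    in cycles/pixel; sigma_x, sigma_y are the Gaussian window
    standard deviations in pixels."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-(x**2 / (2 * sigma_x**2) + y**2 / (2 * sigma_y**2)))
    carrier = np.exp(2j * np.pi * (fx * x + fy * y))  # real + imaginary parts
    return amplitude * envelope * carrier

def apply_gabor(image, kernel):
    """Combine filter and image subset by multiplying their Fourier
    transforms (equivalent to convolution in the native domain)."""
    pad = [(0, image.shape[0] - kernel.shape[0]),
           (0, image.shape[1] - kernel.shape[1])]
    k = np.pad(kernel, pad)
    return np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(k))
```

Running the same image through several kernels whose frequency or window parameters differ implements the multiple-filter-scale variant mentioned above.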
- the unique code may be obtained from complex data comprising at least one of a complex
- the unique code may be binary such that each pixel of information is represented by one of two states.
- the unique code may have a base greater than 2.
- the unique code may be represented as a one or two dimensional barcode configured to be read by a generic commercial barcode reading technology.
- comparing may include comparing two or more unique codes using cross-correlation or other relational comparison.
- the relational comparison may be an XOR operation.
- the comparison result may be applied to find a Hamming distance.
- the Hamming distance may be used to validate a strength of the comparison against a database of other codes.
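The XOR comparison and Hamming distance described above can be sketched as follows; binarizing on the sign of the real part of the complex filter response is an assumed scheme used only for illustration.

```python
import numpy as np

def binarize(response):
    """One illustrative binarization: one bit per pixel, set by the
    sign of the real part of the complex filter response."""
    return (np.real(response) >= 0).astype(np.uint8)

def hamming_distance(code_a, code_b):
    """Fractional Hamming distance via an XOR operation: 0.0 for
    identical codes, near 0.5 for statistically independent ones."""
    diff = np.bitwise_xor(code_a, code_b)
    return diff.sum() / diff.size
```

A small distance between two codes indicates the same underlying sample; a distance near 0.5 indicates unrelated samples.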
- the extracting, translating, manipulating, comparing and assigning steps may be repeated to construct a database of codes.
- a unique identifier threshold may be defined based on sensitivity analysis of the codes in the database.
- the method may further include determining if a calculated code is unique or present in the database by comparing the unique identifier threshold to the Hamming distance between the calculated code and the codes in the database.
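The database lookup described above — comparing a calculated code against every stored code and testing the smallest Hamming distance against the unique identifier threshold — might look like this sketch. The 0.32 default threshold is purely illustrative, not a value from the patent.

```python
import numpy as np

def fractional_hamming(a, b):
    # fractional Hamming distance computed with an XOR
    return np.bitwise_xor(a, b).mean()

def identify(query, database, threshold=0.32):
    """Search the code database for the entry closest to the query.
    A match is declared only when the smallest distance falls at or
    below the unique-identifier threshold; otherwise the query code
    is treated as new/unique (returns None)."""
    best_name, best_dist = None, 1.0
    for name, code in database.items():
        d = fractional_hamming(query, code)
        if d < best_dist:
            best_name, best_dist = name, d
    if best_dist <= threshold:
        return best_name, best_dist
    return None, best_dist
```

In practice the threshold would be set by the sensitivity analysis over the database that the preceding bullet describes.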
- the filter may include any image processing system in which information content is emphasized or extracted from a depth-dependent image.
- the filter may be at least one of a speckle tracking algorithm and a texture-based image analysis algorithm.
- Still further embodiments of the present inventive concept provide methods of providing a diagnosis based on a digital code in an optical coherence tomography imaging system, the method including acquiring interferometric cross-correlation data representative of multidimensional information unique to a sample, the multidimensional information including one, two, or three spatial dimensions plus zero, one or two time dimensions; processing the multidimensional interferometric cross-correlation data into one or more images processed to represent one or more of time invariant information about the sample, slowly time variant information about the sample, or time variant structural or functional information about the sample; selecting a subset of the multidimensional data; reducing the selected subset of data to a two dimensional subset of data; performing one or more spatial or temporal frequency transforms of the two dimensional subsets to derive a unique representation of the sample; reducing the transform into a unique digital code associated with the sample; comparing the unique digital code associated with the sample to a library of reference codes, each of the reference codes in the library of reference codes being indicative of a class of objects; and determining if the sample associated with the unique digital code falls into at least one of the classes of objects associated with the reference codes.
- the optical coherence tomography system may include an optical source; an optical splitter configured to separate a reference optical signal from a sample optical signal; and an optical detector configured to detect an interferometric cross-correlation between the reference optical signal and the sample optical signal.
- the unique digital code may include a first unique digital code.
- the method may further include storing the digital code; comparing the first unique digital code to a second unique digital code of the sample acquired from a second position within the sample, of the same sample acquired at a different time and/or of a different sample; and establishing a degree of equivalence between the first and second unique digital codes.
- Still further embodiments of the present inventive concept provide methods of providing a diagnosis using a digital code using a Fourier domain optical coherence tomography imaging system, the method including acquiring frequency-dependent interferometric cross-correlation data representative of multidimensional information unique to a sample, the multidimensional information including zero or one frequency dimensions, one, two, or three spatial dimensions and zero, one, or two time dimensions; processing the multidimensional interferometric cross-correlation data into one or more images processed to represent one or more of time invariant information about the sample, slowly time variant information about the sample, or time variant structural or functional information about the sample; selecting a subset of the multidimensional data; reducing the subset of data to a two dimensional subset of data; performing one or more spatial or temporal frequency transforms of the two dimensional subsets to derive a unique representation of the sample; reducing the transform into a unique digital code that provides a unique signature of the multidimensional data; and comparing the unique digital code associated with the sample to a library of reference codes, each of the reference codes in the library of reference codes being indicative of a class of objects.
- the Fourier domain optical coherence tomography imaging system may include an optical source; an optical splitter configured to separate a reference optical signal from a sample optical signal; and an optical detector configured to detect a frequency-dependent interferometric cross-correlation between the reference signal and the sample signal.
- the unique digital code may be a first unique digital code and the method may further include storing the unique digital code; comparing the unique digital code to a second unique digital code of the same sample acquired from a separate position within the sample, of the sample acquired at a separate time, of a different sample; and establishing a degree of equivalence between the first and second unique digital codes.
- processing of frequency-dependent interferometric cross-correlation data may include obtaining a Fourier transformation of the frequency-dependent dimension to provide a spatial dimension.
- the image may be a volumetric image of a fingernail of a subject.
- the volumetric image may include a series of one-dimensional lines that provide information on scattering from the fingernail and fingernail bed as a function of depth.
- a series of optically contiguous lines may be arrayed in a two-dimensional frame that represents a cross section of the fingernail perpendicular to an axis of the finger.
- the method may further include acquiring a series of frames to produce a volume.
- the method may further include segmenting a nailbed from within the volumetric image of the fingernail using an automated or manual segmentation technique; and averaging one or more frames in order to produce an average-valued image of multiple cross-sectional locations of the nailbed or to improve the signal-to-noise ratio of the nailbed image along one cross-section.
- the method may further include processing a digital code from the cross-sectional image of segmented nailbed region of at least one frame.
- Processing the multidimensional interferometric cross-correlation data may include processing an intensity projection from two or more frames of the segmented nailbed, the method further comprising processing a digital code from the intensity projection.
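The frame-averaging step described above can be sketched as a small helper; it assumes the segmented nailbed frames have already been co-registered and cropped to equal shapes.

```python
import numpy as np

def average_segmented_frames(frames):
    """Average a list of co-registered, segmented cross-sectional
    frames (2D arrays of equal shape) to improve the signal-to-noise
    ratio of the cross-sectional image; speckle noise averages down
    roughly as the square root of the number of uncorrelated frames."""
    stack = np.stack(frames, axis=0)
    return stack.mean(axis=0)
```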
- the image may be a volumetric image of an iris of an eye of a subject.
- the volumetric image may include a series of one-dimensional lines that provide information on scattering from the iris as a function of depth.
- a series of optically contiguous lines may be arrayed in a two-dimensional frame that represents a cross section of the iris perpendicular to an axis of the eye.
- the method may further include acquiring a series of frames to produce a volume.
- the method may further include constructing the volumetric image from a series of concentric circular scans approximately centered on a pupil of the eye.
- the method may further include segmenting one or more layers of the iris from within the volumetric image of the iris using an automated or manual segmentation technique; and averaging one or more frames in order to produce an average-valued image of multiple cross-sectional locations of the iris or to improve the signal-to-noise ratio of the iris image along one cross-section.
- the method further includes processing a digital code from the cross-sectional image of segmented iris region of at least one frame.
- the method may further include processing an intensity projection from two or more frames of the segmented iris; and processing a digital code from the intensity projection.
- Figure 1 is a block diagram illustrating a raster scanning FDOCT system for fingernail bed imaging according to some embodiments of the present inventive concept.
- Figure 2 is a block diagram of a full-field FDOCT system for fingernail bed imaging according to some embodiments of the present inventive concept.
- Figure 3 is a block diagram of a raster scanning FDOCT system including a digital capture system for fingernail bed imaging according to some embodiments of the present inventive concept.
- Figure 4 is a block diagram of a dual, raster scanning FDOCT system for combined fingernail bed and fingerprint imaging according to some embodiments of the present inventive concept.
- Figure 5 is a block diagram of a dual, full-field FDOCT system for combined fingernail bed and fingerprint imaging according to some embodiments of the present inventive concept.
- Figure 6 is a block diagram of a raster scanning FDOCT system for iris imaging according to some embodiments of the present inventive concept.
- Figure 7 is a block diagram of a full-field FDOCT system for iris imaging according to some embodiments of the present inventive concept.
- Figure 8 is a block diagram of a raster scanning FDOCT system with combined digital capture system for iris imaging according to some embodiments of the present inventive concept.
- Figure 9 is a block diagram illustrating representative reference arm configurations according to some embodiments of the present inventive concept.
- Figure 10 is a block diagram illustrating representative FDOCT detection methods according to some embodiments of the present inventive concept.
- Figure 11 are images and graphs illustrating slice planes through a 3D volume in the fingernail bed according to some embodiments of the inventive concept.
- Figure 12 are graphs and images illustrating generation and evolution of the Gabor filter according to some embodiments of the present inventive concept.
- Figure 13 is a diagram illustrating decision code generation according to some embodiments of the present inventive concept.
- Figure 14 is a diagram illustrating decision code processing and analysis according to some embodiments of the present inventive concept.
- Figure 15 is a diagram illustrating unique code generation from 1 slice plane through a 3D volume of fingernail bed data according to some embodiments of the present inventive concept.
- Figure 16 is series of images illustrating OCT slices separated in time through finger 9 and the subsets used for unique, matching code generation according to some embodiments of the present inventive concept.
- Figure 17 is a series of images illustrating OCT slices from fingers 9 and 4 and the subsets used for unique, non-matching code generation according to some embodiments of the present inventive concept.
- Figure 18 is a series of images illustrating OCT slices from fingers 3 and 4 and the subsets used for unique, non-matching code generation according to some embodiments of the present inventive concept.
- Figure 19 is a flowchart illustrating processing steps in high-throughput biometric identification in accordance with some embodiments of the present inventive concept.
- Figure 20 is a diagram illustrating a series of slice planes through a 3D volume in the iris and representative OCT images from said planes according to some embodiments of the inventive concept.
- Figure 21 are diagrams of unique code generation from 1 slice through a 3D volume of iris data according to some embodiments of the inventive concept.
- Figure 22 is a block diagram of a data processing system suitable for use in some embodiments of the present inventive concept.
- Figure 23 is a more detailed block diagram of a system according to some embodiments of the present inventive concept.
- Figure 24 is a block diagram illustrating processing of N-dimensional data in accordance with some embodiments of the present inventive concept.
- Figure 25 is a block diagram illustrating processing of N-dimensional data in accordance with some embodiments of the present inventive concept.
- Figure 26 is a block diagram illustrating processing of N-dimensional data in accordance with some embodiments of the present inventive concept.
- Figure 27 is a block diagram illustrating generation of reference codes in accordance with some embodiments of the present inventive concept.
- Figure 28 is a diagram illustrating decision code processing and analysis in accordance with some embodiments of the present inventive concept.
- Figure 29 is a diagram illustrating decision code processing and analysis in accordance with some embodiments of the present inventive concept.
- Figure 30 is a flowchart illustrating processing steps in accordance with some embodiments of the present inventive concept.
- Figure 31 are images and graphs illustrating slice planes through a 3D volume in the fingerprint according to some embodiments of the present inventive concept.
- Figure 32 is a diagram illustrating unique code generation from 1 slice plane through a 3D volume of fingerprint data according to some embodiments of the present inventive concept.
- Figure 33 are images and graphs illustrating slice planes through a 3D volume in the retina according to some embodiments of the present inventive concept.
- Figure 34 is a diagram illustrating unique code generation from 1 slice plane through a 3D volume of retina data according to some embodiments of the present inventive concept.
- Figure 35 is a diagram illustrating unique code generation from 1 slice plane through a 3D volume of retina data according to some embodiments of the present inventive concept.
- Figure 36 is a diagram illustrating unique code generation from 1 slice plane through a 3D volume of retina vessels according to some embodiments of the present inventive concept.
- Figure 37 are images and graphs illustrating slice planes through a 3D volume in the retina vessels according to some embodiments of the present inventive concept.
- “Time invariant” refers to a property that does not change over a time period of interest, for example, the structure of the iris in a normal healthy individual.
- “Slowly time variant” refers to a property that does not vary measurably within one measurement acquisition, but may change between acquisitions.
- “Time variant” refers to a property that is measurably changing during the course of one measurement event. For example, the pulsatility of an ocular artery.
- “Structural information” refers to morphological information arising from a combination of scattering and absorption in a tissue, without reference to, for example, spectroscopic or phase information.
- “Functional information” refers to information arising out of a physiological or metabolic condition of a sample, for example, wavelength dependent absorption or scattering (spectroscopic), or phase or flow information (e.g., Doppler).
- “En face” refers to a plane parallel to the plane of the surface of the tissue being imaged.
- “Intensity projection” refers to any of the processing techniques, known by those having skill in the art or anticipated, that create an en face view of a volumetric image set. It will be further understood that, as used herein, a transformation of an optical information set from frequency to time is equivalent to a transformation between frequency and space.
- a biometric identification system typically uses a unique personal attribute that can be measured with high sensitivity and specificity. To obtain the best results, this attribute should be one that cannot be easily modified or falsified cosmetically, yet is quickly and easily measured via non-invasive means.
- a secondary measurement may be acquired to re-enforce the primary measurement or to correlate with traditional measurement systems.
- the primary attribute may be a human fingernail bed as processed in an FDOCT imaging system, and the secondary attribute may be an image of the fingerprint.
- the image of the secondary attribute may be an analog or digital photograph, or it may be an en face image derived from an OCT image set.
- although embodiments of the present inventive concept are discussed herein with respect to the human fingernail bed, embodiments of the present inventive concept are not limited to this attribute.
- the iris may also be used as the attribute without departing from the scope of the present inventive concept.
- FDOCT: Fourier domain optical coherence tomography
- some embodiments of the present inventive concept depth-section the tissue below the nail and generate unique 2-D topographical maps of depth as a function of lateral position at each spatial location across the nail using optical coherence tomography (OCT).
- OCT: optical coherence tomography
- some embodiments of the present inventive concept discuss a novel, non-invasive optical biometric classifier.
- Some embodiments discussed herein utilize optical coherence tomography to resolve tissue microstructure in the human fingernail bed.
- the concepts can be applied using other depth-resolved imaging techniques, such as scanning confocal microscopy, without departing from the scope of the present inventive concept.
- an FDOCT system is used to obtain a volumetric image of the target tissue.
- Light from an optical source is directed via an optical splitter through a reference and sample arm path, reflected off of the reference reflector in the reference arm and backscattered from the tissue in the sample arm.
- the reference and sample signals are mixed in an optical coupler and the mixed light is directed to an optical detector.
- Interferometric cross-correlation terms are sampled in the frequency domain and processed using methods known to those skilled in the art to produce an array of depth-dependent data at one spatially unique location on the sample; this array of depth resolved data is known as an A-scan.
- An array of A-scans acquired as a beam scans in one direction across the sample forms a two-dimensional cross-sectional image of a slice of the tissue; this cross-sectional image is known as a B-scan.
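The processing just described — sampling interferometric cross-correlation terms in the frequency domain, transforming each into a depth-resolved A-scan, and stacking A-scans into a B-scan — can be sketched as follows. The DC subtraction and the assumption that the spectrum is sampled linearly in wavenumber are simplifications for illustration.

```python
import numpy as np

def spectral_interferogram_to_ascan(spectrum):
    """Process one frequency-domain (spectral) interferogram into a
    depth-resolved A-scan: subtract the DC (reference) background and
    inverse-Fourier-transform along the wavenumber axis."""
    dc = spectrum.mean()
    depth_profile = np.fft.ifft(spectrum - dc)
    # keep the positive-depth half; the magnitude maps reflectivity vs. depth
    return np.abs(depth_profile[: spectrum.size // 2])

def frames_to_bscan(spectra):
    """Stack A-scans acquired at adjacent lateral positions into a
    two-dimensional cross-sectional B-scan (depth x lateral position)."""
    return np.stack([spectral_interferogram_to_ascan(s) for s in spectra], axis=1)
```

A synthetic spectrum with a single cosine fringe corresponds to a single reflector, whose depth appears as a single peak in the resulting A-scan.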
- a B-Scan may be useful for obtaining time-invariant or time slowly-variant images of a sample,
- a collection of multiple B- Scans acquired across the sample defines an OCT volume.
- an array of A- Scans acquired as a function of time at one spatial location is known as an M-Scan.
- An M-scan may be useful for obtaining time-variant images of a sample.
- the OCT image may be obtained by scanning the sample light across the sample and, at each spatially unique point, acquiring a spectral interferogram whose intensity is a function of the depth-dependent back-scattering intensity of the sample.
- full-field OCT imaging may be provided using a large collimated spot on the sample and a narrow instantaneous linewidth from the source. This can be accomplished using, for example, a swept laser source or a superluminescent diode with a tunable filter.
- spectral images may be acquired in a plane parallel to the surface of the tissue, which are then processed using FDOCT processing techniques known to those having skill in the art.
- Projections through volumetric data sets can be used to decrease the dimensionality of the image data and provide a unique representation of the two dimensional (2D) structure appropriate to the attribute of interest.
- a projection created by processing the image intensity along the depth axis of a three dimensional (3D) volume produces an en face image of the volumetric data. This en face image can then be correlated with other OCT-generated en face images or photographs acquired through other imaging modalities, for example, standard flash photography, to track motion between subsequent volumes or to align the structure in the volume data for longitudinal analysis.
- projecting through the data may be accomplished using the average intensity projection, also known as the volume intensity projection, in which the signal intensity along each A-scan is averaged, collapsing each depth line to a single point representative of the average intensity along each depth line.
- the 2D en face image is composed of all points generated from the processing above.
- Another method of generating an intensity projection is the maximum intensity projection, in which the maximum value of each A-Scan is calculated and stored, collapsing each depth line to a single point containing the maximum intensity value along each depth line.
- the 2D en face image is composed of all points generated from the processing above.
- another method of generating an intensity projection is a histogram-based method in which the histogram of the intensity of each A-Scan is calculated and the maximum value of the histogram is stored, collapsing each depth line to a single point containing the maximum value of the histogram for each depth line.
- the 2D en face image is composed of all points generated from the processing above.
- another method of generating an intensity projection is a histogram-based method in which the histogram of the intensity of each A-Scan is calculated and the centroid or center of mass of the histogram is stored, collapsing each depth line to a single point containing the center of mass of the histogram for each depth line.
- the 2D en face image is composed of all points generated from the processing above.
- combinations of the intensity projections discussed above may be used to calculate an intensity projection based on standard algebraic operations.
- the intensity projections stated above may be applied along axes not parallel to the plane of the tissue surface to create projections through the 3D volume that are not en face, but do create an alternate 2D representation of the 3D volume data.
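The four projection methods above can be sketched with NumPy; the function name and the 64-bin histogram are illustrative choices, not part of the disclosure:

```python
import numpy as np

def en_face_projections(volume):
    """Collapse a 3D OCT volume (elevation x azimuth x depth) to 2D en face
    images using the projection methods described above."""
    # Average (volume) intensity projection: mean along each depth line (A-scan).
    avg_proj = volume.mean(axis=-1)
    # Maximum intensity projection: peak value along each depth line.
    max_proj = volume.max(axis=-1)
    # Histogram-based projections, computed per A-scan.
    hist_max = np.empty(volume.shape[:2])
    hist_centroid = np.empty(volume.shape[:2])
    bins = np.linspace(volume.min(), volume.max(), 65)
    centers = 0.5 * (bins[:-1] + bins[1:])
    for idx in np.ndindex(*volume.shape[:2]):
        counts, _ = np.histogram(volume[idx], bins=bins)
        hist_max[idx] = centers[np.argmax(counts)]                # mode of the histogram
        hist_centroid[idx] = np.average(centers, weights=counts)  # center of mass
    return avg_proj, max_proj, hist_max, hist_centroid
```

Projections along other axes (the non-en-face case above) follow by changing the `axis` argument.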
- the FDOCT system includes a low-coherence source (LCS) 100, an optical coupler (OC) 110, a reference arm (RA) 120, a sample arm (SA) 130, a computer system (CS) 140, a detector (D) 150 and a sample 160.
- the LCS 100 may be, for example, a broadband superluminescent diode having a center wavelength in the near-infrared (NIR) or IR range with sufficient bandwidth to provide less than about 20 μm axial and lateral resolution in tissue.
- Light from the diode is passed through the optical coupler 110 to a fixed reference arm 120, discussed below with respect to Figures 9 and 21, and the sample arm 130 that raster scans the beam across the sample 160.
- Reference and sample arm light are mixed in the optical coupler 110 and detected in parallel by the detector 150 using a one dimensional (1D) array detector, such as the SDOCT detector 151 of Figure 10.
- a tunable, or swept source may be used in place of the LCS 100, and the spectral interferogram may be collected serially with one or more point detectors. Once the spectral interferogram is acquired and resampled into a wavenumber array, the Fourier processing algorithms for the SDOCT and the SSOCT images are similar.
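The resample-then-Fourier-transform step common to SDOCT and SSOCT processing can be sketched as follows; the function name, the DC-suppression step, and the uniform-grid choice are illustrative assumptions:

```python
import numpy as np

def spectral_to_ascan(interferogram, wavelengths):
    """Resample a spectral interferogram (sampled uniformly in wavelength)
    onto a uniform wavenumber grid, then Fourier transform to obtain the
    depth-dependent back-scattering profile (A-scan)."""
    k = 2 * np.pi / wavelengths                  # wavenumber of each spectral sample
    k_uniform = np.linspace(k.min(), k.max(), k.size)
    order = np.argsort(k)                        # np.interp needs ascending abscissa
    resampled = np.interp(k_uniform, k[order], interferogram[order])
    resampled -= resampled.mean()                # suppress the DC term
    ascan = np.abs(np.fft.ifft(resampled))       # magnitude vs. depth
    return ascan[: k.size // 2]                  # keep the positive-depth half
```

A reflector at depth z appears as a sinusoidal fringe in k and transforms to a peak at the corresponding depth bin.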
- the computer system 140 may be used to process and analyze the data.
- 2D slice scans or 3D volume scans may be used to acquire depth slices through the fingernail bed.
- multiple A-Scan one dimensional (1D) depth lines or B-Scan 2D depth slices may be acquired and averaged as a function of time to improve the signal-to-noise ratio of the acquired data.
- FIG. 2 a block diagram of a full-field FDOCT system for imaging in accordance with some embodiments of the present inventive concept will be discussed.
- the LCS 100, the optical coupler 110, the reference arm 120, the computer system 140, and the sample 160 are similar to those discussed above with respect to Figure 1.
- a scanning fiber Fabry-Perot (FFP) filter is used to rapidly sweep through the wavelength output range.
- Light from the filter is passed through an optical coupler 110 to a fixed reference arm 120 and a sample arm 200 that expands and collimates the beam incident on a large area of the sample.
- Reference and sample arm light are mixed in the optical coupler 110 and detected using a 2D, full-field array detector 151, such as the FFOCT detector 153 of Figure 10.
- the system of Figure 3 includes a digital capture system 300.
- the digital capture system 300 may be, for example, a high speed video camera or a high resolution still camera.
- the digital capture system 300 illustrated in Figure 3 may be used to capture the fingerprint 161 of the sample while the FDOCT system captures the fingernail bed data.
- the digitally captured image may be used in standard digital fingerprint identification databases to append the unique code to existing identification information.
- FIG 4 a block diagram of a dual raster scanning FDOCT system according to some embodiments of the present inventive concept will be discussed.
- the LCS 100, the optical coupler 110, the reference arm 120, the computer system 140, the detector 150 and the sample 160 are similar to those discussed above with respect to Figure 1.
- the FDOCT system of Figure 4 includes two sample arms 130 and 400.
- the FDOCT sample arm 400 is used to raster scan the fingerprint 161 at the same time as the fingernail bed 160 is raster scanned by the sample arm 130.
- the volumetric FDOCT data may be processed to retrieve an en face image of the fingerprint, which in turn may be used in standard digital fingerprint identification databases to append the unique code to existing identification information.
- Embodiments of the present inventive concept are not limited to the configuration of Figure 4.
- the system may be implemented using multiple reference arm topologies illustrated in Figure 9.
- element 121 is a fixed path length reference arm.
- element 122 is a switched reference arm that alternates between two unique reference arm positions, allowing serial acquisition of the signal from the two unique sample arms.
- element 123 is a static reference arm in which the reference arm light is split between two unique paths matched to the sample arm paths. This implementation typically requires depth multiplexing at the detector and as such a detection method with sufficient depth range to accommodate the two sample arm images must be used.
- FIG. 5 a block diagram of a dual, full-field FDOCT system for combined fingernail bed and fingerprint imaging according to some embodiments of the present inventive concept will be discussed.
- the LCS 100, the optical coupler 110, the reference arm 120, the sample arm 200, the computer system 140, the detector 151 and the sample 160 are similar to those discussed above with respect to Figure 2.
- embodiments illustrated in Figure 5 include a second full-field sample arm 500, which uses a switched reference arm 122 discussed above with respect to Figure 9.
- FIG. 6 a block diagram of a raster scanning FDOCT system for iris imaging according to some embodiments of the present inventive concept will be discussed.
- the LCS 100, the optical coupler 110, the reference arm 120, the sample arm 130, the computer system 140, and the detector 150 are similar to those discussed above with respect to Figure 1.
- the system of Figure 6 is used to image an iris 600. Rectangular volume scans or circular scans acquired through the iris may be used to capture depth slices through the iris according to some embodiments of the present inventive concept as will be discussed further below.
- FIG. 7 a block diagram of a full-field FDOCT system for iris imaging according to some embodiments of the present inventive concept will be discussed.
- the LCS 100, the optical coupler 110, the reference arm 120, the sample arm 200, the computer system 140 and the detector 150 are similar to those discussed above with respect to Figure 2.
- the system of Figure 7 is used to image an iris 600.
- FIG. 8 a block diagram of a raster scanning FDOCT system with combined digital capture system according to some embodiments of the present inventive concept will be discussed.
- the LCS 100, the optical coupler 110, the reference arm 120, the sample arm 130, the computer system 140, the detector 150 and the digital capture system 300 are similar to those discussed above with respect to Figure 3.
- the system of Figure 8 is used to acquire both FDOCT and digital capture 300 images of the iris 600.
- FIG. 11 images and graphs illustrating slice planes through a 3D volume in the fingernail bed according to some embodiments of the inventive concept will be discussed.
- Some embodiments of the present inventive concept involve the generation of a unique code from a 2-dimensional slice 1107 through a 3-dimensional volume 1100 acquired at the fingernail bed.
- the orientation axes 1101 and corresponding fingernail orientation 1102 are illustrated in Figure 11.
- the traditional B-Scan consists of spatially unique locations along the azimuth dimension 1101 with a volume consisting of multiple B-Scans at spatially unique planes along the elevation dimension.
- a single depth line 1103 provides an intensity profile 1104 at a single, spatially unique scan position 1105.
- Transverse B-Scan slices 1106 provide images as a function of azimuth and depth 1107 through the nailbed 1108. Sagittal slices through the structure 1109 provide similar B-Scan images 1110 through the nailbed 1111.
- coronal slices 1112 and corresponding C-Scan images 1113 can be reconstructed for the nailbed 1114.
- a raster scan orthogonal to the banded structure in the fingernail bed may be used.
- a volume of data 1100 is generated.
- Generating a projection along the depth axis returns a volume intensity projection.
- FIG. 12 graphs and images illustrating generation and evolution of the Gabor filter according to some embodiments of the present inventive concept will be discussed.
- Some embodiments of the present inventive concept use the Gabor wavelet as the filtering function. Daugman proposes the use of the Gabor wavelet to analyze the localized spatial frequency content of the warped iris image. Similar processing may be applied to the banded structure of the fingernail bed.
- the 1-D Gabor filter is composed of a complex exponential windowed by a Gaussian 1200:
- f(u, x, x₀, σₓ) = A · exp(−(x − x₀)² / (2σₓ²)) · exp(i2πu(x − x₀)), where:
- f(u, x, x₀, σₓ) is the filtering function
- A is the amplitude of the function
- i is the imaginary unit
- u is the spatial frequency
- x is the space variable
- x₀ is an offset in space that defines the center of the Gaussian envelope
- σₓ is the standard deviation that defines the width of the Gaussian envelope.
- the real and imaginary parts 1201 are sinusoids windowed by a Gaussian function.
- the 2-D Gabor filter extends this to a 2D complex exponential windowed by a 2D Gaussian:
- f(u, v, x, y, x₀, y₀, σₓ, σᵧ) = A · exp(−(x − x₀)²/(2σₓ²) − (y − y₀)²/(2σᵧ²)) · exp(i2π(u(x − x₀) + v(y − y₀))), where:
- f(u, v, x, y, x₀, y₀, σₓ, σᵧ) is the filtering function
- A is the amplitude of the function
- i is the imaginary unit
- x and y are the space variables and represent the spatial axes along which the image is collected
- x₀ and y₀ are offsets in space that define the center of the Gaussian envelope and are typically set to the center of the Gabor wavelet window
- u and v are the spatial frequencies of the complex exponential term and are typically set to some range of spatial frequencies that overlap with spatial frequencies that are likely to occur within the imaged tissue
- σₓ and σᵧ are the standard deviations that define the width of the Gaussian envelope along the spatial axes.
- the real 1202 and imaginary parts are 2D sinusoids windowed by a 2D Gaussian function.
- x is typically mapped to the azimuth scan dimension and y is typically mapped to the depth dimension. If the range of x is −2 to 2 millimeters and the range of y is 0 to 2 mm, then x₀ is set to 0 mm and y₀ is set to 1 mm to center the Gabor wavelet in the image window. If spatially varying bands within the tissue have spatial frequencies ranging from 0.01 to 0.05 mm⁻¹ along the azimuthal dimension and 0.2 to 0.3 mm⁻¹ in the depth dimension, then an appropriate range for u would be 0.005 to 0.1 mm⁻¹ and an appropriate range for v would be 0.1 to 0.4 mm⁻¹. If the region that contains the bands is 1 mm in azimuth and 0.5 mm in depth, then σₓ and σᵧ may be chosen such that the full width at half maximum of the Gaussian envelope covers this range.
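A 2D Gabor wavelet with parameters in the ranges quoted above can be constructed as follows; the grid sizes and default widths are illustrative assumptions:

```python
import numpy as np

def gabor_2d(u, v, x, y, x0=0.0, y0=0.0, sigma_x=0.5, sigma_y=0.25, A=1.0):
    """Complex 2D Gabor wavelet: a complex exponential at spatial frequencies
    (u, v) windowed by a Gaussian envelope centered at (x0, y0).
    x and y are 2D coordinate grids (e.g. from np.meshgrid), in mm."""
    envelope = np.exp(-((x - x0) ** 2) / (2 * sigma_x ** 2)
                      - ((y - y0) ** 2) / (2 * sigma_y ** 2))
    carrier = np.exp(1j * 2 * np.pi * (u * (x - x0) + v * (y - y0)))
    return A * envelope * carrier

# Example grid matching the ranges above: x in [-2, 2] mm, y in [0, 2] mm,
# wavelet centered at (x0, y0) = (0, 1) mm.
x, y = np.meshgrid(np.linspace(-2, 2, 256), np.linspace(0, 2, 128))
wavelet = gabor_2d(u=0.05, v=0.25, x=x, y=y, x0=0.0, y0=1.0)
```

The real and imaginary parts of `wavelet` are the windowed 2D sinusoids referenced at 1202.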
- Scaled versions of a 2-D Gabor wavelet 1303 are applied to the extracted structure 1302.
- the Gabor wavelet may be cross-correlated with the flattened structure in the spatial dimension or the Fourier transforms of the Gabor wavelet and the flattened structure may be multiplied in the spatial frequency dimension.
- the width of the light and dark bands within the extracted structure may be from about 0.13 mm to about 0.25 mm.
- the spatial frequency content of the Gabor filter should be varied in fine increments over multiple scales of the Nyquist limit, which in this case may be about 0.065 mm⁻¹.
- the standard deviation of the Gaussian, σ, should be varied in multiple scales around the known width of the bands.
- a real and imaginary filtered image are generated by multiplying the 2D Gabor filter with the extracted bands.
- the phase of h yields coordinates in the complex plane that are mapped to bit pairs, where (1,1), (0,1), (0,0), and (1,0) are mapped to quadrants I-IV, respectively 1304.
- for a binary map compared with itself, the Hamming distance should be about 0
- a map processed with its inverse returns a Hamming distance of 1
- two randomly generated binary maps should return a Hamming distance of 0.5.
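The quadrant-to-bit-pair mapping and the Hamming-distance properties above can be sketched as follows; the function names are illustrative:

```python
import numpy as np

def phase_to_bits(filtered):
    """Map the phase of a complex filtered image to bit pairs by complex
    quadrant: (1,1), (0,1), (0,0), (1,0) for quadrants I-IV respectively.
    The first bit is the sign of the real part, the second of the imaginary."""
    re = filtered.real >= 0
    im = filtered.imag >= 0
    return np.stack([re, im], axis=-1).astype(np.uint8)

def hamming_distance(code_a, code_b):
    """Fraction of differing bits between two binary maps: 0 for identical
    maps, 1 for a map and its inverse, ~0.5 for independent random maps."""
    return np.mean(code_a != code_b)
```

A code compared with itself returns 0, and with its bitwise inverse returns 1, matching the bounds stated above.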
- Similar comparison methods, such as the cross-correlation, can be applied to determine the relationship between codes 1403.
- the correlation peak may then be used to determine the similarity between codes.
- a decision threshold can be generated based on a database of true positives and their resultant Hamming distance.
- the Hamming distance distribution 1404 will have a unique shape, and a decision line may be calculated to yield a desired sensitivity and specificity.
- FIG. 15 a diagram illustrating unique code generation from a slice plane through a 3D volume of fingernail bed data according to some embodiments of the present inventive concept will be discussed.
- Scans acquired orthogonal to the banded structure of the nail bed are flattened 1500 by finding the contour of the inner surface of the fingernail and warping the image based on the contour.
- a filter 1501 can then be applied to the data to extract or emphasize information content not readily available in the original image data.
- the banded structure can then be extracted from the area under the fingernail and processed 1502 to return a code 1503 unique to not only each individual but to each finger as well.
- Figure 15 illustrates the processing involved in the image analysis. The processing detailed above is applied to the extracted region, and a binary pair is generated for each set of filter values, yielding a map of bit values as illustrated in the code 1503 that is directly related to the unique spatial frequency content contained in each finger's pattern.
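The flattening step 1500 can be sketched as follows; using the brightest pixel per A-scan as the surface contour is a simplifying assumption (a practical segmentation would be more robust), and the function name is illustrative:

```python
import numpy as np

def flatten_to_surface(bscan):
    """Flatten a B-scan (depth x azimuth) by finding, per A-scan, the depth
    of the brightest reflection (taken here as the inner surface contour)
    and shifting each column so the contour lies on a common row."""
    contour = np.argmax(bscan, axis=0)       # surface depth per azimuth position
    target = int(np.median(contour))         # common row to align the contour to
    flat = np.empty_like(bscan)
    for col in range(bscan.shape[1]):
        flat[:, col] = np.roll(bscan[:, col], target - contour[col])
    return flat, contour
```

After flattening, the banded region under the surface can be cropped and passed to the Gabor filtering stage 1501.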
- a depth slice may be acquired from one finger 1600 at two different time points 1601, 1603 and subsets 1602, 1604 from the two slices can be analyzed to determine the Hamming distance for identical finger scans as a function of time. After acquisition of many such scans, the Hamming distance range for unique fingers may be statistically determined for large data sets.
- Depth slices 1601, 1701 may be acquired from two unique fingers 1600, 1700 on different hands and subsets 1602, 1702 from the two slices can be analyzed to determine the Hamming distance for different fingers. After acquisition of many such scans, the Hamming distance range for uniquely different fingers on different hands may be statistically determined for large data sets.
- Figure 18 a series of images illustrating OCT slices from fingers 3 and 4 and the subsets used for unique, non-matching code generation according to some embodiments of the present inventive concept will be discussed.
- Depth slices 1701, 1801 may be acquired from two unique fingers 1700, 1800 on the same hand and subsets 1702, 1802 from the two slices can be analyzed to determine the Hamming distance for different fingers on the same hand. After acquisition of many such scans, the Hamming distance range for uniquely different fingers on the same hand may be statistically determined for large data sets.
- a database may be generated from many codes captured from the same finger as a function of time to determine the variability in the Hamming distance as a function of time and unique spatial position scanned to determine the range of Hamming distances that may be assigned to a positive identification.
- a database may be generated from many codes captured from different fingers as a function of time to determine the variability in the Hamming distance as a function of time and unique spatial content to determine the range of Hamming distances likely in a negative identification.
- the true positive and true negative ranges may be modeled by a normal distribution to approximate the false positive and false negative ranges.
- a receiver operating characteristic (ROC) curve may be generated based on the true positives and estimated false positives to highlight the sensitivity and specificity of the code as implemented.
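A decision-threshold sweep over the genuine (true-positive) and impostor (true-negative) Hamming-distance samples can be sketched as follows; the function name and the uniform threshold grid are illustrative:

```python
import numpy as np

def roc_curve(genuine, impostor, thresholds=None):
    """Sweep a decision threshold over Hamming distances: scores at or below
    the threshold are declared matches. Returns (threshold, sensitivity,
    specificity) triples from genuine (true-match) and impostor (non-match)
    distance samples."""
    genuine = np.asarray(genuine)
    impostor = np.asarray(impostor)
    if thresholds is None:
        thresholds = np.linspace(0.0, 1.0, 101)
    curve = []
    for t in thresholds:
        sensitivity = np.mean(genuine <= t)   # fraction of true matches accepted
        specificity = np.mean(impostor > t)   # fraction of non-matches rejected
        curve.append((t, sensitivity, specificity))
    return curve
```

The decision line 1404 would be chosen as the threshold giving the desired sensitivity/specificity trade-off on this curve.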
- Operations begin at block 1900 by logging into the system. After logging into the security system (block 1900), the display will be cleared (block 1901) and the user will be prompted to start the exam (block 1902). The subject will place one or more fingers in the path of the scan beam and the necessary OCT data will be acquired (block 1903). This data will then be processed by extracting a sub-region of the OCT image (block 1904) and generating the code (block 1905) associated with each unique finger. Smart search algorithms will examine a database of stored binary maps to determine if a match exists within the database (block 1906).
- If a match exists within the database (block 1907), the software will display relevant subject information to the user (block 1909). If no match exists within the database (block 1907), the option will be presented to the user to add the subject to the database (block 1908). It will be determined if another finger needs to be examined (block 1911). If another finger needs to be examined (block 1911), operations return to block 1903 and repeat. If, on the other hand, no other fingers need to be examined, operations of the exam may terminate (block 1910).
- the software may be expanded to incorporate advanced security measures, which would include but not be limited to interfacing to national security software for more detailed subject information such as any outstanding warrants or alerting law enforcement if a wanted subject has been identified.
- the results consisting of either the processed OCT data and the resultant code or only the resultant code may be stored on a local or networked computer.
- FIG. 20 a series of slice planes through a 3D volume in the iris and representative OCT images from said planes according to some embodiments of the inventive concept will be discussed. Some embodiments involve the generation of a unique code from a 2-dimensional slice 2006 through a 3-dimensional volume 2000 acquired at the iris. The orientation axes 2001 and corresponding iris orientation 2002 are detailed in Figure 20.
- a single depth line 2003 (A-Scan) provides an intensity profile 2004 at a single, spatially unique scan position 2005.
- Transverse B- Scan slices 2006 provide images as a function of azimuth and depth 2007 through the iris 2008.
- Circular slices through the structure 2009 provide B-Scan images 2010 through the iris 2011.
- a raster scan orthogonal to the banded structure in the fingernail bed may be used. By acquiring multiple scans orthogonal to the bands in the bed at different positions across the nail, a volume of data 2000 is generated. Generating a projection along the depth axis returns a volume intensity projection.
- FIG. 21 diagrams of unique code generation from 1 slice through a 3D volume of iris data according to some embodiments of the inventive concept will be discussed.
- methods of slicing data to best analyze the spatial frequency content of the tissue topology of the iris will be discussed.
- Circular scans acquired at the iris are flattened 2100 by finding the contour of the inner surface of the iris and warping the image based on the contour.
- a filter 2101 can then be applied to the data to extract or emphasize information content not readily available in the original image data.
- the fine structure can then be extracted from the iris and processed 2102 to return a code 2103 unique to not only each individual but to each eye as well.
- the code generated by some embodiments of the present inventive concept may be used to aid in subject identification.
- This code could be included with any form of personal identification, such as a passport or government-issued ID, as an additional security measure.
- the code could be, for example, read by a barcode scanner interfaced to the OCT database to aid in rapid identification and screening in high-traffic security checkpoints.
- While some embodiments of the present inventive concept are discussed above, other embodiments may also be envisioned without departing from the scope of the present inventive concept. For example, further embodiments of the present inventive concept are illustrated by slices 1106, 1109, and 1112 of Figure 11.
- a slice orthogonal to the depth axis 1112 would return the band pattern as seen from the nail.
- a slice 1106 would yield a side-long view of the banded structure. Sampling more densely in this dimension could allow for better resolution of the spatial frequency content contained in the bands.
- With the volume of data it may be possible to analyze the data using embodiments discussed herein and then use an additional implementation, such as the analysis using slices 1106, to improve the confidence level of the result.
- traditional fingerprint data may be collected using systems according to some embodiments of the present inventive concept.
- An SDOCT volume may be acquired over the finger tip; the volume projection of such a volume would yield an image containing traditional fingerprint data.
- this data could be acquired by scanning the top of the nail with the SDOCT system while at the same time capturing an image of the finger tip using a standard still or video camera. With both data types captured concurrently, the new SDOCT- generated biometric could be correlated with and stored alongside traditional fingerprint data contained in law enforcement databases, facilitating easier integration into current security systems.
- data acquired using systems and methods according to some embodiments of the present inventive concept may be processed using a computer system 140 (data processing system).
- a data processing system 2230 configured in accordance with embodiments of the present inventive concept will be discussed with respect to Figure 22.
- the data processing system 2230 may include a user interface 2244, including, for example, input device(s) such as a keyboard or keypad, a display, a speaker and/or microphone, and a memory 2236 that communicate with a processor 2238.
- the data processing system 2230 may further include I/O data port(s) 2246 that also communicates with the processor 2238.
- the I/O data ports 2246 can be used to transfer information between the data processing system 2230 and another computer system or a network using, for example, an Internet Protocol (IP) connection.
- These components may be conventional components such as those used in many conventional data processing systems, which may be configured to operate as described herein.
- the processor 2238 communicates with the memory 2236 via an address/data bus 2348, the I/O data ports 2246 via address/data bus 2349 and the electronic display 2339 via address/data bus 2350.
- the processor 2238 can be any commercially available or custom enterprise, application, personal, pervasive and/or embedded microprocessor, microcontroller, digital signal processor or the like.
- the memory 2236 may include any memory device containing the software and data used to implement the functionality of the data processing system 2230.
- the memory 2236 can include, but is not limited to, the following types of devices: ROM, PROM, EPROM, EEPROM, flash memory, SRAM, and DRAM.
- the memory 2236 may include several categories of software and data used in the system: an operating system 2352; application programs 2354; input/output (I/O) device drivers 2358; and data 2356.
- the operating system 2352 may be any operating system suitable for use with a data processing system, such as OS/2, AIX or zOS from International Business Machines Corporation, Armonk, NY, Windows95, Windows98, Windows2000 or WindowsXP, or Windows CE or Windows 7 from Microsoft Corporation, Redmond, WA, Palm OS, Symbian OS, Cisco IOS, VxWorks, Unix or Linux.
- the I/O device drivers 2358 typically include software routines accessed through the operating system 2352 by the application programs 2354 to communicate with devices such as the I/O data port(s) 2246 and certain memory 2236 components.
- the application programs 2354 are illustrative of the programs that implement the various features of some embodiments of the present inventive concept and may include at least one application that supports operations according to embodiments of the present inventive concept.
- the data 2356 may include acquired scans 2359, subsets 2360, filtered images 2361 and codes 2362, which may represent the static and dynamic data used by the application programs 2354, the operating system 2352, the I/O device drivers 2358, and other software programs that may reside in the memory 2236.
- the application programs 2354 include OCT imaging modules 2365. While the present inventive concept is illustrated with reference to OCT imaging modules 2365 as being application programs in Figure 23, as will be appreciated by those of skill in the art, other configurations fall within the scope of the present inventive concept. For example, rather than being application programs 2354, these circuits and modules may also be incorporated into the operating system 2352 or other such logical division of the data processing system. Furthermore, while the OCT imaging modules 2365 are illustrated in a single system, as will be appreciated by those of skill in the art, such functionality may be distributed across one or more systems.
- the present inventive concept should not be construed as limited to the configuration illustrated in Figure 23, but may be provided by other arrangements and/or divisions of functions between data processing systems.
- Figure 23 is illustrated as having various circuits, one or more of these circuits may be combined without departing from the scope of the present inventive concept.
- the OCT imaging modules 2365 may be used to implement various portions of the present inventive concept capable of being performed by a data processing system.
- the OCT imaging modules may be used to process and assess the images produced by the OCT system according to some embodiments of the present inventive concept.
- Example embodiments are described above with reference to block diagrams and/or flowchart illustrations of methods, devices, systems and/or computer program products. It is understood that a block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, and/or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, create means (functionality) and/or structure for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks.
- These computer program instructions may also be stored in a computer- readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instructions which implement the functions/acts specified in the block diagrams and/or flowchart block or blocks.
- the computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks.
- example embodiments may be implemented in hardware and/or in software (including firmware, resident software, micro-code, etc.).
- example embodiments may take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system.
- a computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
- the computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, and a portable compact disc read-only memory (CD-ROM).
- the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.
- Computer program code for carrying out operations of data processing systems discussed herein may be written in a high-level programming language, such as Java, AJAX (Asynchronous JavaScript), C, and/or C++, for development convenience.
- computer program code for carrying out operations of example embodiments may also be written in other programming languages, such as, but not limited to, interpreted languages.
- Some modules or routines may be written in assembly language or even micro-code to enhance performance and/or memory usage.
- embodiments are not limited to a particular programming language.
- program modules may also be implemented using discrete hardware components, one or more application specific integrated circuits (ASICs), a field programmable gate array (FPGA), a programmed digital signal processor, a programmable logic controller (PLC), or a microcontroller.
- the unique reference code generated for a sample may be used in making a diagnostic decision.
- the reference codes may be indicative of a class of objects.
- a "class of objects" refers to any object for which a multiple dimensional image can be acquired and a unique code can be generated.
- the class of objects may be defined by a regular structure that is nominally time invariant.
- time invariant refers to a property that does not change over a time period of interest.
- the class of objects may include a physical state, a physical object (an integrated circuit, a printed circuit board, etc.), human tissue and the like.
- Each class of objects has a reference code, which is derived from aggregate data.
- in some embodiments, the subject of the image is not time-invariant.
- in these embodiments, a unique reference code may be generated for the subject being imaged and this reference code can be compared to reference codes for each class. The result of this comparison can be used to provide a diagnosis.
- for example, where the subject being imaged is a circuit board, embodiments discussed herein may be used for pattern recognition, and a diagnosis, such as whether the board contains a manufacturing failure, can be made.
- a state is a condition of being that is identifiable and has attributes that can be captured in a multidimensional image.
- the state must be sufficiently consistent over time to have meaning. Accordingly, embodiments of the inventive concept may be used to diagnose diseases in a human subject, identify manufacturing failures in a manufactured product, identify people using facial recognition and the like as will be discussed further below with respect to Figures 24 through 37.
- N-Dimensional data 2400, 2401, 2402 can be processed as-is or collapsed to a lower dimensional order 2403 with a kernel operation 2404 that extracts unique identifiers from the data set 2403 and generates a unique code 2405 associated with the data set 2403. This code can then be compared 2406 with other unique identifier codes and a decision 2407 can be made based on the relationship between the code generated 2405 and the reference code or codes.
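The kernel-and-code flow just described can be sketched in miniature as follows. The thresholding kernel, the bit-agreement comparison, and the 0.9 decision threshold are illustrative assumptions only, not the specific kernels or thresholds described herein; the input is assumed to be a data set already collapsed to 2-D.

```python
def kernel_code(image):
    """Toy kernel: one bit per pixel, set where the pixel exceeds the
    image mean; captures coarse structure as a binary code."""
    flat = [v for row in image for v in row]
    mean = sum(flat) / len(flat)
    return [1 if v > mean else 0 for v in flat]

def compare_codes(code, reference):
    """Fraction of agreeing bits; 1.0 means identical codes."""
    return sum(a == b for a, b in zip(code, reference)) / len(code)

def decision(code, reference, threshold=0.9):
    """Positive when the generated code is close enough to the reference."""
    return compare_codes(code, reference) >= threshold
```

Here a positive decision simply means the generated code agrees with the reference code on at least 90% of its bits.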
- data to be processed 2403 (Figure 24) can be N-Dimensional.
- the data to be processed 2403 may be 1 dimensional 2500, i.e., originating from one unique location in space; 1 dimensional + Time 2501, i.e., originating from one unique location in space as measured over some finite period of time; 2 dimensional 2502, i.e., originating from multiple spatial locations along a plane or surface; 2 dimensional + Time 2503, i.e., originating from a plane or surface as measured over some finite period of time; 3 dimensional 2504, i.e., originating from multiple spatial locations contained within a three dimensional volume; or 3 dimensional + Time 2505, i.e., originating from multiple spatial locations contained within a three dimensional volume as measured over some finite period of time.
- this dimensional data is provided for example only; embodiments of the present inventive concept are not limited thereto.
- the kernel applied 2404 may match the dimensionality of the data 2403 (Figure 24) along 1 dimensional 2600; 1 dimensional + Time 2601; 2 dimensional 2602; 2 dimensional + Time 2603; 3 dimensional 2604; or 3 dimensional + Time 2605 spaces.
- reference codes 2406 may be generated by acquiring data of the desired observable 2700 in the form of an image 2701 or other data set, and applying a kernel 2702 to extract feature classes of interest 2703, for example, Class 1, Class 2, Class N and the like.
- kernels 2702 are optimized across many data sets so that, despite background variability between data sets, a clear decision threshold remains between cases where the feature of interest is present and cases where it is absent.
- reference codes 2704 for each feature class 2703 may be created along with a decision threshold 2705 plot to indicate how close an input code 2405 (Figure 24) must be to the class reference code to be included within the class (positive result) or excluded from the class (negative result).
- These kernels can be applied 2404 (Figure 24) to the input data 2403 (Figure 24) and compared against the reference codes generated 2704 to determine whether or not the feature class exists within the input data 2403 (Figure 24).
- the "observable" 2700 may be a measured or derived parameter that may include, for example, spectra, structure, phase, flow, polarization, scattering coefficient or cross-section, anisotropy, or other biologically relevant optical parameters.
- Kernels used 2702 may be class-specific and optimized to extract information content from the input data 2701 that is more likely to exist within a specific class 2703 and will select a specific class with high confidence. Other kernels 2702 may be more generic and targeted at determining class membership in one or more classes with lower confidence. For example, kernel 1 may be configured to target a specific feature within the input data and determine whether or not the input data contains that feature, while kernel 2 may be configured to loosely target multiple features within the input data to determine whether or not the input data may belong to a collection of classes.
- kernel 1 may be configured to select for spatial frequencies that are indicative of tortuous vessels in diabetic retinopathy. Images with a low correlation with the "Human adult retina diabetic retinopathy tortuosity classifier" code have a level of tortuosity (or lack of tortuosity) that indicates they may not have diabetic retinopathy. Kernel 2 may be configured to select for the general spatial frequency content and textural information associated with adult human retinal images. Images with a high correlation to the "Human adult retina general classifier" code are most likely human adult retina images.
- a family of filter kernels and classifier codes may be generated for disease states and run against the code generated from an input data set to determine if the data set may show some of the features indicative of one or more of the disease states.
- the same technique may be applied with a more general kernel family to determine in a broader sense an image classification such as "adult human retina" or "adult human cornea."
- the input code 2405 may be compared to a library of reference codes 2704 (Figure 27) using, for example, cross-correlation or other integral or binary operation.
- the resultant function 2800, which may be a cross-correlation function, can be obtained for all codes in the reference library 2704 (Figure 27) or all codes within a desired feature class.
- Each feature class may be linked with a decision curve 2801 as generated during the reference library creation process 2705 (Decision Thresholds, Figure 27). If the peak of the resultant function 2800 lies within the positive decision region of only one code, then the input data 2403 (Figure 24) may be deemed to contain features known to reside in the matched class 2802 (identification).
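The comparison and identification logic described above can be sketched as follows. Normalized bit agreement stands in for the peak of the cross-correlation function, and the per-class thresholds play the role of the decision curves 2801; identification is declared only on a unique positive match, mirroring the single-match condition above. All names and values are illustrative assumptions.

```python
def similarity(code, reference):
    """Normalized bit agreement in [0, 1]; a stand-in for the peak of a
    cross-correlation function between two binary codes."""
    return sum(a == b for a, b in zip(code, reference)) / len(code)

def identify(code, library, thresholds):
    """library: {class_name: reference_code}; thresholds: per-class minimum
    similarity. Return the class name on a unique positive match; return
    None when no class, or more than one class, is positive."""
    positives = [name for name, ref in library.items()
                 if similarity(code, ref) >= thresholds[name]]
    return positives[0] if len(positives) == 1 else None
```

An ambiguous result (two or more positive classes) deliberately yields no identification rather than a guess.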
- N-Dimensional data 3000 may require additional processing 3001 before applying a generic or class-specific kernel 3002 to the N or N-m dimensional data. After a code is generated 3003, additional processing 3004 may be required before comparison 3006 with one or more reference codes 3005 before a decision or identification is made 3007.
- referring now to FIG. 31, images and graphs illustrating slice planes through a 3D volume in the fingerprint according to some embodiments of the inventive concept will be discussed.
- Some embodiments of the present inventive concept involve the generation of a unique code from a 2-dimensional slice 3107 through a 3-dimensional volume 3100 acquired from the fingerprint.
- the orientation axes 3101 and corresponding fingerprint orientation 3102 are illustrated in Figure 31.
- the traditional B-Scan consists of spatially unique locations along the azimuth dimension 3101 with a volume consisting of multiple B-Scans at spatially unique planes along the elevation dimension 3101.
- a single depth line 3103 (A-Scan) provides an intensity profile 3104 at a single, spatially unique scan position 3105.
- Transverse B-Scan slices 3106 provide images as a function of azimuth and depth 3107 through the fingerprint 3108. Sagittal slices through the structure 3109 provide similar B-Scan images 3110 through the fingerprint 3111. Once an FDOCT volume has been acquired, coronal slices 3112 and corresponding C-Scan images 3113 can be reconstructed for the fingerprint 3114.
- a raster scan orthogonal to the banded structure in the fingerprint may be used. By acquiring multiple scans orthogonal to the bands in the fingerprint at different positions across the fingerprint, a volume of data 3100 is generated.
- Generating a projection along the depth axis returns a volume intensity projection.
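The depth-axis projection just mentioned can be sketched as follows; averaging each depth column collapses the volume to a 2-D en-face intensity image. The axis ordering volume[depth][row][col] is an assumption for illustration.

```python
def volume_intensity_projection(volume):
    """Average each depth column of a volume, collapsing
    volume[depth][row][col] to a 2-D en-face intensity image."""
    depth = len(volume)
    rows, cols = len(volume[0]), len(volume[0][0])
    return [[sum(volume[z][r][c] for z in range(depth)) / depth
             for c in range(cols)] for r in range(rows)]
```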
- Figure 32 illustrates the processing involved in the image analysis. The processing detailed above is applied to the extracted region, and a binary pair is generated for each set of filter values, yielding a map of bit values as illustrated in the code 3203 that is directly related to the unique spatial frequency content contained in each fingerprint's pattern.
- referring now to Figure 32, diagrams of unique code generation from one slice through a 3D volume of fingerprint data according to some embodiments of the inventive concept will be discussed.
- methods of slicing data to best analyze the spatial frequency content of the tissue topology of the fingerprint will be discussed.
- Circular scans acquired of the fingerprint are flattened 3200 by finding the contour of the inner surface of the fingerprint and warping the image based on the contour.
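The flattening step just described can be sketched as follows, under the simplifying assumption that the inner surface appears as the brightest pixel in each A-scan (column): find that contour, then shift each column so the contour lies on one row. A real implementation would use a more robust contour-detection method.

```python
def flatten(image):
    """image[row][col] intensities -> column-shifted (flattened) image,
    zero-padded, so the detected surface sits on row 0."""
    rows = len(image)
    cols = len(image[0])
    # Contour: row index of the maximum intensity in each column.
    contour = [max(range(rows), key=lambda r: image[r][c]) for c in range(cols)]
    flat = [[0] * cols for _ in range(rows)]
    for c in range(cols):
        shift = contour[c]
        for r in range(rows - shift):
            flat[r][c] = image[r + shift][c]
    return flat
```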
- a filter 3201 can then be applied to the data to extract or emphasize information content not readily available in the original image data.
- the fine structure can then be extracted from the fingerprint and processed 3202 to return a code 3203 unique to not only each individual but to each fingerprint as well.
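The "binary pair per filter" idea can be sketched as follows for a 1-D signal, in the spirit of iris-code generation (cf. Daugman): each spatial-frequency filter yields a cosine and a sine projection, and the sign of each projection supplies one bit of the code. The sinusoidal filters here are an illustrative stand-in for the actual filter 3201.

```python
import math

def binary_pair_code(signal, frequencies):
    """For each spatial frequency, project the 1-D signal onto a cosine
    and a sine component; the sign of each projection yields one bit,
    giving a two-bit pair per filter."""
    n = len(signal)
    bits = []
    for f in frequencies:
        re = sum(signal[i] * math.cos(2 * math.pi * f * i / n) for i in range(n))
        im = sum(signal[i] * math.sin(2 * math.pi * f * i / n) for i in range(n))
        bits.append(1 if re >= 0 else 0)
        bits.append(1 if im >= 0 else 0)
    return bits
```

Codes built this way are directly tied to the spatial frequency content of the ridge pattern, so two scans of the same structure yield nearly identical bit maps.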
- referring now to FIG. 33, images and graphs illustrating slice planes through a 3D volume in the retina according to some embodiments of the present inventive concept will be discussed.
- B-scans shown in Figure 33 are examples meant to illustrate the approximate appearance of the data along each plane; the B-scan under the sagittal slice illustration is actually a transverse section.
- the image under the C-scan is actually a projection along the depth axis rather than an individual slice along the coronal plane.
- some embodiments of the present inventive concept involve the generation of a unique code from a 2-dimensional slice 3307 through a 3-dimensional volume 3300 acquired at the retina.
- the orientation axes 3301 and corresponding retina orientation 3302 are illustrated in Figure 33.
- the traditional B-Scan consists of spatially unique locations along the azimuth dimension 3301 with a volume consisting of multiple B-Scans at spatially unique planes along the elevation dimension.
- a single depth line 3303 (A-Scan) provides an intensity profile 3304 at a single, spatially unique scan position 3305.
- Transverse B-Scan slices 3306 provide images as a function of azimuth and depth 3307 through the retina 3308.
- Sagittal slices through the structure 3309 provide similar B-Scan images 3310 through the retina 3311. Once an FDOCT volume has been acquired, coronal slices 3312 and corresponding C-Scan images 3313 can be reconstructed for the retina 3314.
- a raster scan orthogonal to the banded structure in the retina may be used. By acquiring multiple scans orthogonal to the bands in the retina at different positions across the retina, a volume of data 3300 is generated. Generating a projection along the depth axis returns a volume intensity projection.
- Figure 34 illustrates the processing involved in the image analysis. The processing detailed above is applied to the extracted region, and a binary pair is generated for each set of filter values, yielding a map of bit values as illustrated in the code 3403 that is directly related to the unique spatial frequency content contained in each retina's pattern.
- the choroidal region in Figure 34 is selected and processed from a 2D frame of image data and the code generated from that image data.
- the choroidal region is selected along a slice plane orthogonal to the depth axis; for example, the same horizontal line from every image in a volume may be used to generate image 3500, from which the code is generated.
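The slice construction just described can be sketched as follows. The indexing convention volume[elevation][depth][azimuth] is assumed, so taking the same depth row from every B-scan assembles the en-face image from which the code is generated.

```python
def en_face_slice(volume, depth_index):
    """Stack the row at `depth_index` from each B-scan in the volume
    into a single 2-D image orthogonal to the depth axis."""
    return [bscan[depth_index] for bscan in volume]
```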
- referring now to FIG. 36, images and graphs illustrating slice planes through a 3D volume in the retina according to some embodiments of the present inventive concept will be discussed.
- B-scans shown in Figure 36 are examples meant to illustrate the approximate appearance of the data along each plane; the B-scan under the sagittal slice illustration is actually a transverse section.
- the image under the C-scan is actually a projection along the depth axis rather than an individual slice along the coronal plane.
- some embodiments of the present inventive concept involve the generation of a unique code from a 2-dimensional slice 3607 through a 3-dimensional volume 3600 acquired at the retina (vessels).
- the orientation axes 3601 and corresponding retina orientation 3602 are illustrated in Figure 36.
- the traditional B-Scan consists of spatially unique locations along the azimuth dimension 3601 with a volume consisting of multiple B-Scans at spatially unique planes along the elevation dimension.
- a single depth line 3603 (A-Scan) provides an intensity profile 3604 at a single, spatially unique scan position 3605.
- Transverse B-Scan slices 3606 provide images as a function of azimuth and depth 3607 through the retina 3608.
- Sagittal slices through the structure 3609 provide similar B-Scan images 3610 through the retina 3611. Once an FDOCT volume has been acquired, coronal slices 3612 and corresponding C-Scan images 3613 can be reconstructed for the retina 3614.
- a raster scan orthogonal to the banded structure in the retina may be used. By acquiring multiple scans orthogonal to the bands in the retina at different positions across the retina, a volume of data 3600 is generated. Generating a projection along the depth axis returns a volume intensity projection.
Landscapes
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Surgery (AREA)
- Public Health (AREA)
- Biomedical Technology (AREA)
- Heart & Thoracic Surgery (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- Animal Behavior & Ethology (AREA)
- General Health & Medical Sciences (AREA)
- Pathology (AREA)
- Veterinary Medicine (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Radiology & Medical Imaging (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Ophthalmology & Optometry (AREA)
- Investigating Or Analysing Materials By Optical Means (AREA)
Abstract
Methods for providing a diagnosis using a digital code associated with an image include collecting a multidimensional image, the multidimensional image including at least two dimensions; extracting a two-dimensional subset of the multidimensional image; reducing the multidimensional image to a first code that is unique to the multidimensional image based on the extracted two-dimensional image; comparing the first unique code associated with the subject to a library of reference codes, each of the reference codes in the library of reference codes indicating a class of objects; determining whether the subject associated with the first unique code falls into at least one of the classes of objects associated with the reference codes based on a result of the comparison; and making a diagnostic decision based on whether the first unique code associated with the subject falls into at least one of the classes associated with the reference code. Related systems and computer program products are also provided.
Applications Claiming Priority (4)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US26399109P | 2009-11-24 | 2009-11-24 | |
| US12/624,937 US8687856B2 (en) | 2008-11-26 | 2009-11-24 | Methods, systems and computer program products for biometric identification by tissue imaging using optical coherence tomography (OCT) |
| US61/263,991 | 2009-11-24 | ||
| US12/624,937 | 2009-11-24 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2011066366A1 true WO2011066366A1 (fr) | 2011-06-03 |
Family
ID=43903985
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2010/057975 Ceased WO2011066366A1 (fr) | 2009-11-24 | 2010-11-24 | Procédés, système et produits-programmes d'ordinateur pour des conditions de diagnostic utilisant des codes uniques générés à partir d'une image multidimensionnelle d'un échantillon |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2011066366A1 (fr) |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5291560A (en) * | 1991-07-15 | 1994-03-01 | Iri Scan Incorporated | Biometric personal identification system based on iris analysis |
| US6631199B1 (en) * | 1998-12-08 | 2003-10-07 | Allen W. L. Topping | Automated identification through analysis of optical birefringence within nail beds |
| US20070115481A1 (en) * | 2005-11-18 | 2007-05-24 | Duke University | Method and system of coregistrating optical coherence tomography (OCT) with other clinical tests |
| WO2010062883A1 (fr) * | 2008-11-26 | 2010-06-03 | Bioptigen, Inc. | Methods, systems and computer program products for biometric identification by tissue imaging using optical coherence tomography (OCT) |
-
2010
- 2010-11-24 WO PCT/US2010/057975 patent/WO2011066366A1/fr not_active Ceased
Non-Patent Citations (1)
| Title |
|---|
| DAUGMAN; DOWNING, Proc. R. Soc. Lond. B, vol. 268, 2001, pages 1737-1740 |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8879813B1 (en) | 2013-10-22 | 2014-11-04 | Eyenuk, Inc. | Systems and methods for automated interest region detection in retinal images |
| US8885901B1 (en) | 2013-10-22 | 2014-11-11 | Eyenuk, Inc. | Systems and methods for automated enhancement of retinal images |
| US9002085B1 (en) | 2013-10-22 | 2015-04-07 | Eyenuk, Inc. | Systems and methods for automatically generating descriptions of retinal images |
| US9008391B1 (en) | 2013-10-22 | 2015-04-14 | Eyenuk, Inc. | Systems and methods for processing retinal images for screening of diseases or abnormalities |
| CN103955057A (zh) * | 2014-03-31 | 2014-07-30 | Institute of Physics, Chinese Academy of Sciences | A correlated imaging system |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US9361518B2 (en) | Methods, systems and computer program products for diagnosing conditions using unique codes generated from a multidimensional image of a sample | |
| US8687856B2 (en) | Methods, systems and computer program products for biometric identification by tissue imaging using optical coherence tomography (OCT) | |
| US8442356B2 (en) | Methods, systems and computer program products for analyzing three dimensional data sets obtained from a sample | |
| US10094649B2 (en) | Evaluation of optical coherence tomographic data prior to segmentation | |
| CN106651827B (zh) | A fundus image registration method based on SIFT features | |
| Liu et al. | A flexible touch-based fingerprint acquisition device and a benchmark database using optical coherence tomography | |
| JP5631032B2 (ja) | Image processing apparatus, image processing system, image processing method, and program for causing a computer to execute image processing | |
| KR101451376B1 (ko) | Spatial-spectral fingerprint spoof detection | |
| Sun et al. | Synchronous fingerprint acquisition system based on total internal reflection and optical coherence tomography | |
| KR20170035343A (ko) | Method for extracting morphological features from a sample of biological material | |
| Sousedik et al. | Volumetric fingerprint data analysis using optical coherence tomography | |
| CN107862724B (zh) | An improved microvascular blood flow imaging method | |
| JP2011194060A (ja) | Image processing apparatus, image processing system, image processing method, and program for causing a computer to execute image processing | |
| US11382537B2 (en) | Spoof detection for biometric validation | |
| CN106473752A (zh) | Method and structure for identity recognition using full-field low-coherence tomography | |
| Yu et al. | Methods and applications of fingertip subcutaneous biometrics based on optical coherence tomography | |
| Li et al. | A multiscale approach to retinal vessel segmentation using Gabor filters and scale multiplication | |
| CN113706567B (zh) | Blood flow imaging quantitative processing method and apparatus combining vascular morphological features | |
| Sekulska-Nalewajko et al. | The detection of internal fingerprint image using optical coherence tomography | |
| WO2011066366A1 (fr) | Methods, systems and computer program products for diagnosing conditions using unique codes generated from a multidimensional image of a sample | |
| Sharma et al. | Viability of optical coherence tomography for iris presentation attack detection | |
| Li et al. | Automated retinal vessel segmentation using multiscale analysis and adaptive thresholding | |
| Liu et al. | A lightweight and noise-robust method for internal OCT fingerprint reconstruction | |
| US11992329B2 (en) | Processing optical coherence tomography scans | |
| Appasamy et al. | 3D Finger Vein And Iris Pattern Based Verification System Using Photo Acoustic Tomography |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 10785291 Country of ref document: EP Kind code of ref document: A1 |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 10785291 Country of ref document: EP Kind code of ref document: A1 |