
US20110026789A1 - Retinal image analysis systems and methods - Google Patents


Info

Publication number
US20110026789A1
Authority
US
United States
Prior art keywords
vessel
retinal image
retinal
segments
vessels
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US12/936,702
Other versions
US8687862B2
Inventor
Wynne Hsu
Mong Li Lee
Tien Yin Wong
Current Assignee
National University of Singapore
Original Assignee
National University of Singapore
Priority date
Filing date
Publication date
Application filed by National University of Singapore filed Critical National University of Singapore
Priority to US12/936,702 (granted as US8687862B2)
Assigned to NATIONAL UNIVERSITY OF SINGAPORE. Assignors: HSU, WYNNE; LEE, MONG LI; WONG, TIEN YIN
Publication of US20110026789A1
Application granted
Publication of US8687862B2
Status: Expired - Fee Related

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18Eye characteristics, e.g. of the iris
    • G06V40/193Preprocessing; Feature extraction
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0012Biomedical image inspection
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B3/00Apparatus for testing the eyes; Instruments for examining the eyes
    • A61B3/10Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
    • A61B3/12Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for looking at the eye fundus, e.g. ophthalmoscopes
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20072Graph-based image processing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30041Eye; Retina; Ophthalmic
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • G06T2207/30101Blood vessel; Artery; Vein; Vascular
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/14Vascular patterns

Definitions

  • embodiments of the present invention relate to systems and methods for analyzing retinal images, and for obtaining from them information characterizing retinal blood vessels which may be useful in forming a diagnosis of a medical condition.
  • Arterioles and venules are small branches of the main retinal arteries and veins respectively and their condition is indicative of the smaller blood vessels in the body. Measuring the diameter or widths of the arterioles and venules from detailed digital retinal images and calculating the arteriolar-to-venular diameter ratio (AVR) is one method of quantifying the imbalance between retinal arteriolar and venular calibre size. This measure can vary with different retinal vessels taken into calculation. More importantly, AVR provides information only on one aspect of retinal vascular change, namely retinal vessel calibre, and does not take into account the many structural alterations in the retinal vasculature.
  • AVR: arteriolar-to-venular diameter ratio
  • a fractal is a geometrical pattern comprised of smaller parts or units which resemble the larger whole. Fractals have been used to characterise diverse natural shapes such as the branching patterns of trees, the shapes of coastlines, the pattern of electrocardiograph tracings as well as retinal microcirculation.
  • the fractal (or fractional) dimension (D) is one measure associated with fractals and has a range of definitions. However, it can be considered as a statistical quantity that provides an indication of how completely a fractal appears to fill the space occupied by the fractal as finer and finer scales are zoomed in upon. In other words, the fractal dimension can be considered as the number of smaller units comprising the larger unit that fit into that larger unit. The fractal dimension is always smaller than the number of dimensions in which the fractal being considered exists.
  • fractal geometry provides a global and more accurate description of the anatomy of the eye than classical geometry. Fractal patterns characterise how vascular patterns span the retina and can therefore provide information about the relationship between vascular patterns and retinal disease.
  • the present invention aims to provide a platform for automated analysis of a retinal image, including automatically tracing one or more paths of one or more vessels of a retinal image, and for obtaining from them information characterizing retinal blood vessels which may be useful in forming a diagnosis of a medical condition.
  • a first aspect of the invention proposes in general terms an automated retinal image analysis system and/or method that permit a plurality of characteristics of the retina to be extracted, in order to provide data which is useful for enabling an evaluation of cardiovascular risk prediction, or even diagnosis of a cardiovascular condition.
  • Preferred embodiments of the invention permit large scale grading of retina images for cardiovascular risk prediction with high intra-grader and inter-grader reliability.
  • the first aspect of the invention proposes a retinal image analysis method including:
  • the first aspect of the invention proposes a retinal image analysis system comprising:
  • a processor for:
  • the first aspect of the invention proposes a retinal image analysis system comprising:
  • a second aspect of the invention proposes in general terms an automated retinal image analysis system and/or method that use fractal analysis of retinal images to provide disease risk prediction, such as, but not limited to, diabetes and hypertension.
  • the second aspect of the invention provides a retinal image analysis method including:
  • the method may include refining the trace image to remove errors in the trace image and calculating a refined fractal capacity dimension of the refined trace image.
  • the fractal capacity dimension is a multifractal capacity dimension, which is superior to a mono-fractal capacity dimension.
  • the method includes setting a radius of the optic disc of the retinal image prior to automatically tracing the retinal image.
  • the method includes cropping or scaling the retinal image to minimise deleterious aspects of the retinal image and enable retinal image comparisons.
  • the method includes repeating refining of the trace image if errors in the trace image remain after previous refining.
  • the method includes interrogating a data store and selecting diagnostic data based on the calculated refined fractal capacity dimension.
  • the method includes automatically generating a report including the retrieved diagnostic data.
  • the second aspect of the invention provides a retinal image analysis system comprising:
  • a processor for:
  • the system further comprises an output device coupled to be in communication with the processor for displaying the trace image and a refined trace image.
  • the system further comprises a data store coupled to be in communication with the processor for storing diagnostic data.
  • the processor interrogates the data store and selects diagnostic data based on the calculated fractal capacity dimension.
  • the processor automatically generates a report including the retrieved diagnostic data.
  • the second aspect of the invention provides a retinal image analysis system comprising:
  • computer readable program code components configured for generating an estimate for future risk of disease on the basis of the comparison.
  • FIG. 1 is a schematic diagram of a diagnostic retinal image system according to embodiments of the present invention.
  • FIG. 2 is a flow diagram of a method which is an embodiment of the first aspect of the invention.
  • FIG. 3 illustrates the structure and operation of a computer program for carrying out the method of FIG. 2.
  • FIG. 4, which is composed of FIGS. 4(a) to 4(k), is a series of screenshots showing the three modes of optic disc detection in the method of FIG. 2.
  • FIG. 5 is a screenshot of a digital retinal image produced by the embodiment of FIG. 2, showing the assigned optic disc.
  • FIG. 6 is composed of FIGS. 6(a) and 6(b), which respectively show a trace image in which retinal paths have been assigned, and an expanded portion of the trace image used to edit the highlighted vessel segment in the method of FIG. 2.
  • FIG. 7 is composed of FIGS. 7(a), 7(b) and 7(c), which are screenshots showing the toggling capabilities that allow the user to remove visual obstructions when inspecting the image and editing the centerlines that make up the tree structures of the vessel in the method of FIG. 2.
  • FIG. 8 is a screenshot showing information generated by the method of FIG. 2 pertaining to a vessel, as well as summary attributes such as the CRAE and CRVE measures for the respective zones.
  • FIG. 9 is a screenshot showing the list of vessels and their standard deviation of vessel widths in the respective zones, generated by the method of FIG. 2.
  • FIG. 10 is composed of FIGS. 10(a) and 10(b), which are screenshots showing the removal of lines from the trace image which cause large variations in the standard deviation of the vessel width in the method of FIG. 2.
  • FIG. 11 is composed of FIGS. 11(a), 11(b) and 11(c), which are screenshots showing the effect of optimization in the method of FIG. 2 to automatically remove bad segment widths according to embodiments of the present invention.
  • FIG. 12 is a general flow diagram illustrating a method according to embodiments of the second aspect of the present invention.
  • FIG. 13 is a screenshot of a digital retinal image showing setting a radius of the optic disc according to embodiments of the second aspect of the present invention.
  • FIG. 14 is a screenshot of a trace image generated from the digital retinal image according to embodiments of the second aspect of the present invention.
  • FIG. 15 is a screenshot partially showing an enlargement of the trace image, the digital retinal image and stored data relating to the retinal and trace images according to embodiments of the second aspect of the present invention.
  • FIG. 16 is a screenshot showing the trace image and an enlarged digital retinal image during refining of the trace image according to embodiments of the second aspect of the present invention.
  • FIG. 17 is a series of screenshots showing the removal of peripapillary atrophy from the trace image during refining of the trace image according to embodiments of the second aspect of the present invention.
  • FIG. 18 is a series of screenshots showing the removal of short lines from the trace image which do not correspond to vessels during refining of the trace image according to embodiments of the second aspect of the present invention.
  • FIG. 19 is a series of screenshots showing the removal of choroidal vessels from the trace image during refining of the trace image according to embodiments of the second aspect of the present invention.
  • FIG. 20 is a series of screenshots showing the removal of artefacts from the trace image during refining of the trace image according to embodiments of the second aspect of the present invention.
  • FIG. 21 is a pair of screenshots showing a retinal image and an ungradable trace image according to embodiments of the second aspect of the present invention.
  • FIG. 22 shows an example of a report produced in accordance with embodiments of the second aspect of the present invention.
  • FIG. 23 shows a distribution of the refined multifractal dimension D0.
  • FIG. 24 shows a graph of refined multifractal capacity dimension in a whole sample of participants plotted against systolic blood pressure.
  • the system 100 comprises a processor 110 coupled to be in communication with an output device 120 in the form of a pair of displays 130 according to preferred embodiments of the present invention.
  • the system 100 comprises one or more input devices 140 , such as a mouse and/or a keyboard and/or a pointer, coupled to be in communication with the processor 110 .
  • one or more of the displays 130 can be in the form of a touch sensitive screen, which can both display data and receive inputs from a user, for example, via the pointer.
  • the system 100 also comprises a data store 150 coupled to be in communication with the processor 110 .
  • the data store 150 can be any suitable known memory with sufficient capacity for storing configured computer readable program code components 160 , some or all of which are required to execute the present invention as described in further detail hereinafter.
  • the processor 110 is in the form of a Pentium® D CPU, 2.80 GHz, and the data store 150 comprises 1 GB of RAM and 232 GB of local hard disk space.
  • the pair of displays 130 are in the form of two 21-inch monitors allowing image displays at 1280×1024 resolution. NVIDIA Quadro FX 1400 graphics cards are also employed. The skilled addressee will appreciate that the present invention is not limited to this particular implementation.
  • the data store 150 stores configured computer readable program code components 160 , some or all of which are retrieved and executed by the processor 110 .
  • Embodiments of the present invention reside in a diagnostic retinal image system 100 comprising computer readable program code components configured for automatically tracing one or more paths of one or more vessels of a retinal image.
  • the system 100 also comprises computer readable program code components for generating a trace image comprising one or more traced paths.
  • the system further comprises computer readable program code components configured for refining the trace image to remove errors in the trace image.
  • the system 100 typically includes program code components for automatically identifying an optic disc. Cropping of a consistently defined area relative to optic disc size allows comparison of different images from the same individual taken at different times and permits comparison of images from dissimilar individuals because the defined area relative to optic disc size of the same image is not influenced by image magnification and different angles of photography.
  • the system 100 further comprises a comprehensive reporting facility which automatically generates a report usable by both clinicians and patients and reduces the level of human intervention enabling the efficiency of grading retinal images and reporting to be improved. Embodiments of the present invention allow multiple retinal images of a patient to be linked on a single report.
  • the retinal image system is configured to operate in accordance with either of the aspects of the invention, as described below, thereby performing the series of steps shown in either of FIGS. 2 and 12 respectively.
  • FIG. 2 illustrates the steps of a method which is a first embodiment of the invention.
  • the structure of the computer program (e.g. as implemented by the system 100 of FIG. 1) for carrying out the method is illustrated in FIG. 3, in which the steps labelled 1-11 correspond to those of FIG. 2.
  • the program comprises a GUI (graphical user interface) module 12 , an image detection module 13 , and an attribute extraction module 14 .
  • the input to the method of FIG. 2 is a disc-centred retinal photograph, for example of the kind which can be produced by a Zeiss FF3 fundus camera, which has been digitized.
  • the region of the photograph corresponding to the optic disc is detected for reliable grading of the retina image.
  • the first mode is a fully automatic detection that is based on wavelet processing and ellipse fitting.
  • the system employs a Daubechies wavelet transform to approximate the optic disc region.
  • an abstract representation of the optic disc is obtained using an intensity-based template. This yields robust results in cases where the optic disc intensity is highly non-homogenous.
  • An ellipse fitting algorithm is then utilized to detect the optic disc contour from this abstract representation (see P. M. D. S Pallawala, Wynne Hsu, Mong Li Lee, Kah-Guan Au Eong. Automated Optic Disc Localization and Contour Detection Using Ellipse Fitting and Wavelet Transform, in 8th European Conference on Computer Vision (ECCV), Prague, Czech Republic, May 2004).
  • the second mode is a semi-automatic detection where the user positions a circle to approximate the cup area. Optic disc detection using the method described above is then carried out beyond the user-specified cup area.
  • the third mode of optic disc detection is primarily manual which requires the user to pinpoint the centre and the edge of the optic disc, following which an outline of the optic disc is traced on the retina image. This can be done using the optic disc guide portion 12 a of the GUI 12 .
  • FIG. 4 is a series of screenshots showing what the GUI displays during the three modes of optic disc detection.
  • FIGS. 4(a) and 4(b) show the fully-automated mode.
  • the user is initially presented with screen FIG. 4(a), and upon a command, the unit 13 a generates the screen FIG. 4(b).
  • the optic disc is detected as the circular area 40 with radius r.
  • based on the optic disc radius r, the unit 13 a defines three zones A, B, C from the centre of the optic disc as follows: Zone A is the area within a circle 41 with radius 2r;
  • Zone B is a concentric area formed by subtracting Zone A from a circle 42 with radius 3r;
  • Zone C is a concentric area formed by subtracting Zone A from a circle 43 with radius 5r, and thus overlaps with Zone B.
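The zone geometry above reduces to simple distance tests against the optic disc radius. The following sketch is illustrative only (the function and variable names are not from the patent):

```python
import math

def zones_of(point, disc_centre, r):
    """Return the measurement zones containing `point`, given the optic
    disc centre and radius r.  Zone A: inside radius 2r; Zone B: annulus
    from 2r to 3r; Zone C: annulus from 2r to 5r (overlapping Zone B)."""
    d = math.dist(point, disc_centre)
    if d <= 2 * r:
        return ["A"]
    zones = []
    if d <= 3 * r:
        zones.append("B")
    if d <= 5 * r:
        zones.append("C")
    return zones
```

For example, a pixel 2.5 disc radii from the centre lies in both Zone B and Zone C, reflecting the stated overlap between the two zones.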
  • FIGS. 4(c) to 4(f) show the sequence of steps in the semi-automated mode.
  • FIG. 4(c) is equivalent to FIG. 4(a).
  • FIG. 4(d) shows how the user manually positions a circle 44 near the optic disc centre.
  • FIG. 4(e) shows how the user resizes the circle 44 by dragging a mouse. After doing so, and thereby covering the optic disc cup with the resized circle, the user clicks on “Find OD” on the GUI 12, and the system generates circles 40, 41, 42, 43, shown in FIG. 4(f), having the meaning above.
  • FIGS. 4(g) to 4(k) show the manual mode.
  • FIG. 4(g) is again equivalent to FIG. 4(a).
  • FIG. 4(h) shows how a user uses a mouse to place a circle 44 at the optic disc centre (resizing is not done yet) and then clicks.
  • FIG. 4(i) shows how the user then indicates the command “OD edge”.
  • FIG. 4(j) shows how the user then clicks on an edge point of the optic disc he wishes to draw.
  • FIG. 4(k) shows what the screen then shows: the circles 40, 41, 42, 43 generated from the selected optic disc centre and the selected edge point.
  • the algorithm for vascular structure extraction is performed by the unit 13 b as controlled by a vessel finder unit 12 b of the GUI 12 , and is based on the work in S. Garg, J. Sivaswamy, S. Chandra. Unsupervised Curvature-Based Retinal Vessel Segmentation, Technical Report, Institute of Information Technology, India, 2007.
  • the retinal vessels are modelled as trenches and the centre lines of the trenches are extracted using curvature information (step 2 of FIG. 2 ).
  • the result is an image (labelled 15 on FIG. 3 ) in which each of the trenches is indicated by a centreline.
  • the complete vascular structure is then extracted (step 3 of FIG. 2) by an attribute extraction module 14 using a modified region growing method. Seed points are placed at the start of Zone B to initiate the tracing of vessel segments. The vessels are traced by the unit 14 a in a depth-first manner whenever a branch point or cross-over is encountered. The width samples of each vessel segment are automatically determined at fixed intervals. This is done by a unit 14 c.
  • the unit 14 b of the attribute extraction module 14 identifies arteries and veins as follows (the terms “vein” and “venule” are used interchangeably in this document, as are the terms “artery” and “arteriole”).
  • first, 15 diameter lines are selected for each vessel. This is to minimize the effect of local variation in intensity.
  • second, for each selected diameter line we obtain its intensity profile in the form of a histogram. From the histogram, we select the top five most frequently occurring values and calculate the mean value.
  • the k-means algorithm initially picks two values as seeds. It then iteratively computes the distances from each point to the two centers. Points are re-assigned to the cluster of the nearest center. After several iterations, the clusters will converge.
  • after obtaining the two cluster centers, we label each of the 15 diameter lines as either vein or artery depending on its distance to the two centers. We count the number of diameter lines that have been labeled as veins and arteries respectively. A majority voting system is used to finally classify the given vessel as either vein or artery.
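The histogram-feature, two-centre clustering and majority-vote steps described above can be sketched as follows. This is a simplified stand-in, not the patent's implementation: it assumes veins produce darker intensity profiles than arteries and clusters the line features of all supplied vessels together.

```python
from collections import Counter
import random

def line_feature(intensities):
    # Mean of the (up to) five most frequent intensity values on one
    # diameter line, per the histogram step described above.
    top = [v for v, _ in Counter(intensities).most_common(5)]
    return sum(top) / len(top)

def kmeans_2(values, iters=20, seed=0):
    # Plain 1-D 2-means: seed two centres, then alternate assignment
    # and centre update; returns the centres sorted (darker first).
    rng = random.Random(seed)
    c = rng.sample(values, 2)
    for _ in range(iters):
        clusters = ([], [])
        for v in values:
            clusters[0 if abs(v - c[0]) <= abs(v - c[1]) else 1].append(v)
        c = [sum(cl) / len(cl) if cl else c[i] for i, cl in enumerate(clusters)]
    return sorted(c)

def classify_vessels(vessels):
    # vessels: vessel id -> list of intensity profiles (one per diameter
    # line).  Veins are assumed darker; each vessel is labelled by a
    # majority vote over its diameter lines.
    feats = {vid: [line_feature(p) for p in ps] for vid, ps in vessels.items()}
    dark, bright = kmeans_2([f for fs in feats.values() for f in fs])
    return {
        vid: "vein" if sum(abs(f - dark) < abs(f - bright) for f in fs) > len(fs) / 2
        else "artery"
        for vid, fs in feats.items()
    }
```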
  • FIG. 6(a) shows the overall retinal image with lines indicating vessel segments. Some of the venules are labelled by reference numerals 61, 62, 63, 64, 65, 66, 67. Some of the arterioles are labelled 71, 72, 73. The arteriole 81 is shown brighter because it has been selected by a user.
  • FIG. 6(b) shows how the user may display a larger version of a vessel segment; in this view, the width at each point along the vessel segment 44 is displayed by a line orthogonal to the length direction of the segment.
  • the system attempts to sample the widths at equal intervals.
  • Several operations are provided for the user to refine vascular structure that has been extracted and tracked.
  • the user may edit the data (step 6) using a vessel editor unit 12 c of the GUI 12, which is shown in FIG. 3 as receiving user input (“edit vessels”). This may include any one or more of: adding missed vessel segments, deleting dangling vessel segments, breaking vessel segments, marking and unmarking crossover vessel segments, and changing vessel type.
  • the result is passed back to the vessel segment detector 13 b .
  • the method loops back to step 3 (optionally many times) until the user is satisfied.
  • Some of the various views shown to the user by the GUI at this time are shown in FIG. 7 . The user can toggle between these views.
  • the unit 14 d optimises the width samples by discarding bad samples based on a special heuristic to improve standard deviation (step 7 ).
  • This heuristic comes in the form of maximising an objective function that balances between standard deviation (lower is better) of the widths and the number of samples retained (higher is better).
  • a standard optimisation technique is used to retain the widths that maximise the objective function.
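The patent does not spell out the objective function, so the sketch below uses an illustrative one (number of samples retained minus alpha times their standard deviation) and a greedy search in place of a particular "standard optimisation technique":

```python
import statistics

def optimise_widths(widths, alpha=2.0):
    # Greedily drop the sample furthest from the mean while an objective
    # rewarding retained samples (higher is better) and penalising their
    # standard deviation (lower is better) keeps improving.
    # (alpha and the objective form are illustrative, not from the patent.)
    kept = list(widths)

    def score(ws):
        sd = statistics.pstdev(ws) if len(ws) > 1 else 0.0
        return len(ws) - alpha * sd

    while len(kept) > 2:
        m = statistics.mean(kept)
        trial = list(kept)
        trial.remove(max(kept, key=lambda w: abs(w - m)))
        if score(trial) > score(kept):
            kept = trial
        else:
            break
    return kept
```

On a run of width samples with one gross outlier, the outlier is discarded and the remaining, consistent samples are retained.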
  • FIG. 11 illustrates the effect of applying the width optimiser.
  • FIG. 11(a) shows a screenshot before this process, with FIG. 9 (an enlargement of the right of FIG. 11(b)) indicating, by dark boxes, vessels that have wide variations in sampled widths.
  • FIG. 11(c) shows a screenshot after this process is applied, with the right of the figure containing no dark boxes. This indicates that the current variation of sampled widths is not overly wide.
  • the unit 14 e then computes the attributes of the segments identified and edited.
  • a vessel is a binary tree with the property that each node has exactly zero or two child nodes.
  • Each node in the vessel binary tree denotes a segment.
  • a segment is a list of points on a line that does not include any branching points.
  • the segment at the root node of a vessel is called the root segment and the first point in the segment at the root node is the root point.
  • a segment could branch out into two daughter segments.
  • Zone B measurements consider only the root segment within Zone B; that is, if the root segment extends outside of Zone B, only the part of the segment within Zone B will be computed.
  • let v be a vessel; the following Zone B measures are provided:
  • let v be a vessel. The following vessel Zone C measurements include all descendent segments of the root segment within the zone, which are combined in a novel way:
  • ŵ(s) = [w_C(s)·λ_C(s) + a_v·√(ŵ(s1)² + ŵ(s2)²)·(λ_C(s1) + λ_C(s2))/2] / [λ_C(s) + (λ_C(s1) + λ_C(s2))/2]
  • a_v = 0.88 if v is an arteriole; a_v = 0.95 if v is a venule
  • β(s) = (ŵ(s1)² + ŵ(s2)²) / ŵ(s)²
  • γ(s) = (min(ŵ(s1), ŵ(s2)) / max(ŵ(s1), ŵ(s2)))²
  • ρ(s) = ∛|ŵ(s)³ − ŵ(s1)³ − ŵ(s2)³| / ŵ(s)
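Read this way, the combined Zone C width is a length-weighted recursion over the vessel's binary tree, using the branching constants a_v = 0.88 (arterioles) and 0.95 (venules) given above. A sketch follows; the `Segment` class and the convention that a leaf segment's combined width is its own mean width are assumptions, not details from the patent:

```python
import math

A_V = {"arteriole": 0.88, "venule": 0.95}

class Segment:
    # width  = w_C(s): mean width of the segment inside Zone C
    # length = lambda_C(s): length of the segment inside Zone C
    # children: () for a leaf, or the pair of daughter segments (s1, s2)
    def __init__(self, width, length, children=()):
        self.width, self.length, self.children = width, length, children

def combined_width(s, vessel_type):
    # Length-weighted recursive combination of a segment's own width
    # with its two daughters' combined widths.
    if not s.children:
        return s.width  # assumption: a leaf contributes its own mean width
    s1, s2 = s.children
    w1 = combined_width(s1, vessel_type)
    w2 = combined_width(s2, vessel_type)
    child_w = A_V[vessel_type] * math.hypot(w1, w2)  # a_v * sqrt(w1^2 + w2^2)
    child_len = (s1.length + s2.length) / 2
    return (s.width * s.length + child_w * child_len) / (s.length + child_len)
```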
  • ldr(v): a segment s is qualified to participate in the ratio if and only if it occurs after the first branching point of the vessel v but before the second branching point, and the entire qualified segment is within Zone C.
  • the length-diameter ratio ldr(v) of vessel v is defined as the ratio of the length of the qualified segment to the average diameter of that segment.
  • FIG. 9 highlights vessels with wide variations in width measurements for the user to edit.
  • the dataset is shown as 17 in FIG. 3 . It includes the segment measurement data produced in step 1.4.1.
  • FIG. 5 shows the overall appearance of the GUI 12 at this time with the outer boundaries of the zones A, B and C indicated by circular lines 51 , 52 , 53 , and the dataset of FIG. 8 inset.
  • FIG. 10 illustrates a manual discarding of bad widths by the user as part of editing the segment in step 10. Widths involved in computing the measures are shown as bright lines along the segment. In FIG. 10(a), all widths are used. FIG. 10(b) shows discarded bad widths as dark lines that were removed by the user. The user may discard or include widths by dragging the mouse across them or clicking on them. For further control, the user may choose to modify each width (bright line in FIG. 10(a)) by manually shifting it or changing its length.
  • Vessels that fail to fall within an acceptable standard deviation for width are highlighted, so that the user may decide whether to further edit them.
  • the editing process allows a user to remove visual obstructions when inspecting the image and editing the centrelines of the trace paths.
  • the loop through steps 8 , 9 and 10 can be performed until the user is satisfied that enough width measurements are taken and the standard deviation is acceptable for each vessel.
  • the extractor unit 14 f of the attribute extraction module 14 then outputs (e.g. in response to the user inputting an instruction “save”) the list of vessels and the standard deviations of their widths in the respective zones, as illustrated in FIG. 9.
  • the data may be sent to a data store for subsequent analysis.
  • This aggregated data is labelled 18 in FIG. 3 , and is output from the system as “output attributes”. It is a table of values in comma separated value (csv) format.
  • These output attributes may be used to predict medical conditions, for example in the following way:
  • 1. The output is in table format where each image is an instance (row) in the table and the attributes are the columns.
    2. Each instance is labelled with the presence or absence of disease.
    3. Standard statistical correlation methods or modern classification methods (such as Support Vector Machines or Bayesian networks) may be used to build predictive models with the labelled data as training data.
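As a concrete illustration of step 3, any standard classifier can be trained on such a labelled attribute table. The sketch below uses a deliberately simple nearest-centroid classifier in place of the SVMs or Bayesian networks mentioned above; the column names are invented for the example:

```python
import csv, io, math

def load_table(csv_text):
    # Each row is one image; numeric attribute columns plus a final
    # 'disease' label column (all column names here are illustrative).
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    X = [[float(r[k]) for k in r if k != "disease"] for r in rows]
    y = [r["disease"] for r in rows]
    return X, y

def nearest_centroid_fit(X, y):
    # One centroid (attribute-wise mean) per disease label.
    cents = {}
    for label in set(y):
        pts = [x for x, l in zip(X, y) if l == label]
        cents[label] = [sum(col) / len(pts) for col in zip(*pts)]
    return cents

def predict(cents, x):
    # Label of the nearest centroid.
    return min(cents, key=lambda l: math.dist(cents[l], x))
```

In practice a proper classifier with cross-validation would replace the centroid rule; the point is only the shape of the pipeline: attribute table in, disease-risk prediction out.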
  • Diabetic Retinopathy Study Field 2 (disc centred) retinal photographs are used to calculate the fractal dimension.
  • Colour retinal photographs were taken after pupil dilation using a Zeiss FF3 fundus camera, which were then digitized.
  • the fractal dimension is sensitive to magnification differences, the angle of view and the retinal photography field and these factors should be borne in mind when comparing fractal dimensions.
  • Embodiments of the present invention can calculate a fractal dimension for other retinal photography fields, such as field 1 and macular centred, but the fractal dimension will be different from other fields taken of the same eye.
  • a diagnostic retinal image method 200 which is an embodiment of the present invention will now be described with reference to the general flow diagram shown in FIG. 12 and FIGS. 13 to 22 .
  • the method 200 includes at 210 acquiring the retinal image to be analysed, which can include retrieving a retinal image from the data store 150 .
  • the size of the retinal image can be adjusted and can be sized to fill one of the displays 130 .
  • the method 200 includes at 220 setting a radius of the optic disc of the retinal image.
  • the centre 310 of the optic disc 320 is estimated either automatically using a suitable algorithm or manually, which includes a user highlighting the centre 310 , for example, with a cross as shown in FIG. 13 .
  • An upper edge 330 of the optic disc 320 vertically above the centre 310 is defined either automatically using a suitable algorithm or manually, which includes the user highlighting the upper edge 330 , for example, with a cross as shown in FIG. 13 .
  • the upper edge 330 is defined to be where the optic disc 320 definitely ends and the colouration becomes that of the background retina or peripapillary atrophy. If there is an optic disc halo, the presence of the halo is ignored, either by the algorithm or the user.
  • The method 200 includes, at 230, cropping or scaling the retinal image 300 to minimise deleterious aspects of the retinal image and to enable retinal image comparisons.
  • This includes setting a radius factor, which crops or scales the retinal image 300 to a multiple of the optic disc radius.
  • The radius factor can be in the range 2.0-5.0.
  • For example, the radius factor is set to 3.5 such that the size of the retinal image is 3.5 optic disc radii.
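  • By way of illustration, the cropping at 230 can be sketched as follows. This is a minimal sketch only: the function name, the square crop region and the clamping at the image border are assumptions of the sketch, not details taken from the specification.

```python
import numpy as np

def crop_to_disc_radius(image, centre, disc_radius, radius_factor=3.5):
    """Crop a retinal image to a square of side 2 * radius_factor * disc_radius
    centred on the optic disc, clamping the crop at the image border."""
    half = int(round(radius_factor * disc_radius))
    cy, cx = centre
    y0, y1 = max(0, cy - half), min(image.shape[0], cy + half)
    x0, x1 = max(0, cx - half), min(image.shape[1], cx + half)
    return image[y0:y1, x0:x1]

# Example: a 1000x1000 image with the optic disc centred at (500, 500)
# and a disc radius of 80 pixels gives a 560x560 crop (3.5 * 80 = 280).
img = np.zeros((1000, 1000), dtype=np.uint8)
crop = crop_to_disc_radius(img, (500, 500), 80)
```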
  • The method 200 represented in FIG. 12 includes automatically tracing one or more paths of one or more vessels of the retinal image 300.
  • The starting points of the vessels can be detected using a matched Gaussian filter, and the detected vessels are traced using a combination of a statistical recursive estimator, such as a Kalman filter, and a Gaussian filter.
  • The network of vessels is detected by performing a non-linear projection to normalize the retinal image 300. Subsequently, an optimal threshold process is applied to binarize the image. The detected network is then thinned by applying a distance-based centreline detection algorithm.
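  • The binarization step can be illustrated with an Otsu-style threshold, which is one common choice of optimal threshold; the specification does not name a particular thresholding algorithm, so this choice, and the synthetic test image, are assumptions of the sketch.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the threshold maximising the between-class variance of an
    8-bit image histogram (Otsu's method)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    cum_w = np.cumsum(hist)                    # pixel count below each level
    cum_mu = np.cumsum(hist * np.arange(256))  # intensity mass below each level
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = cum_w[t - 1], total - cum_w[t - 1]
        if w0 == 0 or w1 == 0:
            continue
        mu0 = cum_mu[t - 1] / w0
        mu1 = (cum_mu[-1] - cum_mu[t - 1]) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic normalised image: dark background with a brighter "vessel" band.
rng = np.random.default_rng(0)
img = rng.normal(40, 5, (64, 64))
img[30:34, :] = rng.normal(200, 5, (4, 64))
img = np.clip(img, 0, 255).astype(np.uint8)
t = otsu_threshold(img)
binary = img > t          # binarised vessel network, ready for thinning
```

The thinning of the binarised network to centrelines would follow as a separate step, as described above.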
  • The method 200 further includes, at 250, automatically generating a trace image 400 comprising the one or more traced paths, an example of which is shown in FIG. 14 partially overlaid on the retinal image 300.
  • The method includes automatically calculating a raw fractal capacity dimension of the trace image.
  • A box-counting method known from, for example, Stosic, T. & Stosic, B. D., Multifractal analysis of human retinal vessels, IEEE Trans. Med. Imaging 25, 1101-1107 (2006), is employed.
  • The method includes automatically selecting 1000 points at random on each trace image structure. Each structure has a typical size M 0 of 30,000 pixels and a typical linear size L of 600 pixels. The number of pixels M i inside boxes centred on each point is then automatically counted.
  • The method includes extracting the generalized dimension D q from these numbers for different values of q (−10 ≤ q ≤ 10) as slopes of the lines obtained through least-squares fitting of log{[M(R)/M 0]^(q−1)}/(q−1) as a function of log(R/L), where R is the growing linear dimension of the boxes.
  • Preferred embodiments of the method include repeating the process, for example, 100 times, with the random selection of points repeated each time and the final values of D q calculated from the average of the repetitions.
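  • The box-counting procedure described above can be sketched as follows. The 1000 points and 100 repetitions are reduced to 200 and 5 here so the sketch runs quickly, and all function and variable names are illustrative, not taken from the specification.

```python
import numpy as np

def sandbox_dq(binary, q_values=(0, 2), radii=(4, 8, 16, 32),
               n_points=200, n_repeats=5, seed=0):
    """Sandbox estimate of the generalised dimension D_q of a binary trace
    image.  Boxes of growing half-size R are centred on randomly chosen
    structure pixels, the occupied pixels M(R) inside each box are counted,
    and D_q is the least-squares slope of log{mean[(M(R)/M0)^(q-1)]}/(q-1)
    against log(R/L).  The random selection is repeated and the slopes
    averaged, as described above."""
    rng = np.random.default_rng(seed)
    ys, xs = np.nonzero(binary)
    m0 = len(ys)                      # total structure size M0 in pixels
    big_l = max(binary.shape)         # linear size L of the image
    dq = {}
    for q in q_values:
        slopes = []
        for _ in range(n_repeats):
            idx = rng.integers(0, m0, size=n_points)
            log_x, log_y = [], []
            for r in radii:
                counts = []
                for y, x in zip(ys[idx], xs[idx]):
                    box = binary[max(0, y - r):y + r + 1,
                                 max(0, x - r):x + r + 1]
                    counts.append(box.sum())
                m = np.asarray(counts, dtype=float) / m0
                if q == 1:            # q -> 1 limit (information dimension)
                    log_y.append(float(np.mean(np.log(m))))
                else:
                    log_y.append(float(np.log(np.mean(m ** (q - 1))) / (q - 1)))
                log_x.append(np.log(r / big_l))
            slopes.append(np.polyfit(log_x, log_y, 1)[0])
        dq[q] = float(np.mean(slopes))
    return dq

# Sanity check: a filled square is a trivial structure of dimension 2, so
# both D_0 and D_2 should come out close to 2 (box truncation at the image
# edges pulls the estimate slightly below 2).
square = np.ones((256, 256), dtype=bool)
d = sandbox_dq(square)
```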
  • The fractal capacity dimension D is 1.476806, as displayed in FIG. 14.
  • The raw fractal capacity dimension, including all the decimal place values, is recorded in the data store 150 along with an identifier for the retinal image file, such as a unique filename, an identification number of the patient to whom the retinal image belongs, whether the image is of the left or right eye, and the optic disc radius.
  • This data is recorded in a spreadsheet 500, but any other suitable file, such as a machine-readable .csv file, can be used, from which a report can be compiled as described in further detail hereinafter and which can be used for subsequent research.
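  • Recording to a machine-readable .csv file can be sketched as below; the column layout, function name and example values are hypothetical, not mandated by the specification.

```python
import csv
import io

def record_result(fileobj, image_file, patient_id, eye, disc_radius, raw_d):
    """Append one row of results; the column order is illustrative only."""
    writer = csv.writer(fileobj)
    writer.writerow([image_file, patient_id, eye, disc_radius, f"{raw_d:.6f}"])

# In practice fileobj would be an open .csv file; a StringIO stands in here.
buf = io.StringIO()
record_result(buf, "retina_0001.tif", "P-042", "right", 80, 1.476806)
```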
  • The trace image 400 and/or the retinal image 300 can be resized and displayed side by side, for example, with one image in each display 130, for ease of comparison.
  • Some embodiments of the method 200 include, at 265, refining the trace image 400 to remove errors in the trace image.
  • The vessel trace function of the present invention is sensitive and will pick up fine arterioles and venules as well as artefacts that are not vessels, such as peripapillary atrophy, choroidal vessels or light reflected from the nerve fibre layer.
  • The term “error” in relation to the trace image is used herein to refer to such artefacts that are not vessels.
  • Refining of the trace image can be executed automatically by a suitable algorithm or manually. Refining of the trace image 400 will now be described in more detail with reference to FIGS. 16-20 .
  • Refining includes comparing each line tracing in the trace image 400 with the corresponding area in the retinal image 300 .
  • The user can start, for example, at the 12 o'clock position on the trace image 400 and move clockwise around the image.
  • The feature that each tracing is derived from should be identified as a vessel or otherwise. Where it is difficult to determine if a line trace is a vessel or an artefact, the retinal image 300 and/or the trace image 400 can be enlarged as required for improved comparison.
  • An example is shown in FIG. 16, where vessel tracings at the 10 o'clock position require closer examination to determine if they are artefacts or true vessels. In FIG. 16, the vessel tracings at the 10 o'clock position are true retinal vessels.
  • The incorrect line tracing or error can be erased from the trace image 400.
  • This can be executed automatically or manually. Where executed manually, an erase function known from electronic drawing packages can be employed to erase the erroneous line tracing. Any small white pixels left behind in the trace image 400 must also be erased and it may be necessary to enlarge the trace image 400 to ensure all of the tracing has been removed.
  • A range of artefacts can occur and must be removed from the trace image to ensure they do not affect the fractal dimension calculation.
  • Abnormalities around the optic disc 320, such as peripapillary atrophy, must be removed, as shown in the series of images in FIG. 17, to produce a refined trace image 700.
  • Other artefacts that must also be removed include unusual line tracings that do not correspond to vessels, especially short, thick lines; any clear artefacts, such as horizontal lines at the top or bottom of the image; and any isolated short lines at an unusual angle which do not come off any major vessels.
  • Pigment abnormalities can be erroneously traced as vessels, particularly in poorly focused retinal images. Such tracings, as well as retinal haemorrhages, should also be removed from the trace image 400.
  • The series of images in FIG. 18 shows the removal of short lines which do not correspond to vessels to produce a refined trace image 800. Note their short length, unusual orientation and lack of connection to existing vessels.
  • The series of images in FIG. 19 shows the removal of choroidal vessels at the 6 o'clock position to produce a refined trace image 900.
  • The series of images in FIG. 20 shows the removal of artefacts at the 5 o'clock position to produce a refined trace image 1000.
  • Embodiments of the method 200 include, at 270, calculating a refined fractal capacity dimension of the refined trace image.
  • The refined fractal dimension D 0 will be equal to or less than the raw fractal dimension.
  • In this example, the refined fractal dimension is 1.472857, compared with the raw fractal dimension of 1.476806.
  • The refined fractal dimension and any comments are also recorded in the data store 150.
  • The method 200 includes, at 275, repeating refining of the trace image 400 if errors in the trace image remain after previous refining, i.e. any incorrect tracings remaining in the refined trace image can be erased and a further refined trace image generated along with another refined fractal dimension. Both the further refined trace image and the further refined fractal dimension are also saved in the data store 150, for example, in the same text file, with the previous data.
  • The cropped/scaled image file and the refined trace image file can be saved.
  • The raw fractal line tracing can be discarded because it can be regenerated from the cropped/scaled image file. Saving these files allows rechecking of results later if required.
  • FIG. 21 shows an example of an ungradable retinal image 1100 and the initial trace image 1110. Where a retinal image is ungradable, this is recorded in the data store 150.
  • The method 200 includes comparing the calculated fractal capacity dimension with a benchmark fractal capacity dimension.
  • The benchmark fractal capacity dimension is determined from a large number of measurements of the fractal capacity dimension from a ‘normal’ population without hypertension, diabetes or other such conditions.
  • The method 200 includes generating a diagnosis by interrogating a data store and selecting diagnostic data based on, or correlating to, the comparison of the calculated raw or refined fractal capacity dimension with the benchmark fractal capacity dimension.
  • The calculated fractal capacity dimension provides an accurate description of the retinal vasculature, and subtle changes in the retinal vasculature are reflected in changes in the fractal capacity dimension. Such changes can be linked to, and be indicators of, cardiovascular disease and other conditions, such as hypertension and diabetes.
  • Embodiments of the method 200 include the processor 110 automatically generating a report including the retrieved diagnostic data 1280, which is discussed in further detail herein.
  • A schematic example of a report 1200 is shown in FIG. 22.
  • The report 1200 comprises patient data 1210, such as name, age, date of birth, gender, address, ID number, and the like.
  • The patient data can be entered by the user, or the data fields can be automatically populated by the processor 110, for example, by retrieving information from the data store 150 or a dedicated patient database (not shown) coupled to be in communication with the processor 110.
  • The report 1200 also comprises medical information 1220 about the patient, such as their blood pressure, whether they are diabetic, and their smoking status and history. Requesting practitioner details 1230 can also be included, as can the date the retinal images were taken and the date of the report 1240. One or more retinal images 1250 of the patient, one or more trace images and/or one or more refined trace images can be included. Associated measurements 1260 determined from the fractal analysis can also be included.
  • The particular diagnostic data 1280 retrieved from the data store 150 and included in the report 1200 is based on, or correlates to, the comparison of the calculated fractal capacity dimension with the benchmark fractal capacity dimension.
  • Diagnostic data 1280 in the form of clauses or statements based on the calculated refined fractal capacity dimension is retrieved from the data store 150 and inserted in the report 1200.
  • The particular diagnostic data retrieved can also depend on one or more other characteristics of the patient, such as, but not limited to, their age, blood pressure, whether they are or have been a smoker, whether they are diabetic and/or their family medical history.
  • A calculated refined fractal capacity dimension within a specific range, and/or above or below a specific threshold, possibly combined with one or more items of the aforementioned patient data 1210, causes specific diagnostic data 1280 to be retrieved from the data store 150 and inserted in the report 1200.
  • An estimate for future risk of cardiovascular disease and other diseases on the basis of the comparison is thus provided.
  • Examples of specific diagnostic data 1280 include, but are not limited to, “Increased risk of hypertension and coronary heart disease”, “Progression of diabetic retinopathy, kidney diseases and increased deaths from stroke”, “High risks of stroke, coronary heart disease and cardiovascular mortality” and “High risk of glaucoma”. References to medical reports and papers supporting the diagnosis can also be included.
  • Pathology 1290 can also be included in the report 1200. In some embodiments, this can be produced from diagnostic data retrieved from the data store 150 and from the data recorded for the patient.
  • The diagnostic data 1280 and other pathology 1290 can be stored, for example, in a look-up table or via any other suitable means known in the art.
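  • The look-up table approach can be sketched as below. The thresholds and band structure are invented for this sketch and are not values taken from the specification or from any study; only the statement wording echoes the examples given above.

```python
# Hypothetical look-up table: each entry pairs an exclusive upper bound on
# the refined fractal capacity dimension with a diagnostic statement.
DIAGNOSTIC_TABLE = [
    (1.40, "High risks of stroke, coronary heart disease and "
           "cardiovascular mortality"),
    (1.43, "Increased risk of hypertension and coronary heart disease"),
    (float("inf"), "Fractal capacity dimension within the benchmark range"),
]

def select_diagnostic_data(refined_d0):
    """Return the statement for the first band containing refined_d0."""
    for upper_bound, statement in DIAGNOSTIC_TABLE:
        if refined_d0 < upper_bound:
            return statement
    raise ValueError("table must end with an unbounded band")
```

In a fuller sketch the bands would also be keyed on the other patient characteristics mentioned above, such as age and smoking status.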
  • The fractal capacity dimension can be a mono-fractal capacity dimension, but in preferred embodiments it is a multifractal capacity dimension, because this more appropriately describes the fractal characteristics of the retinal vasculature.
  • A multifractal can be considered as multiple monofractals embedded into each other, the multifractal having a hierarchy of exponents rather than a single fractal dimension.
  • Results obtained with embodiments of the present invention were based on a random sample of 60 black and white optic disc centred retinal photographs of right eyes from a study comprising 30 participants without hypertension and diabetes, 15 participants with hypertension only and 15 participants with diabetes only.
  • Abbreviations: SBP, systolic blood pressure; DBP, diastolic blood pressure; WHO, World Health Organization; ISH, International Society of Hypertension.
  • The subject was considered hypertensive grade 2 or above (severe hypertension) if the subject was previously diagnosed as hypertensive and currently using anti-hypertensive medications, or had an SBP ≥ 160 mmHg or a DBP ≥ 100 mmHg at examination. Diabetes was defined based on a physician diagnosis of diabetes, or a fasting blood sugar ≥ 7 mmol/L.
  • The value of the refined multifractal dimension D 0 is approximately normally distributed.
  • The mean multi-fractal dimension in this study sample is 1.447 with a standard deviation of 0.017. This is lower than the often-quoted fractal dimension of 1.7 because embodiments of the present invention calculate the multifractal capacity dimension, which is lower than the diffusion limited aggregation (monofractal) dimension of 1.7.
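  • Because the refined D 0 is approximately normally distributed, a patient's value can be placed against the study distribution (mean 1.447, standard deviation 0.017) as a z-score or percentile. The use of this particular statistic is an assumption of the sketch, not a step prescribed by the specification.

```python
import math

# Study-sample distribution of the refined multifractal dimension D0.
STUDY_MEAN, STUDY_SD = 1.447, 0.017

def d0_z_score(refined_d0):
    """Standard score of a patient's refined D0 against the study sample."""
    return (refined_d0 - STUDY_MEAN) / STUDY_SD

def d0_percentile(refined_d0):
    """Normal-CDF percentile of the patient's refined D0."""
    z = d0_z_score(refined_d0)
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```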
  • Reliability estimates of embodiments of the present invention have been determined from the aforementioned study sample. Colour photographs of the same right eye retinal field of the 30 participants without hypertension and diabetes were also graded and the results compared with the identical, but black and white, photographs. Comparison was made between three graders and agreement was assessed using Pearson correlation coefficients. Table 2 below shows that the intra- and inter-grader reliability estimates were generally high, with correlations of over 0.90. Reliability estimates were higher for the refined fractal dimension D 0 than for the raw fractal dimension.
  • Embodiments of the present invention can be applied to both colour images and black and white images. There is a small discrepancy between the calculated fractal dimensions, but the correlation between the raw fractal dimension calculated from colour and black and white photographs is moderately high (0.70-0.79). TIFF and JPEG format photos had a very high correlation of 0.97. Embodiments of the present invention exhibit robustness to use by different users (graders). Even with the raw multi-fractal dimension, i.e. after setting the optic disc radius but before removing artefacts, the intra-grader reliability is high (correlation 0.93), while the refined dimension, i.e. after refining the trace image 400 to remove artefacts, shows the same correlation.
  • The correlations of the raw and refined multifractal capacity dimension D 0 with a range of systemic and ocular factors were examined, including age, SBP, DBP, refractive error and arteriolar and venular calibre.
  • The arteriolar and venular calibre is represented by the central retinal arteriole and venule equivalents (CRAE and CRVE respectively).
  • The arteriolar and venular calibres were calculated using a computer-assisted method as described in Liew, G. et al., Measurement of retinal vascular calibre: issues and alternatives to using the arteriole to venule ratio, Invest. Ophthalmol. Vis. Sci. 48, 52-57 (2007). The results are shown in Table 3 below; the numbers in the table are Pearson correlation coefficients.
  • Both the refined and raw D 0 showed moderate correlation with age, SBP and DBP.
  • The refined D 0 was more highly correlated with both SBP and DBP than was arteriolar calibre.
  • Refractive error had very low correlation with raw and refined D 0 .
  • FIG. 24 shows a graph of refined D 0 in the whole sample plotted against SBP.
  • The inverse relationship of the refined D 0 with SBP shows D 0 decreasing by 0.004 per 10 mmHg increase in SBP.
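  • The reported trend can be expressed as a simple linear relation. The reference point below, anchoring the study mean D 0 of 1.447 at an assumed SBP of 130 mmHg, is invented for illustration only and is not stated in the specification or the study.

```python
# Reported trend: refined D0 falls by 0.004 per 10 mmHg increase in SBP.
SLOPE_PER_MMHG = -0.004 / 10.0

# Hypothetical anchor point: study-mean D0 at an assumed reference SBP.
REF_SBP, REF_D0 = 130.0, 1.447

def expected_d0(sbp_mmhg):
    """Expected refined D0 under the illustrative linear trend."""
    return REF_D0 + SLOPE_PER_MMHG * (sbp_mmhg - REF_SBP)
```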
  • The multi-fractal dimension shows strong correlation with SBP, DBP and age, as well as with CRAE and CRVE. Indeed, the correlation of the multi-fractal dimension with SBP and DBP is even higher than that of CRAE with SBP and DBP, suggesting that calculating the multi-fractal capacity dimension is better than CRAE for detecting early changes in CVD. Embodiments of the present invention can also detect differences in the multifractal dimension between persons with and without hypertension, even in this small sample.
  • Arterioles and venules of the detailed digital retinal images 300 could be isolated and the raw and refined fractal capacity dimensions calculated for arterioles only and venules only, thus potentially providing an even stronger correlation between the fractal capacity dimension and vascular diseases, such as, but not limited to, diabetes and hypertension.
  • When the system is configured to perform the second aspect of the invention, it further comprises computer readable program code components configured for calculating a refined fractal capacity dimension of the refined trace image.
  • The system also comprises computer readable program code components configured for comparing the calculated fractal capacity dimension with a benchmark fractal capacity dimension and computer readable program code components configured for generating an estimate for future risk of cardiovascular disease on the basis of the comparison.
  • The diagnostic retinal image system and method of the second aspect of the present invention accurately measure the fractal capacity dimension of retinal images automatically to provide an estimate for future risk of vascular diseases, such as, but not limited to, diabetes and hypertension.
  • The superior accuracy of fractals in describing the anatomy of the eye enables accurate measurements of the fractal capacity dimension to reveal subtle changes in the retinal vasculature and thus provide indications of vascular diseases.
  • Embodiments of the second aspect of the invention show strong correlations even with the raw fractal capacity dimension, demonstrating that the method and system are robust to grader error.
  • The terms “comprises”, “comprising”, “including” and similar terms are intended to mean a non-exclusive inclusion, such that a method, system or apparatus that comprises a list of elements does not include those elements solely, but may well include other elements not listed.
  • The term “automatic” is used here to mean that an operation is performed without human interaction (although a human may initiate the operation), whereas the term “semi-automatic” is used to describe an operation which involves human interaction with a computer processing system.
  • An “automated” process may comprise one or more automatic operations and/or one or more semi-automatic operations, so the term “automated” is equivalent to the term “computer-implemented”.


Abstract

A platform is proposed for automated analysis of retinal images, for obtaining from them information characterizing retinal blood vessels which may be useful in forming a diagnosis of a medical condition. A first aspect of the invention proposes that a plurality of characteristics of the retina are extracted, in order to provide data which is useful for enabling an evaluation of cardiovascular risk prediction, or even diagnosis of a cardiovascular condition. A second aspect uses fractal analysis of retinal images to provide vascular disease risk prediction, such as, but not limited to, diabetes and hypertension.

Description

    FIELD OF THE INVENTION
  • The present invention relates to systems and methods for analyzing retinal images, and for obtaining from them information characterizing retinal blood vessels which may be useful in forming a diagnosis of a medical condition.
  • BACKGROUND TO THE INVENTION
  • Existing non-invasive methods for monitoring cardiovascular disorders utilize blood pressure (see for example U.S. Pat. Nos. 5,961,467, 4,669,485, 5,365,924, 6,135,966, 5,634,467, 5,178,151). However, clinical studies have provided indications that changes in retinal vasculature (e.g., narrowing of retinal arterioles and widening of retinal venules) may be an early indicator of cardiovascular disease (CVD) and other conditions, such as hypertension and diabetes. The conditions of retinal arterioles and venules reflect the conditions of the blood vessels in the rest of the body. The ability to quantify the characteristics of retinal vessels is important to determine the severity of retinal arteriolar narrowing and other conditions.
  • Arterioles and venules are small branches of the main retinal arteries and veins respectively, and their condition is indicative of that of the smaller blood vessels in the body. Measuring the diameters or widths of the arterioles and venules from detailed digital retinal images and calculating the arteriolar-to-venular diameter ratio (AVR) is one method of quantifying the imbalance between retinal arteriolar and venular calibre size. This measure can vary depending on which retinal vessels are taken into the calculation. More importantly, AVR provides information only on one aspect of retinal vascular change, namely retinal vessel calibre, and does not take into account the many structural alterations in the retinal vasculature. Moreover, it is difficult to quantify the above characteristics of retinal vessels on a large scale, as the process would involve repeated measurements of the diameters of the arterioles and venules in the retinal images by trained human graders. This is labour intensive and the results can vary when different human graders are used. For that reason, to our knowledge, no platform presently exists for non-invasive observation of cardiovascular disorders using retinal image analysis.
  • It is also known that the branching patterns of retinal arterial and venous systems have fractal characteristics. A fractal is a geometrical pattern comprised of smaller parts or units which resemble the larger whole. Fractals have been used to characterise diverse natural shapes such as the branching patterns of trees, the shapes of coastlines, the pattern of electrocardiograph tracings as well as retinal microcirculation. The fractal (or fractional) dimension (D) is one measure associated with fractals and has a range of definitions. However, it can be considered as a statistical quantity that provides an indication of how completely a fractal appears to fill the space occupied by the fractal as finer and finer scales are zoomed in upon. In other words, the fractal dimension can be considered as the number of smaller units comprising the larger unit that fit into that larger unit. The fractal dimension is always smaller than the number of dimensions in which the fractal being considered exists.
  • It was suggested in Patton N, Aslam T, MacGillivray T, Pattie A, Deary I J, Dhillon B., Retinal vascular image analysis as a potential screening tool for cerebrovascular disease: a rationale based on homology between cerebral and retinal microvasculatures. J. Anat. 2005; 206:319-348, that fractals offer a natural, global, comprehensive description of the retinal vascular tree because they take into account both the changes in retinal vessel calibre and changes in branching patterns.
  • In Mainster M. A., The fractal properties of retinal vessels: embryological and clinical implications, Eye, 1990, 4 (Pt 1):235-241, the analysis of digitised fluorescein angiogram collages revealed that retinal arterial and venous patterns have fractal dimensions of 1.63±0.05 and 1.71±0.07 respectively, which is consistent with the 1.68±0.05 dimension known from diffusion limited aggregation.
  • In Daxer A, The fractal geometry of proliferative diabetic retinopathy: implications for the diagnosis and the process of retinal vasculogenesis. Curr Eye Res. 1993; 12:1103-1109, retinal vessel patterns with neovascularisation at or near the optic disc (NVD) were compared with the vascular patterns of normal eyes. The presence of NVD in an eye is a high risk characteristic for severe visual loss requiring laser treatment. Fractal dimensions were calculated from digitised photographs using a density-density correlation function method. The mean fractal dimension D for vessel patterns with NVD was significantly higher (D=1.845±0.056) compared with the control group (D=1.708±0.073). A cut-off value for the fractal dimension is suggested to be 1.8, with higher values being potentially indicative of proliferative changes.
  • Hence, fractal geometry provides a global and more accurate description of the anatomy of the eye than classical geometry. Fractal patterns characterise how vascular patterns span the retina and can therefore provide information about the relationship between vascular patterns and retinal disease.
  • SUMMARY OF THE INVENTION
  • The present invention aims to provide a platform for automated analysis of a retinal image, including automatically tracing one or more paths of one or more vessels of a retinal image, and for obtaining from them information characterizing retinal blood vessels which may be useful in forming a diagnosis of a medical condition.
  • A first aspect of the invention proposes in general terms an automated retinal image analysis system and/or method that permit a plurality of characteristics of the retina to be extracted, in order to provide data which is useful for enabling an evaluation of cardiovascular risk prediction, or even diagnosis of a cardiovascular condition. Preferred embodiments of the invention permit large scale grading of retina images for cardiovascular risk prediction with high intra-grader and inter-grader reliability.
  • In one expression, the first aspect of the invention proposes a retinal image analysis method including:
  • (i) automatically tracing one or more paths of one or more vessels of a retinal image;
  • (ii) automatically generating a trace image comprising the one or more traced paths;
  • (iii) automatically identifying a plurality of vessel segments which are portions of said vessels;
  • (iv) using the vessel segments to calculate automatically a plurality of parameters; and
  • (v) outputting the plurality of parameters.
  • In another expression, the first aspect of the invention proposes a retinal image analysis system comprising:
  • a processor for:
  • (i) automatically tracing one or more paths of one or more vessels of a retinal image;
  • (ii) automatically generating a trace image comprising the one or more traced paths;
  • (iii) automatically identifying a plurality of vessel segments which are portions of said vessels;
  • (iv) using the vessel segments to calculate automatically a plurality of parameters; and
  • (v) outputting the plurality of parameters.
  • In another expression, the first aspect of the invention proposes a retinal image analysis system comprising:
  • (i) computer readable program code components configured for automatically tracing one or more paths of one or more vessels of a retinal image;
  • (ii) computer readable program code components configured for automatically generating a trace image comprising the one or more traced paths;
  • (iii) computer readable program code components configured for automatically identifying a plurality of vessel segments which are portions of said vessels;
  • (iv) computer readable program code components configured for using the vessel segments to calculate automatically a plurality of parameters; and
  • (v) computer readable program code components configured for outputting the plurality of parameters.
  • A second aspect of the invention proposes in general terms an automated retinal image analysis system and/or method that use fractal analysis of retinal images to provide disease risk prediction, such as, but not limited to, diabetes and hypertension.
  • In one expression, the second aspect of the invention provides a retinal image analysis method including:
  • automatically tracing one or more paths of one or more vessels of a retinal image;
  • automatically generating a trace image comprising one or more traced paths;
  • automatically calculating a fractal capacity dimension of the trace image;
  • comparing the calculated fractal capacity dimension with a benchmark fractal capacity dimension; and
  • generating an estimate for future risk of disease (such as cardiovascular disease) on the basis of the comparison.
  • The method may include refining the trace image to remove errors in the trace image and calculating a refined fractal capacity dimension of the refined trace image.
  • Preferably, the fractal capacity dimension is a multifractal capacity dimension, which is superior to a mono-fractal capacity dimension.
  • Preferably, the method includes setting a radius of the optic disc of the retinal image prior to automatically tracing the retinal image.
  • Preferably, prior to automatically tracing the retinal image, the method includes cropping or scaling the retinal image to minimise deleterious aspects of the retinal image and enable retinal image comparisons.
  • Suitably, the method includes repeating refining of the trace image if errors in the trace image remain after previous refining.
  • Suitably, the method includes interrogating a data store and selecting diagnostic data based on the calculated refined fractal capacity dimension.
  • Suitably, the method includes automatically generating a report including the retrieved diagnostic data.
  • In another expression, the second aspect of the invention provides a retinal image analysis system comprising:
  • a processor for:
  • automatically tracing one or more paths of one or more vessels of a retinal image;
  • automatically generating a trace image comprising one or more traced paths;
  • automatically calculating a fractal capacity dimension of the trace image;
  • comparing the calculated fractal capacity dimension with a benchmark fractal capacity dimension; and
  • generating an estimate for future risk of disease on the basis of the comparison.
  • Preferably, the system further comprises an output device coupled to be in communication with the processor for displaying the trace image and a refined trace image.
  • Preferably, the system further comprises a data store coupled to be in communication with the processor for storing diagnostic data.
  • Suitably, the processor interrogates the data store and selects diagnostic data based on the calculated fractal capacity dimension.
  • Suitably, the processor automatically generates a report including the retrieved diagnostic data.
  • In a further expression, the second aspect of the invention provides a retinal image analysis system comprising:
  • computer readable program code components configured for automatically tracing one or more paths of one or more vessels of a retinal image;
  • computer readable program code components configured for automatically generating a trace image comprising one or more traced paths;
  • computer readable program code components configured for automatically calculating a fractal capacity dimension of the trace image;
  • computer readable program code components configured for comparing the calculated fractal capacity dimension with a benchmark fractal capacity dimension; and
  • computer readable program code components configured for generating an estimate for future risk of disease on the basis of the comparison.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • By way of example only, preferred embodiments of the invention will be described more fully hereinafter with reference to the accompanying drawings, wherein:
  • FIG. 1 is a schematic diagram of a diagnostic retinal image system according to embodiments of the present invention;
  • FIG. 2 is a flow diagram of method which is an embodiment of the first aspect of the invention;
  • FIG. 3 illustrates the structure and operation of a computer program for carrying out the method of FIG. 2;
  • FIG. 4, which is composed of FIGS. 4( a) to 4(k), is a series of screenshots showing the three modes of optic disc detection in the method of FIG. 2;
  • FIG. 5 is a screenshot of a digital retinal image produced by the embodiment of FIG. 2, showing the assigned optic disc;
  • FIG. 6 is composed of FIGS. 6( a) and 6(b), which respectively show a trace image in which retinal paths have been assigned, and an expanded portion of the trace image used to edit the highlighted vessel segment in the method of FIG. 2;
  • FIG. 7 is composed of FIGS. 7( a), 7(b) and 7(c), which are screenshots showing the toggling capabilities that allow the user to remove visual obstructions when inspecting the image and editing the centerlines that make up the tree structures of the vessel in the method of FIG. 2;
  • FIG. 8 is a screenshot showing information generated by the method of FIG. 2 pertaining to a vessel, as well as summary attributes such as the CRAE and CRVE measures for the respective zones;
  • FIG. 9 is a screenshot showing the list of vessels and their standard deviation of vessel widths in the respective zones, generated by the method of FIG. 2;
  • FIG. 10 is composed of FIGS. 10( a) and 10(b), which are screenshots showing the removal of lines from the trace image which cause large variations in the standard deviation of the vessel width in the method of FIG. 2;
  • FIG. 11 is composed of FIGS. 11( a), 11(b) and 11(c), which are screenshots showing the effect of optimization in the method of FIG. 2 to automatically cover bad segment widths according to embodiments of the present invention;
  • FIG. 12 is a general flow diagram illustrating a method according to embodiments of the second aspect of the present invention;
  • FIG. 13 is a screenshot of a digital retinal image showing setting a radius of the optic disc according to embodiments of the second aspect of the present invention;
  • FIG. 14 is a screenshot of a trace image generated from the digital retinal image according to embodiments of the second aspect of the present invention;
  • FIG. 15 is a screenshot partially showing an enlargement of the trace image, the digital retinal image and stored data relating to the retinal and trace images according to embodiments of the second aspect the present invention;
  • FIG. 16 is a screenshot showing the trace image and an enlarged digital retinal image during refining of the trace image according to embodiments of the second aspect of the present invention;
  • FIG. 17 is a series of screenshots showing the removal of peripapillary atrophy from the trace image during refining of the trace image according to embodiments of the second aspect of the present invention;
  • FIG. 18 is a series of screenshots showing the removal of short lines from the trace image which do not correspond to vessels during refining of the trace image according to embodiments of the second aspect of the present invention;
  • FIG. 19 is a series of screenshots showing the removal of choroidal vessels from the trace image during refining of the trace image according to embodiments of the second aspect of the present invention;
  • FIG. 20 is a series of screenshots showing the removal of artefacts from the trace image during refining of the trace image according to embodiments of the second aspect of the present invention;
  • FIG. 21 is a pair of screenshots showing a retinal image and an ungradable trace image according to embodiments of the second aspect of the present invention;
  • FIG. 22 shows an example of a report produced in accordance with embodiments of the second aspect of the present invention;
  • FIG. 23 shows a distribution of refined multifractal dimension D0; and
  • FIG. 24 shows a graph of refined multifractal capacity dimension in a whole sample of participants plotted against systolic blood pressure.
  • DETAILED DESCRIPTION OF THE EMBODIMENTS
  • Referring to FIG. 1, there is provided a diagnostic retinal image system 100 in accordance with embodiments of the present invention. The system 100 comprises a processor 110 coupled to be in communication with an output device 120 in the form of a pair of displays 130 according to preferred embodiments of the present invention. The system 100 comprises one or more input devices 140, such as a mouse and/or a keyboard and/or a pointer, coupled to be in communication with the processor 110. In some embodiments, one or more of the displays 130 can be in the form of a touch sensitive screen, which can both display data and receive inputs from a user, for example, via the pointer.
  • The system 100 also comprises a data store 150 coupled to be in communication with the processor 110. The data store 150 can be any suitable known memory with sufficient capacity for storing configured computer readable program code components 160, some or all of which are required to execute the present invention as described in further detail hereinafter.
  • According to one embodiment, the processor 110 is in the form of a Pentium® D, CPU 2.80 GHz and the data store 150 comprises 1 GB of RAM and 232 GB local hard disk space. The pair of displays 130 are in the form of two 21-inch monitors allowing image displays at 1280×1024 resolution. NVIDIA Quadro FX 1400 graphics cards are also employed. The skilled addressee will appreciate that the present invention is not limited to this particular implementation.
  • The data store 150 stores configured computer readable program code components 160, some or all of which are retrieved and executed by the processor 110. Embodiments of the present invention reside in a diagnostic retinal image system 100 comprising computer readable program code components configured for automatically tracing one or more paths of one or more vessels of a retinal image. The system 100 also comprises computer readable program code components for generating a trace image comprising one or more traced paths. The system further comprises computer readable program code components configured for refining the trace image to remove errors in the trace image.
  • As described in the following, the system 100 typically includes program code components for automatically identifying an optic disc. Cropping of a consistently defined area relative to optic disc size allows comparison of different images from the same individual taken at different times, and permits comparison of images from dissimilar individuals, because the defined area relative to optic disc size of the same image is not influenced by image magnification and different angles of photography. The system 100 further comprises a comprehensive reporting facility which automatically generates a report usable by both clinicians and patients, and reduces the level of human intervention, enabling the efficiency of grading retinal images and reporting to be improved. Embodiments of the present invention allow multiple retinal images of a patient to be linked on a single report.
  • The retinal image system is configured to operate in accordance with either of the aspects of the invention, as described below, thereby performing the series of steps shown in either of FIGS. 2 and 12 respectively.
  • 1. First Aspect of the Invention
  • Turning to FIG. 2, the steps of a method which is a first embodiment of the invention are illustrated. The structure of the computer program (e.g. as implemented by the system 100 of FIG. 1) for carrying out the method is illustrated in FIG. 3, in which the steps labelled 1-11 correspond to those of FIG. 2. The program comprises a GUI (graphical user interface) module 12, an image detection module 13, and an attribute extraction module 14.
  • The input to the method of FIG. 2 is a disc-centred retinal photograph, for example of the kind which can be produced by a Zeiss FF3 fundus camera, which has been digitized.
  • 1.1 Optic Disc Detection (Step 1)
  • In a first step of the method of FIG. 2, the region of the photograph corresponding to the optic disc is detected for reliable grading of the retina image. We here discuss three modes for the optic disc detection, any one of which can be used. The process is performed by the unit 12 a of the GUI 12 and the optic disc detector 13 a of the Image detection module 13, with the proportion of the work performed by each of the units 12 a, 13 a depending upon which of the modes is used.
  • The first mode is a fully automatic detection that is based on wavelet processing and ellipse fitting. The system employs a Daubechies wavelet transform to approximate the optic disc region. Next, an abstract representation of the optic disc is obtained using an intensity-based template. This yields robust results in cases where the optic disc intensity is highly non-homogenous. An ellipse fitting algorithm is then utilized to detect the optic disc contour from this abstract representation (see P. M. D. S Pallawala, Wynne Hsu, Mong Li Lee, Kah-Guan Au Eong. Automated Optic Disc Localization and Contour Detection Using Ellipse Fitting and Wavelet Transform, in 8th European Conference on Computer Vision (ECCV), Prague, Czech Republic, May 2004).
  • The second mode is a semi-automatic detection where the user positions a circle to approximate the cup area. Optic disc detection using the method described above is then carried out beyond the user-specified cup area.
  • The third mode of optic disc detection is primarily manual which requires the user to pinpoint the centre and the edge of the optic disc, following which an outline of the optic disc is traced on the retina image. This can be done using the optic disc guide portion 12 a of the GUI 12.
  • FIG. 4 is a series of screenshots showing what the GUI displays during the three modes of optic disc detection.
  • FIGS. 4( a) and 4(b) show the fully-automated mode. The user is initially presented with screen FIG. 4( a), and upon a command, the unit 13 a, generates the screen FIG. 4( b). The optic disc is detected as the circular area 40 with radius r. Based on the optic disc radius r, the unit 13 a defines three zones A, B, C from the centre of the optic disc as follows: Zone A is the area within a circle 41 with radius 2 r; Zone B is a concentric area formed by subtracting Zone A from a circle 42 with radius 3 r; Zone C is a concentric area formed by subtracting Zone A from a circle 43 with radius 5 r, and thus overlaps with zone B.
  • FIGS. 4( c) to 4(f) show the sequence of steps in the semi-automated mode. FIG. 4( c) is equivalent to FIG. 4( a). FIG. 4( d) shows how the user manually positions a circle 44 near the optic disc centre. FIG. 4( e) shows how the user resizes the circle 44 by dragging a mouse. After doing so, and thereby covering the optic disc cup with the resized circle, he clicks on “Find OD” on the GUI 12, and the system generates circles 40, 41, 42, 43, shown in FIG. 4( f), having the meaning above.
  • FIGS. 4( g) to 4(k) show the manual mode. FIG. 4( g) is again equivalent to FIG. 4( a). FIG. 4( h) shows how a user uses a mouse to place a circle 44 at the optic disc centre (resizing is not done yet) and then clicks. FIG. 4( i) shows how the user then indicates the command “OD edge”. FIG. 4( j) shows how the user then clicks on an edge point of the optic disk he wishes to draw. FIG. 4( k) shows what the screen then shows: the circles 40, 41, 42, 43 generated from the selected optic disc centre and the selected edge point.
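The zone geometry defined above lends itself to a simple membership test. The following sketch is illustrative only (the function name and coordinate convention are ours, not part of the described system); it classifies a point by its distance from the optic disc centre, recalling that Zone C overlaps Zone B:

```python
import math

def zone_of(point, disc_centre, r):
    """Classify a point into the measurement zones defined from the
    optic disc radius r: Zone A lies within 2r of the disc centre,
    Zone B between 2r and 3r, and Zone C between 2r and 5r (so it
    overlaps Zone B).  Returns the list of zones containing the point."""
    d = math.dist(point, disc_centre)
    zones = []
    if d <= 2 * r:
        zones.append("A")
    if 2 * r < d <= 3 * r:
        zones.append("B")
    if 2 * r < d <= 5 * r:
        zones.append("C")
    return zones
```

A point at 2.5 disc radii from the centre, for instance, falls in both Zone B and Zone C, consistent with the overlap noted above.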
  • 1.2 Vascular Structure Extraction and Tracing
  • The algorithm for vascular structure extraction is performed by the unit 13 b as controlled by a vessel finder unit 12 b of the GUI 12, and is based on the work in S. Garg, J. Sivaswamy, S. Chandra. Unsupervised Curvature-Based Retinal Vessel Segmentation, Technical Report, Institute of Information Technology, Hyderabad, India, 2007. The retinal vessels are modelled as trenches and the centre lines of the trenches are extracted using curvature information (step 2 of FIG. 2). The result is an image (labelled 15 on FIG. 3) in which each of the trenches is indicated by a centreline.
  • The complete vascular structure is then extracted (step 3 of FIG. 2) by an attribute extraction module 14 using a modified region growing method. Seed points are placed at the start of Zone B to initiate the tracing of vessel segments. The vessels are traced by the unit 14 a in a depth-first manner whenever a branch point or cross-over is encountered. The width samples of each vessel segment are automatically determined at fixed intervals. This is done by a unit 14 c.
  • 1.3 Classification of Retinal Arterioles and Venules (Step 4)
  • The unit 14 b of the attribute extraction module 14 identifies arteries and veins as follows (the terms "vein" and "venule" are used interchangeably in this document, as are the terms "artery" and "arteriole"). First, we select the first 15 diameter lines for each vessel. This is to minimize the effect of local variation in intensity. Then, for each selected diameter line, we obtain its intensity profile in the form of a histogram. From the histogram, we select the top five most frequently occurring values and calculate the mean value.
  • Next, we apply the k-means algorithm to cluster these mean values of all the vessels into two classes, vein and artery. The k-means algorithm initially picks two values as seeds. It then iteratively computes the distances from each point to the two centers. Points are re-assigned to the cluster of the nearest center. After several iterations, the clusters will converge.
  • After obtaining the two cluster centers, we label each of the 15 diameter lines as either vein or artery depending on its distance to the two centers. We count the number of diameter lines that have been labeled as veins and arteries respectively. A majority voting system is used to finally classify the given vessel as either vein or artery.
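The classification pipeline above (per-line intensity summaries, two-centre k-means, majority vote) can be sketched as follows. This is an illustrative reimplementation, not the code of unit 14 b; in particular, the assumption that veins form the darker cluster in fundus images is ours:

```python
from collections import Counter

def profile_mean(intensities):
    """Mean of the five most frequent intensity values along one
    diameter line (a simple stand-in for the histogram step above)."""
    top5 = [v for v, _ in Counter(intensities).most_common(5)]
    return sum(top5) / len(top5)

def kmeans_1d(values, iters=20):
    """Two-cluster 1-D k-means; returns (low_centre, high_centre)."""
    c1, c2 = min(values), max(values)
    for _ in range(iters):
        g1 = [v for v in values if abs(v - c1) <= abs(v - c2)]
        g2 = [v for v in values if abs(v - c1) > abs(v - c2)]
        if g1: c1 = sum(g1) / len(g1)
        if g2: c2 = sum(g2) / len(g2)
    return c1, c2

def classify_vessel(line_means, dark_centre, bright_centre):
    """Majority vote over the per-line means; veins are assumed to be
    the darker cluster (an assumption of this sketch)."""
    votes = sum(1 for m in line_means
                if abs(m - dark_centre) < abs(m - bright_centre))
    return "vein" if votes > len(line_means) / 2 else "artery"
```

In practice the cluster centres would be fitted over the mean values of all vessels in the image, as described in the text, before each vessel's diameter lines are voted on.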
  • In step 5 the result is displayed by the GUI, as shown in FIG. 6 (this is the image shown as 16 in FIG. 3). FIG. 6( a) shows the overall retinal image with lines indicating vessel segments. Some of the venules are labelled by reference numerals 61, 62, 63, 64, 65, 66, 67. Some of the arterioles are labelled 71, 72, 73. The arteriole 81 is shown brighter because it has been selected by a user. FIG. 6( b) shows how the user may display a larger version of a vessel segment; if so, the width at each point along the vessel segment is displayed by a line orthogonal to the length direction of the vessel segment 44. The system attempts to sample the widths at equal intervals. Several operations are provided for the user to refine the vascular structure that has been extracted and tracked. The user may edit the data (step 6) using a vessel editor unit 12 c of the GUI 12, which is shown in FIG. 3 as receiving user input ("edit vessels"). This may include any one or more of adding missed vessel segments, deleting dangling vessel segments, breaking vessel segments, marking and unmarking crossover vessel segments, and changing vessel type.
  • The result is passed back to the vessel segment detector 13 b. The method loops back to step 3 (optionally many times) until the user is satisfied. Some of the various views shown to the user by the GUI at this time are shown in FIG. 7. The user can toggle between these views.
  • 1.4 Computation of Measurements (Step 8)
  • The unit 14 d optimises the width samples by discarding bad samples based on a special heuristic to improve standard deviation (step 7). This heuristic comes in the form of maximising an objective function that balances the standard deviation of the widths (lower is better) against the number of samples retained (higher is better). A standard optimisation technique is used to retain the widths that maximise the objective function. FIG. 11 illustrates the effect of applying the width optimiser.
  • In FIG. 11(a) the bright lines (such as lines 83) along the segment are the sample widths retained and the dark lines (such as lines 85) are the sample widths discarded by the width optimiser. Notice that the discarded sample widths are inaccurate representations of the width of the segment. The width optimiser is applied to all segments. FIG. 11(b) shows a screen shot before this process, with FIG. 9 (an enlargement of the right of FIG. 11(b)) indicating vessels that have wide variations in sampled widths using dark boxes. FIG. 11(c) shows a screen shot after this process is applied, with the right of the figure containing no dark boxes. This indicates that the current variation of sampled widths is not overly wide.
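The trade-off described above — lower standard deviation versus more retained samples — can be captured by a toy objective and a greedy search. This is only a sketch: the text does not specify the actual objective function or optimisation technique of unit 14 d, so the linear form and the `penalty` weight below are assumptions:

```python
import statistics

def optimise_widths(widths, penalty=2.0):
    """Greedy sketch of the width optimiser: starting from all sample
    widths, repeatedly discard the single sample whose removal most
    increases an objective that rewards the fraction of samples kept
    and penalises their standard deviation.  Objective form and the
    penalty weight are illustrative assumptions."""
    kept = list(widths)

    def score(ws):
        if len(ws) < 2:
            return float("-inf")
        return len(ws) / len(widths) - penalty * statistics.stdev(ws)

    while len(kept) > 2:
        best_i, best_s = None, score(kept)
        for i in range(len(kept)):
            trial = kept[:i] + kept[i + 1:]
            if score(trial) > best_s:
                best_i, best_s = i, score(trial)
        if best_i is None:
            break          # no single removal improves the objective
        kept.pop(best_i)
    return kept
```

On a segment whose samples are mostly consistent, an outlying width (like the dark lines in FIG. 11(a)) is discarded, while uniform samples are all retained.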
  • The unit 14 e then computes the attributes of the segments identified and edited. Note that a vessel is a binary tree with the property that each node has exactly zero or two child nodes. Each node in the vessel binary tree denotes a segment. A segment is a list of points on a line that does not include any branching points. The segment at the root node of a vessel is called the root segment and the first point in the segment at the root node is the root point. A segment could branch out into two daughter segments.
  • 1.4.1 Segment Measurements
  • Let s be a segment; the following measurements are defined as functions of s.
      • Mean Width ω(s): Sample widths of s are taken at fixed intervals. The mean width of s is a simple average of the sample widths. ωB(s) and ωC(s) denote averages of sample widths within zones B and C respectively.
      • Standard Deviation σ(s): Sample widths of s are taken at fixed intervals. The standard deviation of width of s is calculated from those samples. σB(s) and σC(s) denote the standard deviations of samples within Zone B and C respectively.
      • Length λ(s): The length of s is computed as the sum of the pairwise Euclidean distances between adjacent points in s, that is, the arc length of the segment. λB(s) and λC(s) denote the lengths within Zones B and C respectively.
      • Simple Tortuosity τ(s): Simple tortuosity is calculated as the arc-chord ratio, τ(s)=λ(s)/C(s), where C(s) is the chord of s, that is, the Euclidean distance between the first and last point in s. Similarly, τB(s)=λB(s)/CB(s) and τC(s)=λC(s)/CC(s) for Zones B and C respectively, where the chord within Zones B and C is measured using the first and last point in s within them.
      • Curvature Tortuosity ζ(s): This is computed from formula T4 given in W. E. Hart, M. Goldbaum, B. Cote, P. Kube, M. R. Nelson. Automated measurement of retinal vascular tortuosity, Proceedings AMIA Fall Conference, 1997. Since the points in the segment are discrete, the integral is estimated using summation to give,
  • $$\zeta(s) = \frac{1}{\lambda(s)} \sum_{i=1}^{|s|} \sqrt{\frac{\left[x'(t_i)\,y''(t_i) - x''(t_i)\,y'(t_i)\right]^2}{\left[y'(t_i)^2 + x'(t_i)^2\right]^3}}$$
        • where ti is the i-th point in s and |s| is the number of points in s. The first order parametric differentials are estimated using f′(ti)=f(ti+1)−f(ti−1) and the second order is estimated using f″(ti)=f′(ti+1)−f′(ti−1) where f is the parametric function x or y. For tortuosity within Zone C, we have
  • $$\zeta_C(s) = \frac{1}{\lambda_C(s)} \sum_{i=1}^{|s_C|} \sqrt{\frac{\left[x'(t_i)\,y''(t_i) - x''(t_i)\,y'(t_i)\right]^2}{\left[y'(t_i)^2 + x'(t_i)^2\right]^3}}$$
        • where |sC| is the index of the last segment point within Zone C.
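The per-segment measures above can be computed directly from the ordered centreline points. The following is a minimal illustrative sketch, not the system's implementation: point lists stand in for its internal representation, and the handling of the first and last points in the discrete curvature estimate is our simplification:

```python
import math

def length(points):
    """lambda(s): sum of Euclidean distances between adjacent points."""
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def simple_tortuosity(points):
    """tau(s): arc-chord ratio; 1.0 for a perfectly straight segment."""
    chord = math.dist(points[0], points[-1])
    return length(points) / chord

def curvature_tortuosity(points):
    """zeta(s): discrete curvature-based measure, using the central
    differences f'(t_i) = f(t_{i+1}) - f(t_{i-1}) given in the text;
    points lacking both neighbours for the second difference are skipped."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    d1 = lambda f, i: f[i + 1] - f[i - 1]       # first-order difference
    total = 0.0
    for i in range(2, len(points) - 2):
        x1, y1 = d1(xs, i), d1(ys, i)
        x2 = d1(xs, i + 1) - d1(xs, i - 1)      # second-order difference
        y2 = d1(ys, i + 1) - d1(ys, i - 1)
        total += math.sqrt((x1 * y2 - x2 * y1) ** 2
                           / (y1 ** 2 + x1 ** 2) ** 3)
    return total / length(points)
```

For a straight run of points the arc equals the chord (τ = 1) and the curvature sum vanishes (ζ = 0), as expected.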
    1.4.2 Zone B Vessel Measurements
  • All Zone B measurements consider only the root segment within Zone B; that is, if the root segment extends outside of Zone B, only the part of the segment within Zone B is used in the computation. Let v be a vessel; the following Zone B measures are provided:
      • Mean Width ωB(v): The mean width of a vessel v, in Zone B, is the mean width of the root segment.
      • Standard Deviation σB(v): The standard deviation of a vessel v in Zone B is the standard deviation of the root segment.
      • CRVEB: The widths of the six largest venules are combined using the formula given in M. D. Knudtson, K. E. Lee, L. D. Hubbard, T. Y. Wong, R. Klein, B. E. Klein. Revised formulas for summarizing retinal vessel diameters. Curr Eye Res., 2003; 27: 143-149. This gives a single value for the whole of zone B.
      • CRAEB: The widths of the six largest arterioles are combined using the formula given in M. D. Knudtson, K. E. Lee, L. D. Hubbard, T. Y. Wong, R. Klein, B. E. Klein (cited above). This too gives a single value for the whole of zone B.
      • AVRB: This ratio is given by CRVEB/CRAEB.
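The combination of the six largest vessels into a single CRAE or CRVE value can be sketched as an iterative pairing procedure. The constants 0.88 (arterioles) and 0.95 (venules) appear later in this text; the big-with-small pairing order is our reading of the cited revised Knudtson formulas, so treat this as illustrative:

```python
import math

def knudtson_summary(widths, vessel_type):
    """Sketch of the revised Knudtson pairing: repeatedly combine the
    widest remaining vessel with the narrowest via w = a*sqrt(w1^2 + w2^2),
    carrying any unpaired middle value, until one summary value remains."""
    a = 0.88 if vessel_type == "arteriole" else 0.95
    ws = sorted(widths)
    while len(ws) > 1:
        nxt = []
        while len(ws) > 1:
            lo, hi = ws.pop(0), ws.pop(-1)
            nxt.append(a * math.sqrt(lo ** 2 + hi ** 2))
        nxt.extend(ws)            # carry an unpaired middle width
        ws = sorted(nxt)
    return ws[0]
```

For two equal widths w the summary is simply a·w·√2, which makes the role of the scaling constant easy to check by hand.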
    1.4.3 Zone C Vessel Measurements
  • Let v be a vessel. The following vessel Zone C measurements include all descendant segments of the root segment within the Zone, which are combined in a novel way.
      • Mean Width ωC(v): The mean width of a vessel v, in Zone C, is a recursive combination of the root segment and its descendants, where
  • $$\hat{w}(s) = \frac{w_C(s)\,\lambda_C(s) + a_v\sqrt{\hat{w}(s_1)^2 + \hat{w}(s_2)^2}\;\frac{\lambda_C(s_1) + \lambda_C(s_2)}{2}}{\lambda_C(s) + \frac{\lambda_C(s_1) + \lambda_C(s_2)}{2}}$$
        • where s1 and s2 are the daughter segments of s; and
  • $$a_v = \begin{cases} 0.88 & \text{if } v \text{ is an arteriole} \\ 0.95 & \text{if } v \text{ is a venule} \end{cases}$$
      • Standard Deviation sdC(v): The standard deviation of a vessel v in Zone C is given as follows:
  • $$sd_C(v) = \frac{\sum_{s \in v} \sigma_C(s)\,\lambda_C(s)}{\sum_{s \in v} \lambda_C(s)}$$
      • CRVEC: The widths of the six largest venules in Zone C are combined using the formula given in M. D. Knudtson, K. E. Lee, L. D. Hubbard, T. Y. Wong, R. Klein, B. E. Klein (cited above). This gives a single value for the whole of zone C.
      • CRAEC: The widths of the six largest arterioles in Zone C are combined using the formula given in M. D. Knudtson, K. E. Lee, L. D. Hubbard, T. Y. Wong, R. Klein, B. E. Klein (cited above). This too gives a single value for the whole of zone C.
      • AVRC: This ratio is given by CRVEC/CRAEC.
      • Simple Tortuosity st(v): The measure is the weighted average of the simple tortuosity of the vessel segments of v in Zone C, as given by:
  • $$st(v) = \frac{\sum_{s \in v} \tau_C(s)\,\lambda_C(s)}{\sum_{s \in v} \lambda_C(s)}$$
      • Curvature Tortuosity ct(v): The measure is the weighted average of the curvature tortuosity of the vessel segments of v in Zone C, as given by:
  • $$ct(v) = \frac{\sum_{s \in v} \zeta_C(s)\,\lambda_C(s)}{\sum_{s \in v} \lambda_C(s)}$$
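The Zone C vessel measures combine segment values weighted by Zone C length, with the mean width defined recursively down the binary tree. The following sketch reflects our reading of the formulas above; the class and function names are ours, not the system's:

```python
import math

class Seg:
    """Minimal segment node for the Zone C recursion: mean width w,
    Zone C length lam, and an optional pair of daughter segments."""
    def __init__(self, w, lam, daughters=None):
        self.w, self.lam, self.daughters = w, lam, daughters

def mean_width_zone_c(s, a_v):
    """Recursive Zone C mean width: the segment's own width and the
    combined daughter widths (a_v * sqrt(w1^2 + w2^2), as in the text)
    are averaged, weighted by their Zone C lengths."""
    if not s.daughters:
        return s.w
    d1, d2 = s.daughters
    w1 = mean_width_zone_c(d1, a_v)
    w2 = mean_width_zone_c(d2, a_v)
    half = (d1.lam + d2.lam) / 2
    combined = a_v * math.sqrt(w1 ** 2 + w2 ** 2)
    return (s.w * s.lam + combined * half) / (s.lam + half)

def weighted_tortuosity(segs):
    """Length-weighted average used for st(v) and ct(v):
    sum(tau * lam) / sum(lam) over (tortuosity, length) pairs."""
    return (sum(t * lam for t, lam in segs) /
            sum(lam for _, lam in segs))
```

A leaf segment simply returns its own mean width, so the recursion bottoms out at the outermost Zone C segments.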
  • 1.4.4 Branching Measurements
  • The following measurements describe the branching properties of the root segment and its daughter segments.
      • Branching Coefficient bc(v): Defined on vessels as bc(v)=β(root(v)), where root(v) refers to the root segment of v and β(s) is defined for any segment s as
  • $$\beta(s) = \frac{\omega(s_1)^2 + \omega(s_2)^2}{\omega(s)^2}$$
        • where s1 and s2 are daughters of s; β was adapted from the area ratio described in M. Zamir, J. A. Medeiros, T. K. Cunningham. Arterial Bifurcations in the Human Retina, Journal of General Physiology, 1979 October, 74(4): 537-48.
      • Asymmetry Ratio ar(v): The asymmetry ratio, presented in M. Zamir, J. A. Medeiros, T. K. Cunningham (cited above), is given as ar(v)=α(root(v)), where root(v) refers to the root segment of vessel v.
  • $$\alpha(s) = \left(\frac{\min(\omega(s_1), \omega(s_2))}{\max(\omega(s_1), \omega(s_2))}\right)^2$$
      • where s1 and s2 are daughters of s.
      • Junctional Exponent Deviation, je(v): The junctional exponent is given in R. Maini, T. MacGillivary, T. Aslam, I. Deary, B. Dhillon. Effect of Axial Length on Retinal Vascular Network Geometry, American Journal of Ophthalmology, Volume 140, Issue 4, Pages 648.e1-648.e7, and has a theoretical value of 3. The junctional exponent deviation expresses the deviation from the value of 3 as a percentage of the root segment width, where:
  • $$\rho(s) = \frac{\sqrt[3]{\omega(s)^3 - \omega(s_1)^3 - \omega(s_2)^3}}{\omega(s)},$$
      • s1 and s2 are daughters of s and je(v)=ρ(root(v)).
      • Branching Angle (as described in M. Zamir, J. A. Medeiros, T. K. Cunningham, cited above), ba(v): The branching angle of the vessel is computed by measuring the angles between the centreline of the segment, s, and its daughter segments s1, s2 near the branching point. For each segment s, s1, s2, 9 pairs of points are used to measure the gradient, and the average gradient is used to calculate the daughter angles, whose sum gives the branching angle.
      • Angular Asymmetry, aa(v): This measures the absolute difference in the daughter angles.
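The branching measures above reduce to small closed-form functions of the parent and daughter widths. A sketch follows; the signed cube root in ρ is our handling of negative deviations, which the text does not spell out:

```python
import math

def branching_coefficient(w, w1, w2):
    """beta(s) = (w1^2 + w2^2) / w^2 for parent width w."""
    return (w1 ** 2 + w2 ** 2) / w ** 2

def asymmetry_ratio(w1, w2):
    """alpha(s): the squared ratio of the narrower to the wider daughter."""
    return (min(w1, w2) / max(w1, w2)) ** 2

def junctional_exponent_deviation(w, w1, w2):
    """rho(s) = cuberoot(w^3 - w1^3 - w2^3) / w, the deviation from the
    theoretical junctional exponent of 3 relative to the parent width."""
    diff = w ** 3 - w1 ** 3 - w2 ** 3
    return math.copysign(abs(diff) ** (1 / 3), diff) / w
```

When the widths exactly satisfy the theoretical cube law (w³ = w1³ + w2³), the junctional exponent deviation is zero.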
    1.4.5 Other Measurements
  • Other measurements may be obtained that do not relate directly to Zones B and C or vessels. One possibility is the Length Diameter Ratio (described in N. Witt, T. Y. Wong, A. D. Hughes, N. Chaturvedi, B. E. Klein, R. Evans, M. McNamara, S. A. McG Thom, R. Klein. Abnormalities of Retinal Microvascular Structure and Risk of Mortality From Ischemic Heart Disease and Stroke, Hypertension, 2006, 47:975-981), ldr(v): A segment s is qualified to participate in the ratio if and only if it occurs after the first branching point of the vessel v but before the second branching point, and the entire qualified segment is within Zone C. The length diameter ratio of vessel v is defined as the ratio of the length of the qualified segment to the average diameter of the segment.
  • 1.5 Further Editing
  • The set of data generated by the attribute constructor 14 e in relation to a vessel is displayed (step 9) as illustrated in FIGS. 8 and 9. FIG. 9 highlights vessels with wide variations in width measurements for the user to edit. The dataset is shown as 17 in FIG. 3. It includes the segment measurement data produced in step 1.4.1.
  • FIG. 5 shows the overall appearance of the GUI 12 at this time with the outer boundaries of the zones A, B and C indicated by circular lines 51, 52, 53, and the dataset of FIG. 8 inset.
  • There is then a user-interactive process of refinement of the data using the segment editor 12 d, thereby editing the segments (step 10), such as editing their widths. FIG. 10 illustrates a manual discarding of bad widths by the user as part of editing the segment in step 10. Widths involved in computing the measures are shown in bright lines along the segment. In FIG. 10(a), all widths are used. FIG. 10(b) shows discarded bad widths as dark lines that were removed by the user. The user may discard or include widths by dragging the mouse across them or clicking on them. For further control, the user may choose to modify each width (bright line in FIG. 10(a)) by manually shifting it or changing its length. Vessels that fail to fall within an acceptable standard deviation for width are highlighted, so that the user may decide whether to further edit them. The editing process allows a user to remove visual obstructions when inspecting the image and editing the centrelines of the trace paths. The loop through steps 8, 9 and 10 can be performed until the user is satisfied that enough width measurements are taken and the standard deviation is acceptable for each vessel.
  • 1.6 Export of Data (Step 11)
  • Under the control of a data exporter unit 12 e of the GUI, the extractor unit 14 f of the attribute extraction module 14 then outputs (e.g. in response to the user inputting an instruction “save”) the list of vessels and their standard widths in the respective zones, as illustrated in FIG. 9. The data may be sent to a data store for subsequent analysis.
  • This aggregated data is labelled 18 in FIG. 3, and is output from the system as “output attributes”. It is a table of values in comma separated value (csv) format.
  • These output attributes may be used to predict medical conditions, for example in the following way:
  • 1. The output is in table format where each image is an instance (row) in the table and the attributes are the columns.
    2. Each instance is labelled with the presence or absence of disease.
    3. Standard statistical correlation methods or modern classification methods (such as, Support Vector Machines, Bayesian networks) may be used to build predictive models with the labelled data as training data.
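Step 3 can be illustrated with a deliberately simple stand-in classifier; in practice one would use a Support Vector Machine or Bayesian network, as suggested above. The attribute values below are synthetic placeholders, not real measurements:

```python
def nearest_centroid_fit(X, y):
    """Toy stand-in for model building: compute one centroid per
    disease label over the attribute columns of the exported table."""
    classes = sorted(set(y))
    cents = {}
    for c in classes:
        rows = [x for x, lab in zip(X, y) if lab == c]
        cents[c] = [sum(col) / len(rows) for col in zip(*rows)]
    return cents

def nearest_centroid_predict(cents, x):
    """Predict the label whose centroid is nearest to attribute row x."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(cents, key=lambda c: dist2(cents[c], x))
```

Each row of X corresponds to one image's output attributes and each entry of y marks the presence (1) or absence (0) of disease, mirroring the table structure described in steps 1 and 2.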
  • 2. Second Aspect of the Invention
  • In the examples described herein, Diabetic Retinopathy Study Field 2 (disc centred) retinal photographs are used to calculate the fractal dimension. Colour retinal photographs were taken after pupil dilation using a Zeiss FF3 fundus camera, which were then digitized. The fractal dimension is sensitive to magnification differences, the angle of view and the retinal photography field and these factors should be borne in mind when comparing fractal dimensions. Embodiments of the present invention can calculate a fractal dimension for other retinal photography fields, such as field 1 and macular centred, but the fractal dimension will be different from other fields taken of the same eye.
  • A diagnostic retinal image method 200 which is an embodiment of the present invention will now be described with reference to the general flow diagram shown in FIG. 12 and FIGS. 13 to 22.
  • The method 200 includes at 210 acquiring the retinal image to be analysed, which can include retrieving a retinal image from the data store 150. The size of the retinal image can be adjusted and can be sized to fill one of the displays 130.
  • The method 200 includes at 220 setting a radius of the optic disc of the retinal image. With reference to the retinal image 300 shown in FIG. 13, the centre 310 of the optic disc 320 is estimated either automatically using a suitable algorithm or manually, which includes a user highlighting the centre 310, for example, with a cross as shown in FIG. 13. An upper edge 330 of the optic disc 320 vertically above the centre 310 is defined either automatically using a suitable algorithm or manually, which includes the user highlighting the upper edge 330, for example, with a cross as shown in FIG. 13. The upper edge 330 is defined to be where the optic disc 320 definitely ends and the colouration becomes that of the background retina or peripapillary atrophy. If there is an optic disc halo, the presence of the halo is ignored, either by the algorithm or the user.
  • The method 200 includes at 230 cropping or scaling the retinal image 300 to minimise deleterious aspects of the retinal image and enable retinal image comparisons. This includes setting a radius factor, which crops or scales the retinal image 300 to a multiple of the optic disc radius. According to some embodiments, the radius factor can be in the range 2.0-5.0. In preferred embodiments, the radius factor is set to 3.5 such that the size of the retinal image is 3.5 optic disc radii. By cropping the retinal image to a consistent area defined relative to optic disc size, different images from the same individual taken at different times can be compared. Images from different individuals can also be compared because cropping/scaling corrects somewhat for differences in image magnification, refractive error and different angles of photography. A radius factor of 3.5 optic disc radii is used for some of the embodiments because it was found that beyond this radius factor with the equipment used, artefacts, such as shadowing, photograph halos and the like, occur which degrade the image quality.
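Cropping to a multiple of the optic disc radius is a simple array operation. The following sketch uses nested lists in place of a real image array; the row/column convention and the clamping behaviour at the image border are our assumptions:

```python
def crop_to_disc_multiple(image, centre, disc_radius, factor=3.5):
    """Crop a row-major image (list of rows) to a square covering
    `factor` optic-disc radii around the disc centre (cy, cx),
    clamped to the image bounds."""
    cy, cx = centre
    half = int(round(factor * disc_radius))
    top, bottom = max(0, cy - half), min(len(image), cy + half + 1)
    left, right = max(0, cx - half), min(len(image[0]), cx + half + 1)
    return [row[left:right] for row in image[top:bottom]]
```

Because the crop is expressed in disc radii rather than pixels, two photographs of the same eye taken at different magnifications yield comparable regions, as the text explains.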
  • At 240, the method 200 represented in FIG. 12 includes automatically tracing one or more paths of one or more vessels of the retinal image 300. According to some embodiments, the starting points of the vessels can be detected using a matched Gaussian filter and the detected vessels are traced using a combination of a statistical recursive estimator, such as a Kalman filter, and a Gaussian filter. According to other embodiments, the network of vessels is detected by performing a non-linear projection to normalize the retinal image 300. Subsequently, an optimal threshold process is applied to binarize the image. The detected network is then thinned by applying a distance-based centreline detection algorithm.
  • The method 200 further includes at 250 automatically generating a trace image 400 comprising the one or more traced paths, an example of which is shown in FIG. 14 partially overlaid on the retinal image 300.
  • At 260, the method includes automatically calculating a raw fractal capacity dimension of the trace image. According to some embodiments, a box-counting method known from, for example, Stosic, T. & Stosic, B. D., Multifractal analysis of human retinal vessels, IEEE Trans. Med. Imaging 25, 1101-1107 (2006) is employed. For each trace image 400, the method includes automatically selecting 1000 points at random on each trace image structure. Each structure has a typical size M0 of 30,000 pixels and a typical linear size L of 600 pixels. The number of pixels Mi inside boxes centred on each point is then automatically counted. The method includes extracting the generalized dimension Dq from these numbers for different values of q (−10<q<10) as the slopes of the lines obtained through least-squares fitting of log{[M(R)/M0]^(q−1)}/(q−1) as a function of log(R/L), where R is the growing linear dimension of the boxes. Preferred embodiments of the method include repeating the process, for example, 100 times, with the random selection of points repeated each time, and the final values of Dq calculated from the average of the repetitions. The fractal capacity dimension D0 corresponds to q=0.
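The sandbox multifractal estimator of Stosic & Stosic described above is more involved; the simpler monofractal box-count below illustrates the underlying idea of fitting log counts against log scale by least squares (a simplified stand-in, not the patent's exact method):

```python
import math

def box_counting_dimension(points, sizes=(1, 2, 4, 8, 16)):
    """Estimate the capacity dimension D0 of a set of pixel coordinates:
    count boxes of side s that contain at least one pixel, then fit the
    slope of log N(s) against log(1/s) by least squares."""
    xs, ys = [], []
    for s in sizes:
        boxes = {(x // s, y // s) for x, y in points}  # occupied boxes
        xs.append(math.log(1.0 / s))
        ys.append(math.log(len(boxes)))
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# A completely filled 64x64 square of pixels has dimension 2.
d = box_counting_dimension([(x, y) for x in range(64) for y in range(64)])
```

A vessel tree, being sparser than a filled plane but denser than a line, yields a value between 1 and 2, consistent with the D0 values around 1.45 reported later in the document.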
  • In this example, the fractal capacity dimension D is 1.476806, as displayed in FIG. 14. The raw fractal capacity dimension including all the decimal place values is recorded in the data store 150 along with an identifier for the retinal image file, such as a unique filename, an identification number of the patient to whom the retinal image belongs, whether the image is of the left or right eye and the optic disc radius. In the example shown in FIG. 15, this data is recorded in a spreadsheet 500, but any other suitable file, such as a machine-readable .csv file, can be used, from which a report can be compiled as described in further detail hereinafter and which can be used for subsequent research. The trace image 400 and/or the retinal image 300 can be resized and displayed side by side, for example, with one image in each display 130, for ease of comparison.
  • Some embodiments of the method 200 include at 265 refining the trace image 400 to remove errors in the trace image. The vessel trace function of the present invention is sensitive and will pick up fine arterioles and venules as well as artefacts that are not vessels, such as peripapillary atrophy, choroidal vessels or light reflected from the nerve fibre layer. The term “error” in relation to the trace image is used herein to refer to artefacts that are not vessels. Refining of the trace image can be executed automatically by a suitable algorithm or manually. Refining of the trace image 400 will now be described in more detail with reference to FIGS. 16-20.
  • Refining includes comparing each line tracing in the trace image 400 with the corresponding area in the retinal image 300. Manually, the user can start, for example, at the 12 o'clock position on the trace image 400 and move clockwise around the image. The feature that each tracing is derived from should be identified as a vessel or otherwise. Where it is difficult to determine if a line trace is a vessel or an artefact, the retinal image 300 and/or the trace image 400 can be enlarged as required for improved comparison. An example is shown in FIG. 16, where vessel tracings at the 10 o'clock position require closer examination to determine if they are artefacts or true vessels. In FIG. 16, the vessel tracings at the 10 o'clock position are true retinal vessels.
  • If a line tracing in the trace image 400 cannot be linked to a retinal vessel in the retinal image 300, the incorrect line tracing or error can be erased from the trace image 400. This can be executed automatically or manually. Where executed manually, an erase function known from electronic drawing packages can be employed to erase the erroneous line tracing. Any small white pixels left behind in the trace image 400 must also be erased and it may be necessary to enlarge the trace image 400 to ensure all of the tracing has been removed.
  • When refining the trace image 400, a range of artefacts can occur and must be removed from the trace image to ensure they do not affect the fractal dimension calculation. For example, abnormalities around the optic disc 320, such as peripapillary atrophy, must be removed, as shown in the series of images in FIG. 17, to produce a refined trace image 700. Other artefacts that must also be removed include unusual line tracings that do not correspond to vessels, especially short, thick lines, any clear artefacts, such as horizontal lines at the top or bottom of the image and any isolated short lines at an unusual angle and which do not come off any major vessels. Any pigment abnormalities can be erroneously traced as vessels, particularly with poorly focused retinal images. Such tracings, as well as retinal haemorrhages, should also be removed from the trace image 400.
  • The series of images in FIG. 18 shows the removal of short lines which do not correspond to vessels to produce a refined trace image 800. Note their short length, unusual orientation, and lack of connection to existing vessels. The series of images in FIG. 19 shows the removal of choroidal vessels at the 6 o'clock position to produce a refined trace image 900. The series of images in FIG. 20 shows the removal of artefacts at the 5 o'clock position to produce a refined trace image 1000.
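One automatic analogue of erasing the isolated short tracings described above is to drop small connected components from the binary trace; the size threshold and 4-connectivity below are illustrative choices, as the patent leaves the refinement algorithm open:

```python
def remove_small_components(binary, min_size):
    """Erase 4-connected white components smaller than min_size pixels
    from a binary trace (a sketch of automatic artefact removal)."""
    h, w = len(binary), len(binary[0])
    out = [row[:] for row in binary]
    seen = [[False] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if binary[i][j] and not seen[i][j]:
                # Flood-fill to collect one connected component.
                stack, comp = [(i, j)], []
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(comp) < min_size:       # too small: an artefact
                    for y, x in comp:
                        out[y][x] = 0
    return out

# A 5-pixel vessel line survives; an isolated 2-pixel speck is erased.
img = [[0] * 8 for _ in range(4)]
for x in range(5):
    img[1][x] = 1
img[3][6] = img[3][7] = 1
clean = remove_small_components(img, min_size=3)
```

Size-based pruning cannot, of course, distinguish choroidal vessels or peripapillary atrophy from true vessels, which is why the patent also provides for manual refinement.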
  • Once all incorrect tracings or errors in the trace image 400 have been erased, embodiments of the method 200 include at 270 calculating a refined fractal capacity dimension of the refined trace image. The refined fractal dimension D0 will be equal to or less than the raw fractal dimension. In the aforementioned example, the refined fractal dimension is 1.472857 compared with the raw fractal dimension of 1.476806. The refined fractal dimension and any comments are also recorded in the data store 150.
  • The method 200 includes at 275 repeating refining of the trace image 400 if errors in the trace image remain after previous refining, i.e. any incorrect tracings remaining in the refined fractal trace image can be erased and a further refined trace image generated along with another refined fractal dimension. Both the further refined trace image and the further refined fractal dimension are also saved in the data store 150, for example, in the same text file, with the previous data.
  • According to some embodiments, the cropped/scaled image file and the refined fractal trace image file can be saved. In some embodiments, the raw fractal line tracing can be discarded because it can be generated from the cropped/scaled image file. This will allow rechecking of results later if required.
  • An image is defined as ungradable if the program cannot trace one or more of the major vessels. FIG. 21 shows an example of an ungradable retinal image 1100 and the initial trace image 1110. That a retinal image is ungradable is recorded in the data store 150.
  • At 280, the method 200 includes comparing the calculated fractal capacity dimension with a benchmark fractal capacity dimension. The benchmark fractal capacity dimension is determined from a large number of measurements of the fractal capacity dimension from a ‘normal’ population without hypertension, diabetes or similar conditions.
  • At 285, the method 200 includes generating a diagnosis by interrogating a data store and selecting diagnostic data based on, or correlating to, the comparison of the calculated raw or refined fractal capacity dimension with the benchmark fractal capacity dimension. The calculated fractal capacity dimension provides an accurate description of the retinal vasculature and subtle changes in the retinal vasculature are reflected in changes in the fractal capacity dimension. Such changes can be linked to, and be indicators of, cardiovascular disease and other conditions, such as hypertension and diabetes.
  • With reference to FIG. 12, at 290, embodiments of the method 200 include the processor 110 automatically generating a report including the retrieved diagnostic data 1280, which is discussed in further detail herein. A schematic example of a report 1200 is shown in FIG. 22. According to some embodiments, the report 1200 comprises patient data 1210, such as name, age, date of birth, gender, address, ID number, and the like. The patient data can be entered by the user, or the data fields can be automatically populated by the processor 110, for example, by retrieving information from the data store 150 or a dedicated patient database (not shown) coupled to be in communication with the processor 110. The report 1200 also comprises medical information 1220 about the patient, such as their blood pressure, whether they are a diabetic, their smoking status and history and the like. Requesting practitioner details 1230 can also be included, as can the date the retinal images were taken and the date of the report 1240. One or more retinal images 1250 of the patient, one or more trace images and/or one or more refined trace images can be included. Associated measurements 1260 determined from the fractal analysis can also be included.
  • The particular diagnostic data 1280 retrieved from the data store 150 and included in the report 1200 is based on, or correlates to, the comparison of the calculated fractal capacity dimension with the benchmark fractal capacity dimension. For example, diagnostic data 1280 in the form of clauses or statements based on the calculated refined fractal capacity dimension are retrieved from the data store 150 and inserted in the report 1200. The particular diagnostic data retrieved can also depend on one or more other characteristics of the patient, such as, but not limited to, their age, blood pressure, whether they are or have been a smoker, whether they are a diabetic and/or their family medical history. For example, a calculated refined fractal capacity dimension within a specific range and/or above or below a specific threshold, possibly combined with one or more items of the aforementioned patient data 1210, causes specific diagnostic data 1280 to be retrieved from the data store 150 and inserted in the report 1200. An estimate for future risk of cardiovascular disease and other diseases on the basis of the comparison is thus provided. Examples of specific diagnostic data 1280 include, but are not limited to “Increased risk of hypertension and coronary heart disease”, “Progression of diabetic retinopathy, kidney diseases and increased deaths from stroke”, “High risks of stroke, coronary heart disease and cardiovascular mortality” and “High risk of glaucoma”. References to medical reports and papers supporting the diagnosis can also be included.
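The lookup described above might be sketched as follows; the deviation threshold and the mapping to the example statements are placeholders, not the patent's actual clinical rules (the D0 values in the usage line are taken from Table 4 later in the document):

```python
def retrieve_diagnostic_data(refined_d0, benchmark_d0, patient):
    """Illustrative lookup: map the deviation of the refined D0 from a
    benchmark, combined with patient risk factors, to stored diagnostic
    statements. Threshold values here are placeholders."""
    statements = []
    deviation = benchmark_d0 - refined_d0
    if deviation > 0.015:  # placeholder threshold, not a clinical rule
        statements.append(
            "Increased risk of hypertension and coronary heart disease")
        if patient.get("diabetic"):
            statements.append(
                "Progression of diabetic retinopathy, kidney diseases "
                "and increased deaths from stroke")
    return statements

# Hypertensive-group mean D0 (1.433) vs normal-group mean (1.453):
report = retrieve_diagnostic_data(1.433, 1.453, {"diabetic": False})
```

In a real system the statements, thresholds and supporting references would live in the data store 150, e.g. as a look-up table, as the document notes.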
  • Other pathology 1290 can also be included in the report 1200. In some embodiments, this can be produced from diagnostic data retrieved from the data store 150 and from the data recorded for the patient. The diagnostic data 1280 and other pathology 1290 can be stored, for example, in a look-up table or via any other suitable means known in the art.
  • The fractal capacity dimension can be a mono-fractal capacity dimension, but in preferred embodiments, the fractal capacity dimension is a multifractal capacity dimension because this more appropriately describes fractal characteristics of the retinal vasculature. A multifractal can be considered as multiple monofractals embedded into each other, the multifractal having a hierarchy of exponents rather than a single fractal dimension.
  • Results obtained with embodiments of the present invention were based on a random sample of 60 black and white optic disc centred retinal photographs of right eyes from a study comprising 30 participants without hypertension and diabetes, 15 participants with hypertension only and 15 participants with diabetes only. At the same visit that retinal photography was performed, the systolic blood pressure (SBP) and diastolic blood pressure (DBP) of each participant were measured using the same mercury sphygmomanometer with appropriate adult cuff size, after seating the participant for at least 10 minutes. The 2003 World Health Organization (WHO)/International Society of Hypertension (ISH) guidelines were adapted to define hypertension. The subject was considered hypertensive grade 2 or above (severe hypertension) if the subject was previously diagnosed as hypertensive and currently using anti-hypertensive medications, or had a SBP≧160 mmHg or a DBP≧100 mmHg at examination. Diabetes was defined based on a physician diagnosis of diabetes, or a fasting blood sugar ≧7 mmol/L.
  • Table 1 below shows the baseline characteristics of the sample population (n=60). The age range was 50-86 years and 52% of the sample was male.
  • TABLE 1
    Mean (range or %)
    Age (yrs) 62.4 (50-86)
    Systolic blood pressure (mmHg)  135 (110-200)
    Diastolic blood pressure (mmHg)   79 (53-118)
    Male   31 (52)
    Hypertensive   15 (25)
    Diabetic   15 (25)
  • With reference to FIG. 23, the value of the refined multifractal dimension D0 is approximately normally distributed. The mean multi-fractal dimension in this study sample is 1.447 with a standard deviation of 0.017. This is lower than the often quoted fractal dimension of 1.7 because embodiments of the present invention calculate the multifractal capacity dimension, which is lower than the diffusion limited aggregation (monofractal) dimension of 1.7.
  • Reliability estimates of embodiments of the present invention have been determined from the aforementioned study sample. Colour photographs of the same right eye retinal field of the 30 participants without hypertension and diabetes were also graded and the results compared with the identical, but black and white, photographs. Comparison was made between three graders and agreement assessed using Pearson correlation coefficients. Table 2 below shows that the intra- and inter-grader reliability estimates were generally high, with correlation of over 0.90. Reliability estimates were higher for refined fractal dimension D0 compared to the raw fractal dimension.
  • TABLE 2
    Correlation coefficient
                                      Raw multifractal   Refined multifractal
                                      dimension          dimension (D0)
    Intra-grader
    Grader 1 (1 week apart)           0.93               0.93
    Grader 2 (1 week apart)           0.95               0.93
    Grader 3 (1 week apart)           0.95               0.95
    Inter-grader
    Grader 1 vs grader 2              0.92               0.91
    Grader 1 vs grader 3              0.92               0.90
    Grader 2 vs grader 3              0.94               0.93
    TIFF photos vs JPEG photos        0.97               0.97
    Colour photos vs black & white    0.70               0.79
  • It will be noted that embodiments of the present invention can be applied to both colour images and black and white images. There is a small discrepancy between the calculated fractal dimensions, but the correlation between the fractal dimensions calculated from colour and black and white photographs is moderately high (0.70 for the raw and 0.79 for the refined dimension). TIFF and JPEG format photos had very high correlation of 0.97. Embodiments of the present invention exhibit robustness to use by different users (graders). Even with the raw multi-fractal dimension, i.e. after setting the optic disc radius, but before removing artefacts, the intra-grader reliability is high (correlation 0.93), while the refined dimension, i.e. after refining the trace image 400 to remove artefacts, shows the same correlation.
  • The correlation of the raw and refined multifractal capacity dimension D0 with a range of systemic and ocular factors was examined, including age, SBP, DBP, refractive error and arteriolar and venular calibre. The refractive error is calculated as the spherical equivalent refractive error (SER = spherical power + ½ × cylindrical power). The arteriolar and venular calibre is represented by the central retinal arteriole and venule equivalents (CRAE and CRVE respectively). The arteriolar and venular calibres were calculated using a computer-assisted method as described in Liew, G. et al. Measurement of retinal vascular calibre: issues and alternatives to using the arteriole to venule ratio. Invest Ophthalmol Vis Sci. 48, 52-57 (2007). The results are shown in Table 3 below and the numbers in the table refer to the Pearson correlation coefficients.
  • TABLE 3
    Correlation coefficients
                                     Raw            Refined         Mean           Mean
                                     multifractal   multifractal    arteriolar     venular
                                     dimension      dimension (D0)  calibre (μm)   calibre (μm)
    Age (yrs)                        −0.39          −0.41           −0.50          −0.19
    Systolic blood pressure (mmHg)   −0.52          −0.53           −0.36          −0.02
    Diastolic blood pressure (mmHg)  −0.32          −0.29           −0.27          −0.02
    Mean arteriolar calibre (μm)      0.37           0.40            1.0            0.47
    Mean venular calibre (μm)         0.21           0.24            0.47           1.0
    Right eye SER                     0.08           0.04
  • Both the refined and raw D0 showed moderate correlation with age, SBP and DBP. Of note, the refined D0 was more highly correlated with both SBP and DBP than arteriolar calibre. Refractive error had very low correlation with raw and refined D0.
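The spherical equivalent refractive error used in the correlations above follows directly from the formula given with Table 3; a trivial helper makes the arithmetic explicit (the example powers are illustrative):

```python
def spherical_equivalent(sphere, cylinder):
    """SER = spherical power + half the cylindrical power (dioptres)."""
    return sphere + 0.5 * cylinder

# e.g. a -2.00 D sphere with a -0.50 D cylinder gives SER of -2.25 D.
ser = spherical_equivalent(-2.00, -0.50)
```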
  • With reference to Table 4 below, the refined D0 was compared in participants with and without hypertension. Mean refined D0 was 0.020 (95% confidence interval 0.013 to 0.028) lower in participants with hypertension compared to those without and this difference was highly significant (p<0.0001). In Table 4, CI refers to confidence intervals and * denotes using the t-test.
  • TABLE 4
    Hypertension   No. of Eyes   Mean refined multifractal dimension, D0 (95% CI)   Difference (95% CI)      P value*
    Absent         45            1.453 (1.449-1.458)
    Present        15            1.433 (1.427-1.440)                                0.020 (0.013 to 0.028)   <0.0001
  • FIG. 24 shows a graph of refined D0 in the whole sample plotted against SBP. The inverse relationship of the refined D0 with SBP shows D0 decreasing by 0.004 per 10 mmHg increase in SBP.
  • The multi-fractal dimension shows strong correlation with SBP, DBP and age, as well as with CRAE and CRVE. Indeed, the correlation of multi-fractal dimension with SBP and DBP is even higher than that of CRAE with SBP and DBP, suggesting that calculating the multi-fractal capacity dimension is better than CRAE for detecting early changes in cardiovascular disease (CVD). Embodiments of the present invention can also detect differences in the multifractal dimension in persons with and without hypertension even in this small sample.
  • It is envisaged that the arterioles and venules of the detailed digital retinal images 300 could be isolated and the raw and refined fractal capacity dimensions calculated for arterioles only and venules only thus potentially providing an even stronger correlation between the fractal capacity dimension and vascular diseases, such as, but not limited to, diabetes and hypertension.
  • With reference to FIG. 1, when the system is configured to perform the second aspect of the invention it further comprises computer readable program code components configured for calculating a refined fractal capacity dimension of the refined trace image. The system also comprises computer readable program code components configured for comparing the calculated fractal capacity dimension with a benchmark fractal capacity dimension and computer readable program code components configured for generating an estimate for future risk of cardiovascular disease on the basis of the comparison. Hence, the diagnostic retinal image system and method of the second aspect of the present invention accurately measures the fractal capacity dimension of retinal images automatically to provide an estimate for future risk of vascular diseases, such as, but not limited to, diabetes and hypertension. The superior accuracy of fractals in describing the anatomy of the eye enables accurate measurements of the fractal capacity dimension to reveal subtle changes in the retinal vasculature and thus provide indications of vascular diseases. Embodiments of the second aspect of the invention show strong correlations even with the raw fractal capacity dimension, demonstrating that the method and system are robust to grader error.
  • DEFINITIONS
  • Throughout the specification the aim has been to describe the invention without limiting the invention to any one embodiment or specific collection of features. Persons skilled in the relevant art may realize variations from the specific embodiments that will nonetheless fall within the scope of the invention.
  • In this specification, the terms “comprises”, “comprising”, “including” or similar terms are intended to mean a non-exclusive inclusion, such that a method, system or apparatus that comprises a list of elements does not include those elements solely, but may well include other elements not listed. The term “automatic” is used here to mean that an operation is performed without human interaction (although a human may initiate the operation), whereas the term “semi-automatic” is used to describe an operation which involves human interaction with a computer processing system. An “automated” process may comprise one or more automatic operations and/or one or more semi-automatic operations, so the term “automated” is equivalent to the term “computer-implemented”.

Claims (15)

1.-28. (canceled)
29. A retinal image analysis method including:
(i) defining an optic disc for a retinal image;
(ii) automatically tracing one or more vessels of the retinal image, each vessel comprising a group of vessel segments including a root segment and a plurality of daughter segments connected to the root segment in a vessel binary tree in which at a plurality of branching points one of said vessel segments branches into two said daughter segments;
(iii) automatically generating a trace image comprising the one or more traced vessels;
(iv) using the vessel segments to calculate automatically a plurality of parameters; and
(v) outputting the plurality of parameters.
30. A method according to claim 29 in which the parameters comprise parameters which describe a corresponding one of the vessel segments, and comprise one or more of:
a mean width of the vessel segment;
a value indicative of the variation in the width of the vessel segment along at least part of the length of the vessel segment;
the length of the vessel segment; and
the tortuosity of the vessel segment.
31. A retinal image analysis method according to claim 29 comprising deriving a plurality of zones of the retinal image, said step (iv) including deriving one or more said parameters for any portions of the vessel segments which lie within each of the zones.
32. A retinal image analysis method according to claim 31 in which the zones encircle an optic disc region of the retinal image, and have different respective diameters.
33. A retinal image analysis method according to claim 29 further comprising obtaining a parameter characterizing the group of vessel segments in which the parameters characterizing the group of vessel segments are selected from the list consisting of a branching coefficient;
an asymmetry ratio;
a junctional exponent deviation;
a branching angle indicative of an average of the angles between the extension direction of the daughter segments and that of the root segment;
an angular asymmetry value indicative of the variance in said angles; and a fractal capacity dimension of the trace image.
34. A retinal image analysis method according to claim 31 in which the group of vessel segments extend within one of the zones.
35. A retinal image analysis method according to claim 29 further including classifying vessels into veins and arteries.
36. A retinal image analysis method according to claim 35 further including obtaining for at least one said zone a said parameter indicative of the width of a number of the widest veins or of the width of a number of the widest arteries in the zone.
37. A retinal image analysis method according to claim 29 including an editing step of interacting with a user to edit the trace image produced in step (iii) and/or the group of vessel segments, steps (iv) and (v) being performed and/or repeated after said editing step.
38. A retinal image analysis method according to claim 29 including indicating any of said vessel segments which are determined in step (iv) to have a variance in their width above a threshold.
39. A retinal image analysis method according to claim 29 including a step of obtaining user annotation of features of the retinal image.
40. A retinal image analysis method according to claim 29 further including generating an estimate for future risk of cardiovascular disease using the plurality of parameters.
41. A retinal image analysis system comprising:
a processor for:
(i) defining an optic disc for the retinal image;
(ii) automatically tracing one or more vessels of the retinal image, each vessel comprising a group of vessel segments including a root segment and a plurality of daughter segments connected to the root segment in a vessel binary tree in which at a plurality of branching points one of said vessel segments branches into two said daughter segments;
(iii) automatically generating a trace image comprising the one or more traced vessels;
(iv) using the vessel segments to calculate automatically a plurality of parameters; and
(v) outputting the plurality of parameters.
42. A retinal image analysis system comprising:
(i) computer readable program code components configured for defining an optic disc for a retinal image;
(ii) computer readable program code components configured for automatically tracing one or more vessels of the retinal image, each vessel comprising a group of vessel segments including a root segment and a plurality of daughter segments connected to the root segment in a vessel binary tree in which at a plurality of branching points one of said vessel segments branches into two said daughter segments;
(iii) computer readable program code components configured for automatically generating a trace image comprising the one or more traced vessels;
(iv) computer readable program code components configured for using the vessel segments to calculate automatically a plurality of parameters; and
(v) computer readable program code components configured for outputting the plurality of parameters.
US12/936,702 2008-04-08 2009-02-03 Retinal image analysis systems and methods Expired - Fee Related US8687862B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/936,702 US8687862B2 (en) 2008-04-08 2009-02-03 Retinal image analysis systems and methods

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US12336008P 2008-04-08 2008-04-08
US12/936,702 US8687862B2 (en) 2008-04-08 2009-02-03 Retinal image analysis systems and methods
PCT/SG2009/000040 WO2009126112A1 (en) 2008-04-08 2009-02-03 Retinal image analysis systems and methods

Publications (2)

Publication Number Publication Date
US20110026789A1 true US20110026789A1 (en) 2011-02-03
US8687862B2 US8687862B2 (en) 2014-04-01

Family

ID=41162106

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/936,702 Expired - Fee Related US8687862B2 (en) 2008-04-08 2009-02-03 Retinal image analysis systems and methods

Country Status (7)

Country Link
US (1) US8687862B2 (en)
EP (1) EP2262410B1 (en)
JP (1) JP5492869B2 (en)
CN (1) CN102014731A (en)
AU (1) AU2009234503B2 (en)
SG (1) SG191664A1 (en)
WO (1) WO2009126112A1 (en)

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110222731A1 (en) * 2008-11-21 2011-09-15 Henry Hacker Computer Controlled System for Laser Energy Delivery to the Retina
US20120257164A1 (en) * 2011-04-07 2012-10-11 The Chinese University Of Hong Kong Method and device for retinal image analysis
US20130265549A1 (en) * 2012-04-04 2013-10-10 Carl Zeiss Meditec Ag Method for determining at least one diagnosis or risk assessment parameter related to amd
WO2014074178A1 (en) * 2012-11-08 2014-05-15 The Johns Hopkins University System and method for detecting and classifying severity of retinal disease
ITRM20130038A1 (en) * 2013-01-22 2014-07-23 Univ Calabria METHOD FOR THE DETERMINATION OF A FRACTAL SIZE OF A PREDETERMINED AREA AROUND THE OPTICAL NERVE
US20140341426A1 (en) * 2013-05-14 2014-11-20 Kabushiki Kaisha Toshiba Image processing apparatus, image processing method and medical imaging device
US9008391B1 (en) * 2013-10-22 2015-04-14 Eyenuk, Inc. Systems and methods for processing retinal images for screening of diseases or abnormalities
US20150161783A1 (en) * 2013-12-06 2015-06-11 International Business Machines Corporation Tracking eye recovery
US20150278575A1 (en) * 2012-11-07 2015-10-01 bioMérieux Bio-imaging method
WO2015187861A1 (en) * 2014-06-03 2015-12-10 Socialeyes Corporation Systems and methods for retinopathy workflow, evaluation and grading using mobile devices
US20170178405A1 (en) * 2011-09-09 2017-06-22 Calgary Scientific Inc. Image display of a centerline of tubular structure
US9836667B2 (en) 2012-04-11 2017-12-05 University Of Florida Research Foundation, Inc. System and method for analyzing random patterns
CN108062981A (en) * 2018-01-15 2018-05-22 深港产学研基地 A kind of modeling method in the kind of the asymmetric tree of point of shape blood vessel with inter-species scale
USD833008S1 (en) 2017-02-27 2018-11-06 Glaukos Corporation Gonioscope
CN109003279A (en) * 2018-07-06 2018-12-14 东北大学 Fundus retina blood vessel segmentation method and system based on K-Means clustering labeling and naive Bayes model
WO2019013779A1 (en) * 2017-07-12 2019-01-17 Mohammed Alauddin Bhuiyan Automated blood vessel feature detection and quantification for retinal image grading and disease screening
WO2019032558A1 (en) * 2017-08-07 2019-02-14 Imago Systems, Inc. System and method for the visualization and characterization of objects in images
US10489909B2 (en) * 2016-12-13 2019-11-26 Shanghai Sixth People's Hospital Method of automatically detecting microaneurysm based on multi-sieving convolutional neural network
US10499809B2 (en) 2015-03-20 2019-12-10 Glaukos Corporation Gonioscopic devices
US10565350B2 (en) 2016-03-11 2020-02-18 International Business Machines Corporation Image processing and text analysis to determine medical condition
CN110889846A (en) * 2019-12-03 2020-03-17 哈尔滨理工大学 Diabetes retina image optic disk segmentation method based on FCM
CN111222361A (en) * 2018-11-23 2020-06-02 福州依影健康科技有限公司 Method and system for analyzing hypertension retina vascular change characteristic data
US10674906B2 (en) 2017-02-24 2020-06-09 Glaukos Corporation Gonioscopes
CN111789572A (en) * 2019-04-04 2020-10-20 奥普托斯股份有限公司 Determining hypertension levels from retinal vasculature images
US10873681B2 (en) 2016-02-08 2020-12-22 Imago Systems, Inc. System and method for the visualization and characterization of objects in images
US20210140882A1 (en) * 2015-08-19 2021-05-13 Nanoscope Technologies Llc Cancer Diagnosis by Refractive Index Multifractality
CN117078720A (en) * 2023-08-31 2023-11-17 齐鲁工业大学(山东省科学院) Fast tracking method for tubular structures fused with neural network
US20240042110A1 (en) * 2021-02-11 2024-02-08 Fresenius Medical Care Deutschland Gmbh Vessel Analysis-Based Medical System for Specifying Adjustable Values of a Blood Treatment Apparatus
US20240096489A1 (en) * 2022-09-19 2024-03-21 Alan Andrew Norman Method for Analyzing a Retinal Image of an Eye

Families Citing this family (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2792354A1 (en) 2009-03-06 2010-09-10 Bio-Tree Systems, Inc. Vascular analysis methods and apparatus
WO2011059409A1 (en) * 2009-11-16 2011-05-19 Jiang Liu Obtaining data for automatic glaucoma screening, and screening and diagnostic techniques and systems using the data
US7856135B1 (en) 2009-12-02 2010-12-21 Aibili—Association for Innovation and Biomedical Research on Light and Image System for analyzing ocular fundus images
US20110129133A1 (en) 2009-12-02 2011-06-02 Ramos Joao Diogo De Oliveira E Methods and systems for detection of retinal changes
EP3669933B1 (en) 2010-02-26 2022-08-31 Cornell University Retina prosthesis
CN103140875B (en) * 2010-08-02 2016-08-10 皇家飞利浦电子股份有限公司 System and method for multi-modal segmentation of internal tissue with live feedback
WO2012026597A1 (en) * 2010-08-27 2012-03-01 ソニー株式会社 Image processing apparatus and method
EP2611401A4 (en) 2010-08-31 2014-03-19 Univ Cornell RETINA GRAFT
US9302103B1 (en) 2010-09-10 2016-04-05 Cornell University Neurological prosthesis
DE102010042387B4 (en) * 2010-10-13 2017-10-12 Siemens Healthcare Gmbh Evaluation method for an image data set describing a vascular system
WO2012078636A1 (en) 2010-12-07 2012-06-14 University Of Iowa Research Foundation Optimal, user-friendly, object background separation
US12367578B2 (en) 2010-12-07 2025-07-22 University Of Iowa Research Foundation Diagnosis of a disease condition using an automated diagnostic model
EP2665406B1 (en) 2011-01-20 2021-03-10 University of Iowa Research Foundation Automated determination of arteriovenous ratio in images of blood vessels
WO2012122606A1 (en) * 2011-03-16 2012-09-20 Centre For Eye Research Australia Disease and disease predisposition risk indication
CN103890781B (en) 2011-08-25 2017-11-21 康奈尔大学 Retina Encoder for Machine Vision
CN104869884B (en) 2012-12-19 2017-03-08 奥林巴斯株式会社 Medical image processing device and medical image processing method
ITRM20130039A1 (en) * 2013-01-22 2014-07-23 Univ Calabria METHOD FOR THE DETERMINATION OF TORTUOSITY INDICES AND MEAN DIAMETERS OF OCULAR BLOOD VESSELS.
JP6215555B2 (en) * 2013-03-29 2017-10-18 株式会社ニデック Fundus image processing apparatus and fundus image processing program
JP6469387B2 (en) * 2014-08-26 2019-02-13 株式会社トプコン Fundus analyzer
CN104573716A (en) * 2014-12-31 2015-04-29 浙江大学 Fundus image arteriovenous retinal blood vessel classification method based on breadth-first search algorithm
CN104573712B (en) * 2014-12-31 2018-01-16 浙江大学 Arteriovenous retinal vessel classification method based on fundus images
WO2016116370A1 (en) 2015-01-19 2016-07-28 Statumanu Icp Aps Method and apparatus for non-invasive assessment of intracranial pressure
US20160278983A1 (en) * 2015-03-23 2016-09-29 Novartis Ag Systems, apparatuses, and methods for the optimization of laser photocoagulation
CN104881862B (en) * 2015-04-03 2018-08-03 南通大学 Retinal vessel tortuosity computation method based on ophthalmoscope images and application thereof
EP3291780B1 (en) 2015-04-20 2025-06-04 Cornell University Machine vision with dimensional data reduction
US9757023B2 (en) 2015-05-27 2017-09-12 The Regents Of The University Of Michigan Optic disc detection in retinal autofluorescence images
JP6922151B2 (en) * 2015-10-21 2021-08-18 株式会社ニデック Ophthalmology analyzer, ophthalmology analysis program
EP3463055A4 (en) * 2016-05-02 2020-04-01 Bio-Tree Systems, Inc. SYSTEM AND METHOD FOR DETECTING RETINAL DISEASE
CN107229937A (en) * 2017-06-13 2017-10-03 瑞达昇科技(大连)有限公司 Method and device for classifying retinal blood vessels
KR102469720B1 (en) 2017-10-31 2022-11-23 삼성전자주식회사 Electronic device and method for determining hyperemia grade of eye using the same
CN108205807A (en) * 2017-12-29 2018-06-26 四川和生视界医药技术开发有限公司 Retinal vessel image editing method and retinal vessel image editing device
WO2019156975A1 (en) 2018-02-07 2019-08-15 Atherosys, Inc. Apparatus and method to guide ultrasound acquisition of the peripheral arteries in the transverse plane
CN108447552A (en) * 2018-03-05 2018-08-24 四川和生视界医药技术开发有限公司 Retinal vessel editing method and retinal vessel editing device
EP3787480A4 (en) * 2018-04-30 2022-01-26 Atherosys, Inc. METHOD AND DEVICE FOR AUTOMATIC DETECTION OF ATHEROMA IN PERIPHERAL ARTERIES
CN108765418A (en) * 2018-06-14 2018-11-06 四川和生视界医药技术开发有限公司 Retinal arteriovenous equivalent caliber ratio detection method and detection device
JP2018198968A (en) * 2018-08-27 2018-12-20 株式会社トプコン Fundus analysis device
JP2019107485A (en) * 2019-02-26 2019-07-04 株式会社トプコン Ophthalmic examination equipment
CN109978796B (en) * 2019-04-04 2021-06-01 北京百度网讯科技有限公司 Fundus blood vessel image generation method, device, and storage medium
CN110364256A (en) * 2019-06-21 2019-10-22 平安科技(深圳)有限公司 A disease prediction system and method for blood vessel image recognition based on big data
CN110807736A (en) * 2019-07-25 2020-02-18 北京爱诺斯科技有限公司 Eye pupil image preprocessing device
KR102233768B1 (en) * 2019-11-08 2021-03-30 동아대학교 산학협력단 Method, apparatus, computer program, and computer readable medium for quantification of retinal blood vessel tortuosity by analysing fundus photographs
US11950847B1 (en) 2020-03-24 2024-04-09 AI Optics Inc. Portable medical diagnostics device with integrated artificial intelligence capabilities
CN111681242B (en) * 2020-08-14 2020-11-17 北京至真互联网技术有限公司 Retinal vessel arteriovenous distinguishing method, device and equipment
CN116157829A (en) * 2020-08-17 2023-05-23 香港大学 Method and system for automated cloud-based quantitative assessment of retinal microvasculature using optical coherence tomography angiography images
US12178392B2 (en) 2021-03-03 2024-12-31 AI Optics Inc. Interchangeable imaging modules for a medical diagnostics device with integrated artificial intelligence capabilities
CN116110589B (en) * 2022-12-09 2023-11-03 东北林业大学 A predictive method for diabetic retinopathy based on retrospective correction
CN117877692B (en) * 2024-01-02 2024-08-02 珠海全一科技有限公司 Personalized difference analysis method for retinopathy

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4109237A (en) * 1977-01-17 1978-08-22 Hill Robert B Apparatus and method for identifying individuals through their retinal vasculature patterns
US7177486B2 (en) * 2002-04-08 2007-02-13 Rensselaer Polytechnic Institute Dual bootstrap iterative closest point method and algorithm for image registration
US20080309881A1 (en) * 2007-06-15 2008-12-18 University Of Southern California Pattern analysis of retinal maps for the diagnosis of optic nerve diseases by optical coherence tomography
US7646903B2 (en) * 2005-06-22 2010-01-12 Siemens Medical Solutions Usa, Inc. System and method for path based tree matching

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05123296A (en) * 1991-11-08 1993-05-21 Canon Inc Image processor
JPH07210655A (en) * 1994-01-21 1995-08-11 Nikon Corp Ophthalmic image processing device
JPH10243924A (en) 1997-03-05 1998-09-14 Nippon Telegr & Teleph Corp <Ntt> Method for measuring the arteriovenous diameter ratio of the fundus
GB9909966D0 (en) * 1999-04-29 1999-06-30 Torsana Diabetes Diagnostics A Analysis of fundus images
WO2003020112A2 (en) * 2001-08-30 2003-03-13 Philadelphia Ophthalmic Imaging Systems System and method for screening patients for diabetic retinopathy
US8098908B2 (en) * 2004-09-21 2012-01-17 Imedos Gmbh Method and device for analyzing the retinal vessels by means of digital images
JP4624122B2 (en) * 2005-01-31 2011-02-02 株式会社トーメーコーポレーション Ophthalmic equipment
JP2007097740A (en) 2005-10-03 2007-04-19 Hitachi Omron Terminal Solutions Corp Fundus image diagnosis support device
JP2008010304A (en) 2006-06-29 2008-01-17 Toshiba Corp Composite electrolyte membrane and fuel cell
JP2008022929A (en) * 2006-07-19 2008-02-07 Gifu Univ Image analysis apparatus and image analysis program
JP5007420B2 (en) 2006-09-21 2012-08-22 タック株式会社 Image analysis system and image analysis program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Heneghan, Conor, et al. "Characterization of changes in blood vessel width and tortuosity in retinopathy of prematurity using image analysis." Medical Image Analysis 6.4 (2002): 407-429. *

Cited By (52)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8433117B2 (en) * 2008-11-21 2013-04-30 The United States Of America As Represented By The Secretary Of The Army Computer controlled system for laser energy delivery to the retina
US20110222731A1 (en) * 2008-11-21 2011-09-15 Henry Hacker Computer Controlled System for Laser Energy Delivery to the Retina
US8787638B2 (en) * 2011-04-07 2014-07-22 The Chinese University Of Hong Kong Method and device for retinal image analysis
US20120257164A1 (en) * 2011-04-07 2012-10-11 The Chinese University Of Hong Kong Method and device for retinal image analysis
TWI578977B (en) * 2011-04-07 2017-04-21 香港中文大學 Device for retinal image analysis
US10535189B2 (en) * 2011-09-09 2020-01-14 Calgary Scientific Inc. Image display of a centerline of tubular structure
US20170178405A1 (en) * 2011-09-09 2017-06-22 Calgary Scientific Inc. Image display of a centerline of tubular structure
US9179839B2 (en) * 2012-04-04 2015-11-10 Carl Zeiss Meditec Ag Method for determining at least one diagnosis or risk assessment parameter related to AMD
US20130265549A1 (en) * 2012-04-04 2013-10-10 Carl Zeiss Meditec Ag Method for determining at least one diagnosis or risk assessment parameter related to AMD
US9836667B2 (en) 2012-04-11 2017-12-05 University Of Florida Research Foundation, Inc. System and method for analyzing random patterns
US20150278575A1 (en) * 2012-11-07 2015-10-01 bioMérieux Bio-imaging method
US9576181B2 (en) * 2012-11-07 2017-02-21 Biomerieux Bio-imaging method
US20150265144A1 (en) * 2012-11-08 2015-09-24 The Johns Hopkins University System and method for detecting and classifying severity of retinal disease
WO2014074178A1 (en) * 2012-11-08 2014-05-15 The Johns Hopkins University System and method for detecting and classifying severity of retinal disease
US9775506B2 (en) * 2012-11-08 2017-10-03 The Johns Hopkins University System and method for detecting and classifying severity of retinal disease
ITRM20130038A1 (en) * 2013-01-22 2014-07-23 Univ Calabria METHOD FOR THE DETERMINATION OF A FRACTAL SIZE OF A PREDETERMINED AREA AROUND THE OPTICAL NERVE
US9811912B2 (en) * 2013-05-14 2017-11-07 Toshiba Medical Systems Corporation Image processing apparatus, image processing method and medical imaging device
US20140341426A1 (en) * 2013-05-14 2014-11-20 Kabushiki Kaisha Toshiba Image processing apparatus, image processing method and medical imaging device
US20150110368A1 (en) * 2013-10-22 2015-04-23 Eyenuk, Inc. Systems and methods for processing retinal images for screening of diseases or abnormalities
US9008391B1 (en) * 2013-10-22 2015-04-14 Eyenuk, Inc. Systems and methods for processing retinal images for screening of diseases or abnormalities
US20150161783A1 (en) * 2013-12-06 2015-06-11 International Business Machines Corporation Tracking eye recovery
US9552517B2 (en) * 2013-12-06 2017-01-24 International Business Machines Corporation Tracking eye recovery
WO2015187861A1 (en) * 2014-06-03 2015-12-10 Socialeyes Corporation Systems and methods for retinopathy workflow, evaluation and grading using mobile devices
US11019997B2 (en) 2015-03-20 2021-06-01 Glaukos Corporation Gonioscopic devices
US12279822B2 (en) 2015-03-20 2025-04-22 Glaukos Corporation Gonioscopic devices
US11826104B2 (en) 2015-03-20 2023-11-28 Glaukos Corporation Gonioscopic devices
US10499809B2 (en) 2015-03-20 2019-12-10 Glaukos Corporation Gonioscopic devices
US11019996B2 (en) 2015-03-20 2021-06-01 Glaukos Corporation Gonioscopic devices
US11530985B2 (en) * 2015-08-19 2022-12-20 Nanoscope Technologies, LLC Cancer diagnosis by refractive index multifractality
US20210140882A1 (en) * 2015-08-19 2021-05-13 Nanoscope Technologies Llc Cancer Diagnosis by Refractive Index Multifractality
US10873681B2 (en) 2016-02-08 2020-12-22 Imago Systems, Inc. System and method for the visualization and characterization of objects in images
US11734911B2 (en) 2016-02-08 2023-08-22 Imago Systems, Inc. System and method for the visualization and characterization of objects in images
US10565350B2 (en) 2016-03-11 2020-02-18 International Business Machines Corporation Image processing and text analysis to determine medical condition
US10489909B2 (en) * 2016-12-13 2019-11-26 Shanghai Sixth People's Hospital Method of automatically detecting microaneurysm based on multi-sieving convolutional neural network
US11744458B2 (en) 2017-02-24 2023-09-05 Glaukos Corporation Gonioscopes
US10674906B2 (en) 2017-02-24 2020-06-09 Glaukos Corporation Gonioscopes
USD833008S1 (en) 2017-02-27 2018-11-06 Glaukos Corporation Gonioscope
USD886997S1 (en) 2017-02-27 2020-06-09 Glaukos Corporation Gonioscope
WO2019013779A1 (en) * 2017-07-12 2019-01-17 Mohammed Alauddin Bhuiyan Automated blood vessel feature detection and quantification for retinal image grading and disease screening
US12524859B2 (en) 2017-08-07 2026-01-13 Imago Systems, Inc. System and method for the visualization and characterization of objects in images
WO2019032558A1 (en) * 2017-08-07 2019-02-14 Imago Systems, Inc. System and method for the visualization and characterization of objects in images
US11935215B2 (en) 2017-08-07 2024-03-19 Imago Systems, Inc. System and method for the visualization and characterization of objects in images
CN108062981A (en) * 2018-01-15 2018-05-22 深港产学研基地 Modeling method for intra-species and inter-species scales of fractal vascular asymmetric trees
CN109003279A (en) * 2018-07-06 2018-12-14 东北大学 Fundus retina blood vessel segmentation method and system based on K-Means clustering labeling and naive Bayes model
CN111222361A (en) * 2018-11-23 2020-06-02 福州依影健康科技有限公司 Method and system for analyzing hypertension retina vascular change characteristic data
CN111789572A (en) * 2019-04-04 2020-10-20 奥普托斯股份有限公司 Determining hypertension levels from retinal vasculature images
US12224061B2 (en) * 2019-04-04 2025-02-11 Optos Plc Determining levels of hypertension from retinal vasculature images
CN110889846A (en) * 2019-12-03 2020-03-17 哈尔滨理工大学 Diabetic retinal image optic disc segmentation method based on FCM
US20240042110A1 (en) * 2021-02-11 2024-02-08 Fresenius Medical Care Deutschland Gmbh Vessel Analysis-Based Medical System for Specifying Adjustable Values of a Blood Treatment Apparatus
US12191038B2 (en) * 2022-09-19 2025-01-07 Alan Andrew Norman Method for analyzing a retinal image of an eye
US20240096489A1 (en) * 2022-09-19 2024-03-21 Alan Andrew Norman Method for Analyzing a Retinal Image of an Eye
CN117078720A (en) * 2023-08-31 2023-11-17 齐鲁工业大学(山东省科学院) Fast tracking method for tubular structures fused with neural network

Also Published As

Publication number Publication date
AU2009234503B2 (en) 2014-01-16
US8687862B2 (en) 2014-04-01
EP2262410B1 (en) 2015-05-27
WO2009126112A1 (en) 2009-10-15
JP2011516200A (en) 2011-05-26
JP5492869B2 (en) 2014-05-14
EP2262410A1 (en) 2010-12-22
AU2009234503A1 (en) 2009-10-15
SG191664A1 (en) 2013-07-31
CN102014731A (en) 2011-04-13
EP2262410A4 (en) 2012-06-20

Similar Documents

Publication Publication Date Title
US8687862B2 (en) Retinal image analysis systems and methods
Fraz et al. QUARTZ: Quantitative Analysis of Retinal Vessel Topology and Size - An automated system for quantification of retinal vessels morphology
Martinez-Perez et al. Retinal vascular tree morphology: a semi-automatic quantification
US20240074658A1 (en) Method and system for measuring lesion features of hypertensive retinopathy
Al-Diri et al. A reference data set for retinal vessel profiles
Alam et al. OCT feature analysis guided artery-vein differentiation in OCTA
CN111712186A (en) Method and device for assisting in the diagnosis of cardiovascular disease
EP3939003B1 (en) Systems and methods for assessing a likelihood of cteph and identifying characteristics indicative thereof
Oloumi et al. Computer-aided diagnosis of plus disease via measurement of vessel thickness in retinal fundus images of preterm infants
Güven Automatic detection of age-related macular degeneration pathologies in retinal fundus images
Reethika et al. Diabetic retinopathy detection using statistical features
Lau et al. The Singapore Eye Vessel Assessment system
US20230018499A1 (en) Deep Learning Based Approach For OCT Image Quality Assurance
Morales et al. Computer-aided diagnosis software for hypertensive risk determination through fundus image processing
Agrawal et al. Quantitative analysis of research on artificial intelligence in retinopathy of prematurity
Zhang et al. Analysis of retinal vascular biomarkers for early detection of diabetes
Abtahi et al. Deep learning segmentation of periarterial and perivenous capillary-free zones in optical coherence tomography angiography
CN111291706A (en) Retina image optic disc positioning method
US20250316114A1 (en) Method for generating accurate annotation maps
Çelik Ertuğrul et al. Decision support system for diagnosing diabetic retinopathy from color fundus images
Eladawi et al. Computer-aided diagnosis system based on a comprehensive local features analysis for early diabetic retinopathy detection using OCTA
Azemin et al. High-resolution retinal imaging system: diagnostic accuracy and usability
Guijarro Heeb Evaluating quantitative biomarkers in optical coherence tomography angiography images to predict diabetic retinopathy
Park et al. Separation of left and right lungs using 3-dimensional information of sequential computed tomography images and a guided dynamic programming algorithm
Sadhukhan et al. Automatic method for Artery-vein Ratio measurement in Retinal Fundus Images using Attention-based artery-vein classification

Legal Events

Date Code Title Description
AS Assignment

Owner name: NATIONAL UNIVERSITY OF SINGAPORE, SINGAPORE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HSU, WYNNE;LEE, MONG LI;WONG, TIEN YIN;SIGNING DATES FROM 20090324 TO 20091203;REEL/FRAME:025104/0870

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551)

Year of fee payment: 4

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20220401