WO2025177280A1 - Machine learning generation of corneal refractive power map - Google Patents
Machine learning generation of corneal refractive power map
- Publication number
- WO2025177280A1 (PCT/IL2025/050178)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- interest
- target layer
- center point
- sampling points
- azimuthal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F9/00—Methods or devices for treatment of the eyes; Devices for putting in contact-lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
- A61F9/007—Methods or devices for eye surgery
- A61F9/008—Methods or devices for eye surgery using laser
- A61F9/00825—Methods or devices for eye surgery using laser for photodisruption
- A61F9/00827—Refractive correction, e.g. lenticle
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B3/00—Apparatus for testing the eyes; Instruments for examining the eyes
- A61B3/10—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions
- A61B3/107—Objective types, i.e. instruments for examining the eyes independent of the patients' perceptions or reactions for determining the shape or measuring the curvature of the cornea
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/40—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H40/00—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices
- G16H40/60—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices
- G16H40/67—ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F9/00—Methods or devices for treatment of the eyes; Devices for putting in contact-lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
- A61F9/007—Methods or devices for eye surgery
- A61F9/008—Methods or devices for eye surgery using laser
- A61F2009/00844—Feedback systems
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F9/00—Methods or devices for treatment of the eyes; Devices for putting in contact-lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
- A61F9/007—Methods or devices for eye surgery
- A61F9/008—Methods or devices for eye surgery using laser
- A61F2009/00853—Laser thermal keratoplasty or radial keratotomy
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61F—FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
- A61F9/00—Methods or devices for treatment of the eyes; Devices for putting in contact-lenses; Devices to correct squinting; Apparatus to guide the blind; Protective devices for the eyes, carried on the body or in the hand
- A61F9/007—Methods or devices for eye surgery
- A61F9/008—Methods or devices for eye surgery using laser
- A61F2009/00861—Methods or devices for eye surgery using laser adapted for treatment at a particular location
- A61F2009/00872—Cornea
Definitions
- the present invention relates to the field of machine learning.
- the target layer-of-interest is one of: the anterior surface of the cornea, the tear film, the epithelium, Bowman’s membrane, the stroma, Descemet’s membrane, the endothelium, or the posterior surface of the cornea
- the refractive procedure is selected from the group consisting of: LASIK (laser in-situ keratomileusis), photorefractive keratectomy (PRK), radial keratotomy (RK), astigmatic keratotomy (AK), automated lamellar keratoplasty (ALK), laser thermal keratoplasty (LTK), conductive keratoplasty (CK), or intracorneal ring.
- FIG. 1 is a block diagram of an exemplary system 100 which provides for automated generation of an accurate refractive power map of a selected layer of the cornea.
- FIG. 5A shows schematically the set of sampling points 502 on a cartesian plane (x, y)
- FIGS. 5B-5F schematically depict an exemplary process of rotationally selecting, about the selected center point of a 3-D cloud, a series of successive discrete azimuthal sectors.
- FIGS. 6A-6B show discrete subsets of sampling points selected at azimuth angles 90 and 45 degrees, respectively.
- a “layer-of-interest” of a human cornea may include any one or more of the anterior surface of the cornea, the tear film, the epithelium, Bowman’s membrane, the stroma, Descemet’s membrane, the endothelium, and/or the posterior surface of the cornea.
- elevation mapping of the cornea or any of its several layers can be derived from multiple ocular imaging modalities, including, but not limited to, corneal topography, corneal tomography, corneal pachymetry, and the like.
- the present technique may be particularly useful as a tool to guide practitioners in conjunction with refractive procedures used to alter or improve the refractive state of the eye, such as LASIK (laser in-situ keratomileusis), photorefractive keratectomy (PRK), radial keratotomy (RK), astigmatic keratotomy (AK), automated lamellar keratoplasty (ALK), laser thermal keratoplasty (LTK), conductive keratoplasty (CK), and/or intracorneal ring.
- the present technique may further be used to treat common vision disorders, such as myopia, hyperopia, presbyopia, astigmatism, and in assessing the fit of vision correcting lenses.
- the present technique may be realized as a standalone system or device which may receive, as input, ocular imaging data associated with a target patient from one or more ocular imaging devices, and output one or more refractive power maps for the target patient.
- the present technique may be realized as a hardware and/or software module bundled with, or incorporated into, a device, such as an ocular imaging device or a laser vision correction device, to perform the steps of one or more methods of the present technique described herein with respect thereto.
- system 100 may comprise a hardware processor 102, a random-access memory (RAM) 104, and/or one or more non-transitory computer-readable storage devices 106.
- Storage device 106 may be or may include, for example, one or more non-transitory computer-readable storage device(s), a Random Access Memory (RAM), a read-only memory (ROM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a double data rate (DDR) memory chip, a Flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short-term memory unit, a long-term memory unit, or other suitable memory or storage units.
- System 100 as described herein is only an exemplary embodiment of the present invention, and in practice may be implemented in hardware only, software only, or a combination of both hardware and software.
- System 100 may have more or fewer components and modules than shown, may combine two or more of the components, or may have a different configuration or arrangement of the components.
- System 100 may include any additional component enabling it to function as an operable computer system, such as a motherboard, data busses, power supply, a network interface card, a display, an input device (e.g., keyboard, pointing device, touch-sensitive display), etc. (not shown).
- Method 200 begins at step 202, wherein system 100 receives, as input, imaging data associated with a target cornea of a human subject, and/or a layer-of-interest within the target cornea.
- the input imaging data may be acquired using one or more ocular imaging modalities, including, but not limited to, corneal topography, corneal tomography, corneal pachymetry, and the like.
- the input imaging data represents ocular imaging results associated with the target cornea, from one or a combination of one or more of the following imaging modalities:
- Corneal tomography which images the cornea in 3-D by cross-sections using penetrating radiation.
- Ocular optical coherence tomography (OCT), which uses coherent near-infrared light to obtain depth-resolved images of the eye.
- Corneal pachymetry which is the process of measuring the thickness of the cornea.
- the input imaging data associated with the target cornea may include one or more of the following imaging and data types:
- the output data maps are color-coded with color scales that range from warm colors (red, orange, yellow), to neutrals (green), to cool colors (blue, purple).
- the color scales represent the different patterns shown in each image.
- step 204 system 100 executes image processing module 108 to process the input imaging data received in step 202, to calculate data representing an elevation or depression of a set of sampling points within the target layer-of-interest, relative to a surface that is best-fit to the target layer-of-interest.
- the target layer-of-interest may be the anterior surface of the cornea, the tear film, the epithelium, Bowman’s membrane, the stroma, Descemet’s membrane, the endothelium, and/or the posterior surface of the cornea.
- system 100 executes image processing module 108 to process one or more of the following input imaging data associated with the target layer-of-interest, which may include one or more of: axial and/or tangential maps representing curvature of the target layer-of-interest; thickness maps representing the thickness of the target layer-of-interest; and/or elevation maps representing the elevation of points about the target layer-of-interest relative to a reference shape.
- system 100 executes image processing module 108 to process the input imaging data received in step 202, to extract data representing an elevation or depression of a set of sampling points distributed about the target layer-of-interest, relative to a surface that is best-fit to the target layer-of-interest.
- the set of sampling points comprises a selected number of sampling points, e.g., between 10-10,000 points. However, in other cases, the set of sampling points may comprise fewer or more sampling points. In some embodiments, the number of sampling points in the set may be based on any suitable or desired sampling interval, selected according to a desired sampling density. In some embodiments, the sampling density may be user-selected, e.g., by a user operating system 100 via user interface 114.
- the number of sampling points in the set may be based on a variable sampling interval, which samples more densely in selected areas of the target layer-of-interest. This feature can be implemented using automatic identification of areas of rapid thickness change within a condensed space, in order to increase the sampling resolution.
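The variable sampling interval described above can be sketched in code. The following is a minimal illustration only: the function name, grid steps, and threshold are assumptions, and the patent does not specify the exact refinement rule; here, areas of rapid thickness change are detected via the gradient magnitude of a thickness map.

```python
import numpy as np

def adaptive_sample_mask(thickness_map, base_step=8, dense_step=2, grad_threshold=1.0):
    """Select sampling points on a regular grid, refining the grid where the
    layer thickness changes rapidly (a simple stand-in for identifying
    'condensed' areas of differentiated thickness)."""
    gy, gx = np.gradient(thickness_map.astype(float))
    grad_mag = np.hypot(gx, gy)

    mask = np.zeros_like(thickness_map, dtype=bool)
    mask[::base_step, ::base_step] = True          # coarse grid everywhere
    dense = np.zeros_like(mask)
    dense[::dense_step, ::dense_step] = True       # fine-grid candidates
    mask |= dense & (grad_mag > grad_threshold)    # refine only steep areas
    return mask
```

A flat thickness map yields only the coarse grid; regions of steep thickness change additionally receive the fine grid, increasing local sampling density.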
- the coordinate system may reference a selected center point of the elevation map of the target layer-of-interest.
- the selected center point of the elevation map of the target layer-of-interest may be any one of a geometric center point of the elevation map, the visual axis of the target cornea, the pupillary axis of the target cornea, the point with the maximum elevation, or any user-selected point.
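The choice of center point can be expressed as a small dispatch over the options listed above. This is a sketch: the function and strategy names are assumptions, and the visual-axis and pupillary-axis options are omitted because they would require additional instrument data not shown here.

```python
import numpy as np

def select_center(xs, ys, zs, strategy="geometric", user_point=None):
    """Return the (x, y) center point used to anchor the azimuthal sweep.

    xs, ys: planar coordinates of the sampling points.
    zs:     elevation of each sampling point."""
    if strategy == "geometric":
        # Centroid of the sampled area as the geometric center point.
        return float(np.mean(xs)), float(np.mean(ys))
    if strategy == "max_elevation":
        # Location of the point with the maximum elevation.
        i = int(np.argmax(zs))
        return float(xs[i]), float(ys[i])
    if strategy == "user" and user_point is not None:
        return user_point
    raise ValueError(f"unsupported strategy: {strategy}")
```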
- the generated 3-D cloud has a selected center point.
- the generated 3-D cloud has a selected center point which may be any one of a geometric center point of the elevation map, the visual axis of the target cornea, the pupillary axis of the target cornea, the point with the maximum elevation, or any user-selected point.
- FIG. 4 shows schematically an exemplary 3-D cloud representing elevation data for a set of sampling points distributed about a target layer-of-interest.
- the X and Y axes form a cartesian plane (x, y) which represents the area of the target layer-of-interest.
- the selected center point is represented on the cartesian plane.
- the Z axis denotes the elevation of each sampling point relative to the reference shape. The values are given in micrometers.
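As one illustration of how elevation relative to a best-fit reference shape might be computed when that shape is a sphere, the classic algebraic least-squares sphere fit can be used. Function names and the unit conversion are assumptions; the patent does not prescribe this particular fit.

```python
import numpy as np

def fit_sphere(x, y, z):
    """Algebraic least-squares sphere fit: x² + y² + z² = 2ax + 2by + 2cz + k,
    where (a, b, c) is the sphere center and k = r² − a² − b² − c²."""
    A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones_like(x)])
    f = x**2 + y**2 + z**2
    (a, b, c, k), *_ = np.linalg.lstsq(A, f, rcond=None)
    r = float(np.sqrt(k + a**2 + b**2 + c**2))
    return np.array([a, b, c]), r

def elevation_cloud(x, y, z, microns_per_unit=1000.0):
    """Signed elevation of each point above (+) or below (−) the best-fit
    sphere, converted to micrometers (the conversion factor assumes the
    input coordinates are in millimeters)."""
    center, r = fit_sphere(x, y, z)
    d = np.linalg.norm(np.column_stack([x, y, z]) - center, axis=1)
    return (d - r) * microns_per_unit
```

Points lying exactly on a sphere produce a near-zero elevation cloud; deviations from the reference sphere appear as positive or negative micrometer-scale values, matching the Z axis described above.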
- FIG. 5A shows schematically a 2-D representation of the sampling points 502 comprising the generated 3-D cloud, represented on a cartesian plane (x,y) and having a selected center point 504.
- system 100 executes refractive mapping module 110 to rotationally select, about a selected center point 504 of the 3-D cloud generated in step 206, a series of successive discrete azimuthal sectors, wherein each of the successive discrete azimuthal sectors includes one or more of sampling points 502a which lie within an area defined by the azimuthal sector within the cartesian plane (x, y) shown in FIG. 5A.
- FIGS. 5B-5F schematically depict an exemplary process by which system 100 executes refractive mapping module 110 to rotationally select, about selected center point 504 of the 3-D cloud generated in step 206, a series of successive discrete azimuthal sectors, each comprising a defined subset of the sampling points 502 which lie within the azimuthal sector.
- system 100 executes refractive mapping module 110 to apply an azimuthal vector 506 which rotates about selected center point 504 of the cartesian plane (x, y) shown in FIG. 5A (which is a planar 2-D representation of the 3-D cloud generated in step 206).
- azimuthal vector 506 can extend from selected center point 504 (which in this case is the geometric center of the cartesian plane) to substantially the periphery of the cartesian plane (x, y) shown in FIG. 5A (which represents the area of the target layer-of-interest).
- Azimuthal vector 506 can be rotated up to 360 degrees, e.g., between 1-360 degrees, about selected center point 504, to cover a specified angular sector of the area of the layer-of-interest.
- azimuthal vector 506 may represent a vector extending from selected center point 505 (which in this example is a point other than the geometric center of the cartesian plane), having a predetermined or user-selected length or radius, to cover a selected area-of-interest within the layer-of-interest.
- azimuthal vector 506 may have a length selected to cover only a predetermined portion of the cartesian plane (x, y) centered about the selected center point 504, as indicated by the dashed circle in FIG. 5C.
- the azimuthal sector 508 selects a subset of sampling points 502a (indicated by blacked-out circle in FIGS. 5B-5D) which lie within the area defined by azimuthal sector 508 within the cartesian plane (x, y) shown in FIG. 5A.
- azimuthal vector 506 may be rotated, as indicated by the arrows in FIGS. 5B and 5D, about selected center point 504 according to a selected angle a, to a next angular position.
- azimuthal vector 506 is shown as rotating in the clockwise direction; however, it may equally be rotated in the clockwise or the counterclockwise direction.
- azimuthal vector 506 again defines azimuthal sector 508, which selects a next subset comprising sampling points 502a which lie within azimuthal sector 508. This process may reiterate over the azimuthal range from 0-360 or 0-180 degrees, as the case may be, to cover the entire area of the layer-of-interest, as represented in the cartesian plane (x, y) shown in FIG. 5A.
- system 100 obtains a set of successive discrete subsets of sampling points, each defined by selected center point 504, azimuthal vector 506 length, azimuthal sector 508 width, and angular position about selected center point 504.
- the number of discrete subsets of sampling points in the set is determined by the angular interval selected for rotating azimuthal vector 506.
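The rotational selection of discrete azimuthal subsets described above can be sketched as follows. This is an illustration only: the sector width, angular step, and function name are assumptions, and the optional radius corresponds to an azimuthal vector length that covers only a portion of the plane.

```python
import numpy as np

def azimuthal_subsets(x, y, center, sector_width_deg=10.0, step_deg=10.0, radius=None):
    """Rotate an azimuthal sector about `center` and collect, for each angular
    position, the indices of sampling points falling inside the sector
    (and, optionally, within `radius` of the center)."""
    dx, dy = x - center[0], y - center[1]
    ang = np.degrees(np.arctan2(dy, dx)) % 360.0   # azimuth of each point
    dist = np.hypot(dx, dy)                         # distance from center
    subsets = []
    for start in np.arange(0.0, 360.0, step_deg):
        in_sector = (ang >= start) & (ang < start + sector_width_deg)
        if radius is not None:
            in_sector &= dist <= radius
        subsets.append(np.flatnonzero(in_sector))
    return subsets
```

The number of subsets returned equals 360 divided by the angular step, matching the statement that the subset count is determined by the rotation interval.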
- step 210 system 100 executes refractive mapping module 110 to convert each of the discrete subsets of sampling points obtained in step 208, into a 2-D representation showing each sampling point in each subset on a 2-D plane which lies along the corresponding azimuthal angle of azimuthal vector 506 associated with the particular discrete subset.
- FIG. 7A shows an exemplary 2-D representation, wherein the Y-axis represents the elevation of each sampling point, and the X-axis represents the distance relative to selected center point 504 in the cartesian plane (x, y).
- the Y-axis scale values are given in micrometers.
- system 100 executes refractive mapping module 110 to generate and store a set of 2-D representations with respect to all of the discrete subsets of sampling points obtained in step 208.
- system 100 executes refractive mapping module 110 to calculate a best-fit shape with respect to each 2-D representation generated in step 210.
- the best-fit shape is defined as an optimal geometrical shape that best fits the elevation and distance distribution of the subset of sampling points in each particular 2-D representation.
- the best-fit shape may define a sphere, part of a circle, an ellipse, a parabola, a hyperbola, a polynomial fit of degree 1 to n, a Zernike polynomial, and/or any other suitable or fitting shape.
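As one concrete instance of the best-fit shapes listed above, a circular arc can be fitted to a 2-D elevation profile with the algebraic (Kåsa) least-squares method; the radius it returns is what a subsequent diopter calculation would consume. This sketch assumes the circle is the chosen shape; the other shapes named (parabola, polynomial, Zernike) would use their own fitting routines.

```python
import numpy as np

def fit_circle_radius(d, e):
    """Algebraic (Kasa) least-squares circle fit to a 2-D profile:
    d² + e² = 2a·d + 2b·e + k, with circle center (a, b) and
    k = r² − a² − b². Returns the fitted radius r."""
    A = np.column_stack([2 * d, 2 * e, np.ones_like(d)])
    f = d**2 + e**2
    (a, b, k), *_ = np.linalg.lstsq(A, f, rcond=None)
    return float(np.sqrt(k + a**2 + b**2))
```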
- Dp is the diopter power of the shape.
- R is the radius of the shape.
- n is the refractive index of the layer-of-interest.
- step 216 system 100 executes refractive power module 112 to calculate the mean diopter power of the entire layer-of-interest, which is equivalent to the spherical equivalent dioptric power, as well as the minimum and maximum dioptric power of the layer-of-interest with their respective azimuthal axes. This enables determination of the dioptric power of the layer-of-interest. In some embodiments, this may be based on applying a simplified Munnerlyn formula to obtain an average refractive power for different refractive zones.
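The excerpt defines Dp, R, and n but does not reproduce the power formula itself. A common textbook relation for a single refracting surface is Dp = (n − n₀)/R with R in meters; the sketch below uses that relation, plus a summary of mean, minimum, and maximum power over all azimuthal fits. Treat both as assumptions rather than the patent's exact computation.

```python
import numpy as np

def diopter_power(radius_mm, n_layer, n_before=1.0):
    """Single-refracting-surface approximation: Dp = (n_layer − n_before) / R,
    with the radius converted from millimeters to meters."""
    return (n_layer - n_before) / (radius_mm * 1e-3)

def summarize_powers(radii_mm, angles_deg, n_layer):
    """Mean (spherical-equivalent), minimum, and maximum dioptric power over
    all azimuthal best-fit radii, with the axes of the extremes."""
    powers = np.array([diopter_power(r, n_layer) for r in radii_mm])
    i_min, i_max = int(np.argmin(powers)), int(np.argmax(powers))
    return {
        "mean": float(powers.mean()),
        "min": (float(powers[i_min]), angles_deg[i_min]),
        "max": (float(powers[i_max]), angles_deg[i_max]),
    }
```

For example, with the standard keratometric index 1.3375 and a 7.5 mm radius, the surface power comes out to 45 D; a flatter 7.8 mm meridian yields a lower power, so the two extremes and their axes fall out of `summarize_powers` directly.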
- the computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device.
- the computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- a non- exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device having instructions recorded thereon, and any suitable combination of the foregoing.
- a computer readable storage medium is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Rather, the computer readable storage medium is a non-transitory (i.e., non-volatile) medium.
- Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.
- the network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers.
- a network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
- Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
- the computer readable program instructions may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer may be connected to the user’s computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
- electronic circuitry including, for example, programmable logic circuitry, a field-programmable gate array (FPGA), or a programmable logic array (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.
- electronic circuitry including, for example, an application-specific integrated circuit (ASIC) may incorporate the computer readable program instructions at the time of fabrication, such that the ASIC is configured to execute these instructions without programming.
- These computer readable program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
- These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
- the computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
- each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
- each of the terms “substantially,” “essentially,” and forms thereof, when describing a numerical value, means up to a 20% deviation (namely, ±20%) from that value. Similarly, when such a term describes a numerical range, it means up to a 20% broader range (10% above that explicit range and 10% below it).
Abstract
A method comprising receiving ocular imaging data representing a target layer-of-interest of a human cornea; processing the ocular imaging data to calculate elevation data relative to a best-fit reference shape for a set of sampling points; generating a 3-D cloud indicative of the elevation and X, Y coordinates of the set of sampling points within a cartesian plane; selecting a successive plurality of discrete azimuthal sectors of the sampling points; converting the sampling points in each of the plurality of discrete azimuthal sectors into a 2-D representation; finding, for each of the 2-D representations, a best-fit shape; measuring a diopter of each of the best-fit shapes, based, at least in part, on a known refractive index of the target layer-of-interest; and calculating, from all of the measured diopters, a mean diopter power of the target layer-of-interest.
Description
MACHINE LEARNING GENERATION OF CORNEAL REFRACTIVE POWER MAP
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims priority from U.S. Application Ser. No. 63/556,821, filed February 22, 2024, entitled “MACHINE LEARNING GENERATION OF CORNEAL EPITHELIUM REFRACTIVE POWER (CERP) MAP,” the contents of which are hereby incorporated herein in their entirety by reference.
FIELD OF THE INVENTION
[0002] The present invention relates to the field of machine learning.
BACKGROUND
[0003] The measurement of the corneal epithelial thickness and the characterization of its behavior in response to changes to corneal architecture draw increasing interest in clinical practice.
[0004] The concept of segmental or layered evaluation of the corneal thickness was first introduced by Reinstein, using digital very-high-frequency ultrasound (VHF-US), and later became available using optical coherence tomography (OCT) technology. The current clinical applications of corneal epithelial mapping that have gained popularity over the past decade are focused on its ability to distinguish ectatic from non-ectatic corneas, based on their different epithelial patterns. This ability, in turn, may inform therapeutic refractive surgery, enabling more precise and individualized treatment of corneal surface disorders.
[0005] However, one of the main features of the corneal epithelial layer that still remains poorly measured is its refractive power and contribution to the overall patient refraction. Growing evidence supports the significance of the contribution of the corneal epithelium to total ocular refraction and corneal net power. However, to date, no tools are available to allow the refractive surgeon to assess preoperatively the epithelial refractive power. Such as tool may significantly improve the understanding, accuracy and predictability of the refractive procedures, especially where the epithelium undergoes significant remodeling (such as in therapeutics, LVC enhancement, etc.).
Furthermore, extensive applications could benefit from such a tool, e.g., investigating the possible impact of the epithelial refractive power on the accuracy of IOL calculation after refractive surgery.
[0006] Therefore, it would be beneficial to develop a tool which can convert the corneal epithelial thickness data of OCT epithelial maps into a refractive power epithelial map.
[0007] The foregoing examples of the related art and limitations related therewith are intended to be illustrative and not exclusive. Other limitations of the related art will become apparent to those of skill in the art upon a reading of the specification and a study of the figures.
SUMMARY OF THE INVENTION
[0008] The following embodiments and aspects thereof are described and illustrated in conjunction with systems, tools and methods which are meant to be exemplary and illustrative, not limiting in scope.
[0009] There is provided, in an embodiment, a computer-implemented method comprising: receiving, as input, ocular imaging data representing a target layer-of- interest of a human cornea; processing the ocular imaging data to calculate elevation data relative to a best-fit reference shape, for a set of sampling points in the target layer-of- interest; generating, based on the processing, a 3-D cloud indicative of the elevation and X, Y coordinates of the set of sampling points within a cartesian plane representing the target layer-of-interest; selecting a plurality of discrete subsets of the sampling points, by successively rotatably advancing an azimuthal vector at predetermined angular intervals about a selected center point of the cartesian plane, and selecting the sampling points which lie within an azimuthal sector of the cartesian plane defined relative to the azimuthal vector; converting each of the plurality of discrete subsets of sampling points into a 2-D representation; finding, for each of the 2-D representations, a best-fit shape; measuring a diopter of each of the best-fit shapes, based, at least in part, on a known refractive index of the target layer-of-interest; and calculating, from all of the measured diopters, a mean diopter power of the target layer-of-interest.
[0010] There is also provided, in an embodiment, a system comprising at least one hardware processor; and a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least
one hardware processor to: receive, as input, ocular imaging data representing a target layer-of-interest of a human cornea, process the ocular imaging data to calculate elevation data relative to a best-fit reference shape, for a set of sampling points in the target layer-of-interest, generate, based on the processing, a 3-D cloud indicative of the elevation and X, Y coordinates of the set of sampling points within a cartesian plane representing the target layer-of-interest, select a plurality of discrete subsets of the sampling points, by successively rotatably advancing an azimuthal vector at predetermined angular intervals about a selected center point of the cartesian plane, and selecting the sampling points which lie within an azimuthal sector of the cartesian plane defined relative to the azimuthal vector, convert each of the plurality of discrete subsets of sampling points into a 2-D representation, find, for each of the 2-D representations, a best-fit shape, measure a diopter of each of the best-fit shapes, based, at least in part, on a known refractive index of the target layer-of-interest, and calculate, from all of the measured diopters, a mean diopter power of the target layer-of-interest.
[0011] There is further provided, in an embodiment, a computer program product comprising a non-transitory computer-readable storage medium having program instructions embodied therewith, the program instructions executable by at least one hardware processor to: receive, as input, ocular imaging data representing a target layer- of-interest of a human cornea; process the ocular imaging data to calculate elevation data relative to a best-fit reference shape, for a set of sampling points in the target layer-of- interest; generate, based on the processing, a 3-D cloud indicative of the elevation and X, Y coordinates of the set of sampling points within a cartesian plane representing the target layer-of-interest; select a plurality of discrete subsets of the sampling points, by successively rotatably advancing an azimuthal vector at predetermined angular intervals about a selected center point of the cartesian plane, and selecting the sampling points which lie within an azimuthal sector of the cartesian plane defined relative to the azimuthal vector; convert each of the plurality of discrete subsets of sampling points into a 2-D representation; find, for each of the 2-D representations, a best-fit shape; measure a diopter of each of the best-fit shapes, based, at least in part, on a known refractive index of the target layer-of-interest; and calculate, from all of the measured diopters, a mean diopter power of the target layer-of-interest.
[0012] In some embodiments, the target layer-of-interest is one of: the anterior surface of the cornea, the tear film, the epithelium, Bowman’s membrane, the stroma, Descemet’s membrane, the endothelium, or the posterior surface of the cornea.
[0013] In some embodiments, the selected center point is one of: a geometric center point of the cartesian plane, the visual axis of the human cornea, the pupillary axis of the human cornea, the sampling point having the highest elevation in the 3-D cloud, or a user-selected point in the cartesian plane.
[0014] In some embodiments, the azimuthal sector is based on a length of the azimuthal vector and a width extending symmetrically by a predetermined distance on each side of the azimuthal vector.
[0015] In some embodiments, the azimuthal vector extends from the selected center point a predetermined length, and is rotatable up to 360 degrees about the selected center point.
[0016] In some embodiments, the azimuthal vector is centered on the selected center point and extends a predetermined length equally in opposing directions therefrom, and is rotatable up to 180 degrees about the selected center point.
[0017] In some embodiments, the plurality of sampling points is sampled based on a predetermined sampling density.
[0018] In some embodiments, the sampling density is variable.
[0019] In some embodiments, the best-fit shape is selected from the group consisting of: a sphere, part of a circle, an ellipse, a parabola, a hyperbola, a polynomial fit of degree 1 to n, or Zernike polynomials.
[0020] There is further provided, in an embodiment, a method comprising: receiving as input, ocular imaging data representing a target layer-of-interest of a human cornea; processing the ocular imaging data to generate a refractive power map of the target layer- of-interest; using the refractive power map of the target layer-of-interest in conjunction with a refractive procedure to the human cornea, or to treat a vision disorder of the human cornea.
[0021] In some embodiments, the refractive procedure is selected from the group consisting of: LASIK (laser in-situ keratomileusis), photorefractive keratectomy (PRK), radial keratotomy (RK), astigmatic keratotomy (AK), automated lamellar keratoplasty
(ALK), laser thermal keratoplasty (LTK), conductive keratoplasty (CK), or intracorneal ring.
[0022] In some embodiments, the vision disorder is selected from the group consisting of: myopia, hyperopia, presbyopia, or astigmatism.
[0023] In addition to the exemplary aspects and embodiments described above, further aspects and embodiments will become apparent by reference to the figures and by study of the following detailed description.
BRIEF DESCRIPTION OF THE FIGURES
[0024] FIG. 1 is a block diagram of an exemplary system 100 which provides for automated generation of an accurate refractive power map of a selected layer of the cornea.
[0025] FIG. 2 is a flowchart which illustrates the functional steps in a method for automated generation of an accurate refractive power map of a layer-of-interest of the cornea.
[0026] FIG. 3 shows an exemplary schematic color-coded (in grey scale) representation of an exemplary corneal thickness map of a human eye.
[0027] FIG. 4 shows an exemplary 3-D cloud representing elevation data for a set of 13 X 13 sampling points distributed about a target layer-of-interest.
[0028] FIG. 5A shows schematically the set of sampling points 502 on a cartesian plane (x, y).
[0029] FIGS. 5B-5F schematically depict an exemplary process of rotationally selecting, about the selected center point of a 3-D cloud, a series of successive discrete azimuthal sectors.
[0030] FIGS. 6A-6B show discrete subsets of sampling points selected at azimuth angles 90 and 45 degrees, respectively.
[0031] FIG. 7A shows an exemplary 2-D representation of a discrete subset of sampling points, wherein the Y-axis represents the elevation of each sampling point, and the X-axis represents the distance relative to the selected center point in the cartesian plane (x, y).
[0032] FIG. 7B schematically illustrates a best-fit curve calculated as a degree 4 (quartic) polynomial for the exemplary 2-D representation shown in FIG. 7A.
DETAILED DESCRIPTION
[0033] Disclosed is a technique, embodied in a system, computer-implemented method, and computer program product, which provides for automated generation of an accurate refractive power map of a layer-of-interest within a cornea of a human subject.
[0034] For purposes of the present disclosure, a “layer-of-interest” of a human cornea may include any one or more of the anterior surface of the cornea, the tear film, the epithelium, Bowman’s membrane, the stroma, Descemet’s membrane, the endothelium, and/or the posterior surface of the cornea.
[0035] In some embodiments, the present technique provides for automated generation of an accurate refractive power map of a layer-of-interest within a cornea, based, at least in part, on an elevation map of the layer-of-interest. In some embodiments, an elevation map of a layer-of-interest indicates where the layer-of-interest is elevated or depressed relative to a selected reference surface, such as a best fit sphere.
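By way of non-limiting illustration only, fitting a reference sphere and deriving per-point elevations may be sketched as follows in Python (the use of NumPy, the function names, and the linearized least-squares fitting approach are assumptions of this illustration, not limitations of the technique):

```python
import numpy as np

def fit_sphere(x, y, z):
    """Least-squares best-fit sphere to 3-D points, via the linearized
    sphere equation x^2 + y^2 + z^2 = 2ax + 2by + 2cz + d,
    where (a, b, c) is the center and d = r^2 - a^2 - b^2 - c^2."""
    A = np.column_stack([2 * x, 2 * y, 2 * z, np.ones_like(x)])
    f = x**2 + y**2 + z**2
    (a, b, c, d), *_ = np.linalg.lstsq(A, f, rcond=None)
    radius = np.sqrt(d + a**2 + b**2 + c**2)
    return np.array([a, b, c]), radius

def elevation(points, center, radius):
    """Signed distance of each (N, 3) point from the best-fit sphere
    surface: positive = elevated, negative = depressed."""
    return np.linalg.norm(points - center, axis=1) - radius
```

In this sketch, points lying exactly on the reference sphere have zero elevation; an elevation map is then a plot of these signed distances over the (x, y) plane.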
[0036] In some embodiments, elevation mapping of the cornea or any of its several layers can be derived from multiple ocular imaging modalities, including, but not limited to, corneal topography, corneal tomography, corneal pachymetry, and the like.
[0037] The present technique may be particularly useful as a tool to guide practitioners in conjunction with refractive procedures used to alter or improve the refractive state of the eye, such as LASIK (laser in-situ keratomileusis), photorefractive keratectomy (PRK), radial keratotomy (RK), astigmatic keratotomy (AK), automated lamellar keratoplasty (ALK), laser thermal keratoplasty (LTK), conductive keratoplasty (CK), and/or intracorneal ring. The present technique may further be used to treat common vision disorders, such as myopia, hyperopia, presbyopia, astigmatism, and in assessing the fit of vision correcting lenses.
[0038] The present technique may be realized as a standalone system or device which may receive, as input, ocular imaging data associated with a target patient from one or more ocular imaging devices, and output one or more refractive power maps for the target patient. In other exemplary realizations, the present technique may be realized as a hardware and/or software module bundled with, or incorporated into, a device, such as
an ocular imaging device or a Laser vision correction device, to perform the steps of one or more methods of the present technique described herein with respect thereto.
[0039] Reference is made to FIG. 1, which is a block diagram of an exemplary system 100 which provides for automated generation of an accurate refractive power map of a layer-of-interest of the cornea.
[0040] In some embodiments, system 100 may comprise a hardware processor 102, a random-access memory (RAM) 104, and/or one or more non-transitory computer-readable storage devices 106.
[0041] Processing module 102 may include components such as, but not limited to, one or more central processing units (CPUs), graphics processing units (GPUs), or any other suitable multi-purpose or specific processors or controllers. Processing module 102 may be operationally directly and/or indirectly connected to, and control the operation of, storage device 106 and all other components of system 100.
[0042] Storage device 106 may be or may include, for example, one or more non- transitory computer-readable storage device(s), a Random Access Memory (RAM), a read only memory (ROM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a double data rate (DDR) memory chip, a Flash memory, a volatile memory, a non-volatile memory, a cache memory, a buffer, a short term memory unit, a long term memory unit, or other suitable memory units or storage units.
[0043] In some embodiments, system 100 may store in storage device 106 software instructions or components configured to operate a processing unit (also ‘hardware processor,’ ‘CPU,’ ‘quantum computer processor,’ or simply ‘processor’), such as hardware processor 102. In some embodiments, the software components may include an operating system, including various software components and/or drivers for controlling and managing general system tasks (e.g., memory management, storage device control, power management, etc.) and facilitating communication between various hardware and software components.
[0044] System 100 may include one or more modules, such as an image processing module 108, a refractive mapping module 110, and a refractive power module 112. These modules may be implemented in hardware only, software only, or a combination of both hardware and software.
[0045] In some embodiments, system 100 may further comprise a user interface module 114, which may comprise a display monitor for displaying data and images, a control panel for controlling system 100, and/or a speaker for providing audio feedback.
[0046] System 100 as described herein is only an exemplary embodiment of the present invention, and in practice may be implemented in hardware only, software only, or a combination of both hardware and software. System 100 may have more or fewer components and modules than shown, may combine two or more of the components, or may have a different configuration or arrangement of the components. System 100 may include any additional component enabling it to function as an operable computer system, such as a motherboard, data busses, power supply, a network interface card, a display, an input device (e.g., keyboard, pointing device, touch-sensitive display), etc. (not shown). Components of system 100 may be co-located or distributed, or the system may be configured to run as one or more cloud computing ‘instances,’ ‘containers,’ ‘virtual machines,’ or other types of encapsulated software applications, as known in the art.
[0047] The instructions of system 100 will now be discussed with reference to the flowchart of FIG. 2, which illustrates the functional steps in a method 200 for automated generation of an accurate refractive power map of a layer-of-interest of the cornea.
[0048] The various steps of method 200 will be described with continuous reference to exemplary system 100 shown in FIG. 1. The various steps of method 200 may either be performed in the order they are presented or in a different order (or even in parallel), as long as the order allows for a necessary input to a certain step to be obtained from an output of an earlier step. In addition, the steps of method 200 may be performed automatically (e.g., by system 100 of FIG. 1), unless specifically stated otherwise. In addition, the steps of method 200 are set forth for exemplary purposes, and it is expected that modifications to the flowchart may be implemented as necessary or desirable.
[0049] Method 200 begins at step 202, wherein system 100 receives, as input, imaging data associated with a target cornea of a human subject, and/or a layer-of-interest within the target cornea.
[0050] In some cases, the input imaging data may be acquired using one or more ocular imaging modalities, including, but not limited to, corneal topography, corneal tomography, corneal pachymetry, and the like.
[0051] In some embodiments, the input imaging data represents ocular imaging results associated with the target cornea, from one or a combination of one or more of the following imaging modalities:
Corneal topography, which maps the anterior surface and curvature of the cornea. Corneal topography can be performed using a variety of devices known in the industry, such as the Zeiss Atlas 500, the NIDEK OPD-Scan III, Bausch and Lomb Orbscan IIz Corneal Analysis System, and more.
Scheimpflug imaging, which evaluates the front and back surfaces of the cornea. Scheimpflug imaging can evaluate corneal curvature, corneal thickness, and corneal opacities. Scheimpflug imaging can be performed using devices such as the Oculus Pentacam, Galilei ColorZ, Schwind SIRIUS+, and more.
Corneal tomography, which images the cornea in 3-D by cross-sections using penetrating radiation. One example is ocular optical coherence tomography (OCT), which uses coherent near-infrared light to obtain depth resolved images of the eye.
Corneal pachymetry, which is the process of measuring the thickness of the cornea.
[0052] In some cases, common imaging devices used by practitioners will combine one or more of these imaging modalities in a single device, such as Scheimpflug imaging and ocular OCT.
[0053] These common imaging modalities and devices produce a variety of imaging data results. For example, the input imaging data associated with the target cornea may include one or more of the following imaging and data types:
Axial and/or tangential maps which represent corneal curvature values at the anterior and/or posterior surfaces.
Thickness maps, also known as pachymetry maps, which represent the thickness of the cornea or one or more layers within the cornea, such as the epithelium, Bowman’s membrane, stroma, Descemet’s membrane, and/or the endothelium.
Elevation maps, which represent the elevation or depression of points about the plane of one or more layers of the cornea, relative to a reference shape. Typically, the reference shape is a best-fit sphere, which may be centered on the corneal
apex, the corneal line of sight, or floating through the measured corneal surface. Elevation maps indicate, using color-coding, where a point in the cornea is elevated or depressed relative to the best-fit sphere. Elevation maps can be generated for the anterior surface of the cornea, the posterior surface of the cornea, and for corneal layers such as the epithelium, Bowman’s membrane, stroma, Descemet’s membrane, and/or the endothelium.
[0054] Typically, the output data maps are color-coded with color scales that range from warm colors (red, orange, yellow), to neutrals (green), to cool colors (blue, purple). The color scales represent the different patterns shown in each image.
[0055] FIG. 3 is a schematic color-coded (in grey scale) representation of an exemplary corneal thickness map of a human eye. The bar indicates by color coding the height or thickness of the various points in the map, typically in micrometers.
[0056] In step 204, system 100 executes image processing module 108 to process the input imaging data received in step 202, to calculate data representing an elevation or depression of a set of sampling points within the target layer-of-interest, relative to a surface that is best-fit to the target layer-of-interest. As noted above, the target layer-of- interest may be the anterior surface of the cornea, the tear film, the epithelium, Bowman’s membrane, the stroma, Descemet’s membrane, the endothelium, and/or the posterior surface of the cornea.
[0057] For example, system 100 executes image processing module 108 to process one or more of the following input imaging data associated with the target layer-of-interest, which may include one or more of: axial and/or tangential maps representing curvature of the target layer-of-interest; thickness maps representing the thickness of the target layer-of-interest; and/or elevation maps representing the elevation of points about the target layer-of-interest relative to a reference shape.
[0058] In some embodiments, system 100 executes image processing module 108 to process the input imaging data received in step 202, to extract data representing an elevation or depression of a set of sampling points distributed about the target layer-of- interest, relative to a surface that is best-fit to the target layer-of-interest. In some embodiments, the set of sampling points comprises a selected number of sampling points, e.g., between 10-10,000 points. However, in other cases, the set of sampling points may comprise fewer or more sampling points. In some embodiments, the number of sampling
points in the set may be based on any suitable or desired sampling interval, selected according to a desired sampling density. In some embodiments, the sampling density may be user-selected, e.g., by a user operating system 100 via user interface 114.
[0059] In some embodiments, the number of sampling points in the set may be based on a variable sampling interval, which samples more densely in selected areas of the target layer-of-interest. This feature can be implemented by automatically identifying areas of rapid thickness variation within a condensed space, in order to increase the sampling resolution there.
[0060] In some embodiments, the data representing elevation or depression of the set of sampling points comprises, for each of the sampling points:
Elevation or depression of the sampling point relative to a surface that is best-fit to the target layer-of-interest; and coordinates of the sampling point within a cartesian coordinate system (x,y) representing the area of the layer-of-interest.
[0061] In some embodiments, the coordinates system may reference a selected center point of the elevation map of the target layer-of-interest. In some embodiments, the selected center point of the elevation map of the target layer-of-interest may be any one of a geometric center point of the elevation map, the visual axis of the target cornea, the pupillary axis of the target cornea, the point with the maximum elevation, or any user- selected point.
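By way of non-limiting illustration only, resolving the selected center point described above may be sketched as follows in Python (the function name, the mode labels, and the use of NumPy are assumptions of this illustration; axis-based centers such as the visual or pupillary axis would be supplied externally as a user-provided point):

```python
import numpy as np

def select_center(points, elev, mode="geometric", user_point=None):
    """Resolve the selected center point of the elevation map.

    points : (N, 2) array of (x, y) sampling coordinates.
    elev   : (N,) array of elevations of the sampling points.
    mode   : 'geometric' (geometric center), 'max_elevation'
             (point with the maximum elevation), or 'user'
             (user-selected point, e.g. visual or pupillary axis).
    """
    if mode == "geometric":
        return points.mean(axis=0)
    if mode == "max_elevation":
        return points[np.argmax(elev)]
    if mode == "user":
        return np.asarray(user_point, dtype=float)
    raise ValueError(f"unknown center-point mode: {mode}")
```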
[0062] In step 206, system 100 executes refractive mapping module 110 to generate a 3-D cloud representing the set of sampling points. In some embodiments, system 100 executes refractive mapping module 110 to generate a 3-D cloud representing the set of sampling points, wherein the X and Y axes of the 3-D cloud represent the coordinates (x,y) within the coordinate system assigned to each sampling point, and the Z axis represents the elevation or height of each sampling point relative to a surface that is best- fit to the target layer-of-interest.
[0063] In some embodiments, the generated 3-D cloud has a selected center point. In some embodiments, the generated 3-D cloud has a selected center point which may be any one of a geometric center point of the elevation map, the visual axis of the target
cornea, the pupillary axis of the target cornea, the point with the maximum elevation, or any user-selected point.
[0064] FIG. 4 shows schematically an exemplary 3-D cloud representing elevation data for a set of sampling points distributed about a target layer-of-interest. The X and Y axes form a cartesian plane (x, y) which represents the area of the target layer-of-interest. The selected center point is represented on the cartesian plane. The Z axis denotes the elevation of each sampling point relative to the reference shape. The values are given in micrometers.
[0065] FIG. 5A shows schematically a 2-D representation of the sampling points 502 comprising the generated 3-D cloud, represented on a cartesian plane (x,y) and having a selected center point 504.
[0066] In step 208, system 100 executes refractive mapping module 110 to rotationally select, about a selected center point 504 of the 3-D cloud generated in step 206, a series of successive discrete azimuthal sectors, wherein each of the successive discrete azimuthal sectors includes one or more sampling points 502a which lie within an area defined by the azimuthal sector within the cartesian plane (x, y) shown in FIG. 5A.
[0067] FIGS. 5B-5F schematically depict an exemplary process by which system 100 executes refractive mapping module 110 to rotationally select, about selected center point 504 of the 3-D cloud generated in step 206, a series of successive discrete azimuthal sectors, each comprising a defined subset of the sampling points 502 which lie within the azimuthal sector.
[0068] With reference to the example of FIG. 5B, system 100 executes refractive mapping module 110 to apply an azimuthal vector 506 which rotates about selected center point 504 of the cartesian plane (x, y) shown in FIG. 5A (which is a planar 2-D representation of the 3-D cloud generated in step 206).
[0069] In some embodiments, azimuthal vector 506 may be rotated about selected center point 504, which can coincide with any one of a geometric center point of the elevation map, the visual axis of the target cornea, the pupillary axis of the target cornea, the point with the maximum elevation, or any user-selected point.
[0070] As shown in FIG. 5B, azimuthal vector 506 may represent a radial vector extending a predetermined length from, and rotatable up to 360 degrees about, selected center point 504.
[0071] For example, in FIG. 5B, azimuthal vector 506 can extend from selected center point 504 (which in this case is the geometric center of the cartesian plane) to substantially the periphery of the cartesian plane (x, y) shown in FIG. 5A (which represents the area of the target layer-of-interest). Azimuthal vector 506 can be rotated up to 360 degrees, e.g., between 1-360 degrees, about selected center point 504, to cover a specified angular sector of the area of the layer-of-interest.
[0072] In other cases, as shown in FIG. 5C, azimuthal vector 506 may represent a vector extending from selected center point 504 (which in this example is a point other than the geometric center of the cartesian plane), having a predetermined or user-selected length or radius, to cover a selected area-of-interest within the layer-of-interest. For example, azimuthal vector 506 may have a length selected to cover only a predetermined portion of the cartesian plane (x, y) centered about the selected center point 504, as indicated by the dashed circle in FIG. 5C.
[0073] In yet other cases, as shown in FIG. 5D, azimuthal vector 506 can represent a vector centered on and extending equal lengths in opposing directions from selected center point 504. Azimuthal vector 506 can be rotated up to 180 degrees, e.g., between 1-180 degrees, about selected center point 504, to cover a specified angular sector of the area of the layer-of-interest.
[0074] As shown in FIGS. 5B-5D, azimuthal vector 506 defines an azimuthal sector 508, which extends the length of azimuthal vector 506, and has a width X1 (shown in FIG. 5D) extending symmetrically by a predetermined distance on each side of vector 506, which acts as the central axis of azimuthal sector 508, as indicated by the parallel dashed lines.
[0075] In each case, the azimuthal sector 508 selects a subset of sampling points 502a (indicated by blacked-out circles in FIGS. 5B-5D) which lie within the area defined by azimuthal sector 508 within the cartesian plane (x, y) shown in FIG. 5A.
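By way of non-limiting illustration only, the geometric membership test of an azimuthal sector may be sketched as follows in Python (the function name, parameterization, and use of NumPy are assumptions of this illustration; the two-sided case corresponds to a vector extending in opposing directions from the center point, requiring only 0-180 degrees of rotation):

```python
import numpy as np

def in_sector(points, center, angle_deg, length, half_width, two_sided=False):
    """Boolean mask of (N, 2) points lying within an azimuthal sector.

    The sector runs along a vector at `angle_deg` from `center`, up to
    `length` along the vector and within `half_width` on each side of it.
    """
    d = points - center
    t = np.deg2rad(angle_deg)
    u = np.array([np.cos(t), np.sin(t)])      # unit vector along the azimuth
    along = d @ u                             # signed distance along the vector
    across = d @ np.array([-u[1], u[0]])      # perpendicular distance from it
    if two_sided:
        in_length = np.abs(along) <= length
    else:
        in_length = (along >= 0) & (along <= length)
    return in_length & (np.abs(across) <= half_width)
```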
[0076] As shown in FIG. 5E, the width of azimuthal sector 508 may be predetermined or selected as desired, by changing the width of azimuthal sector 508 extending
symmetrically on each side of vector 506, to a different width X2, as indicated by the parallel dashed lines in FIG. 5E.
[0077] As shown in FIG. 5F, azimuthal vector 506 may be rotated, as indicated by the arrows in FIGS. 5B and 5D, about selected center point 504 according to a selected angle a, to a next angular position. In FIG. 5F, azimuthal vector 506 is shown as rotatable in the clockwise direction; however, azimuthal vector 506 may equally be rotated in either the clockwise or counterclockwise direction.
[0078] In its next angular position, azimuthal vector 506 again defines azimuthal sector 508, which selects a next subset comprising sampling points 502a which lie within azimuthal sector 508. This process may reiterate over the azimuthal range of 0-360 or 0-180 degrees, as the case may be, to cover the entire area of the layer-of-interest, as represented in the cartesian plane (x, y) shown in FIG. 5A.
[0079] In some cases, the length of the azimuthal vector 506 may be varied around the selected center point position 504, so as to include only a portion of the cloud (for instance, when conducting calculations for a central 3 mm radius, a central 1 mm radius, or any radius from the selected center point of the cloud). As noted above, the selected center point can coincide with any one of a geometric center point of the elevation map, the visual axis of the target cornea, the pupillary axis of the target cornea, the point with the maximum elevation, or any user-selected point.
[0080] In some cases, the length of the azimuthal vector 506 may be changed based on user selection, e.g., by a user operating system 100 via user interface 114.
[0081] Similarly, the width of the azimuthal sector 508 may be varied about the central axis defined by azimuthal vector 506, based on user selection, e.g., by a user operating system 100 via user interface 114. In one example, the width of the azimuthal sector 508 may be varied about the central axis defined by azimuthal vector 506 between 10-1,000 micrometers. However, in other cases, the width of the azimuthal sector 508 may be wider or narrower.
[0082] In addition, azimuthal vector 506 can be rotated at any desired angular interval a, for example, between 0.1-90 degrees. In some embodiments, azimuthal vector 506 can be rotated at any desired angular interval a, based on user selection, e.g., by a user operating system 100 via user interface 114.
[0083] FIGS. 6A-6B show exemplary subsets of sampling points 502a selected at two angular positions of azimuthal vector 506. The scale values are given in micrometers.
[0084] At the conclusion of step 208, system 100 obtains a set of successive discrete subsets of sampling points, each defined by selected center point 504, azimuthal vector 506 length, azimuthal sector 508 width, and angular position about selected center point 504. In some embodiments, the number of discrete subsets of sampling points in the set is determined by the angular interval selected for rotating azimuthal vector 506.
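By way of non-limiting illustration only, the full rotational sweep of step 208 may be sketched as follows in Python (the function name and the dictionary return type are assumptions of this illustration; the sector membership test is inlined so the sketch is self-contained):

```python
import numpy as np

def sweep_subsets(points, center, step_deg, length, half_width):
    """Collect discrete subsets of sampling points by rotating an
    azimuthal sector about `center` in angular steps of `step_deg`.

    Returns a dict mapping azimuth angle (degrees) to the index array
    of the points falling inside the sector at that angle.
    """
    subsets = {}
    d = points - center
    for angle in np.arange(0.0, 360.0, step_deg):
        t = np.deg2rad(angle)
        u = np.array([np.cos(t), np.sin(t)])
        along = d @ u                           # distance along the vector
        across = d @ np.array([-u[1], u[0]])    # distance across the vector
        mask = (along >= 0) & (along <= length) & (np.abs(across) <= half_width)
        subsets[float(angle)] = np.flatnonzero(mask)
    return subsets
```

The number of subsets produced equals 360 divided by the chosen angular interval, matching the observation that the subset count is determined by the rotation step.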
[0085] In step 210, system 100 executes refractive mapping module 110 to convert each of the discrete subsets of sampling points obtained in step 208, into a 2-D representation showing each sampling point in each subset on a 2-D plane which lies along the corresponding azimuthal angle of azimuthal vector 506 associated with the particular discrete subset.
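By way of non-limiting illustration only, the conversion of a discrete subset into its 2-D representation (step 210) may be sketched as follows in Python (the function name and the use of NumPy are assumptions of this illustration):

```python
import numpy as np

def project_subset(points, elev, center, angle_deg):
    """Project a subset of sampling points onto the 2-D plane lying
    along the azimuthal angle: X = signed distance from the selected
    center point along the azimuth, Y = elevation of the point.
    Points are returned sorted by distance."""
    t = np.deg2rad(angle_deg)
    u = np.array([np.cos(t), np.sin(t)])
    dist = (points - center) @ u
    order = np.argsort(dist)
    return dist[order], elev[order]
```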
[0086] FIG. 7A shows an exemplary 2-D representation, wherein the Y-axis represents the elevation of each sampling point, and the X-axis represents the distance relative to selected center point 504 in the cartesian plane (X, F). The Y-axis scale values are given in micrometers.
[0087] At the conclusion of step 210, system 100 executes refractive mapping module 110 to generate and store a set of 2-D representations with respect to all of the discrete subsets of sampling points obtained in step 208.
[0088] In step 212, system 100 executes refractive mapping module 110 to calculate a best-fit shape with respect to each 2-D representation generated in step 210. In some embodiments, the best-fit shape is defined as an optimal geometrical shape that best fits the elevation and distance distribution of the subset of sampling points in each particular 2-D representation. In some embodiments, the best-fit shape may define a sphere, part of a circle, an ellipse, a parabola, a hyperbola, a polynomial fit of degree 1 to n, a Zernike polynomial, and/or any other suitable shape.
[0089] In some embodiments, system 100 executes refractive mapping module 110 to calculate a best-fit shape with respect to each 2-D representation generated in step 210, based on any suitable optimization method or algorithm, such as minimizing the least-squares (mean squared) distance, or the like.
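For the polynomial case, a least-squares fit of this kind can be sketched with NumPy. This is a hypothetical illustration; the disclosed module may use any of the listed shape families and optimization methods:

```python
import numpy as np

def best_fit_polynomial(distance, elevation, degree=4):
    # Least-squares fit of elevation vs. distance for one 2-D representation.
    # degree=4 corresponds to the quartic fit illustrated in FIG. 7B; other
    # degrees (1 to n), or other shape families, may be substituted.
    coefficients = np.polyfit(distance, elevation, degree)
    return np.poly1d(coefficients)
```

np.polyfit minimizes the sum of squared residuals, i.e. the least-squares distance between the fitted curve and the sampling points of the 2-D representation.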
[0090] FIG. 7B schematically illustrates a best-fit curve calculated as a degree 4 (quartic) polynomial, for the 2-D representation shown in FIG. 7A. The Y-axis scale values are given in micrometers.
[0091] At the conclusion of step 212, system 100 executes refractive mapping module 110 to generate and store a best-fit shape with respect to each of the 2-D representations generated in step 210.
[0092] In step 214, system 100 executes refractive power module 112 to measure the diopter power of each of the best-fit shapes calculated in step 212, based on the known refractive index of the target layer-of-interest (for example, the refractive index of the epithelium is approximately 1.401, while the refractive index of the stroma is approximately 1.377). The diopter power is calculated from the radius of curvature and the refractive index:

Dp = (n - 1) / R

where:
Dp is the diopter power of the shape,
R is the radius of the shape (expressed in meters), and
n is the refractive index of the layer-of-interest.
[0093] In the case of best-fit shapes that do not represent a constant radius, different approaches may be employed, such as measuring the local curvature of the fitted shape at the points closest to the measured discrete sampling points, and then calculating the average diopter power from the dioptric power at each such point.
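Both cases can be sketched as follows, assuming the standard relation Dp = (n - 1) / R between diopter power, refractive index n, and radius of curvature R in meters, and using the local radius of curvature R(x) = (1 + y'(x)^2)^(3/2) / |y''(x)| of a fitted polynomial for the non-constant-radius case. The function names are hypothetical:

```python
import numpy as np

def diopter_from_radius(radius_m, refractive_index):
    # Dp = (n - 1) / R, with the radius R expressed in meters.
    return (refractive_index - 1.0) / radius_m

def mean_diopter_of_fit(poly, distances_m, refractive_index):
    # For a best-fit shape without a constant radius: evaluate the local
    # radius of curvature R(x) = (1 + y'(x)**2)**1.5 / |y''(x)| at the
    # locations closest to the measured sampling points, then average the
    # resulting dioptric powers.
    d1, d2 = poly.deriv(1), poly.deriv(2)
    y1, y2 = d1(distances_m), d2(distances_m)
    radii = (1.0 + y1**2) ** 1.5 / np.abs(y2)
    return float(np.mean((refractive_index - 1.0) / radii))
```

For example, with the conventional keratometric index 1.3375 and a radius of 7.8 mm, diopter_from_radius(0.0078, 1.3375) yields approximately 43.3 diopters.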
[0094] In step 216, system 100 executes refractive power module 112 to calculate the mean diopter power of the entire layer-of-interest, which is equivalent to the spherical equivalent dioptric power, as well as the minimum and maximum dioptric power of the layer-of-interest with their respective azimuthal axes. This enables determination of the dioptric power of the layer-of-interest. In some embodiments, this may be based on applying a simplified Munnerlyn formula to obtain an average refractive power for different refractive zones.
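The aggregation of step 216 can be sketched as follows. This is a hypothetical illustration: it assumes one mean dioptric power per azimuthal subset as input, and the names are not taken from the disclosure:

```python
import numpy as np

def summarize_power_map(angles_deg, powers):
    # angles_deg: angular position of each azimuthal subset.
    # powers: mean dioptric power measured for the corresponding subset.
    powers = np.asarray(powers, dtype=float)
    i_min, i_max = int(np.argmin(powers)), int(np.argmax(powers))
    return {
        # Mean over all meridians: the spherical equivalent dioptric power.
        "spherical_equivalent": float(np.mean(powers)),
        # Flattest and steepest meridians with their respective azimuthal axes.
        "min_power": float(powers[i_min]), "min_axis_deg": angles_deg[i_min],
        "max_power": float(powers[i_max]), "max_axis_deg": angles_deg[i_max],
    }
```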
[0095] The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage
medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.
[0096] The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire. Rather, the computer readable storage medium is a non-transient (i.e., not-volatile) medium.
[0097] Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.
[0098] Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any
combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user’s computer, partly on the user’s computer, as a stand-alone software package, partly on the user’s computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user’s computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, a field-programmable gate array (FPGA), or a programmable logic array (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention. In some embodiments, electronic circuitry including, for example, an application-specific integrated circuit (ASIC), may incorporate the computer readable program instructions at the time of fabrication, such that the ASIC is configured to execute these instructions without programming.
[0099] Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.
[0100] These computer readable program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored
therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
[0101] The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
[0102] The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
[0103] In the description and claims, each of the terms “substantially,” “essentially,” and forms thereof, when describing a numerical value, means up to a 20% deviation (namely, ±20%) from that value. Similarly, when such a term describes a numerical range, it means up to a 20% broader range (10% over that explicit range and 10% below it).
[0104] In the description, any given numerical range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range, such that each such subrange and individual numerical value constitutes an embodiment of the invention. This applies regardless of the breadth of the range. For example, description of a range of integers from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6, etc., as well as individual numbers within that range,
for example, 1, 4, and 6. Similarly, description of a range of fractions, for example from 0.6 to 1.1, should be considered to have specifically disclosed subranges such as from 0.6 to 0.9, from 0.7 to 1.1, from 0.9 to 1, from 0.8 to 0.9, from 1 to 1.1, etc., as well as individual numbers within that range, for example 0.7, 1, and 1.1.
[0105] The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the explicit descriptions. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.
[0106] In the description and claims of the application, each of the words “comprise,” “include,” and “have,” as well as forms thereof, are not necessarily limited to members in a list with which the words may be associated.
[0107] Where there are inconsistencies between the description and any document incorporated by reference or otherwise relied upon, it is intended that the present description controls.
Claims
1. A computer-implemented method comprising: receiving, as input, ocular imaging data representing a target layer-of-interest of a human cornea; processing the ocular imaging data to calculate elevation data relative to a best-fit reference shape, for a set of sampling points in the target layer-of-interest; generating, based on said processing, a 3-D cloud indicative of said elevation and X, Y coordinates of said set of sampling points within a cartesian plane representing said target layer-of-interest; selecting a plurality of discrete subsets of said sampling points, by successively rotatably advancing an azimuthal vector at predetermined angular intervals about a selected center point of said cartesian plane, and selecting said sampling points which lie within an azimuthal sector of said cartesian plane defined relative to said azimuthal vector; converting each of said plurality of discrete subsets of sampling points into a 2-D representation; finding, for each of said 2-D representations, a best-fit shape; measuring a diopter of each of said best-fit shapes, based, at least in part, on a known refractive index of said target layer-of-interest; and calculating, from all of said measured diopters, a mean diopter power of said target layer-of-interest.
2. The computer-implemented method of claim 1, wherein said target layer-of-interest is one of: the anterior surface of the cornea, the tear film, the epithelium, Bowman’s membrane, the stroma, Descemet’s membrane, the endothelium, or the posterior surface of the cornea.
3. The computer-implemented method of any one of claims 1 or 2, wherein said selected center point is one of: a geometric center point of said cartesian plane, the visual axis of said human cornea, the pupillary axis of the human cornea, said sampling point having the highest elevation in said 3-D cloud, or a user-selected point in said cartesian plane.
4. The computer-implemented method of any one of claims 1-3, wherein said azimuthal sector is based on a length of said azimuthal vector and a width extending symmetrically by a predetermined distance on each side of said azimuthal vector.
5. The computer-implemented method of any one of claims 1-4, wherein said azimuthal vector extends from said selected center point a predetermined length, and is rotatable up to 360 degrees about said selected center point.
6. The computer-implemented method of any one of claims 1-4, wherein said azimuthal vector is centered on said selected center point and extends a predetermined length equally in opposing directions therefrom, and is rotatable up to 180 degrees about said selected center point.
7. The computer-implemented method of any one of claims 1-6, wherein said plurality of sampling points is sampled based on a predetermined sampling density.
8. The computer-implemented method of claim 7, wherein said sampling density is variable.
9. The computer-implemented method of any one of claims 1-8, wherein said best-fit shape is selected from the group consisting of: a sphere, part of a circle, ellipse, parabola, hyperbola, a polynomial fit of degree 1 to n, or Zernike polynomials.
10. A system comprising: at least one hardware processor; and a non-transitory computer-readable storage medium having stored thereon program instructions, the program instructions executable by the at least one hardware processor to: receive, as input, ocular imaging data representing a target layer-of-interest of a human cornea, process the ocular imaging data to calculate elevation data relative to a best-fit reference shape, for a set of sampling points in the target layer-of-interest, generate, based on said processing, a 3-D cloud indicative of said elevation and X, Y coordinates of said set of sampling points within a cartesian plane representing said target layer-of-interest,
select a plurality of discrete subsets of said sampling points, by successively rotatably advancing an azimuthal vector at predetermined angular intervals about a selected center point of said cartesian plane, and selecting said sampling points which lie within an azimuthal sector of said cartesian plane defined relative to said azimuthal vector, convert each of said plurality of discrete subsets of sampling points into a 2-D representation, find, for each of said 2-D representations, a best-fit shape, measure a diopter of each of said best-fit shapes, based, at least in part, on a known refractive index of said target layer-of-interest, and calculate, from all of said measured diopters, a mean diopter power of said target layer-of-interest.
11. The system of claim 10, wherein said target layer-of-interest is one of: the anterior surface of the cornea, the tear film, the epithelium, Bowman’s membrane, the stroma, Descemet’s membrane, the endothelium, or the posterior surface of the cornea.
12. The system of any one of claims 10 or 11, wherein said selected center point is one of: a geometric center point of said cartesian plane, the visual axis of said human cornea, the pupillary axis of the human cornea, said sampling point having the highest elevation in said 3-D cloud, or a user-selected point in said cartesian plane.
13. The system of any one of claims 10-12, wherein said azimuthal sector is based on a length of said azimuthal vector and a width extending symmetrically by a predetermined distance on each side of said azimuthal vector.
14. The system of any one of claims 10-13, wherein said azimuthal vector extends from said selected center point a predetermined length, and is rotatable up to 360 degrees about said selected center point.
15. The system of any one of claims 10-13, wherein said azimuthal vector is centered on said selected center point and extends a predetermined length equally in opposing directions therefrom, and is rotatable up to 180 degrees about said selected center point.
16. The system of any one of claims 10-15, wherein said plurality of sampling points is sampled based on a predetermined sampling density.
17. The system of claim 16, wherein said sampling density is variable.
18. The system of any one of claims 10-17, wherein said best-fit shape is selected from the group consisting of: a sphere, part of a circle, ellipse, parabola, hyperbola, a polynomial fit of degree 1 to n, or Zernike polynomials.
19. A computer program product comprising a non-transitory computer-readable storage medium having program instructions embodied therewith, the program instructions executable by at least one hardware processor to: receive, as input, ocular imaging data representing a target layer-of-interest of a human cornea; process the ocular imaging data to calculate elevation data relative to a best-fit reference shape, for a set of sampling points in the target layer-of-interest; generate, based on said processing, a 3-D cloud indicative of said elevation and X, Y coordinates of said set of sampling points within a cartesian plane representing said target layer-of-interest; select a plurality of discrete subsets of said sampling points, by successively rotatably advancing an azimuthal vector at predetermined angular intervals about a selected center point of said cartesian plane, and selecting said sampling points which lie within an azimuthal sector of said cartesian plane defined relative to said azimuthal vector; convert each of said plurality of discrete subsets of sampling points into a 2-D representation; find, for each of said 2-D representations, a best-fit shape; measure a diopter of each of said best-fit shapes, based, at least in part, on a known refractive index of said target layer-of-interest; and calculate, from all of said measured diopters, a mean diopter power of said target layer-of-interest.
20. The computer program product of claim 19, wherein said target layer-of-interest is one of: the anterior surface of the cornea, the tear film, the epithelium, Bowman’s membrane, the stroma, Descemet’s membrane, the endothelium, or the posterior surface of the cornea.
21. The computer program product of any one of claims 19 or 20, wherein said selected center point is one of: a geometric center point of said cartesian plane, the visual axis of said human cornea, the pupillary axis of the human cornea, said sampling point having the highest elevation in said 3-D cloud, or a user-selected point in said cartesian plane.
22. The computer program product of any one of claims 19-21, wherein said azimuthal sector is based on a length of said azimuthal vector and a width extending symmetrically by a predetermined distance on each side of said azimuthal vector.
23. The computer program product of any one of claims 19-22, wherein said azimuthal vector extends from said selected center point a predetermined length, and is rotatable up to 360 degrees about said selected center point.
24. The computer program product of any one of claims 19-22, wherein said azimuthal vector is centered on said selected center point and extends a predetermined length equally in opposing directions therefrom, and is rotatable up to 180 degrees about said selected center point.
25. The computer program product of any one of claims 19-24, wherein said plurality of sampling points is sampled based on a predetermined sampling density.
26. The computer program product of claim 25, wherein said sampling density is variable.
27. The computer program product of any one of claims 19-26, wherein said best-fit shape is selected from the group consisting of: a sphere, part of a circle, ellipse, parabola, hyperbola, a polynomial fit of degree 1 to n, or Zernike polynomials.
28. A method comprising: receiving as input, ocular imaging data representing a target layer-of-interest of a human cornea; processing said ocular imaging data to generate a refractive power map of said target layer-of-interest;
and using said refractive power map of said target layer-of-interest in conjunction with a refractive procedure on said human cornea, or to treat a vision disorder of said human cornea.
29. The method of claim 28, wherein said refractive procedure is selected from the group consisting of: LASIK (laser in-situ keratomileusis), photorefractive keratectomy (PRK), radial keratotomy (RK), astigmatic keratotomy (AK), automated lamellar keratoplasty (ALK), laser thermal keratoplasty (LTK), conductive keratoplasty (CK), or intracorneal ring.
30. The method of claim 28, wherein said vision disorder is selected from the group consisting of: myopia, hyperopia, presbyopia, or astigmatism.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US202463556821P | 2024-02-22 | 2024-02-22 | |
| US63/556,821 | 2024-02-22 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2025177280A1 (en) | 2025-08-28 |
Family
ID=96846565
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/IL2025/050178 (WO2025177280A1, pending) | Machine learning generation of corneal refractive power map | 2024-02-22 | 2025-02-20 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2025177280A1 (en) |
Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20030055412A1 (en) * | 1998-10-02 | 2003-03-20 | Scientific Optics, Inc. | Method for diagnosing and improving vision |
| US20030236515A1 (en) * | 1995-10-18 | 2003-12-25 | Scientific Optics, Inc. | Method and apparatus for improving vision |
| US20060189966A1 (en) * | 2002-06-03 | 2006-08-24 | Scientific Optics, Inc. | Method and system for improving vision |
| US20070291228A1 (en) * | 2006-05-01 | 2007-12-20 | University Of Southern California | Gaussian fitting on mean curvature maps of parameterization of corneal ectatic diseases |
| US20110077734A1 (en) * | 2006-03-08 | 2011-03-31 | Scientific Optics, Inc. | Method and Apparatus for Universal Improvement of Vision |
| US20110112805A1 (en) * | 1995-10-18 | 2011-05-12 | Scientific Optics, Inc. | Method and apparatus for improving vision |
| US20160015262A1 (en) * | 2011-03-03 | 2016-01-21 | David M. Lieberman | Method and system for improving vision of an eye with macular degeneration |
| US20180000342A1 (en) * | 2016-06-30 | 2018-01-04 | Oregon Health And Science University | Diagnostic classification of corneal shape abnormalities |
| US20180035884A1 (en) * | 2013-07-29 | 2018-02-08 | Bioptigen, Inc. | Methods of Performing Surgery Using Optical Coherence Tomography (OCT) |
| US20190209006A1 (en) * | 2017-01-11 | 2019-07-11 | University Of Miami | Segmentation-based corneal mapping |
| US20200245865A1 (en) * | 2019-02-06 | 2020-08-06 | Jichi Medical University | Method for assisting corneal severity identification using unsupervised machine learning |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 25758263; Country of ref document: EP; Kind code of ref document: A1 |