Disclosure of Invention
In view of the above, embodiments of the present application provide a positioning method for an eye surgery guiding mark, an interface display method, an apparatus and a device, so as to overcome or at least partially solve the above problems.
In a first aspect of the embodiments of the present application, a positioning method for an eye surgery guiding mark is disclosed, including:
acquiring a target eyeball image of a patient in a surgical pose;
determining a target eye key point set according to the target eyeball image;
determining a target corresponding relation between the non-operative pose and the operative pose of the same eye key point according to the target eye key point set and a reference eye key point set, wherein the reference eye key point set is obtained according to a reference eyeball image of the patient in the non-operative pose;
and determining a target eye guiding mark in the target eyeball image according to the target corresponding relation and a reference eye guiding mark.
Optionally, determining the target eye guiding mark in the target eyeball image according to the target corresponding relation and the reference eye guiding mark includes:
acquiring a reference mark scale of the full-circumference limbus of the patient's eye in the reference eyeball image;
determining a target mark scale of the full-circumference limbus of the patient's eye in the target eyeball image according to the target corresponding relation and the reference mark scale;
and determining the target eye guiding mark according to the target mark scale and the reference eye guiding mark.
Optionally, acquiring a target eyeball image of the patient in the surgical pose includes:
acquiring a starting frame eyeball image from an eye video of the patient in the surgical pose;
determining a target eye key point set according to the target eyeball image includes:
performing eye key point detection on the starting frame eyeball image to obtain the target eye key point set.
Optionally, acquiring a target eyeball image of the patient in the surgical pose includes:
acquiring a current frame eyeball image from an eye video of the patient in the surgical pose, wherein the current frame eyeball image is any frame eyeball image other than the starting frame eyeball image in the eye video;
determining a target eye key point set according to the target eyeball image includes:
performing eye key point prediction according to the current frame eyeball image and a historical frame eyeball image preceding the current frame eyeball image to obtain the target eye key point set, wherein the historical frame eyeball image includes at least the frame immediately preceding the current frame eyeball image.
Optionally, determining a target eye key point set according to the target eyeball image further includes:
verifying a candidate eye key point set, wherein the candidate eye key point set is a target eye key point set obtained through eye key point prediction;
and performing eye key point detection on the current frame eyeball image to obtain a final target eye key point set when the candidate eye key point set fails the verification.
Optionally, verifying the candidate eye key point set obtained through eye key point prediction includes:
comparing the candidate eye key point set with the reference eye key point set;
determining that the candidate eye key point set fails the verification when the position offset between two key points representing the same eye key point in the candidate eye key point set and the reference eye key point set is larger than an offset threshold;
and determining that the candidate eye key point set passes the verification when that position offset is not larger than the offset threshold.
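As a minimal, non-authoritative sketch of the verification rule above, assuming key points are 2-D pixel coordinates and the candidate and reference sets are index-aligned so that same-index points represent the same eye key point:

```python
import numpy as np

def verify_candidate_keypoints(candidate, reference, offset_threshold):
    """Return True (verification passed) only if every pair of
    same-index key points is offset by no more than the threshold."""
    offsets = np.linalg.norm(
        np.asarray(candidate, float) - np.asarray(reference, float), axis=1
    )
    return bool(np.all(offsets <= offset_threshold))
```

When this check fails, the current frame falls back to eye key point detection, as described above; the threshold value itself is an implementation choice not fixed by the application.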
Optionally, the target eye key point set includes at least one target eye key point, the reference eye key point set includes at least one reference eye key point, and determining the target corresponding relation between the non-surgical pose and the surgical pose of the same eye key point according to the target eye key point set and the reference eye key point set includes:
performing eye key point matching on the target eye key point set and the reference eye key point set to obtain at least one eye key point pair, wherein each eye key point pair includes a target eye key point and a reference eye key point that represent the same eye key point;
and determining the target corresponding relation based on the at least one eye key point pair.
Optionally, the method further comprises:
performing eye key point detection on the reference eyeball image to obtain the reference eye key point set.
Optionally, the method further comprises:
performing image segmentation on the reference eyeball image to obtain a reference image area, wherein the reference image area represents the cornea area of the patient;
and filtering out points located outside the reference image area from an estimated eye key point set obtained through eye key point detection, to obtain the reference eye key point set.
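The segmentation-based filtering above could be sketched as follows, assuming the segmentation result is provided as a boolean mask of the same size as the reference eyeball image (the segmentation model itself is left unspecified by the application):

```python
import numpy as np

def filter_keypoints_by_region(keypoints, region_mask):
    """Keep only estimated key points that fall inside the segmented
    cornea region; region_mask is a boolean H x W array."""
    h, w = region_mask.shape
    kept = []
    for x, y in keypoints:
        col, row = int(round(x)), int(round(y))
        # Discard points outside the image bounds or outside the cornea area.
        if 0 <= col < w and 0 <= row < h and region_mask[row, col]:
            kept.append((x, y))
    return kept
```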
Optionally, the reference eye guiding mark is obtained in any one, or a combination of at least two, of the following manners:
obtained based on corneal topography data of the target eye in the non-surgical pose;
obtained by fitting based on the reference eyeball image;
and obtained based on user configuration information.
Optionally, the method further comprises:
and obtaining a target eye guiding mark based on the target eyeball image.
In a second aspect of the embodiments of the present application, an interface display method for eye surgery is disclosed, applied to an XR device, including:
displaying, in a display space of the XR device, a target eye guiding mark on a target eyeball image in real time, wherein the target eye guiding mark in the target eyeball image is determined according to the positioning method for an eye surgery guiding mark according to the first aspect of the embodiments of the present application.
Optionally, the method further comprises:
displaying, in the display space of the XR device, a target mark scale of the limbus of the patient's eye in real time.
In a third aspect of the embodiments of the present application, a positioning device for an eye surgery guiding mark is disclosed, including:
a first acquisition module, configured to acquire a target eyeball image of a patient in a surgical pose;
a first determining module, configured to determine a target eye key point set according to the target eyeball image;
a second determining module, configured to determine a target corresponding relation between the non-surgical pose and the surgical pose of the same eye key point according to the target eye key point set and a reference eye key point set, wherein the reference eye key point set is obtained according to a reference eyeball image of the patient in the non-surgical pose;
and a third determining module, configured to determine a target eye guiding mark in the target eyeball image according to the target corresponding relation and a reference eye guiding mark.
In a fourth aspect of the embodiments of the present application, an electronic device is disclosed, including a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the processor, when executing the computer program, implements the steps of the positioning method for an eye surgery guiding mark according to the first aspect of the embodiments of the present application, or the steps of the interface display method for eye surgery according to the second aspect of the embodiments of the present application.
In a fifth aspect of the embodiments of the present application, a computer-readable storage medium is disclosed, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of the positioning method for an eye surgery guiding mark according to the first aspect of the embodiments of the present application, or the steps of the interface display method for eye surgery according to the second aspect of the embodiments of the present application.
In a sixth aspect of the embodiments of the present application, a computer program product is disclosed, including a computer program, where the computer program, when executed by a processor, implements the steps of the positioning method for an eye surgery guiding mark according to the first aspect of the embodiments of the present application, or the steps of the interface display method for eye surgery according to the second aspect of the embodiments of the present application.
The embodiment of the application has the following advantages:
In the embodiments of the present application, the target corresponding relation between the non-surgical pose and the surgical pose of the same eye key point is determined through the target eye key point set of the patient's target eyeball image in the surgical pose and the reference eye key point set, so that real-time alignment and anchoring of the non-surgical pose and the surgical pose are realized based on the eye key points; the eye guiding mark is then determined in the target eyeball image based on the target corresponding relation and the reference eye guiding mark determined in the non-surgical pose, so as to realize real-time positioning of the intraoperative eye guiding mark. Because positioning is realized based on the target eye key point set in the surgical pose and the reference eye key point set in the non-surgical pose, subjective errors in the operator's manipulation are reduced, the stability and accuracy of the operation are improved, the preoperative preparation time is shortened, the surgical workflow is simplified, scratching of the patient's limbus by a needle during preoperative physical marking is avoided, and the patient's discomfort and infection risk are reduced.
Detailed Description
In order that the above objects, features and advantages of the present application may be readily apparent, a more particular description of embodiments of the application will be rendered by reference to the appended drawings. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort shall fall within the scope of the application.
The existing methods for marking the axial position of the crystalline lens and the incision mainly include the following three: marking the axial position with a marker pen before surgery, marking according to a special position before surgery, and marking with an in-microscope navigation system. Specifically:
1) Marking the axial position with a preoperative marker pen means that, with the patient in a sitting position, the patient's head is placed in the fixation frame of a slit lamp, and the slit lamp's built-in scale is used for preoperative positioning. Specifically, the slit lamp is first adjusted to a light band, the light band is then rotated to the required axial position according to the slit lamp's scale, a needle is used to make scratches on both sides of the limbus indicated by the light band, and the scratches are stained with the marker pen. Alternatively, only the horizontal (180-degree) and vertical (90-degree) positions are marked; during surgery, an astigmatism marking disc is then used, and the axial position where the intraocular lens is to be placed is marked with a marker pen according to the marked horizontal and vertical positions. This method has the following defects: a) the slit lamp's built-in dial is coarse, with scale intervals of 10 degrees, and its scale positioning pointer is thick (spanning about 10 degrees), so marking under the slit lamp is imprecise; b) human error easily occurs when the astigmatism disc is aligned to the horizontal and vertical positions during surgery, and the intraoperative marker pen is thick, so the span it occupies easily causes errors; c) the marking is completed by the operator's hand and eye; although existing designs add features such as a level meter to reduce preoperative errors, different markers still introduce human factors, and the marker's parallax and alignment easily cause errors; d) errors easily arise from poor patient cooperation, since cataract patients are mostly elderly, with poor cooperation and poor hearing, so head-position cooperation is error-prone; and e) the method is generally an invasive operation.
2) Marking according to a special position before surgery means that existing devices position the graduations according to special blood vessel positions on the limbus of the affected cornea and sclera or on the conjunctiva before surgery, and position the intraocular lens during surgery with the astigmatism marking disc according to the graduation difference of those special blood vessel positions. This method has the following defects: a) variability among patients is large, so the selection of special positions easily causes errors; b) the marking scale of the astigmatism marking disc used by the operator during surgery suffers from both disc alignment error and marker pen error; and c) poor patient cooperation during surgery easily causes errors.
3) Marking with an in-microscope navigation system has the following defects: a) the cost is high, as the patient must pay an additional navigation fee, increasing the economic burden; and b) the axial position marking line of the navigation system deviates from the plane of the crystalline lens, so the operator's parallax easily causes errors in the axial alignment of the intraocular lens.
In addition, the centering of a multifocal intraocular lens must be confirmed after implantation, which requires the center point of the lens to coincide with the center of the capsular bag. The current methods are: 1) instructing the patient to gaze at the microscope light source and observing the degree of coincidence between the reflection point of the lens and the center point of the multifocal central ring; this method suffers from poor patient cooperation and from the large reflection spots on the lens surface; and 2) having the in-microscope navigation system display the center of the optical zone, instructing the patient to gaze at the microscope light source, and observing the degree of coincidence between the reflection point of the lens and the optical zone center shown by the navigation system; this method is costly, and the deviation between the display plane and the lens plane causes the operator's parallax to bias the assessment of lens centering.
Therefore, the existing methods for determining the axial position of the crystalline lens and the position of the surgical incision are invasive to the patient's eye and time-consuming, and depend heavily on patient cooperation and the operator's manipulation, so that differences among individual operators, in the surgical procedure, and in patient cooperation lead to poor stability and accuracy of the operation.
In order to overcome the limitations of the related art, the embodiments of the present application provide a positioning method for an eye surgery guiding mark, which realizes alignment and anchoring of the non-surgical pose and the surgical pose through eye key points, and provides personalized intraoperative positioning and guidance of the patient's steep axis, flat axis and surgical incision (eye guiding marks) without any preoperative physical marking, thereby realizing real-time accurate positioning of the intraoperative eye guiding mark and increasing the stability and accuracy of intraoperative manipulation.
The following describes in detail a positioning method of an eye surgery guide mark according to an embodiment of the present application with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart illustrating the steps of a positioning method for an eye surgery guiding mark according to an embodiment of the present application. As shown in fig. 1, the positioning method for the eye surgery guiding mark may include steps S110 to S140:
Step S110, acquiring a target eyeball image of the patient in the surgical pose.
In the embodiments of the present application, the target eyeball image refers to a real-time eyeball image of the patient in the surgical pose, where the surgical pose usually refers to a lying position.
In some embodiments, the target eyeball image may be acquired by an image acquisition device (e.g., a camera on an XR device). For example, an eye video in the surgical pose may be captured in real time by the image acquisition device, and the target eyeball image may be the most recent frame of the eye video; alternatively, the current eyeball image may be captured directly by the image acquisition device.
Step S120, determining a target eye key point set according to the target eyeball image.
In the embodiments of the present application, each target eye key point in the target eye key point set may be a feature point corresponding to the eyeball limbus in the target eyeball image, or a feature point corresponding to the cornea boundary or the corneal limbus in the target eyeball image.
In some embodiments, the target eye key point set may be obtained by applying a key point extraction algorithm to the target eyeball image, where the key point extraction algorithm may be the maximally stable extremal regions (MSER) algorithm or the neural-network-based SuperPoint algorithm. In some embodiments, the target eye key point set may be obtained by predicting target eye key points for the target eyeball image using a neural network model (e.g., a time-series Transformer model).
Step S130, determining a target corresponding relation between the non-surgical pose and the surgical pose of the same eye key point according to the target eye key point set and a reference eye key point set, wherein the reference eye key point set is obtained according to a reference eyeball image of the patient in the non-surgical pose.
In the embodiments of the present application, the reference eyeball image is an eyeball image of the patient in the non-surgical pose, acquired before surgery. The non-surgical pose differs from the surgical pose and is the pose of the patient before surgery; the reference eyeball image is generally acquired with the patient in a sitting or standing position during a preoperative eye examination. In some embodiments, an XR device (that is, an Extended Reality device, which generates a combined real-and-virtual environment through computer technology and wearable hardware and realizes human-computer interaction) may be used to photograph the eye to obtain the reference eyeball image in the non-surgical pose.
The reference eye key points of the reference eye key point set may be feature points corresponding to the limbus of the eyeball in the reference eyeball image or feature points corresponding to the cornea boundary or the limbus in the reference eyeball image. In some embodiments, the reference eye keypoint set may be obtained by performing an eye keypoint extraction on the reference eyeball image by a keypoint extraction algorithm.
The target eye key point set is the eye key point set corresponding to the target eyeball image in the surgical pose, and the reference eye key point set is the eye key point set corresponding to the reference eyeball image in the non-surgical pose. By aligning each eye key point in the target eye key point set with the corresponding key point in the reference eye key point set, the target corresponding relation of the same eye key point between the non-surgical pose and the surgical pose can be determined; that is, the non-surgical pose and the surgical pose are anchored to each other in real time based on the eye key points.
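As an illustrative, non-authoritative sketch of such an alignment, assuming the target corresponding relation is modeled as a 2-D affine transform fitted to matched key point pairs (the application itself does not fix a particular transform model):

```python
import numpy as np

def estimate_correspondence(ref_pts, tgt_pts):
    """Fit a 2x3 affine transform M that maps reference-pose key points
    to surgical-pose key points in the least-squares sense."""
    ref = np.asarray(ref_pts, float)
    tgt = np.asarray(tgt_pts, float)
    A = np.hstack([ref, np.ones((len(ref), 1))])  # homogeneous coordinates, (N, 3)
    M, *_ = np.linalg.lstsq(A, tgt, rcond=None)   # (3, 2) solution
    return M.T                                     # (2, 3) affine matrix

def map_points(M, pts):
    """Apply the estimated correspondence to reference-pose points."""
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]
```

With the transform in hand, any position defined in the reference eyeball image (such as a reference guiding mark) can be mapped into the current target eyeball image frame by frame.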
Step S140, determining a target eye guiding mark in the target eyeball image according to the target corresponding relation and the reference eye guiding mark.
In the embodiments of the present application, the reference eye guiding mark is used for guidance during surgery, and may also be used for guidance outside surgery (e.g., during an eye examination). The reference eye guiding mark includes the axial position of the crystalline lens, the surgical incision position, the white-to-white center position of the eyeball, the steep axis and/or the flat axis. The surgical incision position can be adjusted according to the operator's needs; for example, the surgical incision position may be the 120-degree position on the reference mark scale of the full-circumference limbus in the reference eye guiding mark, and the determined target eye guiding mark then corresponds to the 120-degree position on the target mark scale of the full-circumference limbus, as shown in fig. 2. The axial position of the crystalline lens is obtained from corneal topography data; for example, it may be the 45-degree position on the reference mark scale of the full-circumference limbus in the reference eye guiding mark, and the determined target eye guiding mark then corresponds to the 45-degree position on the target mark scale of the full-circumference limbus, as shown in fig. 2.
When performing eye surgery (for example, refractive cataract surgery), an astigmatism-correcting intraocular lens must be placed at the correct position according to the axial position, and the centering of a multifocal intraocular lens must be determined through the white-to-white center position of the eyeball.
The reference eye guiding mark is the eye guiding mark determined in the non-surgical pose, and real-time alignment and anchoring of the non-surgical pose and the surgical pose can be realized based on the target corresponding relation, so the target eye guiding mark in the target eyeball image is determined according to the target corresponding relation and the reference eye guiding mark. Specifically, the relative position of the reference eye guiding mark in the reference eyeball image can be obtained, and the target eye guiding mark in the target eyeball image is determined according to that relative position and the target corresponding relation. In this way, real-time positioning of the intraoperative eye guiding mark is realized, and the operator completes the eye surgery based on the target eye guiding mark.
By adopting the technical solution of the embodiments of the present application, the target corresponding relation between the non-surgical pose and the surgical pose of the same eye key point is determined through the target eye key point set of the patient's target eyeball image in the surgical pose and the reference eye key point set, so that real-time alignment and anchoring of the non-surgical pose and the surgical pose are realized based on the eye key points; the eye guiding mark is then determined in the target eyeball image based on the target corresponding relation and the reference eye guiding mark determined in the non-surgical pose, thereby realizing real-time positioning of the intraoperative eye guiding mark. Because positioning is realized based on the target eye key point set in the surgical pose and the reference eye key point set in the non-surgical pose, subjective errors in the operator's manipulation are reduced, the stability and accuracy of the operation are improved, the preoperative preparation time is shortened, the surgical workflow is simplified, scratching of the patient's limbus by a needle during preoperative physical marking is avoided, and the patient's discomfort and infection risk are reduced.
In practical applications, the steps of the positioning method for the eye surgery guiding mark can be implemented through an XR device (e.g., a virtual reality device, an augmented reality device, or a mixed reality device), so that the XR device can serve the navigation function of the target eye guiding mark in ophthalmic refractive surgery, and the XR device can realize real-time alignment and anchoring of the reference eyeball image and the target eyeball image according to the eye key points.
Specifically, before surgery, the eye may be photographed by a camera on the XR device to obtain the reference eyeball image, and/or the patient's corneal topography may be examined in the non-surgical pose with a corneal topographer to obtain corneal topography data. The reference eyeball image and/or the corneal topography data are then imported into the XR device. During surgery, the operator performs the ophthalmic procedure with the XR device (e.g., wearing a head-mounted XR device); specifically, the XR device displays the target eye guiding mark in its display space (display field of view) by performing the method of steps S110 to S140 described above, and the operator performs the procedure based on the display space of the XR device.
Taking a virtual reality device as an example, as shown in fig. 2, fig. 2 illustrates the display field of view (i.e., the display space) of the virtual reality device, in which a magnified eye video of the patient's eye can be displayed, that is, each frame of the target eyeball image is displayed in real time. In addition, at least one of the eye surgery guiding marks of the axial position of the crystalline lens, the surgical incision position, and the white-to-white center position of the eyeball can be superimposed and displayed in real time on each frame of the target eyeball image. The operator places an astigmatism-correcting intraocular lens at the axial position of the lens and determines the centering of a multifocal intraocular lens through the white-to-white center position of the eyeball. In addition, at least one of a plurality of circles centered on the white-to-white center of the eyeball (which can guide the surgeon in performing capsulorhexis) and the steep axis and flat axis drawn from the white-to-white center of the eyeball can also be superimposed and displayed in real time on each frame of the target eyeball image.
In combination with the above embodiments, in an implementation manner, the embodiments of the present application further provide a positioning method for an eye surgery guiding mark. In this method, the reference eye guiding mark is obtained in any one, or a combination of at least two, of the following manners:
H-1, obtained based on corneal topography data of the target eye in the non-surgical pose;
H-2, obtained by fitting based on the reference eyeball image;
H-3, obtained based on user configuration information.
In the embodiments of the present application, the reference eye guiding mark includes the axial position of the crystalline lens, the surgical incision position, the white-to-white center position of the eyeball, the steep axis and/or the flat axis, and the reference eye guiding mark can be obtained in any one, or a combination of at least two, of the manners of items H-1 to H-3.
Specifically, for item H-1, the axial position, the steep axis and the flat axis can be obtained from target corneal topography data, where the target corneal topography data can be obtained by examining the patient's corneal topography in the non-surgical pose with a corneal topographer. As shown in fig. 3, fig. 3 is a corneal topography map obtained through image acquisition and analysis by a corneal topographer, and corneal topography data such as the axial position, the steep axis and the flat axis can be obtained from the corneal topography map. In some embodiments, the corneal topography may also be obtained through anterior segment imaging.
For item H-2, the white-to-white center position of the eyeball can be obtained by fitting the reference eyeball image. For example, the white-to-white center position can be fitted algorithmically from the captured reference eyeball image: specifically, the reference eye key point set is obtained from the reference eyeball image, a reference circle (e.g., a circle fitted to the limbus) is fitted from the reference eye key point set, and the center of the reference circle is then taken as the white-to-white center position of the eyeball.
For item H-3, the surgical incision position may be determined based on user configuration information; e.g., 120 degrees may be configured as the surgical incision position.
By adopting the technical solution of the embodiments of the present application, the reference eye guiding mark can be determined in one or more of these manners, so that the target eye guiding mark in the target eyeball image is determined based on the reference eye guiding mark to guide the eye surgery.
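The circle fitting described for item H-2 could be realized with an algebraic (Kåsa) least-squares fit; a sketch, assuming the reference eye key points are 2-D limbus coordinates (the application does not prescribe a specific fitting algorithm):

```python
import numpy as np

def fit_circle(points):
    """Algebraic least-squares circle fit (Kasa method).
    Returns (cx, cy, r); (cx, cy) serves as the fitted
    white-to-white center position of the eyeball."""
    pts = np.asarray(points, float)
    x, y = pts[:, 0], pts[:, 1]
    # Circle equation rearranged as a linear system:
    # 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2) = x^2 + y^2
    A = np.column_stack([2 * x, 2 * y, np.ones(len(pts))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, float(np.sqrt(c + cx ** 2 + cy ** 2))
```

The same routine would apply unchanged to the target eye key point set when the white-to-white center is fitted directly in the surgical pose.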
In combination with the above embodiments, in an implementation manner, the embodiments of the present application further provide a positioning method for an eye surgery guiding mark. The method further includes the following step:
obtaining the target eye guiding mark based on the target eyeball image in the surgical pose.
In the embodiments of the present application, the target eye guiding mark (for example, the white-to-white center position of the eyeball) does not need to rely on the reference eye guiding mark and is determined directly from the target eyeball image. For example, target eye key point fitting is performed on the target eyeball image to obtain the white-to-white center position of the eyeball: specifically, the target eye key point set is obtained from the target eyeball image, a target circle (e.g., a circle fitted to the limbus) is fitted from the target eye key point set, and the center of the target circle is then taken as the white-to-white center position of the eyeball.
In combination with the above embodiments, in an implementation manner, the embodiments of the present application further provide a positioning method for an eye surgery guiding mark. In this method, "determining the target eye guiding mark in the target eyeball image according to the target corresponding relation and the reference eye guiding mark" in step S140 specifically includes the following steps S140-1 to S140-3:
Step S140-1, acquiring a reference mark scale of the full-circumference limbus of the patient's eye in the reference eyeball image.
Step S140-2, determining a target mark scale of the full-circumference limbus of the patient's eye in the target eyeball image according to the target corresponding relation and the reference mark scale.
Step S140-3, determining the target eye guiding mark according to the target mark scale and the reference eye guiding mark.
In the embodiments of the present application, the position of a guiding mark can be represented on the 360-degree scale of the full-circumference limbus of the eye, where the reference mark scale refers to the 360-degree scale of the full-circumference limbus in the non-surgical pose, and the target mark scale refers to the 360-degree scale of the full-circumference limbus in the surgical pose. The 0-degree position of the 360-degree scale is fixed; for example, in fig. 2, the right side of the horizontal direction is 0 degrees, and the scale increases counterclockwise to 360 degrees.
By determining the target mark scale of the full-circumference limbus of the eye of the patient in the target eyeball image according to the target corresponding relation and the reference mark scale, alignment anchoring of the full-circumference limbus scales of the eye between the non-operative pose and the operative pose is realized, so that the target eye guiding mark in the operative pose can be determined based on the target mark scale and the reference eye guiding mark. For example, taking the target eye guiding mark as the axial position of the lens and the surgical incision position shown in fig. 2, it can be determined that the axial position of the lens lies at 45 degrees and the surgical incision position at 120 degrees on the target mark scale of the full-circumference limbus of the eye in fig. 2.
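The scale transfer can be illustrated with a deliberately simplified sketch, assuming for illustration that the target corresponding relation reduces to a single in-plane eye rotation `rotation_deg` (the general correspondence of the embodiment may be more complex):

```python
def to_target_scale(ref_angle_deg, rotation_deg):
    """Map an angle on the reference mark scale (0-360 degrees, counterclockwise,
    0 at the horizontal right) to the target mark scale, given the eye's
    in-plane rotation between the non-surgical and surgical poses."""
    return (ref_angle_deg + rotation_deg) % 360.0

# Example: the eye is rotated 7 degrees counterclockwise in the surgical pose,
# so marks placed preoperatively shift by the same angle on the target scale.
lens_axis = to_target_scale(45.0, 7.0)
incision = to_target_scale(120.0, 7.0)
```

Because every reference angle is mapped through the same correspondence, the whole 360-degree scale stays anchored between the two poses.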
By adopting the technical scheme of the embodiment of the application, the alignment anchoring of the eye full-circumference limbus scales of the non-operative pose and the operative pose is realized, namely, the direct positioning from objective inspection to operation in operation is realized, the subjective error in operation of an operator is reduced, and the stability and the accuracy of operation in operation are improved.
In combination with the above embodiment, in an implementation manner, the embodiment of the application further provides a positioning method of the eye surgery guiding mark. In this method, the following steps are further included before step S130 is performed:
And detecting eye key points of the reference eyeball image to obtain the reference eye key point set.
Specifically, the eye key point detection is performed on the reference eyeball image, and the eye key point detection can be performed through a key point extraction algorithm, so that a reference eye key point set is obtained, wherein the reference eye key point set comprises at least one reference eye key point.
The key point extraction algorithm may be the Maximally Stable Extremal Regions (MSER) algorithm or the neural-network-based SuperPoint algorithm. SuperPoint is a self-supervised key point detection and descriptor generation algorithm that can extract high-quality feature points and their descriptors (i.e., eye key points) from the reference eyeball image without manual annotation; its network structure comprises a shared encoder and two decoders for key point detection and descriptor generation, so that key point detection and feature description are completed simultaneously to obtain the reference eye key point set.
In some embodiments, the method further comprises the steps of:
Specifically, the reference eyeball image is acquired and image segmentation is performed on it to obtain a reference image region, where the reference image region characterizes the cornea region of the patient; points located outside the reference image region in the predicted eye key point set obtained through eye key point detection are then filtered out to obtain the reference eye key point set.
The image segmentation of the reference eyeball image can be realized through an image segmentation algorithm. After eye key point detection is performed on the reference eyeball image to obtain a predicted eye key point set, eye key point screening is still needed: the eye key points located in the reference image region are taken as the reference eye key point set, so as to ensure that the selected eye key points lie within the cornea region of the patient and avoid interference from non-cornea regions such as eyelids and eyelashes.
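The screening step can be sketched as follows, where `mask` stands for a hypothetical binary cornea segmentation (nonzero inside the reference image region):

```python
import numpy as np

def filter_keypoints(keypoints, mask):
    """Keep only the predicted key points that fall inside the cornea mask.

    keypoints: sequence of (x, y) pixel coordinates.
    mask: 2-D array, nonzero inside the reference image region (cornea).
    """
    kept = []
    h, w = mask.shape
    for x, y in np.asarray(keypoints, dtype=int):
        # mask is indexed (row, column), i.e. (y, x)
        if 0 <= x < w and 0 <= y < h and mask[y, x]:
            kept.append((int(x), int(y)))
    return kept

# Toy example: a 100x100 image whose cornea region is the central square.
mask = np.zeros((100, 100), dtype=np.uint8)
mask[30:70, 30:70] = 1
predicted = [(50, 50), (10, 10), (65, 40), (95, 95)]  # mix of inside/outside
reference_keypoints = filter_keypoints(predicted, mask)
```

Only points inside the cornea region survive, so eyelid or eyelash responses are discarded before matching.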
In some embodiments, the eye key points in the cornea region may also be obtained by manually selecting eye key points from the predicted eye key point set obtained through eye key point detection, and the selected eye key points are taken as the reference eye key point set.
In combination with the above embodiment, in an implementation manner, the embodiment of the application further provides a positioning method of the eye surgery guiding mark. In the method, the set of target eye keypoints comprises at least one target eye keypoint and the set of reference eye keypoints comprises at least one reference eye keypoint.
Specifically, the step S130 of determining the target corresponding relation between the non-operative pose and the operative pose of the same eye key point according to the target eye key point set and the reference eye key point set includes the following two sub-steps:
First, eye key point matching is performed on the target eye key point set and the reference eye key point set to obtain at least one eye key point pair, wherein each eye key point pair comprises a target eye key point and a reference eye key point which characterize the same eye key point.
Second, the target corresponding relation is determined based on the at least one eye key point pair.
In the embodiment of the application, the target eye key point set and the reference eye key point set are matched with each other by using a matching algorithm, and, according to the elliptical shape constraint of the cornea, mismatched eye key point pairs are rejected and reliable eye key point pairs are determined.
The matching algorithm may be the random sample consensus (RANSAC) algorithm, an iterative algorithm for parameter estimation that can estimate model parameters from the target eye key point set and the reference eye key point set. In eye key point matching, the RANSAC algorithm randomly selects a subset to fit a model, evaluates consistency, and obtains an optimal model through iterative optimization, so that mismatched eye key point pairs are rejected based on the optimal model.
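A minimal RANSAC sketch is shown below, assuming the model being fitted is a rigid (rotation plus translation) transform between matched key point sets; the embodiment's actual model and constraints (for example, the elliptical cornea constraint) may differ.

```python
import numpy as np

def rigid_from_pairs(src, dst):
    """Least-squares rotation + translation mapping src points onto dst."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # keep a proper rotation, not a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dc - R @ sc
    return R, t

def ransac_rigid(src, dst, n_iter=200, thresh=2.0, seed=0):
    """RANSAC: repeatedly fit on a random minimal sample (2 pairs), keep the
    transform with the most inliers, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=2, replace=False)
        R, t = rigid_from_pairs(src[idx], dst[idx])
        resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = resid < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    R, t = rigid_from_pairs(src[best_inliers], dst[best_inliers])
    return R, t, best_inliers

# Synthetic key point pairs: 30-degree rotation plus translation, 3 mismatches.
rng = np.random.default_rng(1)
ref = rng.uniform(0, 100, size=(20, 2))
ang = np.deg2rad(30)
R_true = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
tgt = ref @ R_true.T + np.array([5.0, -3.0])
tgt[:3] += 40.0                       # three mismatched pairs (outliers)
R, t, inliers = ransac_rigid(ref, tgt)
```

The three mismatched pairs receive large residuals under the consensus model and are rejected, leaving a reliable estimate of the pose change.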
For the target eye key point and the reference eye key point in an eye key point pair, the target eye key point in the target eyeball image in the surgical pose and the reference eye key point in the reference eyeball image in the non-surgical pose represent the same physical eye key point. Therefore, the target corresponding relation between the non-surgical pose and the surgical pose of the same eye key point can be determined based on the alignment of the target eye key point and the reference eye key point in each eye key point pair.
Therefore, the non-operative pose and the operative pose are positioned and anchored in real time based on the eye key points, and the eye guide mark is determined in the target eyeball image based on the target corresponding relation and the reference eye guide mark determined under the non-operative pose, so that the real-time positioning of the intra-operative eye guide mark is realized.
In combination with the above embodiment, in an implementation manner, the embodiment of the application further provides a positioning method of the eye surgery guiding mark. In this method, the target eyeball image is one frame of eyeball image in the eye video of the patient in the surgical pose. During the operation, the eye video acquired in real time needs to be processed frame by frame, that is, a corresponding target eye key point set is determined for each frame of eyeball image. In this way, the target corresponding relation between the non-surgical pose and the surgical pose of the same eye key point can be determined based on the reference eye key point set and the target eye key point set determined from the current frame eyeball image (the target eyeball image), realizing real-time and dynamic position anchoring of the eye key points between the non-surgical pose and the surgical pose.
The following describes the method of acquiring the target eyeball images at different moments and the method of determining the target eye key point set.
(1) The step S110 of acquiring the target eyeball image of the patient in the operation pose specifically comprises acquiring a starting frame eyeball image of the eye video of the patient in the operation pose.
The eye video in the surgical pose refers to real-time eye video in the surgical process, the eye video in the surgical pose can be acquired through an image acquisition device (for example, a camera on an XR device), and a starting frame eyeball image in the eye video in the surgical pose is taken as a target eyeball image at the beginning of the surgery.
Correspondingly, the step S120 of determining the target eye key point set according to the target eyeball image specifically comprises the step of detecting the eye key point of the initial frame eyeball image to obtain the target eye key point set.
In the embodiment of the application, the eye key point detection can be carried out on the target eyeball image (initial frame eyeball image) at the beginning of the operation through a key point extraction algorithm so as to obtain a corresponding target eye key point set, wherein the key point extraction algorithm can adopt the same algorithm as the eye key point detection carried out on the reference eyeball image so as to ensure that each detected target eyeball key point can correspond to the reference eye key point.
Therefore, the target eye key points in the target eyeball image can be rapidly and accurately detected at the beginning of an operation, so that the non-operation pose and the operation pose are positioned and anchored in real time based on the target eye key points and the reference eye key points, and the real-time positioning of the intra-operation eye guiding mark is realized.
(2) The step S110 of acquiring the target eyeball image of the patient in the surgical pose specifically comprises the steps of acquiring a current frame eyeball image of an eye video of the patient in the surgical pose, wherein the current frame eyeball image is any frame eyeball image except a starting frame eyeball image of the eye video;
In order to ensure the real-time performance of the positioning of the guide mark, in the operation process, the frame-by-frame processing is required to be carried out on the eye video under the operation pose acquired in real time, and the latest frame eyeball image (namely the current frame eyeball image) in the eye video under the operation pose is taken as the target eyeball image.
Correspondingly, the step S120 of determining the target eye key point set according to the target eyeball image specifically comprises the step of predicting the eye key point according to the current frame eyeball image and a history frame eyeball image before the current frame eyeball image to obtain the target eye key point set, wherein the history frame eyeball image at least comprises a previous frame eyeball image of the current frame eyeball image.
In the embodiment of the present application, the history frame eyeball image may be the previous frame eyeball image of the current frame eyeball image, or a plurality of preceding frame eyeball images including the previous frame eyeball image of the current frame eyeball image.
The eye key point prediction specifically comprises inputting the current frame eyeball image, the history frame eyeball image before the current frame eyeball image and the target eye key point set corresponding to the history frame eyeball image into a temporal tracking model, and predicting, by the temporal tracking model, the target eye key point set corresponding to the current frame eyeball image, so as to realize real-time tracking of the eye key points. The temporal tracking model may be a temporal Transformer model, a sequence model based on the attention mechanism that can capture long-range dependencies in time-series data. In the process of tracking the eye key points, the temporal Transformer model predicts the positions of the eye key points in the current frame eyeball image by modeling the position changes of the feature points over time, thereby realizing accurate tracking of the eye key points.
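A temporal Transformer is beyond a short sketch; as an illustrative stand-in only, the prediction step can be shown with a constant-velocity extrapolation over the two most recent history frames (an assumption made here purely for illustration, not the model of the embodiment):

```python
import numpy as np

def predict_keypoints(prev_prev, prev):
    """Predict current-frame key point positions from the two most recent
    history frames, assuming each key point moves with constant velocity.
    This is a simple stand-in for the temporal tracking model."""
    prev_prev = np.asarray(prev_prev, dtype=float)
    prev = np.asarray(prev, dtype=float)
    return prev + (prev - prev_prev)   # extrapolate one frame ahead

# Two history frames of three eye key points drifting right by 2 px per frame.
frame_t2 = [(10.0, 20.0), (30.0, 40.0), (50.0, 60.0)]
frame_t1 = [(12.0, 20.0), (32.0, 40.0), (52.0, 60.0)]
candidate = predict_keypoints(frame_t2, frame_t1)
```

The predicted set is the candidate eye key point set, which is then verified as described below before being accepted as the target eye key point set.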
In some embodiments, determining the target eye keypoint set from the target eyeball image further includes the following steps A1 and A2:
and A1, checking a candidate eye key point set, wherein the candidate eye key point set is a target eye key point set obtained through eye key point prediction.
And A2, under the condition that the candidate eye key point set is not verified, performing eye key point detection on the current frame eyeball image to obtain a final target eye key point set.
In the embodiment of the application, in order to ensure the accuracy and stability of eye key point tracking, a candidate eye key point set obtained through eye key point prediction needs to be checked. The checking of the candidate eye key point set refers to checking of tracking conditions of the candidate eye key point set, namely checking of differences between the candidate eye key point set and the reference eye key point set.
If the difference between the candidate eye key point set and the reference eye key point set is greater than the difference threshold, the tracking of the candidate eye key point set is inaccurate (the candidate eye key point set does not pass the verification), the candidate eye key point set cannot be used as the target eye key point set, and eye key point detection needs to be performed on the current frame eyeball image by using the key point extraction algorithm, with the detected eye key points taken as the target eye key point set. If the difference between the candidate eye key point set and the reference eye key point set is not greater than the difference threshold, the tracking of the candidate eye key point set is accurate (the candidate eye key point set passes the verification), and the candidate eye key point set is taken as the target eye key point set.
Specifically, the candidate eye key point set is compared with the reference eye key point set. In a case where the positional offset between two key points characterizing the same eye key point in the candidate eye key point set and the reference eye key point set is greater than an offset threshold, it is determined that the candidate eye key point set does not pass the verification; in a case where the positional offset between the two key points characterizing the same eye key point is not greater than the offset threshold, it is determined that the candidate eye key point set passes the verification.
In the embodiment of the application, the offset threshold is flexibly determined according to the required accuracy of the key point tracking. If the positional offset between two key points characterizing the same eye key point is greater than the offset threshold, the tracking of the reference eye key point by the predicted eye key point is inaccurate, and it is determined that the candidate eye key point set does not pass the verification; if the positional offset is not greater than the offset threshold, the tracking is accurate, and it is determined that the candidate eye key point set passes the verification.
It may be appreciated that, in a case where there are a plurality of eye key points (i.e., the candidate eye key point set includes a plurality of candidate eye key points, and the reference eye key point set includes a plurality of reference eye key points), whether the candidate eye key point set passes the verification may be comprehensively determined according to the verification results of the plurality of candidate eye key points. Specifically, there are a plurality of key point pairs (candidate eye key point and reference eye key point) each characterizing the same eye key point; if the proportion of the key point pairs whose positional offset is not greater than the offset threshold exceeds a proportional threshold (for example, 80%), it may be determined that the candidate eye key point set passes the verification, otherwise it is determined that the candidate eye key point set does not pass the verification.
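The verification rule above can be sketched as follows, with illustrative threshold values (`offset_thresh`, `ratio_thresh`) that are assumptions for the example, not values from the embodiment:

```python
import numpy as np

def check_candidates(candidates, references, offset_thresh=3.0, ratio_thresh=0.8):
    """Return True when the candidate key point set passes verification:
    at least ratio_thresh of the key point pairs have a positional offset
    no greater than offset_thresh."""
    candidates = np.asarray(candidates, dtype=float)
    references = np.asarray(references, dtype=float)
    offsets = np.linalg.norm(candidates - references, axis=1)
    return np.mean(offsets <= offset_thresh) >= ratio_thresh

refs = [(10.0, 10.0), (20.0, 20.0), (30.0, 30.0), (40.0, 40.0), (50.0, 50.0)]
good = [(11.0, 10.0), (20.0, 21.0), (30.0, 30.0), (41.0, 40.0), (50.0, 49.0)]
bad  = [(11.0, 10.0), (28.0, 21.0), (38.0, 30.0), (49.0, 40.0), (50.0, 49.0)]
passes_good = check_candidates(good, refs)  # all pairs within the threshold
passes_bad = check_candidates(bad, refs)    # 3 of 5 pairs drift too far
```

When the check fails, the pipeline falls back to fresh key point detection on the current frame, as described above.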
Therefore, the eye key points can be tracked in the operation process, so that the non-operation pose and the operation pose are positioned and anchored in real time and dynamically based on the target eye key point set and the reference eye key point set corresponding to the eyeball image of the current frame, and the real-time positioning of the eye guiding mark in the operation is realized.
The method of locating an ocular surgical guide marker according to the present application will be described below with reference to a specific embodiment. Referring to fig. 4, fig. 4 is a flowchart illustrating the steps of another method for positioning an ocular surgery guide mark according to an embodiment of the present application, where the method includes steps S410 to S460:
step S410, obtaining a reference eyeball image of a patient in a non-operation pose, and detecting eye key points of the reference eyeball image to obtain the reference eye key point set.
Specifically, the reference eyeball image is acquired and image segmentation is performed on it to obtain a reference image region, where the reference image region characterizes the cornea region of the patient; points located outside the reference image region in the predicted eye key point set obtained through eye key point detection are then filtered out to obtain the reference eye key point set.
Step S420, acquiring an initial frame eyeball image in an eye video of the patient in the surgical pose, and detecting eye key points of the initial frame eyeball image to obtain the target eye key point set.
Step S430, acquiring a current frame eyeball image in the eye video of the patient in the surgical pose, wherein the current frame eyeball image is any frame eyeball image other than the starting frame eyeball image in the eye video, and performing eye key point prediction according to the current frame eyeball image and a history frame eyeball image before the current frame eyeball image to obtain the target eye key point set, the history frame eyeball image at least comprising the previous frame eyeball image of the current frame eyeball image.
Step S440, performing eye key point matching on the target eye key point set and the reference eye key point set to obtain at least one eye key point pair, wherein each eye key point pair comprises a target eye key point and a reference eye key point which represent the same eye key point.
And S450, determining the target corresponding relation between the non-operative pose and the operative pose of the same eye key point based on the at least one eye key point pair.
Step S460, determining a target eye guiding identifier in the target eyeball image according to the target corresponding relation and a reference eye guiding identifier, wherein the reference eye guiding identifier is the eye guiding identifier determined based on the reference eyeball image.
Specifically, a reference mark scale of the full-circumference limbus of the eye of the patient in the reference eyeball image is obtained, a target mark scale of the full-circumference limbus of the eye of the patient in the target eyeball image is determined according to the target corresponding relation and the reference mark scale, and the target eye guiding mark is determined according to the target mark scale and the reference eye guiding mark.
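Putting the last step into concrete terms, once the limbus circle and a target mark scale angle are known, the pixel position of a guide mark can be computed as below (a minimal sketch, assuming the 0-degree position on the horizontal right, counterclockwise positive, and image rows growing downward):

```python
import math

def mark_position(center, radius, angle_deg):
    """Pixel position on the limbus circle for a mark-scale angle.

    Angles follow the convention above: 0 degrees at the horizontal right,
    counterclockwise positive. Image rows grow downward, hence the -sin.
    """
    rad = math.radians(angle_deg)
    x = center[0] + radius * math.cos(rad)
    y = center[1] - radius * math.sin(rad)
    return x, y

# Limbus circle fitted at center (120, 80) with radius 55; compute the lens
# axial position (45 degrees) and surgical incision (120 degrees) marks.
axis_xy = mark_position((120.0, 80.0), 55.0, 45.0)
incision_xy = mark_position((120.0, 80.0), 55.0, 120.0)
```

These pixel positions are what an overlay renderer would draw on the target eyeball image as the target eye guiding marks.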
In the embodiment of the application, the target corresponding relation between the non-operative pose and the operative pose of the same eye key point is determined through the target eye key point set and the reference eye key point set of the target eyeball image of the patient under the operative pose, so that the real-time alignment anchoring of the non-operative pose and the operative pose is realized based on the eye key point, and the eye guide mark is determined in the target eyeball image based on the target corresponding relation and the reference eye guide mark determined under the non-operative pose, thereby realizing the real-time positioning of the intra-operative eye guide mark.
The method realizes positioning based on the target eye key point set in the surgical pose and the reference eye key point set in the non-surgical pose, which reduces subjective errors in the operator's manipulation and increases the stability and accuracy of intraoperative manipulation. It completely avoids marking errors caused by human interference when physical marking and positioning are performed preoperatively and intraoperatively according to the corneal topography; it avoids scratching the patient's limbus with a needle during preoperative physical marking and the resulting discomfort; by using accurate image alignment, it avoids the use of an intraoperative astigmatism dial and marker; and it avoids the use of a navigator, greatly reducing the patient's economic burden. Meanwhile, the method saves the mydriasis time otherwise lost while waiting for preoperative physical marking and shortens the preoperative waiting time.
In addition, with this method, the corneal topography examination (to acquire the reference eyeball image of the patient in the non-operative pose) is completed with the patient's cooperation before the operation, and the target eye guiding mark is displayed with real-time alignment during the operation, which completely avoids physical marking errors caused by poor eye position cooperation and improves the stability and accuracy of intraoperative manipulation. Displaying the target eye guiding mark in real time avoids visual errors caused by different display planes and facilitates the operator's selection and placement of the lens axial position and the surgical incision position during the operation. At present, the position of the lens capsule cannot be probed directly and is often estimated from the white-to-white position; the present scheme can identify and display the white-to-white center position, avoiding errors caused by poor patient cooperation and by strong glare on the lens surface in multifocal intraocular lens implantation.
The embodiment of the application also provides an interface display method for eye surgery, the method is applied to XR equipment, referring to FIG. 5, FIG. 5 is a step flow chart of the interface display method for eye surgery, the method comprises the following steps:
Step S510, displaying a target eye guiding mark in real time aiming at a target eyeball image in a display space of the XR equipment, wherein the target eye guiding mark in the target eyeball image is determined by the positioning method of the eye surgery guiding mark.
In the embodiment of the application, the XR device may be a virtual reality device, an augmented reality device or a mixed reality device, and the display space of the XR device may be the display field of view of the XR device. Because the target eye guiding mark is determined based on the target corresponding relation and the reference eye guiding mark in the non-operative pose, it is a real-time, dynamically positioned eye guiding mark; by displaying the target eye guiding mark in the display field of view of the XR device in real time, the operator can complete the eye surgery based on the display field of view of the XR device.
Further, the method further comprises step S520:
Step S520 displays in real time a target mark scale for the limbus of the patient' S eye in the display space of the XR device.
In the embodiment of the application, the target mark scale refers to the 360-degree scale of the full-circumference limbus of the eye in the surgical pose, and is determined according to the target corresponding relation and the reference mark scale, wherein the target corresponding relation refers to the corresponding relation of the same eye key point between the non-surgical pose and the surgical pose.
By displaying the target mark scale of the full-circumference limbus of the eye in the display field of view of the XR device, the operator can perform surgical procedures better based on the target mark scale; for example, when refractive cataract surgery is performed, an astigmatism-correcting intraocular lens may be placed in the correct position based on the target mark scale.
Therefore, the target eye guiding mark and the target mark scale of the whole circumference cornea rim of the eye are displayed in real time aiming at the target eyeball image, so that an operator is guided to perform eye surgery, subjective errors in operation of the operator are reduced, and stability and accuracy of operation in the operation are improved.
The embodiment of the application also provides a positioning device for the eye surgery guiding mark, referring to fig. 6, fig. 6 is a schematic structural diagram of the positioning device for the eye surgery guiding mark, provided by the embodiment of the application, the device comprises:
a first acquisition module 610, configured to acquire a target eyeball image of a patient in a surgical pose;
A first determining module 620, configured to determine a target eye key point set according to the target eyeball image;
A second determining module 630, configured to determine, according to the target eye keypoint set and a reference eye keypoint set, a target correspondence between a non-operative pose and an operative pose of the same eye keypoint, where the reference eye keypoint set is obtained according to a reference eyeball image of the patient in the non-operative pose;
And a third determining module 640, configured to determine a target eye guiding identifier in the target eyeball image according to the target correspondence and the reference eye guiding identifier.
In an alternative embodiment, the third determining module includes:
the scale acquisition module is used for acquiring reference mark scales of the full-circle limbus of the eyes of the patient in the reference eyeball image;
the scale positioning module is used for determining a target mark scale of the whole-circumference limbus of the eye of the patient in the target eyeball image according to the target corresponding relation and the reference mark scale;
And the identification determining module is used for determining the target eye guide identification according to the target mark scale and the reference eye guide identification.
In an alternative embodiment, the first obtaining module includes:
The first acquisition submodule is used for acquiring an initial frame eyeball image in an eye video of the patient in an operation pose;
The first determining module includes:
The first detection module is used for detecting the eye key points of the initial frame eyeball image to obtain the target eye key point set.
In an alternative embodiment, the first obtaining module includes:
the second acquisition sub-module is used for acquiring a current frame eyeball image in an eye video of the patient in an operation pose, wherein the current frame eyeball image is any frame eyeball image except a starting frame eyeball image in the eye video;
The first determining module includes:
The first prediction module is used for predicting eye key points according to the current frame eyeball image and a history frame eyeball image before the current frame eyeball image to obtain the target eye key point set, and the history frame eyeball image at least comprises a previous frame eyeball image of the current frame eyeball image.
In an alternative embodiment, the first determining module further includes:
the first verification module is used for verifying a candidate eye key point set, wherein the candidate eye key point set is a target eye key point set obtained through eye key point prediction;
And the second detection module is used for detecting the eye key points of the current frame eyeball image under the condition that the candidate eye key point set is not verified, so as to obtain a final target eye key point set.
In an alternative embodiment, the first verification module includes:
A first comparison module for comparing the set of eye candidate keypoints with the set of reference eye keypoints;
A fourth determining module, configured to determine that the candidate eye key point set does not pass the verification when a positional deviation between two key points representing the same eye key point in the candidate eye key point set and the reference eye key point set is greater than a deviation threshold;
And a fifth determining module, configured to determine that the candidate eye key point set passes the verification if a positional deviation between two key points that characterize the same eye key point in the candidate eye key point set and the reference eye key point set is not greater than the deviation threshold.
In an alternative embodiment, the target eye keypoint set comprises at least one target eye keypoint, the reference eye keypoint set comprises at least one reference eye keypoint, the second determination module comprises:
The first matching module is used for matching the target eye key point set with the reference eye key point set to obtain at least one eye key point pair, wherein each eye key point pair comprises a target eye key point and a reference eye key point which represent the same eye key point;
and the first determining submodule is used for determining the target corresponding relation based on the at least one eye key point pair.
In an alternative embodiment, the apparatus further comprises:
and the third detection module is used for detecting the eye key points of the reference eyeball image to obtain the reference eye key point set.
In an alternative embodiment, the apparatus further comprises:
The segmentation module is used for carrying out image segmentation on the reference eyeball image to obtain a reference image region, and the reference image region represents the cornea region of the patient;
and the filtering module is used for filtering out points which are positioned outside the reference image area in the predicted eye key point set obtained through eye key point detection, so as to obtain the reference eye key point set.
In an alternative embodiment, the obtaining the reference eye guiding mark includes any one or a combination of at least two of the following ways:
a manner of obtaining based on corneal topography data of the target eye in the non-operative pose;
a manner of obtaining by fitting based on the reference eyeball image;
a manner of obtaining based on user configuration information.
In an alternative embodiment, the apparatus further comprises:
And the guide identifier obtaining module is used for obtaining a target eye guide identifier based on the target eyeball image.
The embodiment of the application also provides an electronic device, and referring to fig. 7, fig. 7 is a schematic structural diagram of the electronic device. As shown in fig. 7, the electronic device 700 includes a memory 710 and a processor 720, where the memory 710 and the processor 720 are connected through a bus, and the memory 710 stores a computer program, where the computer program can run on the processor 720, so as to implement the steps of the method for positioning the guiding mark of the eye surgery according to the embodiment of the present application, or the steps of the method for displaying the interface of the eye surgery according to the embodiment of the present application.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the positioning method for an eye surgery guiding mark according to the embodiments of the present application, or the steps of the interface display method for eye surgery according to the embodiments of the present application.
An embodiment of the present application further provides a computer program product including a computer program which, when executed by a processor, implements the steps of the positioning method for an eye surgery guiding mark according to the embodiments of the present application, or the steps of the interface display method for eye surgery according to the embodiments of the present application.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and identical or similar parts among the embodiments may be referred to one another.
Embodiments of the present application are described with reference to flowchart illustrations and/or block diagrams of methods and apparatuses according to the embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing terminal device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing terminal device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiment and all such alterations and modifications as fall within the scope of the embodiments of the application.
Finally, it is further noted that relational terms such as first and second, and the like, are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or terminal device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or terminal device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in a process, method, article, or terminal device that comprises the element.
The positioning method for an eye surgery guiding mark, the interface display method for eye surgery, and the apparatus and device provided by the present application have been described in detail above. Specific examples are used herein to explain the principles and implementations of the present application, and the description of the embodiments is only intended to help understand the method and core idea of the present application. Meanwhile, those skilled in the art may make changes to the specific implementations and the application scope according to the idea of the present application. In view of the above, the content of this specification shall not be construed as limiting the present application.