Disclosure of Invention
The invention aims to solve the technical problem of providing a depth perception training method based on a multi-depth-cue scene, in view of the defects of the prior art.
In order to solve the technical problems, the invention discloses a depth perception training method based on a multi-depth cue scene, which comprises the following steps:
Step 1, arranging an eyeball tracker for tracking eyeball movement, realizing interaction between the user and the training scene according to the eyeball movement, and collecting eye movement data;
Step 2, constructing three different training scenes for displaying depth perception in a virtual environment by combining binocular depth cues and monocular depth cues;
Step 3, training the user to be trained with the training scenes, and recording the training feedback data and eye movement data of the user to be trained in each training scene;
Step 4, obtaining an analysis report for each of the three training scenes by combining the recorded training feedback data with the corresponding eye movement data;
Step 5, adjusting the visual training scene according to the analysis reports, and continuing training until the preset training task is completed;
Step 6, evaluating the depth perception capability of the user to be trained according to the analysis reports, and generating a training result report.
Further, the implementation of user interaction with the scene in step 1 specifically includes:
the eyeball movement position and gaze focus of the user to be trained are tracked and collected in real time by the eyeball tracker;
when the user to be trained gazes at a menu of the training scene in the virtual reality environment, the corresponding menu function is triggered once the gaze duration meets a preset condition.
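The gaze-dwell trigger described above can be sketched as follows. This is a minimal Python sketch; the class name, the per-sample update interface and the 1-second dwell threshold are illustrative assumptions, not part of the invention.

```python
import time

DWELL_THRESHOLD_S = 1.0  # hypothetical "preset condition" for gaze duration


class GazeMenuTrigger:
    """Fire a menu action once gaze has dwelt on the menu long enough."""

    def __init__(self, dwell_threshold=DWELL_THRESHOLD_S):
        self.dwell_threshold = dwell_threshold
        self._gaze_start = None   # timestamp when continuous gaze began
        self._fired = False       # ensures one trigger per dwell

    def update(self, is_gazing_at_menu, now=None):
        """Feed one tracker sample; return True exactly once per dwell."""
        now = time.monotonic() if now is None else now
        if not is_gazing_at_menu:
            # Gaze left the menu: reset the dwell timer.
            self._gaze_start = None
            self._fired = False
            return False
        if self._gaze_start is None:
            self._gaze_start = now
        if not self._fired and now - self._gaze_start >= self.dwell_threshold:
            self._fired = True
            return True
        return False
```

In use, the VR front end would call `update` once per eye-tracker sample and invoke the menu function whenever it returns True.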
Further, the three different training scenes described in step 2, namely the first, second and third depth perception training scenes, specifically include:
the first training scene, having both binocular and monocular depth cues; the monocular cues include at least: perspective, the distribution of highlights and shadows, the relative sizes of objects, and contour superposition (occlusion); the binocular cues include at least: motion parallax and binocular parallax of an object;
the second training scene, which removes all monocular depth cues, namely perspective, highlight and shadow distribution, object relative size, and contour superposition, and retains only the binocular depth cues, namely motion parallax and binocular parallax of the object;
the third training scene, which removes all monocular depth cues as above, and additionally restricts the motion parallax of objects within the binocular depth cues, retaining only binocular parallax.
Further, the retaining of binocular parallax in step 2 specifically includes: modifying object sizes in the scene and modifying the linear perspective.
Further, the modification of object size in the scene in step 2 is specifically as follows:
in the training scene, when one object is in front of another, the scale of the front object is adjusted, as the distance from the user to be trained changes, according to the distance from the rear object to the eyes, so that the two objects subtend the same visual angle:

S = (D - d) / D;

wherein D is the distance between the rear object and the eyes of the user to be trained, d is the distance between the two objects, and S is the scaling ratio of the front object.
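The scaling rule above can be illustrated with a short Python sketch. The function name is an assumption, and the equal-visual-angle relation S = (D - d)/D is the reconstruction stated above, not a verbatim formula from the original.

```python
def front_object_scale(rear_distance_m, separation_m):
    """Scale factor for the front object so both objects subtend the
    same visual angle.

    rear_distance_m: distance D from the rear object to the eyes.
    separation_m:    distance d between the two objects.
    """
    if not 0 < separation_m < rear_distance_m:
        raise ValueError("front object must lie between user and rear object")
    return (rear_distance_m - separation_m) / rear_distance_m
```

For example, with the rear object 4 m away and a 1 m separation, the front object is rendered at 0.75 of its original size.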
Further, the modification of the linear perspective described in step 2 is specifically as follows:
in the training scene, a reference object is introduced as the rear object, and the other object is placed closer to the user along the x-axis. The position of the reference object is offset by a preset length on the y-axis from the center of the inclined plane, so that the two objects do not overlap in either the x-y plane or the x-z plane. A consistent viewing-angle difference between the eyes and the two objects is maintained, and the position of the front object is adjusted according to the positional relation between the eyes of the user to be trained and the rear object.
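The placement constraints above can be sketched as follows. This is a hedged Python sketch: the function names, the coordinate convention (x toward the scene, y vertical) and the concrete 0.25 m offset (taken from the detailed description) are illustrative assumptions about the "preset length".

```python
import math

Y_OFFSET_M = 0.25  # preset y-axis offset of the reference (rear) object


def viewing_angle_deg(eye, p1, p2):
    """Angle at the eye between the sight lines to points p1 and p2."""
    v1 = tuple(a - b for a, b in zip(p1, eye))
    v2 = tuple(a - b for a, b in zip(p2, eye))
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(a * a for a in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))


def place_scene(eye, plane_center, front_x):
    """Offset the rear reference object on the y-axis so the two objects
    do not overlap in the gaze direction, and put the front object
    nearer along the x-axis at the user's eye height."""
    rear = (plane_center[0], plane_center[1] + Y_OFFSET_M, plane_center[2])
    front = (front_x, eye[1], eye[2])
    return rear, front
```

The nonzero viewing-angle difference returned by `viewing_angle_deg` is the quantity the scene designer keeps consistent while adjusting the front object's position.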
Further, the training of the user to be trained in step 3 is specifically as follows:
in the first training scene, the user to be trained judges the position and depth of a target according to the occlusion relations between objects in the scene, and judges the concave-convex shape and position of the target according to the illumination changes and shadows on the objects;
in the second training scene, the user to be trained infers the distance of an object by observing its relative speed while in motion, judges the relative position of the object by observing changes in its trajectory and direction, and judges the distance and depth of a known object by comparing the visual motion differences of the left and right eyes;
in the third training scene, the user to be trained judges the distance of an object by comparing the parallax of the left and right eyes, judges the depth and spatial position of the object by integrating and comparing the visual information of the two eyes, and judges the distance relations and depth differences of objects by observing the parallax gradient;
during training, different levels are set according to difficulty for each of the training scenes in step 2, the levels are presented to the user to be trained for judgment, and scores are given according to the user's judgments.
Further, the analysis report in step 4 at least includes: the user training accuracy, the user completion time, the entropy value, and the stereo acuity, wherein:
the user training accuracy A is calculated as follows:

A = S / S_total;

wherein S represents the score obtained by the user to be trained during training, and S_total represents the total attainable score;
the user completion time T is the time taken by the user to be trained to complete one training session in one level of a scene;
the entropy value, namely an uncertainty value of the user to be trained, is calculated from the gaze data in the eye movement data as follows:

H = -Σ p_i log2 p_i, summed over i = 1, …, n;

wherein H represents the Shannon entropy of the gaze points, p_i represents the probability of the i-th gaze-point region, i indexes the gaze points, the logarithm base is usually taken as 2, and n represents the number of distinct gaze-point values;
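The entropy computation can be sketched in Python as follows. It is assumed that the raw gaze samples have already been discretised upstream into region labels; the function name is illustrative.

```python
import math
from collections import Counter


def gaze_entropy(gaze_regions, base=2):
    """Shannon entropy H = -sum(p_i * log_b p_i) of discretised gaze
    points.  A high H means a scattered, unpredictable gaze; a low H
    means the gaze is concentrated on a few predictable locations."""
    counts = Counter(gaze_regions)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total, base)
                for c in counts.values())
```

A gaze spread evenly over four regions gives 2 bits of entropy; a gaze locked on one region gives 0.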
the stereo acuity is calculated as follows:

η = |α - β|;

wherein η represents the stereo acuity, and α and β represent the angles formed with the eyes by the two object positions A and B, respectively.
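As an illustration, the angles α and β and their difference can be computed as follows. This Python sketch assumes each target lies straight ahead at a known distance, so that its binocular angle follows from the inter-pupillary distance; the function names and that simplification are assumptions, not the invention's exact geometry.

```python
import math


def subtended_angle_arcsec(ipd_m, distance_m):
    """Binocular angle (arcseconds) of a target straight ahead at
    distance_m, for inter-pupillary distance ipd_m."""
    angle_rad = 2.0 * math.atan(ipd_m / (2.0 * distance_m))
    return math.degrees(angle_rad) * 3600.0


def stereo_acuity_arcsec(ipd_m, dist_a_m, dist_b_m):
    """eta = |alpha - beta|: difference of the binocular angles of two
    targets A and B; the smallest discernible eta is the stereo acuity."""
    return abs(subtended_angle_arcsec(ipd_m, dist_a_m)
               - subtended_angle_arcsec(ipd_m, dist_b_m))
```

Two targets at the same distance yield η = 0; the closer the pair, the larger the angular difference for a given depth separation.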
Further, the adjustment of the visual training scene and continued training according to the analysis report in step 5 specifically includes:
if the analysis report satisfies the following conditions: the number of training sessions of the user to be trained in the same scene meets the preset number, the accuracy, completion time and entropy value of every training session all meet the preset requirements, and the different parallaxes δ in the scene can be distinguished,
then the scene automatically enters the next level after the last test is completed. After all level tests in one scene are completed, it automatically jumps to the next depth scene for training.
Further, the parallax δ described in step 5 is calculated as follows:

δ = θ_L - θ_R;

wherein B represents the target position, E_L and E_R represent the positions of the left and right eye respectively, θ_L represents the angle between the left eye and the target, θ_R represents the angle between the right eye and the target, D_L represents the distance between the left eye and the target, and D_R represents the distance between the right eye and the target.
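A hedged sketch of this parallax computation in Python. The coordinate convention (z straight ahead, x to the right) and the function names are assumptions; the angles θ_L and θ_R are taken as each eye's bearing to the target relative to the straight-ahead axis, matching the reconstructed formula δ = θ_L - θ_R.

```python
import math


def eye_target_angle(eye_pos, target_pos):
    """Angle (radians) between the straight-ahead z-axis and the line
    of sight from one eye to the target."""
    dx = target_pos[0] - eye_pos[0]
    dz = target_pos[2] - eye_pos[2]
    return math.atan2(dx, dz)


def parallax(eye_left, eye_right, target):
    """delta = theta_L - theta_R for target position B."""
    return (eye_target_angle(eye_left, target)
            - eye_target_angle(eye_right, target))
```

As expected, δ is positive for a target between the eyes and grows as the target approaches, which is what makes it usable as a depth-discrimination measure.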
Advantageous effects
1. According to the invention, through constructing training scenes with different depth cues, immersive experience and visual parallax are provided for a user in a virtual reality environment, so that depth perception is improved. Conventional depth perception training techniques typically provide only a single depth cue, limiting the user's understanding and adaptability to real-world depth perception.
2. According to the invention, the eyeball tracker is introduced to track the eyeball movement of the user, so that the real-time capturing and man-machine interaction of the user information are realized. Compared with the prior art, the invention improves the eye movement data acquisition and man-machine interaction, and provides more accurate and efficient interaction experience for users.
3. According to the invention, the training result report is generated by analyzing the user feedback information and the eye movement data, so that the defects of the prior art are further improved, more comprehensive and personalized training evaluation and feedback are provided for the user, and the depth perception capability of the user is improved.
4. By introducing a three-level difficulty design, the invention provides a more interesting, diversified and challenging training experience. Compared with the monotony of the prior art, this improvement better motivates users' enthusiasm and sustained engagement, further enhancing their depth perception capability.
Detailed Description
The invention provides a depth perception training method based on a multi-depth-cue scene, which addresses the lack of suitable training and evaluation methods for depth perception and the absence of effective feedback. The technical scheme is as follows:
eye movement module: and adopting an eye movement tracking technology to interact with the scene according to the eye movement information of the user.
Eyeball information acquisition unit: uses eyeball tracking technology to collect the subject's eyeball information in the virtual game environment, recording gaze information, saccade range, fixation counts, blink counts, and the like.
A data storage unit: for storing training data and eye movement data of the user.
A data analysis unit: the method is used for integrating and analyzing training data of the user in the virtual environment with eye movement data.
Report generation unit: a depth perception training result report about the user is generated according to the data analysis result.
In a first aspect, the present invention provides a depth perception training method based on a multi-depth cue scene, where the method is shown in fig. 1, and specifically includes:
arranging an eyeball tracker to track the eyeball movement of the user, realizing interaction between the user and the scene and acquisition of eye movement data;
displaying a first training scene of depth perception in a virtual environment by combining binocular depth cues and monocular depth cues;
displaying a second training scene of depth perception in the virtual environment by removing monocular depth cues while preserving binocular depth cues;
displaying a third training scene of depth perception in the virtual environment by including binocular parallax cues and restricting motion parallax cues;
obtaining three analysis reports by combining the training feedback data of the three scenes with the eye movement information of the training process;
adaptively adjusting the visual scene according to the user analysis reports;
evaluating the depth perception capability of the user according to the three analysis reports obtained in the three scenes, and generating a training result report.
Further, the position and the fixation focus of the eyeball movement of the user are tracked and acquired in real time through an eyeball tracker in the eye movement module. By analyzing eye tracking data when a user gazes at a start menu in a virtual reality environment, the system is able to recognize the user's intent and trigger the start of a game when the gaze duration meets certain criteria.
In a second aspect, a training scene for displaying depth perception in a virtual reality environment by transforming binocular depth cues and monocular depth cues, comprising:
The first training scene, consisting of binocular and monocular depth cues, includes perspective, the distribution of highlights and shadows, motion parallax, object relative size, and contour superposition. Objects farther from the user appear smaller, while objects closer to the user appear larger; targets in the scene can overlap or be partially occluded, and the user judges the positions and depths of targets according to the occlusion relations; the scene has fixed illumination, and the user judges the concave-convex shape and position of a target according to the illumination changes and shadows on it.
The second training scene removes all monocular depth cues, including perspective, highlight and shadow distribution, object relative size and contour superposition, finally retaining only binocular depth cues. By observing the relative speed of an object in motion, the user infers its distance: at the same physical speed, a farther target produces slower visual motion, while a nearer target produces faster visual motion. The user perceives the relative position of the object by observing changes in its trajectory and direction while moving, and perceives the distance and depth of the object more accurately by comparing the visual motion differences of the left and right eyes.
The third training scene removes all monocular depth cues, including perspective, highlight and shadow distribution, object relative size and contour superposition, and additionally limits motion parallax within the binocular depth cues, leaving only the binocular parallax cue. By comparing the parallax of the left and right eyes, the user infers the distance of an object; by integrating and comparing the visual information of the two eyes, the user perceives the depth and spatial position of the target; and by observing the parallax gradient, the user judges the distance relations and depth differences of objects.
Further, the retaining of binocular parallax includes modifying the object size. The distance of an object from the user and its size both affect the visual angle it projects on the user's retina. When one object is in front of another, the visual angle of the front object increases as its distance from the user decreases. Therefore, to make the visual angles of the two objects identical, the scale of the front object must be adjusted according to the distance between the two objects and the distance of the rear object. Let D be the distance between the rear object and the user's eyes, and d the distance between the two objects; the scaling ratio of the front object is then S = (D - d)/D.
Further, the preserving of binocular parallax includes modifying the linear perspective. A reference object is introduced as the rear object, and the other object is placed closer to the user along the x-axis; to maintain the same viewing angle, the user, the inclined plane and the two objects would have to be aligned. However, this would make the two objects overlap in the gaze direction, which must be avoided. Therefore, the position of the reference object is offset by 25 cm on the y-axis from the center of the inclined plane. If the front object were closer to the user only along the x-axis, the viewing angle between the user and each object would differ; this applies not only to the x-y plane but also to the x-z plane. Therefore, the front object must be consistent with the three-dimensional vector between the user and the reference object while satisfying the required parallax difference.
In a third aspect, the user obtains an analysis report by combining the training feedback data of a scene with the eye movement information of the training process; the analysis report includes the user training accuracy, user completion time, entropy value, and stereo acuity, specifically:
User training accuracy (A): A = training final score / game total score;
User completion time (T): the time for the user to complete one training session in one level of one scene;
Entropy: gaze data obtained by the eye tracker is applied to gaze behavior, indicating the variability and unpredictability of gaze locations. If the gaze locations are random and evenly distributed over all possible locations, the entropy is high, indicating the user is uncertain about the target; if the gaze is aimed at a particular location in a predictable way, the entropy is low, indicating the user identifies the target with high confidence.
H = -Σ p_i log2 p_i, summed over i = 1, …, n;
wherein H represents the Shannon entropy of the gaze points, p_i represents the gaze-point region probability, i indexes the gaze points, the logarithm base is usually taken as 2, and n represents the number of distinct gaze-point values;
Stereo acuity: in depth perception capability, stereo acuity is an indispensable measure. There are three levels in each depth scene. The first level requires a stereo acuity of 400 arcseconds, the second level 200 arcseconds, and the third level 100 arcseconds. This means that as the level increases, the required stereo acuity becomes progressively finer.
η = |α - β|; (2)
In formula (2), η represents the stereo acuity, and α and β represent the angles formed with the eyes by the two target positions A and B; their difference quantifies the parallax.
According to the fourth aspect, according to the analysis result report, the training scene is adaptively adjusted and the user is trained, specifically including:
the user must complete at least 2 tests in a scene; if the accuracy of both training sessions reaches 90% or more, each test is completed in no more than 3 minutes, the entropy value is below 0.5, and the different parallaxes δ in the scene can be distinguished, where

δ = θ_L - θ_R; (3)

In formula (3), B represents the target position, E_L and E_R represent the positions of the left and right eye respectively, and θ_L and θ_R are the angles between each eye and the target.
The scene automatically enters the next level after the last test is completed. After all the level tests in one scene are completed, the scene automatically jumps to the next depth scene for training.
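The progression conditions above can be sketched as a simple check. This is a Python sketch of the adaptive gate (the back end described later uses C#); the dictionary-based session record and function name are illustrative assumptions, while the thresholds follow the stated 2 tests, 90% accuracy, 3-minute limit and 0.5 entropy bound.

```python
MIN_TESTS = 2
MIN_ACCURACY = 0.90
MAX_TIME_S = 180.0   # 3 minutes per test
MAX_ENTROPY = 0.5


def may_advance(sessions, parallax_discriminated):
    """sessions: list of dicts with 'accuracy', 'time_s' and 'entropy'
    for each completed test in the current level.  Returns True when
    the adaptive mechanism should jump to the next level."""
    if len(sessions) < MIN_TESTS or not parallax_discriminated:
        return False
    recent = sessions[-MIN_TESTS:]
    return all(s["accuracy"] >= MIN_ACCURACY
               and s["time_s"] <= MAX_TIME_S
               and s["entropy"] < MAX_ENTROPY
               for s in recent)
```

If `may_advance` returns False, the back end keeps looping the current scene until all conditions are met.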
Further, at the back end of the virtual reality device, C# programming with condition-triggered events is used: when the above conditions are met, the adaptive mechanism is triggered and the scene automatically jumps to the next level. When a user has tested all levels in one scene, it automatically jumps to the next depth scene. If any of the conditions is not satisfied, the current scene repeats until the trigger conditions are met.
In a fifth aspect, the user receives a training result report after one month of training. The depth perception capability level is obtained from the user training accuracy, user completion time, entropy value and stereo acuity in the training result report, objectively evaluating the user's training effect.
Example 1:
According to the depth perception training method based on the multi-depth-cue scene, a VR game can be created in the virtual reality device; the game comprises three scenes with different depth cues, and each scene has three different levels. A group of target monsters with different parallaxes is placed directly in front of the user's field of view, starting 2 m from the user, and the monsters advance in fixed steps (one step every 4 seconds). To obtain a high score, the user must accurately identify the monster nearest to himself and hit it quickly before it reaches the 1.5 m mark. Each accurate hit yields the corresponding score with a correct-hit sound effect; otherwise no score is added and an error sound effect is played. The time from one hit to the next and the total time to complete the game are recorded.
After each scene's training, the user obtains an analysis report displaying indexes such as the user training accuracy, user completion time, entropy value and stereo acuity. Training lasts one month; after one month, the user's depth perception capability and stereo acuity are expected to improve significantly, so that the user can judge the position and distance of objects more accurately, improving safety during daily activities. After all training is completed, the user receives a training result report, which evaluates the user's depth perception capability.
As can be seen from the above description, the embodiment of the present invention provides a depth perception training method based on a multi-depth-cue scene. Depth perception scenes are constructed by varying binocular and monocular depth cues, and a rich, engaging environment is provided to users through VR equipment; different scenes can be switched, and game levels and time limits are set to increase the interest of the virtual game.
Example 2:
the invention provides a depth perception training method based on a virtual reality game, which specifically comprises the following steps:
S1, tracking the eyeball movement of the user by installing an eyeball tracker, to realize interaction between the user and the scene and acquisition of eye movement data;
S2, displaying a first training scene of depth perception in a virtual environment by combining binocular depth cues and monocular depth cues;
S3, displaying a second training scene of depth perception in the virtual environment by removing the monocular depth cues and retaining the binocular depth cues;
S4, displaying a third training scene of depth perception in the virtual environment by including binocular parallax cues and limiting motion parallax cues;
S5, obtaining three analysis reports by combining the training feedback data of the three scenes with the eye movement information of the training process;
S6, adaptively adjusting the visual scene according to the user analysis reports;
S7, evaluating the depth perception capability of the user according to the three analysis reports obtained in the three scenes, and generating a training result report.
Further, the eyeball movement position and the gazing focus of the user are tracked and collected in real time through a high-precision eyeball tracker in the eye movement module. By analyzing eye tracking data when a user gazes at a start menu in a virtual reality environment, the system is able to recognize the user's intent and trigger the start of a game when the gaze duration meets certain criteria.
The training scene for displaying depth perception in a virtual reality environment by transforming binocular depth cues and monocular depth cues described in the present embodiment includes:
the first training scene, consisting of binocular and monocular depth cues, includes perspective, distribution of highlights and shadows, motion parallax, relative object size, and occlusion.
Further, the perspective line cues enable a user to perceive near and far according to the degree of convergence of the lines; object size is also an important monocular depth cue: objects farther from the user appear smaller, while closer objects appear larger; targets in the scene can overlap or be partially occluded, and the user judges the positions and depths of targets according to the occlusion relations. Occlusion cues provide important information about the relative position and distance between objects; the scene has fixed illumination, and the user judges the concave-convex shape and position of a target according to the illumination changes and shadows on it.
The second training scene removes all monocular depth cues, including perspective, highlight and shadow distribution, object relative size and contour superposition, so that finally only binocular depth cues remain. Motion parallax is among the preserved binocular depth cues; it is the difference in relative motion within the visual field produced by objects at different positions as the user moves.
Further, by observing the relative speed of the target in motion, the user can infer the distance of the target. The farther objects produce slower visual movements at the same speed, while the closer objects produce faster visual movements.
The user perceives the protrusion, penetration and relative position of the object by observing the track and direction change of the object in motion.
The user can more accurately perceive the distance and depth of the object by comparing the visual movement differences of the left and right eyes.
The third training scene removes all monocular depth cues, including perspective, highlight and shadow distribution, object relative size and contour superposition, and limits motion parallax within the binocular depth cues, leaving only the binocular parallax cue, so that the depth-cue conditions are progressively reduced.
Further, binocular parallax refers to a visual difference generated by two eyes observing an object at different positions. Binocular parallax is one of the key factors of depth perception, including parallax interpretation depth, stereoscopic vision and parallax gradient, and plays an important role in the perception of the far and near and spatial depth of an object by a user.
By comparing the parallax of the left and right eyes, the user can infer the distance of the object. Such parallax differences may help a user perceive the distance and depth of an object.
The retaining of binocular parallax includes modifying the object size. The distance of an object from the user and its size both affect the visual angle projected on the user's retina. When one object is in front of another, the visual angle of the front object increases as its distance from the user decreases. Therefore, to make the visual angles of the two objects identical, the scale of the front object must be adjusted according to the distance between the objects and the distance of the rear object.
As shown in FIG. 2, D is the distance between the rear object and the user's eyes, d is the distance between the two objects, and the scaling ratio of the front object is S = (D - d)/D.
The preserving of binocular parallax includes modifying the linear perspective. As shown in fig. 3, a reference object A' is introduced as the rear object and the other object B is placed closer to the user along the x-axis; to maintain the same viewing angle, the user, the inclined plane and the two objects would have to be aligned. However, this would make the two objects overlap in the gaze direction, which must be avoided. Therefore, the position of the reference object is offset by 25 cm on the y-axis from the center of the inclined plane. If the front object were closer to the user only along the x-axis, the viewing angle between the user and each object would differ; this applies not only to the x-y plane but also to the x-z plane. Therefore, the front object must be consistent with the three-dimensional vector between the user and the reference object while satisfying the required parallax difference.
In this embodiment, the user obtains an analysis report by combining the training feedback data of a scene with the eye movement information of the training process; the analysis report may include the user training accuracy, user completion time, entropy value, and stereo acuity, specifically:
User training accuracy: A = training final score / game total score;
User completion time: the time for the user to complete one training session in one level of one scene;
Entropy: gaze data obtained by the eye tracker is applied to gaze behavior, indicating the variability and unpredictability of gaze locations. If the gaze locations are random and evenly distributed over all possible locations, the entropy is high, indicating the user is uncertain about the target; if the gaze is aimed at a particular location in a predictable way, the entropy is low, indicating the user identifies the target with high confidence.
Stereo acuity: in depth perception capability, stereo acuity is an indispensable measure. There are three levels in each depth scene. The first level requires a stereo acuity of 400 arcseconds, the second level 200 arcseconds, and the third level 100 arcseconds. This means that as the level increases, the required stereo acuity becomes progressively finer.
In formula (2), η represents the stereo acuity, and α and β represent the angles formed with the eyes by the two target positions A and B, quantifying the parallax. As shown in fig. 5, the inter-pupillary distance is denoted a, and the angles α and β formed by the two target positions A and B with the eyes quantify the parallax. The smaller the difference between the two angles, the smaller the distance Δd between targets A and B. The parallax threshold is determined by repeatedly reducing the distance Δd between the objects until the depth difference between them can no longer be discerned, and the stereo acuity is thus evaluated.
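The threshold procedure just described (repeatedly reducing Δd until the depth difference can no longer be discerned) can be sketched as follows. In this Python sketch, the halving step, the floor value and the trial callback interface are illustrative assumptions; the source only specifies that Δd is reduced multiple times.

```python
def disparity_threshold(initial_dd_m, can_discriminate,
                        shrink=0.5, floor_m=1e-4):
    """Estimate the smallest depth separation the user can still
    discern: repeatedly shrink the separation dd between targets A and
    B until discrimination fails, then report the last passed dd.

    can_discriminate: callback running one trial at separation dd and
    returning True if the user judged the depth order correctly.
    """
    dd = initial_dd_m
    last_passed = None
    while dd >= floor_m and can_discriminate(dd):
        last_passed = dd
        dd *= shrink
    return last_passed  # None if even the initial separation failed
```

In practice `can_discriminate` would present one VR trial and collect the user's answer; here a simulated user with a 5 cm threshold illustrates the descent.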
In this embodiment, according to the analysis result report, the training scenario is adaptively adjusted and the user is trained, which specifically includes:
As shown in FIG. 6, the user must complete at least 2 tests in a scene; if the accuracy of both training sessions reaches 90% or more, each test is completed in no more than 3 minutes, the entropy value is below 0.5, and the different parallaxes δ in the scene can be distinguished, where

δ = θ_L - θ_R; (3)

In formula (3), B represents the target position, E_L and E_R represent the positions of the left and right eye respectively, and θ_L and θ_R are the angles between each eye and the target.
The scene automatically enters the next level after the last test is completed. After all the level tests in one scene are completed, the scene automatically jumps to the next depth scene for training.
As shown in fig. 7, at the back end of the virtual reality device, C# programming with condition-triggered events is used: when the above conditions are met, the adaptive mechanism is triggered and the scene automatically jumps to the next level. When the user has tested all levels in one scene, it automatically jumps to the next depth scene. If any of the conditions is not satisfied, the current scene repeats until the trigger conditions are met.
In this embodiment, the user receives a training result report after one month of training. The depth perception capability level is obtained from the user training accuracy, user completion time, entropy value and stereo acuity in the training result report, objectively evaluating the user's training effect.
The depth perception capability level is computed as a composite function of A, T, the entropy value and η;
wherein A represents the user training accuracy, T represents the user completion time, and η represents the stereo acuity.
Example 3:
The modules and units provided by the invention comprise:
Eye movement module: adopts eye-tracking technology to interact with the scene according to the user's eye movement information.
Eyeball information acquisition unit: uses eye-tracking technology to collect the subject's eyeball information in the virtual game environment, recording information such as the subject's gaze data, saccade range, number of fixations, and number of blinks.
Data storage unit: stores the user's training data and eye movement data.
Data analysis unit: integrates and analyzes the user's training data in the virtual environment together with the eye movement data.
Report generation unit: generates a depth perception training result report for the user according to the data analysis results.
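A minimal sketch of the record that the acquisition and storage units above might exchange (all field names are assumptions for illustration, not from the invention):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class EyeMovementRecord:
    """Eye-movement metrics recorded per training session."""
    gaze_points: List[Tuple[float, float]] = field(default_factory=list)  # fixation coordinates
    saccade_range_deg: float = 0.0   # angular extent of saccades
    fixation_count: int = 0
    blink_count: int = 0

# The acquisition unit would append to a record like this during a session:
rec = EyeMovementRecord()
rec.gaze_points.append((0.5, 0.5))
rec.fixation_count += 1
```

The data analysis unit can then join such records with the per-level training results (accuracy, completion time) keyed by session.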
Further, the eye movement module starts the game when the user gazes at the start menu of the interface. A gaze is considered directed at an object when the gaze duration is greater than 100 ms and the gaze angle deviates from the object by less than 1°.
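The 100 ms / 1° fixation rule can be expressed as a small predicate (an illustrative Python sketch; the direction-vector interface is an assumption, not the eye tracker's actual API):

```python
import math

def is_fixating(gaze_dir, target_dir, dwell_ms,
                min_dwell_ms=100.0, max_angle_deg=1.0):
    """True when the gaze has dwelt longer than 100 ms within 1 degree of the
    direction toward the object (e.g. the start menu)."""
    dot = sum(g * t for g, t in zip(gaze_dir, target_dir))
    norm = math.hypot(*gaze_dir) * math.hypot(*target_dir)
    angle_deg = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return dwell_ms > min_dwell_ms and angle_deg < max_angle_deg
```

Polling this predicate each frame and triggering the menu function once it returns true matches the dwell-based interaction described in step 1.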
Example 4:
According to the depth perception training method based on a multi-depth-cue scene, a VR game can be created in the virtual reality device; the game comprises three scenes with different depth cues, and each scene has three different levels. As shown in fig. 4, a group of target monsters with different parallaxes is placed in the game directly in front of the user's field of view, and the target monsters advance by a fixed number of steps (4 seconds per step). To obtain a high score, the user needs to accurately identify the target monster nearest to himself and hit it quickly before it comes within 1.5 meters. Each time the correct monster target is hit, the corresponding score and a correct-hit sound effect are given as feedback; otherwise, no score is added and an error sound effect is output. The time from one hit to the next and the total time to complete the game are recorded.
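The hit-scoring rule described above can be sketched as follows (a simplified Python sketch; the monster identifiers, feedback strings, and 0.25 m step size are illustrative assumptions):

```python
def judge_shot(monster_distances, shot, score):
    """monster_distances: mapping monster id -> current distance (metres).
    Only hitting the monster nearest to the user earns a point and the correct
    sound effect; any other hit gives no score and the error sound effect."""
    nearest = min(monster_distances, key=monster_distances.get)
    if shot == nearest:
        return score + 1, "correct_sound"
    return score, "error_sound"

def step_forward(monster_distances, step_m=0.25):
    """Advance every monster one fixed step toward the user (one step per 4 s)."""
    return {m: d - step_m for m, d in monster_distances.items()}
```

The game loop would alternate `step_forward` every 4 seconds with `judge_shot` on each trigger pull, ending a monster's run once its distance drops below 1.5 m.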
There are three different difficulty levels in a scene, each with scoring and timing; discriminating the objects in each level requires sufficient stereo acuity to perceive the depth difference between every two consecutive objects. In the first level, the required stereo acuity is 400 arc-seconds, while in the fourth level the required stereo acuity is 100 arc-seconds; the higher the level, the greater the difficulty.
The user needs to complete at least two tests at one level in a scene. If the accuracy in both training runs reaches more than 90%, the time to complete one test does not exceed 3 minutes, the measured target parallax is within the parallax range specified for the scene, and the entropy value is below 0.5, the scene automatically enters the next level after the last test is completed. After all level tests in one scene are completed, the scene automatically jumps to the next depth-cue scene for training. If the above requirements are not met, the current level must be completed again until the requirements are satisfied. After each scene training, the user obtains an analysis report displaying indexes such as training accuracy, completion time, entropy value, and stereo acuity. Training lasts for one month, after which the user's depth perception capability and stereo acuity will be greatly improved, so that the user can more accurately judge the position and distance of objects, improving safety during daily activities. After all training is completed, the user receives a training result report, which evaluates the user's depth perception capability.
In a specific implementation, the present application provides a computer storage medium and a corresponding data processing unit. The computer storage medium is capable of storing a computer program which, when executed by the data processing unit, may perform some or all of the steps of the depth perception training method based on a multi-depth-cue scene provided by the present application. The storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), a random access memory (RAM), or the like.
It will be apparent to those skilled in the art that the technical solutions in the embodiments of the present invention may be implemented by means of a computer program and its corresponding general hardware platform. Based on such understanding, the technical solutions in the embodiments of the present invention may be embodied essentially in the form of a computer program, i.e. a software product, which may be stored in a storage medium and includes several instructions to cause a device comprising a data processing unit (which may be a personal computer, a server, a single-chip microcomputer, an MCU, a network device, or the like) to perform the methods described in the embodiments, or some parts thereof, of the present invention.
The invention provides a concept and method for depth perception training based on a multi-depth-cue scene, and there are many specific methods and ways of implementing this technical scheme; the above is only a preferred embodiment of the invention. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the invention, and such improvements and modifications should also be regarded as falling within the protection scope of the invention. Components not explicitly described in this embodiment can be implemented using the prior art.