WO2017113689A1 - Method, device, and system for spatial positioning in virtual reality system - Google Patents
- Publication number
- WO2017113689A1 (PCT/CN2016/088579)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- spatial
- calibration
- device group
- camera device
- reflection point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Definitions
- the invention belongs to the field of virtual reality technology, and in particular relates to a spatial positioning method, device and system in a virtual reality system.
- virtual reality (VR) technology uses a computer or other intelligent computing device, combined with photoelectric sensing technology, to generate a realistic virtual environment within a specific range that integrates visual, auditory, and other sensory feedback.
- the VR system mainly includes an input device and an output device.
- the spatial positioning scheme based on a 2D camera device is technically difficult and computation-intensive, and is therefore hard to implement on a mobile terminal.
- the spatial positioning method based on a 3D camera device faces the bottleneck that it cannot be productized on, or communicate with, a mobile terminal, and is likewise difficult to productize in a mobile VR system.
- the invention provides a spatial positioning method, device, and system in a virtual reality system. An image of a calibration reflection point, acquired by a camera device group, is used to establish a spatial relationship model between the calibration reflection point and the user; when the user moves, the user's position change is calculated from the change of this model. This simplifies the calculation of the user's location, reduces the difficulty of the positioning technology, and improves the feasibility of spatial positioning in a VR system, thereby making positioning-related products easier to mass-produce.
- a spatial positioning method in a virtual reality system includes:
- controlling the camera device group to collect an image of the calibration reflection point, where the camera device group includes a plurality of imaging devices, the calibration reflection point is used to calibrate the position where it is located, and the camera device group is worn on the user; acquiring a first image of the calibration reflection point collected by two adjacent camera devices, and obtaining a first spatial distance between the calibration reflection point and the camera device group from the first image by a preset binocular ranging algorithm; establishing, according to the first spatial distance and with the position of the camera device group as the origin, a first spatial position relationship model of the calibration reflection point and the camera device group;
- the first spatial position relationship model represents the spatial positional relationship between the calibration reflection point and the camera device group in the current spatial coordinate system; when the camera device group moves synchronously with the user, acquiring a second image of the calibration reflection point collected by two adjacent camera devices, and obtaining a second spatial distance between the calibration reflection point and the camera device group from the second image by the preset binocular ranging algorithm; taking the position to which the camera device group has moved as a new origin, establishing a second spatial position relationship model, and comparing the two models to obtain the user's position change.
- a spatial positioning device in a virtual reality system provided by the present invention includes:
- a control module, configured to control the camera device group to collect an image of the calibration reflection point, where the camera device group includes a plurality of camera devices, the calibration reflection point is used to calibrate the position where it is located, and the camera device group is worn on the user; an acquiring module, configured to acquire a first image of the calibration reflection point collected by two adjacent camera devices; a calculation module, configured to obtain, by a preset binocular ranging algorithm and according to the first image, a first spatial distance between the calibration reflection point and the camera device group; and a modeling module, configured to establish, according to the first spatial distance and with the position of the camera device group as the origin, a first spatial position relationship model of the calibration reflection point and the camera device group, the model indicating the spatial positional relationship between the calibration reflection point and the camera device group in the current spatial coordinate system; the acquiring module is further configured to acquire, when the camera device group moves synchronously with the user, a second image of the calibration reflection point collected by two adjacent camera devices.
- a spatial positioning system in a virtual reality system includes:
- the head mounted display is configured to control the camera device group to acquire an image of the calibration reflection point;
- the camera device group includes a plurality of imaging devices;
- the calibration reflection point is used to calibrate the position where it is located, and the camera device group is worn on the user; the head mounted display acquires the first image of the calibration reflection point collected by two adjacent camera devices and, using the preset binocular ranging algorithm, obtains a first spatial distance between the calibration reflection point and the camera device group from the first image; it then establishes, according to the first spatial distance and with the position of the camera device group as the origin, a first spatial position relationship model between the calibration reflection point and the camera device group, the model indicating the spatial positional relationship between the calibration reflection point and the camera device group in the current spatial coordinate system; when the camera device group moves synchronously with the user, it acquires a second image of the calibration reflection point collected by two adjacent camera devices and, by the preset binocular ranging algorithm, obtains the corresponding second spatial distance.
- the spatial positioning method, device, and system in the virtual reality system provided by the present invention set a fixed calibration reflection point and acquire images of that point to determine the distance between the user and the calibration reflection point, thereby constructing a first spatial position relationship model between the user's current position and the calibration reflection point.
- when the user moves, the camera device group worn on the user moves synchronously and again acquires an image of the calibration reflection point.
- a second spatial position relationship model between the user's position after the movement and the calibration reflection point is then constructed; by comparing the first and second spatial position relationship models, the user's position change before and after the motion is derived.
- compared with the prior art, this reduces the amount of calculation in locating the user's position, lowers the difficulty of the positioning technology, and allows positioning-related products to be mass-produced in a mobile VR system.
- FIG. 1 is a schematic structural diagram of a spatial positioning system in a virtual reality system according to an embodiment of the present invention
- FIG. 2 is a schematic flowchart of implementing a spatial positioning method in a virtual reality system according to a first embodiment of the present invention
- FIG. 3 is a schematic flowchart showing an implementation of a spatial positioning method in a virtual reality system according to a second embodiment of the present invention
- FIG. 4 is a schematic structural diagram of a spatial positioning apparatus in a virtual reality system according to a third embodiment of the present invention.
- FIG. 5 is a schematic structural diagram of a spatial positioning apparatus in a virtual reality system according to a fourth embodiment of the present invention.
- the spatial positioning method in the virtual reality system can be applied to a spatial positioning system of a virtual reality system that includes a camera device group and a head mounted display.
- the camera device group 10 and the head mounted display 20 can exchange data via Universal Serial Bus (USB), Wi-Fi (Wireless Fidelity), or other wired or wireless methods.
- the imaging device group 10 includes a plurality of single imaging devices 101 that are at a preset angle.
- the preset angle is related to the imaging angle of each imaging device 101.
- the imaging devices 101 are arranged so that, together, the plurality of imaging devices 101 can achieve 360-degree imaging of the surrounding space.
- for example, if the imaging angle of view of each imaging apparatus 101 is 60 degrees, the common imaging angle of view of two adjacent imaging apparatuses is 45 degrees.
- the imaging angles of view of the imaging devices 101 need not all be the same; a plurality of imaging devices 101 with different imaging angles of view may be provided, as long as the imaging device group 10 formed by these devices can still image the surrounding space through 360 degrees.
- the head-mounted display 20 is shaped like a pair of glasses; it receives instructions by sensing the movement of the human eye, magnifies the image on an ultra-fine display through a set of optical systems, and projects the image onto the retina, presenting a large-screen image to the viewer's eye.
- the head mounted display 20 is configured to control the camera set 10 to acquire an image of the calibration reflection point, the calibration reflection point is used to calibrate the position where the calibration reflection point is located, and the camera set 10 is worn on the user.
- the head mounted display 20 is further configured to acquire a first image of the calibration reflection point collected by two adjacent camera devices and, using a preset binocular ranging algorithm, obtain a first spatial distance between the calibration reflection point and the imaging device group 10 from the first image; it then establishes, according to the first spatial distance and with the position of the imaging device group as the origin, a first spatial position relationship model between the calibration reflection point and the imaging device group 10.
- the first spatial position relationship model represents the spatial positional relationship between the calibration reflection point and the camera device group in the current spatial coordinate system.
- the head mounted display 20 is further configured to acquire, when the imaging device group 10 moves synchronously with the user, a second image of the calibration reflection point collected by two adjacent camera devices, and by the preset binocular ranging algorithm obtain a second spatial distance between the calibration reflection point and the camera device group 10 from the second image; taking the position to which the camera device group 10 has moved as the origin, it establishes according to the second spatial distance a second spatial position relationship model of the calibration reflection point and the imaging device group 10, this model indicating their spatial positional relationship in the spatial coordinate system newly created after the movement.
- the head mounted display 20 then compares the first spatial position relationship model with the second spatial position relationship model and obtains the user's position change information before and after the movement according to the comparison result.
- FIG. 2 is a schematic flowchart of an implementation process of a spatial positioning method in a virtual reality system according to a first embodiment of the present invention. The method is applicable to the head mounted display 20 shown in FIG. 1.
- a data processing chip can be disposed in the head-mounted display, which can be used as an execution body of the embodiment.
- the execution body of the spatial positioning method in the virtual reality system in this embodiment may also be set in other devices of the VR system. For example, it may be provided in the camera group or in the mobile terminal connected to the VR system.
- the embodiment of the present invention is described using the head-mounted display as the execution body, but this is not a limitation of the technical solution.
- the head mounted display controls the camera set to capture an image of the calibrated reflective spot.
- the camera device group is worn on the user, and specifically connected to the head mounted display, and the camera device group includes a plurality of camera devices.
- the calibration reflection point is used to calibrate the spatial position where it is located, serving as a fixed reference.
- the calibration reflection point is made of a reflective material, and can reflect the light projected thereon, so that the calibration reflection point is more conspicuous in the image, and is easy to distinguish and locate.
- the calibration reflection point is set at a specified spatial position; it is an object, or a group of objects, that reflects specified light, which makes it easy for the camera device group to capture, and the distance between the calibration reflection point and the camera device group can be determined from the calibration reflection point in the images collected by the camera device group.
- the setting of the calibration reflection point shall be such that the spatial position of the calibration reflection point can be distinguished by the acquired image during image analysis.
- calibration reflection points of different shapes are set on the different walls on the four sides; the shapes may be horizontal lines, vertical lines, circles, triangles, trapezoids, pentagons, and so on. It is also possible to set calibration reflection points of different sizes on the different walls.
- a plurality of first images of the calibration reflection points acquired by two adjacent imaging devices are acquired.
- the calibration reflection point in the first images acquired by two adjacent camera devices is generally the same calibration reflection point, and the first spatial distance between that calibration reflection point and the camera device group can be obtained by the binocular ranging algorithm.
- a commonly used binocular ranging algorithm exploits the fact that the difference between the horizontal coordinates at which the target point is imaged in the left and right views (the parallax, or disparity) is inversely proportional to the distance from the target point to the imaging plane.
- the parameters to be obtained are the focal length, the parallax, and the center distance (baseline) of the two camera devices. The focal length, center distance, and offset values can be given initial values by calibration and then optimized by stereo calibration, so that the two adjacent camera devices that captured the images are mathematically placed in parallel with identical parameters. On this basis the parallax is obtained, yielding all the parameters needed to recover the three-dimensional coordinates of a point.
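The inverse-proportional relation described above can be written as Z = f · B / d (depth equals focal length times baseline divided by disparity). The patent gives no code; the following is an illustrative Python sketch under the assumption of already-rectified pinhole cameras, with the function name and example numbers chosen for illustration only.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Distance from the cameras to the target point via stereo disparity.

    Standard rectified-stereo relation Z = f * B / d: depth is inversely
    proportional to disparity, as the text notes. Units: focal length and
    disparity in pixels, baseline (center distance) in meters.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (point in front of both cameras)")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 0.10 m baseline, 40 px disparity -> 2.0 m
distance = depth_from_disparity(800.0, 0.10, 40.0)
```

A larger disparity therefore always means a closer calibration reflection point, which is why the algorithm only needs the parallax once the cameras are calibrated.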
- the preset binocular ranging algorithm obtains a first spatial distance between the calibration reflection point and the camera device group.
- the distance of the camera device group relative to the ground can be approximately regarded as the height of the user, which can be measured by the height measuring tool. Therefore, the first spatial distance of the calibration reflection point relative to the camera set is obtained, that is, the current spatial distance between the calibration reflection point and the user is obtained.
- the first spatial position relationship model is used to indicate the spatial position relationship between the calibration reflection point and the camera device group in the current spatial coordinate system; this spatial position relationship is relative, so when the calibration reflection point is fixed and does not move, the relationship changes as the camera device group moves.
- the spatial coordinate of the origin is (0, 0, 0), and from the first spatial distance between the calibration reflection point and the camera device group, the coordinates (x1, y1, z1) of the calibration reflection point relative to the origin are obtained.
- the first spatial position relationship model may include information such as an origin of the current spatial coordinate system, coordinates of the calibration reflection point with respect to the current origin, and a spatial distance between the calibration reflection point and the camera set.
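The model described above bundles an origin, the point's relative coordinates, and their spatial distance. As a minimal sketch (not from the patent; the class and field names are assumptions), it might look like:

```python
import math
from dataclasses import dataclass


@dataclass
class SpatialRelationModel:
    """Spatial position relationship model: the camera device group is the
    coordinate-system origin, and the calibration reflection point has
    coordinates relative to that origin."""
    origin: tuple = (0.0, 0.0, 0.0)        # camera device group position
    point_coords: tuple = (0.0, 0.0, 0.0)  # calibration reflection point (x, y, z)

    @property
    def spatial_distance(self) -> float:
        # Euclidean distance between the calibration point and the origin
        return math.dist(self.origin, self.point_coords)


# First model: point measured at (3, 0, 4) relative to the user -> distance 5.0
first_model = SpatialRelationModel(point_coords=(3.0, 0.0, 4.0))
```

After the user moves, a second such model is built with a new origin; comparing the two `point_coords` values yields the position change.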
- the camera device group moves synchronously with the user, that is, the moving direction and the moving distance of the camera device group are consistent with the moving direction and the moving distance of the user.
- the camera device group is controlled to continue collecting images of the calibration reflection point, and among the multiple images collected by all the imaging devices in the camera device group, a plurality of second images of the calibration reflection point collected by two adjacent camera devices are acquired.
- for example, a camera device that could previously capture the calibration reflection point on the west-side wall may, after the movement, no longer be able to capture that point because of the limitation of its imaging angle of view. In this case it is necessary to determine, from the camera device group, the two adjacent camera devices that can currently collect the image of the calibration reflection point on that wall, and to acquire the plurality of second images of the calibration reflection point collected by those two adjacent camera devices.
- a new spatial distance between the calibration reflection point and the camera device group is thus obtained, that is, the new current spatial distance between the calibration reflection point and the user after the motion.
- the process of obtaining the second spatial distance is similar to the process of obtaining the first spatial distance in step S202; refer to the related description in step S202, which is not repeated here. If the user has moved and the calibration reflection point is fixed, the second spatial distance necessarily differs from the first spatial distance.
- the second spatial position relationship model represents a spatial positional relationship between the calibration reflection point and the imaging device group in a newly created spatial coordinate system after the movement of the camera group.
- the spatial position relationship model between the calibration reflection point and the camera device group is established again, that is, the second spatial position relationship model is established under the new origin.
- the current location of the camera device group is set as the coordinate-system origin in the second spatial position relationship model, with spatial coordinates (0, 0, 0); from the second spatial distance between the calibration reflection point and the camera device group, the coordinates (x2, y2, z2) of the calibration reflection point relative to the origin are obtained. Since the first spatial distance and the second spatial distance differ, the calibration reflection point has different coordinates at the two positions before and after the movement of the camera device group.
- the second spatial positional relationship model may include an origin of the current spatial coordinate system, coordinates of the calibration reflective point with respect to the current origin, and information such as a spatial distance between the calibration reflection point and the camera set.
- the coordinates of the camera device group's position, that is, of the user's position, serve as the origin of each constructed spatial coordinate system, and the absolute spatial position of the calibration reflection point itself is fixed; what changes is therefore the relative spatial position of the user, caused by the change of the user's position before and after the movement.
- accordingly, the relative coordinates of the calibration reflection point in the two spatial position relationship models also change, namely from the coordinates (x1, y1, z1) to the coordinates (x2, y2, z2).
- from this, the user's position change information before and after the movement can be obtained, for example the change in magnitude and in direction between the user's position after the movement and before it.
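Because the calibration reflection point is fixed in the world, the user's displacement is simply the negation of the change in the point's relative coordinates: if the point's coordinates go from (x1, y1, z1) to (x2, y2, z2), the user moved by (x1 − x2, y1 − y2, z1 − z2). A hedged Python sketch of this comparison step (function name and numbers are illustrative, not from the patent):

```python
import math


def user_displacement(p1, p2):
    """Given the calibration point's relative coordinates before (p1) and
    after (p2) the user's movement, return the user's displacement vector
    and its magnitude.

    The calibration point is fixed in space, so a change of its coordinates
    relative to the user from p1 to p2 means the user moved by p1 - p2.
    """
    delta = tuple(a - b for a, b in zip(p1, p2))
    magnitude = math.sqrt(sum(d * d for d in delta))
    return delta, magnitude


# Example: the point appears 1 m closer along x -> the user moved +1 m along x
move, dist_moved = user_displacement((3.0, 0.0, 4.0), (2.0, 0.0, 4.0))
```

The returned vector gives the direction of the movement and the magnitude gives the distance moved, matching the "magnitude and direction" change the text describes.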
- in summary, a fixed calibration reflection point is set, images of the calibration reflection point are acquired, and the distance between the user and the calibration reflection point is determined, thereby constructing a first spatial position relationship model between the user's current position and the calibration reflection point.
- when the user moves, the camera device group worn on the user moves synchronously and acquires an image of the calibration reflection point, and a second spatial position relationship model between the user's position after the movement and the calibration reflection point is reconstructed; by comparing the difference between the first and second spatial position relationship models, the position change before and after the user's motion is derived.
- this reduces the amount of calculation in locating the user's position and lowers the difficulty of the positioning technology, allowing positioning-related products to be mass-produced in a mobile VR system.
- FIG. 3 is a schematic flowchart of an implementation process of a spatial positioning method in a virtual reality system according to a second embodiment of the present invention, which mainly includes the following steps:
- the imaging device group includes a plurality of imaging devices.
- the number of these imaging devices can be differently set according to the imaging angle of view of each imaging device.
- the purpose of the setting is that the images captured by the respective imaging devices can cover the entire space.
- the number of imaging devices is 360 degrees divided by the imaging angle of view of a single imaging device, rounded up to an integer.
- the camera device group is provided with an imaging light-emitting device, which can emit light of a specified wavelength.
- the imaging light-emitting device is an infrared light-emitting device
- each camera device is provided with an infrared filter that filters out light of wavelengths other than the infrared light reflected by the calibration reflection point.
- the head-mounted display controls the infrared light-emitting device to illuminate the calibration reflection point, so that the calibration reflection point emits infrared reflection light, and controls each camera device in the camera device group to collect the calibration reflection point filtered by the infrared filter. image.
- infrared light is the most suitable reflected light for this purpose.
- the calibration reflection point is used to calibrate the spatial position where it is located, serving as a fixed reference.
- the setting of the calibration reflection point shall be such that the spatial position of the calibration reflection point can be distinguished by the acquired image during image analysis.
- a plurality of first images of the calibration reflection points acquired by two adjacent imaging devices are acquired.
- the parallax is obtained from the focal length and the center distance of the two adjacent camera devices and from the calibration reflection point in the plurality of first images they acquired; the preset binocular ranging algorithm then obtains the first spatial distance between the calibration reflection point and the camera device group, that is, the current spatial distance between the calibration reflection point and the user.
- among the plurality of first images, the two images in which the calibration reflection point is resolved most clearly are selected as the left and right views of the calibration reflection point.
- the preset binocular ranging algorithm then obtains the first spatial distance between the calibration reflection point and the camera device group from the two selected images.
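The selection of the best left/right pair can be sketched as follows. The patent does not specify the selection criterion beyond "highest resolution of the calibration reflection point"; this sketch assumes a hypothetical per-image quality score (for instance the detected reflective blob's size) and a ring layout where camera i is adjacent to camera (i + 1) mod n.

```python
def pick_stereo_pair(scores):
    """Pick the adjacent camera pair whose two images resolve the calibration
    point best. `scores[i]` is a quality score for camera i's image (e.g. the
    detected blob size, a stand-in measure); cameras i and (i + 1) % n are
    adjacent in a ring. Returns the pair maximizing the weaker image's score,
    so both views of the stereo pair are usable."""
    n = len(scores)
    best = max(range(n), key=lambda i: min(scores[i], scores[(i + 1) % n]))
    return best, (best + 1) % n


# Example with six cameras: cameras 2 and 3 see the point most clearly
left, right = pick_stereo_pair([0.1, 0.4, 0.9, 0.8, 0.2, 0.0])
```

Maximizing the weaker score of the pair (rather than the sum) reflects that binocular ranging needs the point to be well resolved in both views.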
- the first spatial position relationship model is used to indicate the spatial position relationship between the calibration reflection point and the camera device group in the current spatial coordinate system; this spatial position relationship is relative, so when the calibration reflection point is fixed and does not move, the relationship changes as the camera device group moves.
- the spatial coordinate of the origin is (0, 0, 0), and from the first spatial distance between the calibration reflection point and the camera device group, the coordinates (x1, y1, z1) of the calibration reflection point relative to the origin are obtained.
- the first spatial position relationship model may include information such as an origin of the current spatial coordinate system, coordinates of the calibration reflection point with respect to the current origin, and a spatial distance between the calibration reflection point and the camera set.
- the camera device group is controlled to continue to collect the image of the calibration reflection point, and the calibration reflections acquired by two adjacent camera devices are acquired among the plurality of images collected by all the camera devices in the camera device group. Multiple second images of points.
- for example, a camera device that could previously capture the calibration reflection point on the west-side wall may, after the movement, no longer be able to capture that point because of the limitation of its imaging angle of view. In this case it is necessary to determine, from the camera device group, the two adjacent camera devices that can currently collect the image of the calibration reflection point on that wall, and to acquire the images of the calibration reflection point collected by those two adjacent camera devices.
- a second spatial distance between the calibration reflection point and the camera device group is obtained by the preset binocular ranging algorithm; that is, the new spatial distance between the calibration reflection point and the camera device group after the user's motion is obtained here, i.e. the new current spatial distance between the calibration reflection point and the user.
- the process of obtaining the second spatial distance is similar to the process of obtaining the first spatial distance in step S202; refer to the related description in step S202, which is not repeated here. If the user has moved and the calibration reflection point is fixed, the second spatial distance necessarily differs from the first spatial distance.
- a plurality of second images of the calibration reflection point collected by the two adjacent camera devices are acquired, and the preset binocular ranging algorithm is applied to the plurality of second images.
- among the plurality of second images, the two images in which the calibration reflection point is resolved most clearly are selected as the left and right views for the preset binocular ranging algorithm, which then obtains the second spatial distance between the calibration reflection point and the camera device group.
- the second spatial position relationship model represents a spatial positional relationship between the calibration reflection point and the imaging device group in a newly created spatial coordinate system after the movement of the camera group.
- the spatial position relationship model between the calibration reflection point and the camera device group is established again, that is, the second spatial position relationship model is established under the new origin.
- the current location of the camera device group is set as the coordinate-system origin in the second spatial position relationship model, with spatial coordinates (0, 0, 0); from the second spatial distance between the calibration reflection point and the camera device group, the coordinates (x2, y2, z2) of the calibration reflection point relative to the origin are obtained. Since the first spatial distance and the second spatial distance differ, the calibration reflection point has different coordinates at the two positions before and after the movement of the camera device group.
- the second spatial positional relationship model may include an origin of the current spatial coordinate system, coordinates of the calibration reflective point with respect to the current origin, and information such as a spatial distance between the calibration reflection point and the camera set.
- the difference between the first coordinate and the second coordinate gives the position difference of the user before and after the movement, including changes in both the magnitude and the direction of the user's position.
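The coordinate comparison described above can be sketched as follows. Since each coordinate system has its origin at the camera device group (i.e. the user), the user's displacement in the fixed world frame is the first coordinate minus the second. This is a simplified sketch that, as an assumption, ignores any rotation of the user between the two measurements; function and variable names are illustrative.

```python
import math

def user_displacement(p1, p2):
    """Position change of the user, inferred from the calibration point's
    coordinates before motion p1 = (x1, y1, z1) and after motion
    p2 = (x2, y2, z2). With the origin attached to the user, the user's
    world-frame displacement is p1 - p2."""
    ux, uy, uz = (p1[0] - p2[0], p1[1] - p2[1], p1[2] - p2[2])
    magnitude = math.sqrt(ux * ux + uy * uy + uz * uz)
    return magnitude, (ux, uy, uz)

# Example: the calibration point was at (1, 2, 3) before the motion and is
# seen at (1, 0, 3) afterwards, so the user moved 2 units along +y.
mag, direction = user_displacement((1.0, 2.0, 3.0), (1.0, 0.0, 3.0))
print(mag, direction)  # 2.0 (0.0, 2.0, 0.0)
```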
- a fixed calibration reflection point is set, and images of the calibration reflection point are collected to determine the distance between the user and the calibration reflection point, thereby constructing a first spatial position relationship model between the user's current position and the calibration reflection point. When the user moves, the camera device group worn on the user moves synchronously, and from the images of the calibration reflection point collected at this time, a second spatial position relationship model between the user's post-movement position and the calibration reflection point is constructed. By comparing the difference between the first and second spatial position relationship models, the position change of the user before and after the movement is deduced. Compared with the prior art, the amount of calculation in the positioning process is reduced and the technical difficulty of positioning is lowered, so that positioning products can be mass-produced in a VR mobile system and product productivity is improved.
- FIG. 4 is a schematic structural diagram of a spatial positioning apparatus in a virtual reality system according to a fourth embodiment of the present invention.
- the spatial positioning device in the virtual reality system illustrated in FIG. 4 may be the execution body of the spatial positioning method in the virtual reality system provided by the foregoing embodiments shown in FIG. 2 and FIG. 3, for example the head mounted display 20 or a control module thereof.
- the spatial positioning device in the virtual reality system illustrated in FIG. 4 mainly includes: a control module 401, an acquisition module 402, a calculation module 403, a modeling module 404, and a comparison module 405.
- the control module 401 is configured to control the camera device group to collect images of the calibration reflection point, where the camera device group includes a plurality of camera devices, the calibration reflection point is used to calibrate the position where the calibration reflection point is located, and the camera device group is worn on the user.
- the obtaining module 402 is configured to acquire a first image of the calibration reflective spot acquired by two adjacent camera devices.
- the calibration reflection point in the first images acquired by two adjacent camera devices is usually the same calibration reflection point, and the first spatial distance between the calibration reflection point in the images and the camera device group can be obtained by the binocular ranging algorithm.
- the calculating module 403 is configured to obtain, according to the first image, a first spatial distance between the calibration reflection point and the camera device group by using a preset binocular ranging algorithm.
- the modeling module 404 is configured to establish, with the position of the camera device group as the origin and according to the first spatial distance, a first spatial position relationship model between the calibration reflection point and the camera device group; the model represents the spatial position relationship between the calibration reflection point and the camera device group in the current spatial coordinate system.
- this spatial position relationship is relative: when the calibration reflection point is fixed and does not move, the spatial position relationship changes as the camera device group moves.
- the spatial coordinate of the origin is (0, 0, 0); from the first spatial distance between the calibration reflection point and the camera device group, the coordinates (x1, y1, z1) of the calibration reflection point relative to the origin are obtained.
- the first spatial position relationship model may include information such as an origin of the current spatial coordinate system, coordinates of the calibration reflection point with respect to the current origin, and a spatial distance between the calibration reflection point and the camera set.
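As an illustration of what the spatial position relationship model described above could contain — the origin of the current coordinate system, the calibration point's coordinates relative to that origin, and the spatial distance — a minimal sketch follows. The class and field names are assumptions for illustration, not from the patent.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class SpatialPositionModel:
    """Sketch of the spatial position relationship model's contents."""
    origin: Tuple[float, float, float]        # (0, 0, 0) in its own frame
    point_coords: Tuple[float, float, float]  # calibration point (x, y, z)
    distance: float                           # point <-> camera group distance

# First model, built before the user moves.
model_1 = SpatialPositionModel(
    origin=(0.0, 0.0, 0.0),
    point_coords=(1.0, 2.0, 2.0),
    distance=3.0,  # sqrt(1^2 + 2^2 + 2^2)
)
print(model_1.distance)  # 3.0
```

A second model of the same shape would be built after the movement, and the two compared field by field.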
- the obtaining module 402 is further configured to acquire, when the camera device group moves synchronously with the user, a second image of the calibration reflection point collected by two adjacent camera devices.
- the camera device group is controlled to continue collecting images of the calibration reflection point, and from the multiple images collected by all the camera devices in the camera device group, the plurality of second images of the calibration reflection point captured by two adjacent camera devices are acquired.
- the calculating module 403 is configured to obtain, by using the preset binocular ranging algorithm, a second spatial distance between the calibration reflection point and the camera device group according to the second image.
- that is, from the images of the calibration reflection point acquired after the user's movement, the new spatial distance between the calibration reflection point and the camera device group is obtained as the second spatial distance, i.e., the new current spatial distance between the calibration reflection point and the user after the movement.
- the modeling module 404 is further configured to take the position of the camera device group after the movement as the origin and, according to the second spatial distance, establish a second spatial position relationship model between the calibration reflection point and the camera device group; the second spatial position relationship model represents the spatial position relationship between the calibration reflection point and the camera device group in the spatial coordinate system newly created after the movement of the camera device group.
- in the second spatial position relationship model, the current location of the camera device group is set as the origin of the coordinate system, with spatial coordinate (0, 0, 0); from the second spatial distance between the calibration reflection point and the camera device group, the coordinates (x2, y2, z2) of the calibration reflection point relative to the origin are obtained. Since the first spatial distance and the second spatial distance differ, the calibration reflection point has different coordinates in the two coordinate systems before and after the movement of the camera device group.
- the second spatial position relationship model may include information such as the origin of the current spatial coordinate system, the coordinates of the calibration reflection point relative to that origin, and the spatial distance between the calibration reflection point and the camera device group.
- the comparison module 405 is configured to compare the first spatial position relationship model and the second spatial position relationship model.
- the calculation module 403 is further configured to obtain position change information before and after the movement of the user according to the comparison result of the comparison module 405.
- the position change information of the user before and after the movement can thus be obtained, for example the change in the user's position and the change in direction after the movement relative to before the movement.
- the division into functional modules above is merely an example; in practical applications, the above functions may be allocated to different functional modules as required, for example according to the configuration requirements of the corresponding hardware or considerations of software implementation, that is, the internal structure of the spatial positioning device in the virtual reality system may be divided into different functional modules to complete all or part of the functions described above.
- in addition, each functional module in this embodiment may be implemented by corresponding hardware, or by corresponding hardware executing corresponding software. These principles apply to the various embodiments provided in this specification and are not described again herein.
- in this embodiment, a fixed calibration reflection point is set, images of the calibration reflection point are acquired, and the distance between the user and the calibration reflection point is determined, thereby constructing a first spatial position relationship model between the user's current position and the calibration reflection point. When the user moves, the camera device group worn on the user moves synchronously; images of the calibration reflection point are acquired again, and a second spatial position relationship model between the user's post-movement position and the calibration reflection point is constructed. By comparing the difference between the first and second spatial position relationship models, the position change of the user before and after the movement is deduced. Compared with the prior art, the amount of calculation in the positioning process and the technical difficulty are reduced, positioning products can be mass-produced in a VR mobile system, and product productivity is improved.
- FIG. 5 is a schematic structural diagram of a spatial positioning apparatus in a virtual reality system according to a fifth embodiment of the present invention.
- the apparatus mainly includes: a control module 501, an obtaining module 502, a calculating module 503, a modeling module 504, a comparing module 505, and a screening module 506.
- the above functional modules are described in detail as follows:
- the control module 501 is configured to control the camera device group to collect images of the calibration reflection point, where the camera device group includes a plurality of camera devices, the calibration reflection point is used to calibrate the position where the calibration reflection point is located, and the camera device group is worn on the user and may specifically be connected to the head mounted display.
- the number of camera devices can be set differently according to the imaging angle of view of each camera device.
- the purpose of this setting is that the images captured by the respective camera devices can cover the entire surrounding space.
- for example, the number of camera devices is the quotient of 360° divided by the imaging angle of view of each camera device.
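The sizing rule above can be sketched in one line: enough cameras that their fields of view tile the full 360°. Rounding up when the angle does not divide 360° evenly is an assumption added here so that coverage is never short.

```python
import math

def camera_count(fov_degrees: float) -> int:
    """Number of camera devices needed so their fields of view cover 360°:
    the quotient of 360° by each camera's imaging angle of view, rounded up."""
    return math.ceil(360.0 / fov_degrees)

print(camera_count(90.0))   # 4
print(camera_count(100.0))  # 4 (360/100 = 3.6, rounded up)
```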
- the camera device group is provided with an imaging light-emitting device, which can emit light of a specified wavelength.
- the imaging light-emitting device is an infrared light-emitting device
- each camera device is provided with an infrared filter that filters out light of wavelengths other than the infrared light reflected by the calibration reflection point.
- the control module 501 is further configured to control the infrared light-emitting device to illuminate the calibration reflection point so that the calibration reflection point produces infrared reflected light, and to control the camera devices in the camera device group to collect, through the infrared filters, images of the calibration reflection point.
- the obtaining module 502 is configured to acquire a first image of the calibration reflective spot collected by the two adjacent camera devices.
- the calculating module 503 is configured to obtain, by using a preset binocular ranging algorithm, a first spatial distance between the calibration reflection point and the camera device group according to the first image.
- the modeling module 504 is configured to establish, with the position of the camera device group as the origin and according to the first spatial distance, a first spatial position relationship model between the calibration reflection point and the camera device group; the model represents the spatial position relationship between the calibration reflection point and the camera device group in the current spatial coordinate system.
- the spatial coordinate of the origin is (0, 0, 0); from the first spatial distance between the calibration reflection point and the camera device group, the coordinates (x1, y1, z1) of the calibration reflection point relative to the origin are obtained.
- the first spatial position relationship model may include information such as an origin of the current spatial coordinate system, coordinates of the calibration reflection point with respect to the current origin, and a spatial distance between the calibration reflection point and the camera set.
- the obtaining module 502 is further configured to acquire, when the camera device group moves synchronously with the user, a second image of the calibration reflection point collected by two adjacent camera devices.
- the camera device group is controlled to continue to collect the image of the calibration reflection point, and the calibration reflections acquired by two adjacent camera devices are acquired among the plurality of images collected by all the camera devices in the camera device group. Multiple second images of points.
- the calculating module 503 is configured to obtain, by the preset binocular ranging algorithm and according to the second image, a second spatial distance between the calibration reflection point and the camera device group.
- the modeling module 504 is further configured to take the position of the camera device group after the movement as the origin and, according to the second spatial distance, establish a second spatial position relationship model between the calibration reflection point and the camera device group; the second spatial position relationship model represents the spatial position relationship between the calibration reflection point and the camera device group in the spatial coordinate system newly created after the movement of the camera device group.
- in the second spatial position relationship model, the current location of the camera device group is set as the origin of the coordinate system, with spatial coordinate (0, 0, 0); from the second spatial distance between the calibration reflection point and the camera device group, the coordinates (x2, y2, z2) of the calibration reflection point relative to the origin are obtained. Since the first spatial distance and the second spatial distance differ, the calibration reflection point has different coordinates in the two coordinate systems before and after the movement of the camera device group.
- the second spatial positional relationship model may include an origin of the current spatial coordinate system, coordinates of the calibration reflective point with respect to the current origin, and information such as a spatial distance between the calibration reflection point and the camera set.
- the comparison module 505 is configured to compare the first spatial position relationship model and the second spatial position relationship model.
- the calculation module 503 is further configured to obtain position change information before and after the movement of the user according to the comparison result of the comparison module.
- the comparison module 505 is further configured to compare whether the first coordinate of the calibration reflection point in the first spatial position relationship model and its second coordinate in the second spatial position relationship model are the same; that is, (x1, y1, z1) and (x2, y2, z2) are compared: whether x1 equals x2, whether y1 equals y2, and whether z1 equals z2.
- the calculation module 503 is further configured to: if the comparison result of the comparison module 505 is that they are the same, determine that the user's position has not changed before and after the movement; if the comparison result is that they differ, calculate the position difference of the user before and after the movement from the difference between the first coordinate and the second coordinate, including changes in both the magnitude and the direction of the user's position.
- the device further includes:
- the screening module 506 is configured to select, from the first images, the two images in which the calibration reflection point is resolved most clearly.
- the calculating module 503 is further configured to obtain, by the preset binocular ranging algorithm and according to the two selected images, the first spatial distance between the calibration reflection point and the camera device group.
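The screening step could be sketched as follows. The patent does not specify how "highest resolution of the calibration reflection point" is measured; as an assumption, a simple gradient-energy sharpness score over a small pixel window around the detected point is used here, and all names are illustrative.

```python
def sharpness(window):
    """Sum of squared differences between horizontally adjacent pixels:
    a crude focus/sharpness score for a small image window."""
    return sum((row[i + 1] - row[i]) ** 2
               for row in window for i in range(len(row) - 1))

def select_two_sharpest(windows):
    """Return the indices of the two windows with the highest sharpness,
    i.e. the two candidate images to feed the binocular ranging step."""
    ranked = sorted(range(len(windows)),
                    key=lambda i: sharpness(windows[i]), reverse=True)
    return ranked[:2]

# Toy 1-row "windows": image 1 has the strongest local contrast, image 0 next.
imgs = [[[0, 5, 0]], [[0, 9, 0]], [[0, 1, 0]]]
print(select_two_sharpest(imgs))  # [1, 0]
```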
- in this embodiment, a fixed calibration reflection point is set, images of the calibration reflection point are acquired, and the distance between the user and the calibration reflection point is determined, thereby constructing a first spatial position relationship model between the user's current position and the calibration reflection point. When the user moves, the camera device group worn on the user moves synchronously; images of the calibration reflection point are acquired again, and a second spatial position relationship model between the user's post-movement position and the calibration reflection point is constructed. By comparing the difference between the first and second spatial position relationship models, the position change of the user before and after the movement is deduced. Compared with the prior art, the amount of calculation in the positioning process and the technical difficulty are reduced, positioning products can be mass-produced in a VR mobile system, and product productivity is improved.
- An embodiment of the present invention provides a spatial positioning apparatus in a virtual reality system, where the apparatus includes: one or more processors; a memory; and one or more programs, the one or more programs being stored in the memory and, when executed by the one or more processors, performing the following operations:
- controlling the camera device group to collect images of the calibration reflection point, where the camera device group includes a plurality of camera devices, the calibration reflection point is used to calibrate the position where the calibration reflection point is located, and the camera device group is worn on the user;
- acquiring a first image of the calibration reflection point collected by two adjacent camera devices, and obtaining, by a preset binocular ranging algorithm and according to the first image, a first spatial distance between the calibration reflection point and the camera device group;
- taking the position of the camera device group as the origin and establishing, according to the first spatial distance, a first spatial position relationship model between the calibration reflection point and the camera device group, the model representing the spatial position relationship between the calibration reflection point and the camera device group in the current spatial coordinate system;
- when the camera device group moves synchronously with the user, acquiring a second image of the calibration reflection point collected by two adjacent camera devices, and obtaining, by the preset binocular ranging algorithm and according to the second image, a second spatial distance between the calibration reflection point and the camera device group;
- taking the position of the camera device group after the movement as the origin and establishing, according to the second spatial distance, a second spatial position relationship model between the calibration reflection point and the camera device group, the model representing the spatial position relationship between the calibration reflection point and the camera device group in the spatial coordinate system newly created after the movement;
- comparing the first spatial position relationship model and the second spatial position relationship model, and obtaining position change information before and after the movement of the user according to the comparison result.
- the disclosed systems, devices, and methods may be implemented in other ways.
- the device embodiments described above are merely illustrative.
- the division of the modules is only a logical function division; in actual implementation there may be another division manner, for example, multiple modules or components may be combined or integrated into another system, or some features may be ignored or not executed.
- the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be electrical, mechanical or in other forms.
- the modules described as separate components may or may not be physically separated.
- the components displayed as modules may or may not be physical modules; they may be located in one place or distributed across multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- each functional module in each embodiment of the present invention may be integrated into one processing module, or each module may exist physically separately, or two or more modules may be integrated into one module.
- the above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
- the integrated modules if implemented in the form of software functional modules and sold or used as separate products, may be stored in a computer readable storage medium.
- the part of the technical solution of the present invention that is essential, or that contributes over the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium.
- the software product includes a number of instructions causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the various embodiments of the present invention.
- the foregoing storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or other media that can store program code.
Abstract
Description
This application claims priority to Chinese Patent Application No. 201511014777.4, entitled "Spatial Positioning Method, Apparatus and System in a Virtual Reality System", filed with the Chinese Patent Office on December 29, 2015, the entire contents of which are incorporated herein by reference.
The present invention belongs to the field of virtual reality technology, and in particular relates to a spatial positioning method, apparatus and system in a virtual reality system.
In the process of implementing the present invention, the inventor found that virtual reality (VR) technology uses a computer or other intelligent computing device as its core, combined with photoelectric sensing technology, to generate a realistic virtual environment within a specific range that integrates sight, hearing and touch. A VR system mainly includes input devices and output devices.
In a VR system, spatial positioning can in theory be achieved based on either 2D or 3D camera devices. However, in the prior art, spatial positioning schemes based on 2D camera devices are technically difficult and computationally intensive, and are hard to implement on mobile terminals. Spatial positioning methods based on 3D camera devices face the bottleneck of being unable to be productized on, or to communicate with, mobile terminals, and are likewise difficult to productize in VR mobile systems.
The present invention provides a spatial positioning method, apparatus and system in a virtual reality system, which build a spatial relationship model between a calibration reflection point and a user from images of the calibration reflection point collected by a camera device group; when the user moves, the user's position change is calculated from the change of the spatial relationship model. This simplifies the calculation of the user's position, lowers the technical difficulty of positioning, and improves the feasibility of making the positioning device mobile in a VR system, thereby improving the production quantification of positioning-related products.
A spatial positioning method in a virtual reality system provided by the present invention includes:
controlling a camera device group to collect images of a calibration reflection point, where the camera device group includes a plurality of camera devices, the calibration reflection point is used to calibrate the position where the calibration reflection point is located, and the camera device group is worn on the user; acquiring a first image of the calibration reflection point collected by two adjacent camera devices, and obtaining, by a preset binocular ranging algorithm and according to the first image, a first spatial distance between the calibration reflection point and the camera device group; taking the position of the camera device group as an origin and establishing, according to the first spatial distance, a first spatial position relationship model between the calibration reflection point and the camera device group, the first spatial position relationship model representing the spatial position relationship between the calibration reflection point and the camera device group in the current spatial coordinate system; when the camera device group moves synchronously with the user, acquiring a second image of the calibration reflection point collected by two adjacent camera devices, and obtaining, by the preset binocular ranging algorithm and according to the second image, a second spatial distance between the calibration reflection point and the camera device group; taking the position of the camera device group after the movement as the origin and establishing, according to the second spatial distance, a second spatial position relationship model between the calibration reflection point and the camera device group, the second spatial position relationship model representing the spatial position relationship between the calibration reflection point and the camera device group in the spatial coordinate system newly created after the movement of the camera device group; and comparing the first spatial position relationship model and the second spatial position relationship model, and obtaining position change information before and after the movement of the user according to the comparison result.
A spatial positioning apparatus in a virtual reality system provided by the present invention includes:
a control module, configured to control a camera device group to collect images of a calibration reflection point, where the camera device group includes a plurality of camera devices, the calibration reflection point is used to calibrate the position where the calibration reflection point is located, and the camera device group is worn on the user; an obtaining module, configured to acquire a first image of the calibration reflection point collected by two adjacent camera devices; a calculating module, configured to obtain, by a preset binocular ranging algorithm and according to the first image, a first spatial distance between the calibration reflection point and the camera device group; a modeling module, configured to take the position of the camera device group as an origin and establish, according to the first spatial distance, a first spatial position relationship model between the calibration reflection point and the camera device group, the first spatial position relationship model representing the spatial position relationship between the calibration reflection point and the camera device group in the current spatial coordinate system; the obtaining module being further configured to acquire, when the camera device group moves synchronously with the user, a second image of the calibration reflection point collected by two adjacent camera devices; the calculating module being further configured to obtain, by the preset binocular ranging algorithm and according to the second image, a second spatial distance between the calibration reflection point and the camera device group; the modeling module being further configured to take the position of the camera device group after the movement as the origin and establish, according to the second spatial distance, a second spatial position relationship model between the calibration reflection point and the camera device group, the second spatial position relationship model representing the spatial position relationship between the calibration reflection point and the camera device group in the spatial coordinate system newly created after the movement of the camera device group; a comparison module, configured to compare the first spatial position relationship model and the second spatial position relationship model; and the calculating module being further configured to obtain position change information before and after the movement of the user according to the comparison result of the comparison module.
A spatial positioning system in a virtual reality system provided by the present invention includes:
a head mounted display and a camera device group; where the head mounted display is configured to: control the camera device group to collect images of a calibration reflection point, where the camera device group includes a plurality of camera devices, the calibration reflection point is used to calibrate the position where the calibration reflection point is located, and the camera device group is worn on the user; acquire a first image of the calibration reflection point collected by two adjacent camera devices, and obtain, by a preset binocular ranging algorithm and according to the first image, a first spatial distance between the calibration reflection point and the camera device group; take the position of the camera device group as an origin and establish, according to the first spatial distance, a first spatial position relationship model between the calibration reflection point and the camera device group, the first spatial position relationship model representing the spatial position relationship between the calibration reflection point and the camera device group in the current spatial coordinate system; when the camera device group moves synchronously with the user, acquire a second image of the calibration reflection point collected by two adjacent camera devices, and obtain, by the preset binocular ranging algorithm and according to the second image, a second spatial distance between the calibration reflection point and the camera device group; take the position of the camera device group after the movement as the origin and establish, according to the second spatial distance, a second spatial position relationship model between the calibration reflection point and the camera device group, the second spatial position relationship model representing the spatial position relationship between the calibration reflection point and the camera device group in the spatial coordinate system newly created after the movement of the camera device group; and compare the first spatial position relationship model and the second spatial position relationship model, and obtain position change information before and after the movement of the user according to the comparison result; and the camera device group is configured to start, under the control of the head mounted display, each camera device to collect images of the calibration reflection point.
As can be seen from the above embodiments of the present invention, the spatial positioning method, device, and system in a virtual reality system provided by the present invention set a fixed calibration reflective point and capture images of it to determine the distance between the user and the calibration reflective point, thereby constructing a first spatial position relationship model between the user's current position and the calibration reflective point. When the user moves, the camera group worn on the user moves synchronously; images of the calibration reflective point are captured, and a second spatial position relationship model between the user's post-movement position and the calibration reflective point is constructed. By comparing the differences between the first and second spatial position relationship models, the user's position change before and after the movement is derived. Compared with the prior art, this reduces the amount of computation involved in locating the user, lowers the technical difficulty of positioning, enables mass production of positioning-related products in mobile VR systems, and improves product productivity.
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural diagram of a spatial positioning system in a virtual reality system according to an embodiment of the present invention;
FIG. 2 is a schematic flowchart of a spatial positioning method in a virtual reality system according to a first embodiment of the present invention;
FIG. 3 is a schematic flowchart of a spatial positioning method in a virtual reality system according to a second embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a spatial positioning device in a virtual reality system according to a third embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a spatial positioning device in a virtual reality system according to a fourth embodiment of the present invention.
To make the objectives, features, and advantages of the present invention clearer and easier to understand, the technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
The spatial positioning method in a virtual reality system provided by the embodiments of the present invention can be applied to a spatial positioning system of a virtual reality system that includes a camera group and a head mounted display. Referring to FIG. 1, the camera group 10 and the head mounted display 20 are connected by Universal Serial Bus (USB), Wi-Fi, or another wired or wireless connection, over which they exchange data.
The camera group 10 consists of a plurality of individual cameras 101 arranged at preset angles to one another. The preset angle is related to the viewing angle of each camera 101; the cameras 101 are arranged so that, once assembled into the camera group 10, they cover the surrounding space with a full 360-degree view. For example, when each camera 101 has a viewing angle of 60 degrees, the number of cameras 101 required for the camera group 10 is 360/60 = 6. When each camera 101 has a viewing angle of 60 degrees, the overlapping viewing angle of two adjacent cameras is 45 degrees. The cameras 101 need not all have the same viewing angle; cameras 101 with different viewing angles may also be combined, provided that the assembled camera group 10 still covers the surrounding space with a full 360-degree view. The camera group 10 is configured to activate, under the control of the head mounted display 20, each camera 101 to capture images of the calibration reflective point.
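The camera-count rule described above (number of cameras = 360 degrees divided by the per-camera viewing angle) can be sketched as follows; this is a minimal illustration, and the function name is ours rather than the patent's:

```python
import math

def cameras_needed(fov_degrees: float) -> int:
    """Minimum number of cameras, each with the given horizontal viewing
    angle, required to tile a full 360-degree view around the wearer."""
    return math.ceil(360 / fov_degrees)

print(cameras_needed(60))  # 6 cameras of 60 degrees each
print(cameras_needed(45))  # 8 cameras of 45 degrees each
```

When the viewing angle does not divide 360 evenly, rounding up guarantees full coverage at the cost of extra overlap between adjacent cameras.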
The head mounted display 20 is shaped like a pair of glasses. It senses the wearer's eye movements to receive instructions, magnifies the image on a micro display through a set of optics, and projects the image onto the retina, presenting a large-screen image to the viewer.
The head mounted display 20 is configured to control the camera group 10 to capture images of the calibration reflective point, the calibration reflective point being used to mark the position at which it is located, and the camera group 10 being worn on the user's body. The head mounted display 20 is further configured to acquire first images of the calibration reflective point captured by two adjacent cameras, obtain a first spatial distance between the calibration reflective point and the camera group 10 from the first images by a preset binocular ranging algorithm, and, taking the position of the camera group 10 as the origin, establish from the first spatial distance a first spatial position relationship model between the calibration reflective point and the camera group 10, the first spatial position relationship model representing the spatial position relationship between the calibration reflective point and the camera group in the current spatial coordinate system. When the camera group 10 moves synchronously with the user, the head mounted display 20 is further configured to acquire second images of the calibration reflective point captured by two adjacent cameras, obtain a second spatial distance between the calibration reflective point and the camera group 10 from the second images by the preset binocular ranging algorithm, and, taking the position of the camera group 10 after the movement as the origin, establish from the second spatial distance a second spatial position relationship model between the calibration reflective point and the camera group 10, the second spatial position relationship model representing the spatial position relationship between the calibration reflective point and the camera group 10 in the spatial coordinate system newly established after the movement of the camera group 10; and compare the first spatial position relationship model with the second spatial position relationship model to derive, from the comparison result, the user's position change before and after the movement.
For the specific implementation of the above functions of the camera group 10 and the head mounted display 20, see the description of the embodiments below.
Referring to FIG. 2, FIG. 2 is a schematic flowchart of a spatial positioning method in a virtual reality system according to a first embodiment of the present invention. The method can be applied to the head mounted display 20 shown in FIG. 1 and mainly includes the following steps:
S201. Control the camera group to capture an image of the calibration reflective point.
A data processing chip may be provided in the head mounted display to serve as the executing entity of this embodiment. It should be noted that the executing entity of the spatial positioning method in the virtual reality system of this embodiment may also be located in another device of the VR system; for example, it may be located in the camera group, or in a mobile terminal connected to the VR system. For ease of description, the embodiments of the present invention are described with the head mounted display as the executing entity, but this is not a limitation on the technical solution.
The head mounted display controls the camera group to capture images of the calibration reflective point. The camera group is worn on the user's body and may in particular be connected to the head mounted display; the camera group includes a plurality of cameras.
The calibration reflective point is used to mark the position at which it is located. It is made of a reflective material that reflects light projected onto it, making it conspicuous in the captured images and easy to distinguish and locate. The calibration reflective point is placed at a specified spatial position; it is an object, or a group of objects, that reflects specified light so as to be easily captured by the camera group. From the calibration reflective point in the images captured by the camera group, the distance between the calibration reflective point and the camera group can be determined.
The calibration reflective points must be arranged such that their spatial positions can be distinguished from the captured images during image analysis. For example, calibration reflective points of different shapes, such as horizontal bars, vertical bars, circles, triangles, trapezoids, or pentagons, may be placed on the four different walls. Alternatively, calibration reflective points of different sizes may be placed on the four different walls.
S202. Acquire first images of the calibration reflective point captured by two adjacent cameras, and obtain a first spatial distance between the calibration reflective point and the camera group from the first images by a preset binocular ranging algorithm.
From the images captured by the cameras of the camera group, a plurality of first images of the calibration reflective point captured by two adjacent cameras are acquired. The calibration reflective point in the first images captured by two adjacent cameras is usually the same calibration reflective point, so the first spatial distance between the calibration reflective point in the images and the camera group can be obtained by the binocular ranging algorithm.
It should be noted that there are many binocular ranging algorithms. A commonly used one exploits the difference between the horizontal coordinates at which a target point is imaged in the left and right views: the disparity is inversely proportional to the distance from the target point to the imaging plane. To determine the distance between a point in three-dimensional space and the cameras accurately, the required parameters are the focal length, the disparity, and the center distance (baseline) between the cameras. Initial values of the focal length and the center distance can be obtained by calibration.
Further, if the specific coordinates of the point are needed, the offsets of the origins of the left and right image-plane coordinate systems along the horizontal and vertical axes of the stereo coordinate system must also be known. Initial values of the focal length, the camera center distance, and the offsets can be obtained by stereo calibration and refined by stereo rectification, so that the two adjacent cameras capturing the images are mathematically exactly parallel and the left and right cameras have identical parameters. On this basis the disparity is computed, yielding all the parameters needed to obtain the three-dimensional coordinates of the point.
Following the principle of the binocular ranging algorithm above, the disparity is computed from the focal lengths and center distance of the two adjacent cameras and from the calibration reflective point in the first images they captured; the preset binocular ranging algorithm then yields the first spatial distance between the calibration reflective point and the camera group.
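The inverse-proportionality between disparity and distance described above is the standard pinhole stereo relation Z = f·B/d, and the image-plane offsets mentioned earlier are the principal-point coordinates. A minimal sketch under the assumption of a rectified stereo pair; all variable names are ours, not the patent's:

```python
def triangulate(focal_px, baseline_m, cx, cy, x_left, x_right, y_left):
    """3D coordinates of a point seen by a rectified stereo pair.

    focal_px   - focal length in pixels (from calibration)
    baseline_m - distance between the two camera centers, in meters
    (cx, cy)   - principal-point offset of the left image, in pixels
    x_left / x_right - horizontal pixel coordinate of the same
    calibration reflective point in the left and right views.
    """
    disparity = x_left - x_right           # inversely proportional to depth
    if disparity <= 0:
        raise ValueError("non-positive disparity: point not triangulable")
    z = focal_px * baseline_m / disparity  # depth: Z = f * B / d
    x = (x_left - cx) * z / focal_px       # back-project with the offsets
    y = (y_left - cy) * z / focal_px
    return (x, y, z)

# f = 800 px, baseline 0.1 m, principal point (320, 240),
# point imaged at (420, 240) in the left view, (380, 240) in the right:
print(triangulate(800.0, 0.1, 320.0, 240.0, 420.0, 380.0, 240.0))
# (0.25, 0.0, 2.0)  -> the point is 2 m away
```

The spatial distance used by the method is then simply the Euclidean norm of the returned coordinates.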
Since the camera group is worn on the user and moves synchronously with the user, its height above the ground can be taken approximately as the user's height, which can be measured with a height-measuring tool. Therefore, obtaining the first spatial distance between the calibration reflective point and the camera group amounts to obtaining the current spatial distance between the calibration reflective point and the user.
S203. Taking the position of the camera group as the origin, establish a first spatial position relationship model between the calibration reflective point and the camera group from the first spatial distance.
The first spatial position relationship model represents the spatial position relationship between the calibration reflective point and the camera group in the current spatial coordinate system. This spatial position relationship is relative: when the calibration reflective point is fixed, the relationship changes as the camera group moves.
The current position of the camera group is set as the origin of the coordinate system of the first spatial position relationship model, with spatial coordinates (0, 0, 0); from the first spatial distance between the calibration reflective point and the camera group, the coordinates (x1, y1, z1) of the calibration reflective point relative to this origin are obtained.
Therefore, the first spatial position relationship model may contain the origin of the current spatial coordinate system, the coordinates of the calibration reflective point relative to that origin, and the spatial distance between the calibration reflective point and the camera group.
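The contents listed above (origin, point coordinates, and distance) can be sketched as a small data structure; this is an illustrative layout under our own naming, not a structure defined by the patent:

```python
from dataclasses import dataclass

@dataclass
class SpatialModel:
    """One spatial position relationship model: the camera group sits at
    the origin of its own coordinate system, and the calibration reflective
    point is expressed in that frame."""
    origin: tuple = (0.0, 0.0, 0.0)     # camera group position (always the origin)
    point_xyz: tuple = (0.0, 0.0, 0.0)  # calibration point relative to the origin

    def distance(self) -> float:
        """Spatial distance between the calibration point and the camera group."""
        x, y, z = self.point_xyz
        return (x * x + y * y + z * z) ** 0.5

m1 = SpatialModel(point_xyz=(3.0, 0.0, 4.0))
print(m1.distance())  # 5.0
```

The second model of step S205 would be another instance of the same structure, built after the movement with its own origin.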
S204. When the camera group moves synchronously with the user, acquire second images of the calibration reflective point captured by two adjacent cameras, and obtain a second spatial distance between the calibration reflective point and the camera group from the second images by the preset binocular ranging algorithm.
When the user moves, the camera group moves synchronously with the user; that is, the direction and distance of the camera group's movement match the direction and distance of the user's movement.
When the user moves to the next position, the camera group is controlled to continue capturing images of the calibration reflective point, and from the images captured by all cameras in the camera group, a plurality of second images of the calibration reflective point captured by two adjacent cameras are acquired.
Specifically, suppose the user faced the calibration reflective point on the west wall before moving. If the user turns from facing the west wall to facing the east wall, the cameras that previously captured the calibration reflective point on the west wall may no longer be able to capture it because of their limited viewing angles. In that case, the two adjacent cameras in the camera group that can still capture the calibration reflective point on the west wall must be identified, and the second images of the calibration reflective point captured by those two adjacent cameras are acquired.
Further, from the captured images of the calibration reflective point, those in which the point is imaged clearly and completely are selected. The second spatial distance between the calibration reflective point and the camera group is again obtained by the preset binocular ranging algorithm; that is, what is obtained here is the new spatial distance between the calibration reflective point and the camera group after the user's movement, i.e., the new current spatial distance between the calibration reflective point and the user. The process of obtaining the second spatial distance is similar to that of obtaining the first spatial distance in step S202; see the description of step S202, which is not repeated here. Since the user has moved while the calibration reflective point remains fixed, the second spatial distance necessarily differs from the first spatial distance.
S205. Taking the position of the camera group after the movement as the origin, establish a second spatial position relationship model between the calibration reflective point and the camera group from the second spatial distance.
The second spatial position relationship model represents the spatial position relationship between the calibration reflective point and the camera group in the spatial coordinate system newly established after the movement of the camera group.
Taking the position of the camera group after the movement as the origin, the spatial position relationship model between the calibration reflective point and the camera group is established again from the second spatial distance; that is, the second spatial position relationship model is established under the new origin.
Specifically, the current position of the camera group is set as the origin of the coordinate system of the second spatial position relationship model, with spatial coordinates (0, 0, 0); from the second spatial distance between the calibration reflective point and the camera group, the coordinates (x2, y2, z2) of the calibration reflective point relative to this origin are obtained. Since the first and second spatial distances differ, the coordinates of the calibration reflective point relative to the camera group's two positions, before and after the movement, also differ.
The second spatial position relationship model may contain the origin of the current spatial coordinate system, the coordinates of the calibration reflective point relative to that origin, and the spatial distance between the calibration reflective point and the camera group.
S206. Compare the first spatial position relationship model with the second spatial position relationship model, and derive the user's position change before and after the movement from the comparison result.
In both the first and second spatial position relationship models, the coordinates of the camera group's position, i.e., the user's position, serve as the origin of the constructed spatial coordinate system, while the absolute spatial position of the calibration reflective point itself is fixed. What can change is therefore the relative spatial position between the point and the user, and this relative change is caused by the change in the user's position before and after the movement. Corresponding to that change, the relative coordinates of the calibration reflective point in the two spatial position relationship models also change, from (x1, y1, z1) to (x2, y2, z2).
Then, by comparing the first spatial position relationship model with the second spatial position relationship model and contrasting the changes between them, the user's position change before and after the movement can be derived, for example the change in magnitude and direction between the user's pre-movement and post-movement positions.
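The comparison step can be sketched for the simplest case, a pure translation of the user with no rotation; under that assumption (ours, not stated by the patent), the fixed calibration point's coordinate change between the two frames is exactly the opposite of the user's own displacement:

```python
def user_displacement(p1, p2):
    """User translation between the two models, assuming the user moved
    without rotating. p1 = (x1, y1, z1) and p2 = (x2, y2, z2) are the
    fixed calibration point's coordinates in the before/after frames;
    the point 'moves' opposite to the user, so displacement = p1 - p2."""
    return tuple(a - b for a, b in zip(p1, p2))

# The point was 5 m ahead and is now 3 m ahead: the user moved 2 m forward.
print(user_displacement((5.0, 0.0, 1.7), (3.0, 0.0, 1.7)))  # (2.0, 0.0, 0.0)
```

If the user also rotates (as in the west-wall/east-wall example), recovering the full pose change would additionally require estimating the rotation between the two coordinate systems, which the simple difference above does not capture.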
In this embodiment of the present invention, a fixed calibration reflective point is set, and images of it are captured to determine the distance between the user and the calibration reflective point, thereby constructing a first spatial position relationship model between the user's current position and the calibration reflective point. When the user moves, the camera group worn on the user moves synchronously; images of the calibration reflective point are captured, and a second spatial position relationship model between the user's post-movement position and the calibration reflective point is constructed. By comparing the differences between the first and second spatial position relationship models, the user's position change before and after the movement is derived. Compared with the prior art, this reduces the amount of computation involved in locating the user, lowers the technical difficulty of positioning, enables mass production of positioning-related products in mobile VR systems, and improves product productivity.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of a spatial positioning method in a virtual reality system according to a second embodiment of the present invention, which mainly includes the following steps:
S301. Control the camera group to capture an image of the calibration reflective point.
Under the control of the head mounted display, the camera group, worn on the user's body, captures images of the calibration reflective point. The camera group includes a plurality of cameras. The number of cameras can be set differently according to the viewing angle of each camera, the aim being that the images captured by the cameras cover the entire surrounding space; specifically, the number of cameras is the quotient of 360 degrees divided by the cameras' viewing angle. For example, if each camera has a viewing angle of 45 degrees, the number of cameras needed to form the camera group is 360/45 = 8.
The camera group is provided with an imaging light emitter that emits light of a specified wavelength. When the emitter illuminates the calibration reflective point, the point reflects the light, enhancing the point's resolution in the captured images. Preferably, the emitter is an infrared light-emitting device, and each camera is provided with an infrared filter that removes light of wavelengths other than the infrared light reflected by the calibration reflective point. The head mounted display controls the infrared light-emitting device to illuminate the calibration reflective point so that it reflects infrared light, and controls each camera in the camera group to capture images of the calibration reflective point through the infrared filter. For capturing images in night-vision scenes, infrared is the most suitable reflected light.
The calibration reflective point is used to mark the position at which it is located. The calibration reflective points must be arranged such that their spatial positions can be distinguished from the captured images during image analysis.
S302. Acquire first images of the calibration reflective point captured by two adjacent cameras, and obtain a first spatial distance between the calibration reflective point and the camera group from the first images by a preset binocular ranging algorithm.
From the images captured by the cameras of the camera group, a plurality of first images of the calibration reflective point captured by two adjacent cameras are acquired.
Using the preset binocular ranging algorithm, the disparity is computed from the focal lengths and center distance of the two adjacent cameras and from the calibration reflective point in the plurality of first images they captured; the algorithm then yields the first spatial distance between the calibration reflective point and the camera group, i.e., the current spatial distance between the calibration reflective point and the user.
It should be noted that, to improve ranging accuracy, after the plurality of first images of the calibration reflective point captured by the two adjacent cameras are acquired, the two images in which the calibration reflective point has the highest resolution are selected from them as the left and right views of the calibration reflective point; the preset binocular ranging algorithm then obtains the first spatial distance between the calibration reflective point and the camera group from these two selected images.
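The selection step above can be sketched as follows. The record layout and the "sharpness" score standing in for the patent's resolution measure are our assumptions for illustration only:

```python
def pick_stereo_pair(images):
    """From candidate (camera_id, sharpness, x_coordinate) records of the
    calibration reflective point, keep the two frames in which the point
    is imaged most sharply, ordered by camera id as (left, right) views."""
    best = sorted(images, key=lambda rec: rec[1], reverse=True)[:2]
    return sorted(best, key=lambda rec: rec[0])

frames = [(0, 0.91, 412.0), (1, 0.97, 380.5), (2, 0.40, 99.0)]
print(pick_stereo_pair(frames))  # [(0, 0.91, 412.0), (1, 0.97, 380.5)]
```

The two selected views would then feed the binocular ranging algorithm of step S302.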
S303. Taking the position of the camera device group as the origin, establish, from the first spatial distance, a first spatial position relationship model between the calibration reflection point and the camera device group.

The first spatial position relationship model expresses the spatial positional relationship between the calibration reflection point and the camera device group in the current spatial coordinate system. This relationship is relative: when the calibration reflection point is fixed, the relationship changes as the camera device group moves.

The current position of the camera device group is taken as the coordinate origin of the first spatial position relationship model, with spatial coordinates (0, 0, 0); from the first spatial distance between the calibration reflection point and the camera device group, the coordinates (x1, y1, z1) of the calibration reflection point relative to this origin are obtained.

The first spatial position relationship model may therefore contain information such as the origin of the current spatial coordinate system, the coordinates of the calibration reflection point relative to that origin, and the spatial distance between the calibration reflection point and the camera device group.
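A minimal sketch, under assumed names, of what such a spatial position relationship model might hold: the origin of the current frame (always the camera device group's own position), the marker coordinates in that frame, and the derived marker-to-group distance:

```python
from dataclasses import dataclass
import math

@dataclass
class SpatialModel:
    """Illustrative stand-in for a 'spatial position relationship model'."""
    origin: tuple      # the camera device group's position: always (0, 0, 0)
    marker_xyz: tuple  # (x, y, z) of the calibration reflection point in this frame

    @property
    def distance(self):
        # Spatial distance between the marker and the camera device group
        return math.dist(self.origin, self.marker_xyz)

m1 = SpatialModel((0, 0, 0), (3.0, 4.0, 0.0))
print(m1.distance)  # 5.0
```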
S304. When the camera device group moves together with the user, acquire second images of the calibration reflection point captured by two adjacent camera devices, and obtain, by means of the preset binocular ranging algorithm, a second spatial distance between the calibration reflection point and the camera device group from these second images.

While the user moves, the camera device group is controlled to keep capturing images of the calibration reflection point; from the images captured by all camera devices in the group, the multiple second images of the calibration reflection point captured by two adjacent camera devices are acquired.

Specifically, suppose the user was facing the calibration reflection point on the west wall before moving. If the user turns from facing the west wall to facing the east wall, the camera devices that previously captured the point on the west wall may no longer be able to do so because of the limits of their viewing angles. In that case, two adjacent camera devices that can still capture images of the calibration reflection point on the west wall must be identified within the camera device group, and their images of the point acquired. The preset binocular ranging algorithm then yields the second spatial distance between the calibration reflection point and the camera device group, that is, the new spatial distance between the point and the group after the user's movement, which is the new current spatial distance between the calibration reflection point and the user. Obtaining the second spatial distance is similar to obtaining the first spatial distance in step S302; see the related description there, which is not repeated here. Since the user has moved while the calibration reflection point remains fixed, the second spatial distance differs from the first spatial distance.

It should be noted that, to improve ranging accuracy, after the multiple second images of the calibration reflection point captured by the two adjacent camera devices have been acquired, the two images in which the calibration reflection point is resolved most sharply are selected from them as the left and right views for the preset binocular ranging algorithm, which then obtains the second spatial distance between the calibration reflection point and the camera device group.
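The step of finding two adjacent camera devices that can still capture the calibration reflection point after the user turns can be sketched as a scan around the ring of cameras; the boolean-visibility representation below is an assumption for illustration, not the patent's data model:

```python
def find_visible_pair(sees_marker):
    """sees_marker: list of booleans, one per camera device, arranged in a ring.
    Return the indices of the first adjacent pair that both see the marker,
    wrapping around from the last camera to the first; None if no pair exists."""
    n = len(sees_marker)
    for i in range(n):
        if sees_marker[i] and sees_marker[(i + 1) % n]:
            return (i, (i + 1) % n)
    return None

print(find_visible_pair([False, False, True, True, False, False]))  # (2, 3)
```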
S305. Taking the position of the camera device group after the movement as the origin, establish, from the second spatial distance, a second spatial position relationship model between the calibration reflection point and the camera device group.

The second spatial position relationship model expresses the spatial positional relationship between the calibration reflection point and the camera device group in the spatial coordinate system newly established after the camera device group has moved.

Taking the position of the camera device group after the movement as the origin, the spatial position relationship model between the calibration reflection point and the camera device group is established again from the second spatial distance; that is, the second spatial position relationship model is built on the new origin.

Specifically, the current position of the camera device group is taken as the coordinate origin of the second spatial position relationship model, with spatial coordinates (0, 0, 0); from the second spatial distance between the calibration reflection point and the camera device group, the coordinates (x2, y2, z2) of the calibration reflection point relative to this origin are obtained. Because the first and second spatial distances differ, the coordinates of the calibration reflection point relative to the camera device group's positions before and after the movement also differ.

The second spatial position relationship model may contain information such as the origin of the current spatial coordinate system, the coordinates of the calibration reflection point relative to that origin, and the spatial distance between the calibration reflection point and the camera device group.
S306. Compare whether the first coordinates of the calibration reflection point in the first spatial position relationship model and its second coordinates in the second spatial position relationship model are the same.

That is, compare (x1, y1, z1) with (x2, y2, z2): whether x1 equals x2, whether y1 equals y2, and whether z1 equals z2.

S307. If they are the same, it is determined that the user's position has not changed between before and after the movement; if they are not, the user's displacement between before and after the movement is computed from the difference between the first coordinates and the second coordinates.

If the comparison finds simultaneously that x1 equals x2, y1 equals y2, and z1 equals z2, the user may have walked away and returned to the original spot; it is therefore determined that the user's position has not changed between before and after the movement.

If the comparison does not find all three pairs equal at once, that is, at least one pair of coordinate values differs, the user's displacement between before and after the movement is computed from the difference between the first coordinates and the second coordinates, including both the magnitude and the direction of the change in the user's position.
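Steps S306 and S307 can be sketched as follows. Because the calibration reflection point is fixed while each model's origin is the camera device group itself, the group's (and hence the user's) displacement works out to the marker's first coordinates minus its second coordinates; this sign convention and all names are illustrative assumptions:

```python
def user_displacement(p1, p2):
    """p1: marker coordinates (x1, y1, z1) in the pre-movement frame;
    p2: marker coordinates (x2, y2, z2) in the post-movement frame.
    Returns None when every coordinate pair matches (S307: no position change),
    else the user's displacement vector p1 - p2."""
    d = tuple(a - b for a, b in zip(p1, p2))
    return None if d == (0.0, 0.0, 0.0) else d

print(user_displacement((1.0, 2.0, 0.5), (1.0, 2.0, 0.5)))  # None
print(user_displacement((1.0, 2.0, 0.5), (0.0, 2.0, 0.5)))  # (1.0, 0.0, 0.0)
```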
In this embodiment of the invention, a fixed calibration reflection point is set and images of it are captured to determine the distance between the user and the point, from which a first spatial position relationship model between the user's current position and the calibration reflection point is built. When the user moves, the camera device group worn on the user moves with the user; from the images of the calibration reflection point captured at that time, a second spatial position relationship model between the user's post-movement position and the calibration reflection point is built. By comparing the differences between the first and second spatial position relationship models, the change in the user's position before and after the movement is derived. Compared with the prior art, this reduces the amount of computation in the positioning process, lowers the technical difficulty of positioning, and enables volume production of positioning products for mobile VR systems, improving product productivity.
Referring to FIG. 4, FIG. 4 is a schematic structural diagram of a spatial positioning apparatus in a virtual reality system according to a fourth embodiment of the present invention; for ease of description, only the parts relevant to this embodiment are shown. The spatial positioning apparatus illustrated in FIG. 4 may be the entity that performs the spatial positioning method in the virtual reality system provided by the embodiments shown in FIG. 2 and FIG. 3, such as the head-mounted display 20 or a control module within it. The spatial positioning apparatus illustrated in FIG. 4 mainly comprises: a control module 401, an acquisition module 402, a calculation module 403, a modeling module 404, and a comparison module 405.

These functional modules are described in detail as follows:
The control module 401 is configured to control the camera device group to capture images of the calibration reflection point. The camera device group comprises multiple camera devices, the calibration reflection point is used to mark the position where it is located, and the camera device group is worn on the user.

The acquisition module 402 is configured to acquire the first images of the calibration reflection point captured by two adjacent camera devices.

The calibration reflection point in the first images captured by two adjacent camera devices is usually one and the same point, so the first spatial distance between the point in the images and the camera device group can be obtained by a binocular ranging algorithm.

The calculation module 403 is configured to obtain, by means of the preset binocular ranging algorithm, the first spatial distance between the calibration reflection point and the camera device group from the first images.

The disparity is computed from the focal length and the center-to-center distance of the two adjacent camera devices together with the calibration reflection point in the first images they captured; the preset binocular ranging algorithm then yields the first spatial distance between the calibration reflection point and the camera device group, that is, the current spatial distance between the calibration reflection point and the user.

The modeling module 404 is configured to establish, taking the position of the camera device group as the origin and from the first spatial distance, the first spatial position relationship model between the calibration reflection point and the camera device group, the model expressing the spatial positional relationship between the calibration reflection point and the camera device group in the current spatial coordinate system.

The first spatial position relationship model expresses this spatial positional relationship in the current spatial coordinate system. The relationship is relative: when the calibration reflection point is fixed, the relationship changes as the camera device group moves.

The current position of the camera device group is taken as the coordinate origin of the first spatial position relationship model, with spatial coordinates (0, 0, 0); from the first spatial distance between the calibration reflection point and the camera device group, the coordinates (x1, y1, z1) of the calibration reflection point relative to this origin are obtained.

The first spatial position relationship model may therefore contain information such as the origin of the current spatial coordinate system, the coordinates of the calibration reflection point relative to that origin, and the spatial distance between the calibration reflection point and the camera device group.
The acquisition module 402 is further configured to acquire, when the camera device group moves together with the user, the second images of the calibration reflection point captured by two adjacent camera devices.

When the user moves to the next position, the camera device group is controlled to keep capturing images of the calibration reflection point; from the images captured by all camera devices in the group, the multiple second images of the calibration reflection point captured by two adjacent camera devices are acquired.

The calculation module 403 is further configured to obtain, by means of the preset binocular ranging algorithm, the second spatial distance between the calibration reflection point and the camera device group from the second images.

From the captured images of the calibration reflection point, the preset binocular ranging algorithm again yields the second spatial distance between the calibration reflection point and the camera device group, that is, the new spatial distance between the point and the group after the user's movement, which is the new current spatial distance between the calibration reflection point and the user.

The modeling module 404 is further configured to establish, taking the position of the camera device group after the movement as the origin and from the second spatial distance, the second spatial position relationship model between the calibration reflection point and the camera device group, the model expressing the spatial positional relationship between the calibration reflection point and the camera device group in the spatial coordinate system newly established after the movement.

Specifically, the current position of the camera device group is taken as the coordinate origin of the second spatial position relationship model, with spatial coordinates (0, 0, 0); from the second spatial distance between the calibration reflection point and the camera device group, the coordinates (x2, y2, z2) of the calibration reflection point relative to this origin are obtained. Because the first and second spatial distances differ, the coordinates of the calibration reflection point relative to the camera device group's positions before and after the movement also differ.

The second spatial position relationship model may contain information such as the origin of the current spatial coordinate system, the coordinates of the calibration reflection point relative to that origin, and the spatial distance between the calibration reflection point and the camera device group.
The comparison module 405 is configured to compare the first spatial position relationship model with the second spatial position relationship model.

The calculation module 403 is further configured to derive, from the comparison result of the comparison module 405, information on the change in the user's position before and after the movement.

By comparing the first spatial position relationship model with the second and contrasting the changes between them, information on the change in the user's position before and after the movement can be obtained, for example the magnitude and the direction of the change in the user's position.

For details not covered in this embodiment, refer to the descriptions of the embodiments shown in FIG. 1 to FIG. 3 above, which are not repeated here.

It should be noted that, in the implementation of the spatial positioning apparatus in the virtual reality system illustrated in FIG. 4, the division into functional modules is only an example. In practice, the above functions may be assigned to different functional modules as needed, for example to meet the configuration requirements of the corresponding hardware or for convenience of software implementation; that is, the internal structure of the spatial positioning apparatus in the virtual reality system may be divided into different functional modules to perform all or some of the functions described above. Moreover, in practice, a functional module in this embodiment may be implemented by corresponding hardware, or by corresponding hardware executing corresponding software. These principles apply to each of the embodiments provided in this specification and are not repeated below.

In this embodiment of the invention, a fixed calibration reflection point is set and images of it are acquired to determine the distance between the user and the point, from which a first spatial position relationship model between the user's current position and the calibration reflection point is built. When the user moves, the camera device group worn on the user moves with the user; images of the calibration reflection point are acquired and a second spatial position relationship model between the user's post-movement position and the point is built. By comparing the differences between the first and second spatial position relationship models, the change in the user's position before and after the movement is derived. Compared with the prior art, this reduces the amount of computation in the positioning process, lowers the technical difficulty, and enables volume production of products for mobile VR systems, improving product productivity.
Referring to FIG. 5, which is a schematic structural diagram of a spatial positioning apparatus in a virtual reality system according to a fifth embodiment of the present invention, the apparatus mainly comprises: a control module 501, an acquisition module 502, a calculation module 503, a modeling module 504, a comparison module 505, and a screening module 506. These functional modules are described in detail as follows:

The control module 501 is configured to control the camera device group to capture images of the calibration reflection point. The camera device group comprises multiple camera devices, the calibration reflection point is used to mark the position where it is located, and the camera device group is worn on the user; in particular, it may be connected to a head-mounted display.

The number of these camera devices can be set differently according to the viewing angle of each device, the aim being that the images captured by the devices together cover the whole space. Specifically, the number of camera devices is the quotient of 360° divided by the viewing angle of a camera device.
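A sketch of the sizing rule just stated; rounding up (rather than taking the bare quotient) is an added assumption so that a non-integer quotient still leaves no gap in the 360° coverage:

```python
import math

def camera_count(fov_degrees):
    """Number of camera devices whose horizontal fields of view
    together cover a full 360° turn without gaps."""
    return math.ceil(360 / fov_degrees)

print(camera_count(90))   # 4
print(camera_count(100))  # 4
```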
The camera device group is provided with an imaging light emitter that can emit light of a specified wavelength; when this emitter illuminates the calibration reflection point, the point reflects the light, making the calibration reflection point easier to distinguish in the captured images. Preferably, the emitter is an infrared light-emitting device and each camera device is fitted with an infrared filter, which filters out light of wavelengths other than the infrared light returned by the calibration reflection point.

The control module 501 is further configured to control the infrared light-emitting device to illuminate the calibration reflection point so that the point returns reflected infrared light, and to control the camera devices in the group to capture images of the calibration reflection point as filtered by the infrared filter.
The acquisition module 502 is configured to acquire the first images of the calibration reflection point captured by two adjacent camera devices.

The calculation module 503 is configured to obtain, by means of the preset binocular ranging algorithm, the first spatial distance between the calibration reflection point and the camera device group from the first images.

The modeling module 504 is configured to establish, taking the position of the camera device group as the origin and from the first spatial distance, the first spatial position relationship model between the calibration reflection point and the camera device group, the model expressing the spatial positional relationship between the calibration reflection point and the camera device group in the current spatial coordinate system.

The current position of the camera device group is taken as the coordinate origin of the first spatial position relationship model, with spatial coordinates (0, 0, 0); from the first spatial distance between the calibration reflection point and the camera device group, the coordinates (x1, y1, z1) of the calibration reflection point relative to this origin are obtained.

The first spatial position relationship model may therefore contain information such as the origin of the current spatial coordinate system, the coordinates of the calibration reflection point relative to that origin, and the spatial distance between the calibration reflection point and the camera device group.
The acquisition module 502 is further configured to acquire, when the camera device group moves together with the user, the second images of the calibration reflection point captured by two adjacent camera devices.

While the user moves, the camera device group is controlled to keep capturing images of the calibration reflection point; from the images captured by all camera devices in the group, the multiple second images of the calibration reflection point captured by two adjacent camera devices are acquired.

The calculation module 503 is further configured to obtain, by means of the preset binocular ranging algorithm, the second spatial distance between the calibration reflection point and the camera device group from the second images.

The modeling module 504 is further configured to establish, taking the position of the camera device group after the movement as the origin and from the second spatial distance, the second spatial position relationship model between the calibration reflection point and the camera device group, the model expressing the spatial positional relationship between the calibration reflection point and the camera device group in the spatial coordinate system newly established after the movement.

Specifically, the current position of the camera device group is taken as the coordinate origin of the second spatial position relationship model, with spatial coordinates (0, 0, 0); from the second spatial distance between the calibration reflection point and the camera device group, the coordinates (x2, y2, z2) of the calibration reflection point relative to this origin are obtained. Because the first and second spatial distances differ, the coordinates of the calibration reflection point relative to the camera device group's positions before and after the movement also differ.

The second spatial position relationship model may contain information such as the origin of the current spatial coordinate system, the coordinates of the calibration reflection point relative to that origin, and the spatial distance between the calibration reflection point and the camera device group.
比较模块505,用于比较该第一空间位置关系模型以及该第二空间位置关系模型。The comparison module 505 is configured to compare the first spatial position relationship model and the second spatial position relationship model.
计算模块503,还用于根据该比较模块的比较结果得出该用户的运动前后的位置变化信息。The calculation module 503 is further configured to obtain position change information before and after the movement of the user according to the comparison result of the comparison module.
进一步地,比较模块505,还用于比较该第一空间位置关系模型中该标定反光点的第一坐标和在该第二空间关系模型中的第二坐标是否相同。即比较(x1,y1,z1)和(x2,y2,z2)中,x1是否与x2相同,y1是否与y2相同,z1是否与z2相同。Further, the comparison module 505 is further configured to compare whether the first coordinate of the calibration reflection point in the first spatial position relationship model and the second coordinate in the second spatial relationship model are the same. That is, whether (x 1 , y 1 , z 1 ) and (x 2 , y 2 , z 2 ) are compared, whether x 1 is the same as x 2 , whether y 1 is the same as y 2 , and whether z 1 is the same as z 2 .
计算模块503,还用于若比较模块505的比较结果是相同,则确定该用户的运动前后位置没有变化,若比较结果是不相同,则根据该第一坐标与该第二坐标的差值计算出该用户在运动后与运动前的位置差。The calculation module 503 is further configured to: if the comparison result of the comparison module 505 is the same, determine that the position of the user before and after the movement does not change, and if the comparison result is different, calculate the difference between the first coordinate and the second coordinate. The user is out of position after exercise and before exercise.
若比较结果同时满足x1与x2相同,y1与y2相同,z1与z2相同,则用户可能走出去又退回了原处,因此,确定该用户的运动前后位置没有变化。If the comparison result satisfies that x 1 is the same as x 2 , y 1 is the same as y 2 , and z 1 is the same as z 2 , the user may go out and return to the original position. Therefore, it is determined that the user's position before and after the movement does not change.
If the comparison result does not simultaneously satisfy x1 = x2, y1 = y2, and z1 = z2, that is, if at least one pair of coordinate values differs, the position difference of the user before and after the movement is calculated from the difference between the first coordinates and the second coordinates, including both the magnitude and the direction of the change in the user's position.
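The comparison logic described above can be sketched in a few lines. This is an illustrative sketch rather than the patented implementation, and the function name and tolerance handling are assumptions; the patent compares coordinates for exact equality.

```python
def position_change(first, second, tol=1e-9):
    """Compare the calibration point's coordinates in the two models.

    Returns None when (x1, y1, z1) equals (x2, y2, z2), meaning the user
    ended up where they started; otherwise returns the per-axis
    difference, whose signs give the direction of the change and whose
    magnitudes give the distance moved along each axis.
    """
    (x1, y1, z1), (x2, y2, z2) = first, second
    if abs(x1 - x2) <= tol and abs(y1 - y2) <= tol and abs(z1 - z2) <= tol:
        return None  # no net position change before and after the movement
    return (x2 - x1, y2 - y1, z2 - z1)

print(position_change((1.0, 2.0, 3.0), (1.0, 2.0, 3.0)))  # None
print(position_change((1.0, 2.0, 3.0), (1.5, 2.0, 2.0)))  # (0.5, 0.0, -1.0)
```

A small tolerance is used instead of strict equality because binocular ranging on real images yields noisy coordinates.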
Further, the apparatus further includes:
The screening module 506 is configured to select, from the first images, the two images in which the calibration reflective point has the highest resolution.
The calculation module 503 is further configured to obtain, by using the preset binocular ranging algorithm, the first spatial distance between the calibration reflective point and the camera device group according to the two selected images.
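The text does not specify which binocular ranging algorithm is preset. The standard pinhole-stereo relation Z = f·B / d (focal length times baseline, divided by disparity) is one common choice, and the sketch below assumes it; all names and numbers are illustrative.

```python
def binocular_distance(focal_px, baseline_m, x_left, x_right):
    """Estimate the depth of a point seen by two adjacent cameras.

    focal_px   : focal length in pixels (identical cameras assumed)
    baseline_m : distance between the two camera centres, in metres
    x_left, x_right : horizontal pixel coordinate of the calibration
                      reflective point in the left and right images
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("point must lie in front of both cameras")
    return focal_px * baseline_m / disparity

# e.g. f = 800 px, baseline = 0.1 m, disparity = 40 px -> 2.0 m
print(binocular_distance(800, 0.1, 420, 380))
```

Selecting the two images in which the point appears at the highest resolution, as module 506 does, keeps the disparity measurement as accurate as possible before this formula is applied.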
For details not covered in this embodiment, refer to the description of the embodiments shown in FIG. 1 to FIG. 4, which is not repeated here.
In the embodiments of the present invention, a fixed calibration reflective point is set, and images of the calibration reflective point are acquired to determine the distance between the user and the calibration reflective point, thereby constructing a first spatial position relationship model between the user's current position and the calibration reflective point. When the user moves, the camera device group worn on the user moves synchronously and acquires images of the calibration reflective point, and a second spatial position relationship model between the user's post-movement position and the calibration reflective point is constructed. By comparing the differences between the first spatial position relationship model and the second spatial position relationship model, the user's position change before and after the movement is derived. Compared with the prior art, this reduces the amount of computation in the positioning process and lowers the technical difficulty, enabling mass production in mobile VR systems and improving product productivity.
An embodiment of the present invention provides a spatial positioning apparatus in a virtual reality system, the apparatus including: one or more processors;
a memory; and
one or more programs stored in the memory which, when executed by the one or more processors, perform the following operations:
controlling a camera device group to collect images of a calibration reflective point, where the camera device group includes a plurality of camera devices, the calibration reflective point is used to mark the position where it is located, and the camera device group is worn on a user; acquiring first images of the calibration reflective point collected by two adjacent camera devices, and obtaining a first spatial distance between the calibration reflective point and the camera device group according to the first images by using a preset binocular ranging algorithm; taking the position of the camera device group as the origin and establishing, according to the first spatial distance, a first spatial position relationship model between the calibration reflective point and the camera device group, where the first spatial position relationship model represents the spatial position relationship between the calibration reflective point and the camera device group in the current spatial coordinate system; when the camera device group moves synchronously with the user, acquiring second images of the calibration reflective point collected by two adjacent camera devices, and obtaining a second spatial distance between the calibration reflective point and the camera device group according to the second images by using the preset binocular ranging algorithm; taking the position of the camera device group after the movement as the origin and establishing, according to the second spatial distance, a second spatial position relationship model between the calibration reflective point and the camera device group, where the second spatial position relationship model represents the spatial position relationship between the calibration reflective point and the camera device group in a new spatial coordinate system established after the camera device group moves; and comparing the first spatial position relationship model with the second spatial position relationship model, and obtaining position change information of the user before and after the movement according to the comparison result.
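The full procedure recited above can be summarized as a pipeline sketch. All names are illustrative assumptions, the standard stereo formula stands in for the unspecified binocular ranging algorithm, and the viewing direction is fixed for simplicity.

```python
def locate(focal_px, baseline_m, x_left, x_right):
    """Distance to the calibration point via the standard stereo formula."""
    return focal_px * baseline_m / (x_left - x_right)

def build_model(distance, direction):
    """Point coordinates with the camera group's current position as origin."""
    dx, dy, dz = direction
    norm = (dx * dx + dy * dy + dz * dz) ** 0.5
    return tuple(distance * c / norm for c in (dx, dy, dz))

# Before movement: first images of the reflective point, disparity 40 px
first = build_model(locate(800, 0.1, 420, 380), (0.0, 0.0, 1.0))
# After movement: second images, new origin at the moved camera group,
# disparity 50 px (the point now appears closer)
second = build_model(locate(800, 0.1, 430, 380), (0.0, 0.0, 1.0))
# The user's position change is derived from the two models' difference
change = tuple(b - a for a, b in zip(first, second))
print(first, second, change)
```

In this toy case the point's apparent depth shrinks from 2.0 m to 1.6 m, from which the pipeline infers that the user advanced 0.4 m toward the calibration point.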
For details not described in the embodiments of the present invention, refer to the descriptions of the foregoing embodiments.
In the embodiments provided in this application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative; for instance, the division into modules is only a division by logical function, and other divisions are possible in actual implementation: multiple modules or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses, or modules, and may be electrical, mechanical, or in other forms.
The modules described as separate components may or may not be physically separate, and the components shown as modules may or may not be physical modules; that is, they may be located in one place or distributed over multiple network modules. Some or all of the modules may be selected according to actual needs to achieve the purposes of the solutions of the embodiments.
In addition, the functional modules in the embodiments of the present invention may be integrated into one processing module, or each module may exist physically alone, or two or more modules may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module.
If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present invention, in essence the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in the embodiments of the present invention. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, for brevity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but those skilled in the art should understand that the present invention is not limited by the described order of actions, because according to the present invention some steps may be performed in other orders or simultaneously. In addition, those skilled in the art should also understand that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily all required by the present invention.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not detailed in a certain embodiment, refer to the relevant descriptions of other embodiments.
The above is a description of the spatial positioning method, apparatus, and system in a virtual reality system provided by the present invention. Those of ordinary skill in the art, following the ideas of the embodiments of the present invention, may make changes in the specific implementations and the scope of application. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (12)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201511014777.4 | 2015-12-29 | ||
| CN201511014777.4A CN105867611A (en) | 2015-12-29 | 2015-12-29 | Space positioning method, device and system in virtual reality system |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2017113689A1 true WO2017113689A1 (en) | 2017-07-06 |
Family
ID=56624477
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2016/088579 (WO2017113689A1, Ceased) | Method, device, and system for spatial positioning in virtual reality system | 2015-12-29 | 2016-07-05 |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN105867611A (en) |
| WO (1) | WO2017113689A1 (en) |
Families Citing this family (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN106326930A (en) * | 2016-08-24 | 2017-01-11 | 王忠民 | Method for determining position of tracked object in virtual reality and device and system thereof |
| CN106340043A (en) * | 2016-08-24 | 2017-01-18 | 深圳市虚拟现实技术有限公司 | Image identification spatial localization method and image identification spatial localization system |
| CN106569337B (en) * | 2016-10-21 | 2019-11-08 | 北京小鸟看看科技有限公司 | A kind of virtual reality system and its localization method |
| CN106568434A (en) * | 2016-11-08 | 2017-04-19 | 深圳市虚拟现实科技有限公司 | Method and system for positioning virtual reality space |
| CN106774844B (en) * | 2016-11-23 | 2020-04-17 | 上海临奇智能科技有限公司 | Method and equipment for virtual positioning |
| CN106774992A (en) * | 2016-12-16 | 2017-05-31 | 深圳市虚拟现实技术有限公司 | The point recognition methods of virtual reality space location feature |
| CN106791399A (en) * | 2016-12-22 | 2017-05-31 | 深圳市虚拟现实技术有限公司 | Virtual reality zooming space localization method and system |
| WO2018188055A1 (en) * | 2017-04-14 | 2018-10-18 | 深圳市方鹏科技有限公司 | Virtual reality technology-based modeling space positioning device |
| CN107423720A (en) * | 2017-08-07 | 2017-12-01 | 广州明医医疗科技有限公司 | Target Tracking System and stereoscopic display device |
| TWI642903B (en) * | 2017-10-13 | 2018-12-01 | 緯創資通股份有限公司 | Locating method, locator, and locating system for head-mounted display |
| CN108519215B (en) * | 2018-03-28 | 2020-10-16 | 华勤技术有限公司 | Pupil distance adaptability test system and method and test host |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JP2002072130A (en) * | 2000-08-29 | 2002-03-12 | Shimadzu Corp | Head-mounted information display device |
| CN103245322A (en) * | 2013-04-10 | 2013-08-14 | 南京航空航天大学 | Distance measurement method and system based on binocular stereo vision |
| CN103345064A (en) * | 2013-07-16 | 2013-10-09 | 卫荣杰 | Cap integrated with 3D identifying and 3D identifying method of cap |
| CN103744184A (en) * | 2014-01-24 | 2014-04-23 | 成都理想境界科技有限公司 | Hat-shaped head-mounted display device |
Family Cites Families (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101673161B (en) * | 2009-10-15 | 2011-12-07 | 复旦大学 | Visual, operable and non-solid touch screen system |
| CN102749991B (en) * | 2012-04-12 | 2016-04-27 | 广东百泰科技有限公司 | A kind of contactless free space sight tracing being applicable to man-machine interaction |
| JP6349660B2 (en) * | 2013-09-18 | 2018-07-04 | コニカミノルタ株式会社 | Image display device, image display method, and image display program |
| KR101430614B1 (en) * | 2014-05-30 | 2014-08-18 | 주식회사 모리아타운 | Display device using wearable eyeglasses and method for operating the same |
| CN104436634B (en) * | 2014-11-19 | 2017-09-19 | 重庆邮电大学 | A live-action shooting game system using immersive virtual reality technology and its implementation method |
- 2015-12-29: CN application CN201511014777.4A, patent CN105867611A, status: active, Pending
- 2016-07-05: WO application PCT/CN2016/088579, patent WO2017113689A1, status: not_active, Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| CN105867611A (en) | 2016-08-17 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2017113689A1 (en) | Method, device, and system for spatial positioning in virtual reality system | |
| WO2017010695A1 (en) | Three dimensional content generating apparatus and three dimensional content generating method thereof | |
| JP2005500757A (en) | 3D video conferencing system | |
| JP2003532062A (en) | Combined stereoscopic, color 3D digitization and motion capture system | |
| WO2012118322A2 (en) | Apparatus for projecting a grid pattern | |
| WO2019142997A1 (en) | Apparatus and method for compensating for image change caused by optical image stabilization motion | |
| WO2022085966A1 (en) | Oral image processing device and oral image processing method | |
| WO2009151292A2 (en) | Image conversion method and apparatus | |
| CN109923856A (en) | Light supplementing control device, system, method and mobile device | |
| JP5963006B2 (en) | Image conversion apparatus, camera, video system, image conversion method, and recording medium recording program | |
| WO2012091326A2 (en) | Three-dimensional real-time street view system using distinct identification information | |
| WO2022059937A1 (en) | Robot and control method therefor | |
| WO2017082539A1 (en) | Augmented reality providing apparatus and method for user styling | |
| JP5987584B2 (en) | Image processing apparatus, video projection system, and program | |
| WO2023128100A1 (en) | Three-dimensional virtual model provision method and three-dimensional virtual model provision system therefor | |
| EP3539289A1 (en) | Method of projecting image onto curved projection area and projection system therefor | |
| WO2018182066A1 (en) | Method and apparatus for applying dynamic effect to image | |
| WO2020149527A1 (en) | Apparatus and method for encoding in structured depth camera system | |
| WO2023282579A1 (en) | Data processing apparatus for processing oral model and operating method therefor | |
| CN114596359A (en) | Method, device, equipment and medium for superposing double light images | |
| WO2021221341A1 (en) | Augmented reality device and control method for same | |
| JP6633140B2 (en) | Constant calibration system and method | |
| WO2023003192A1 (en) | Image processing apparatus and image processing method | |
| WO2019112169A1 (en) | Electronic device and method for generating 3d image | |
| WO2023063661A1 (en) | Training set generation method, training set generation apparatus, and training set generation system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16880490; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 16880490; Country of ref document: EP; Kind code of ref document: A1 |