HK1138668B - Method and device for creating at least two key images corresponding to a three-dimensional object
- Publication number: HK1138668B
- Authority: HK (Hong Kong)
Description
The present invention relates to the combination of real and virtual images in real time, also known as augmented reality, and in particular a process and device for creating key images corresponding to a three-dimensional object.
Augmented reality is the process of inserting one or more virtual objects into the images of a video stream. Depending on the type of application, the position and orientation of these virtual objects may be determined by data external to the scene represented by the images, for example coordinates directly from a game scenario, or by data related to certain elements of that scene, for example coordinates from a particular point in the scene such as a player's hand. When the position and orientation are determined by data related to certain elements of that scene, it may be necessary to track these elements in camera movements or the movements of these elements themselves in the scene.
The objective of this visual tracking algorithm is to find, in a real scene, the pose, i.e. the position and orientation, of an object whose three-dimensional mesh is available, or to find the extrinsic position and orientation parameters, relative to that object, of a camera filming that object, through image analysis.
The current video image is compared with one or more recorded key images in order to find a significant number of matches between these pairs of images and thus estimate the pose of the object. To this end, a key image is composed of two elements: an image captured from the video stream and a pose (orientation and position) of the three-dimensional model appearing in this image. Offline key images must be distinguished from online key images. Offline key images are created and saved outside the permanent regime of the application, while online key images are stored dynamically during the execution of the tracking program; they are computed when the error, i.e. the distance between the matched points of interest, is small. Learning new key images online also makes the application more robust to variations in exterior light and in camera colorimetry. However, online key images have the disadvantage of introducing a vibration effect on the object's pose over time. When a new key image is learned online, it replaces the previous key image, offline or online, and is used as the current key image.
Each key image, offline or online, includes an image in which the object is present and a pose to characterize the location of that object as well as a number of points of interest that characterize the object in the image.
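As an illustrative sketch (the class and field names are hypothetical, not taken from the patent), a key image as described above can be represented as a simple structure pairing the captured image, the pose and the points of interest:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class KeyImage:
    """A key image: a captured frame, the pose of the object in that
    frame, and the points of interest that characterize the object."""
    image: np.ndarray               # captured frame (H x W, grayscale here)
    pose: np.ndarray                # 4x4 homogeneous pose of the object
    points_of_interest: np.ndarray  # N x 2 pixel coordinates


# Example: a synthetic offline key image with two points of interest
frame = np.zeros((480, 640), dtype=np.uint8)
identity_pose = np.eye(4)
corners = np.array([[100, 120], [300, 240]])
key = KeyImage(image=frame, pose=identity_pose, points_of_interest=corners)
print(key.pose.shape)  # (4, 4)
```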
Before starting the application, it is necessary to determine one or more offline key images. These are usually images extracted from the video stream which contain the object to be tracked and which are associated with a position and orientation of the three-dimensional model of this object. To do this, an operator performs a manual operation that consists of visually matching a wireframe model to the real object. The manual preparation phase therefore consists of finding a first estimate of the pose of the object in an image extracted from the video stream, which amounts to formalizing the initial affine transformation Tp→c corresponding to the transformation matrix between the frame attached to the object and the frame associated with the camera. The initial affine transformation can be broken down into a first transformation To→c relative to an initial position of the object, e.g. in the center of the screen, i.e. a transformation related to the change of reference between the camera's frame and the object's frame, and a second transformation Tp→o relative to the translation and rotation of the object from its initial position in the center of the screen to the position and orientation in which the object is actually located in the image, where Tp→c = Tp→o · To→c. If the values α, β and γ correspond to the translation of the object from its initial position in the center of the image to its position in the key image and if the values θ, φ and ψ correspond to the rotation of the object from its initial position to its position in the key image about the x, y and z axes, the transformation Tp→o can then be expressed as a 4×4 homogeneous matrix combining the rotation R(θ, φ, ψ) with the translation (α, β, γ).
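The decomposition above can be sketched as follows; the patent does not specify the rotation composition order, so the Rz·Ry·Rx order used here is an illustrative assumption:

```python
import numpy as np


def pose_matrix(alpha, beta, gamma, theta, phi, psi):
    """Build Tp->o as a 4x4 homogeneous matrix: rotations theta, phi, psi
    about the x, y and z axes, plus translation (alpha, beta, gamma)."""
    cx, sx = np.cos(theta), np.sin(theta)
    cy, sy = np.cos(phi), np.sin(phi)
    cz, sz = np.cos(psi), np.sin(psi)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx       # one conventional composition order
    T[:3, 3] = (alpha, beta, gamma)
    return T


# Zero rotation and zero translation give the identity transform
print(np.allclose(pose_matrix(0, 0, 0, 0, 0, 0), np.eye(4)))  # True
```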
The use of this model makes it possible to establish a link between the coordinates of the points of the three-dimensional model of the object expressed in the object's reference and the coordinates of these points in the reference of the camera.
When the application is initialized, offline key images are processed to position points of interest according to the parameters chosen at the time of application launch. These parameters are specified empirically for each type of application use; they allow the matching detection core to be modulated and better quality to be achieved in estimating the object's pose according to the characteristics of the real environment. Then, when the real object in the current image is in a pose that is close to the pose of that same object in one of the offline key images, the number of matches becomes large.
When such a match has been found, the algorithm switches to a permanent mode, tracking the object's movements from one frame to another and compensating for any drift by information contained in the offline key image used at initialization and the online key image calculated at application execution.
The tracking application combines two types of algorithm: point of interest detection, for example a modified version of Harris point detection, and a technique of re-projection of the points of interest positioned on the three-dimensional model onto the flat image. This re-projection makes it possible to predict the result of a spatial transformation from one frame to the next. These two algorithms combined allow robust tracking of an object according to six degrees of freedom.
In general, a point p of the image is the projection of a point P of the real scene with p ∼ PI · PE · Tp→c · P, where PI is the matrix of the intrinsic parameters of the camera, i.e. its focal length, the center of the image and the offset, PE is the matrix of the extrinsic parameters of the camera, i.e. the position of the camera in real space, and Tp→c is the affine transformation matrix between the frame associated with the tracked object and the frame of the camera.
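A minimal numerical sketch of this projection, with hypothetical intrinsic parameters (focal length 500, image center at (320, 240)) and the camera placed at the scene origin:

```python
import numpy as np


def project(P_world, PI, PE, T_obj_to_cam):
    """Project a 3-D point P (object frame) to pixel p ~ PI . PE . T . P."""
    P_h = np.append(P_world, 1.0)        # homogeneous coordinates
    p_cam = PE @ T_obj_to_cam @ P_h      # point expressed in the camera frame
    p_img = PI @ p_cam[:3]               # apply the intrinsic parameters
    return p_img[:2] / p_img[2]          # perspective division


# Hypothetical intrinsics: focal length 500 px, image center (320, 240)
PI = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1]], dtype=float)
PE = np.eye(4)                # camera at the scene origin (extrinsics)
T = np.eye(4)
T[2, 3] = 5.0                 # object placed 5 units in front of the camera

# The object origin projects to the image center
print(project(np.array([0.0, 0.0, 0.0]), PI, PE, T))  # [320. 240.]
```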
However, it is important to note that when the error measure becomes too large, i.e. when the number of matches between the current key image and the current image becomes too small, the tracking is considered to have stalled (the estimate of the object's pose is no longer considered sufficiently consistent) and a new initialization phase, always using the same offline key images, is necessary.
The position of an object is estimated by the correspondences between the points of interest of the current image from the video stream, the points of interest of the current key image and the points of interest of the previous image from the video stream. These operations are called the matching phase. From the most significant correlations, the software calculates the position of the object that best corresponds to the observations.
Figures 1 and 2 illustrate this tracking application.
The solutions proposed are often research-based and do not take into account the constraints of implementing commercial systems. In particular, problems related to robustness, to the ability to launch the application quickly without requiring a manual phase of creating the offline key images needed to initialize the tracking system, to detecting stalling errors (when the object to be tracked is lost) and to automatic, real-time recovery after such errors are often left aside.
The invention allows at least one of the problems described above to be solved.
The invention thus concerns a process for creating at least two key images, each comprising an image representing at least one three-dimensional object in a three-dimensional environment and the pose of the object in that environment according to the point of view of the associated image, this process being characterised by the following steps:
acquisition of a first image representing the object in a determined initial position;
creation of a first key image from the first acquired image and the relative pose of the object in its environment;
acquisition of at least one second image representing that object, the point of view of said at least one second image being different from that of the first image;
determination of the relative pose of the object in its environment according to the difference in the points of view of the first and second images, each of these points of view being determined in relation to a position and orientation; and
creation of a second key image from that second image and the relative pose of the object in its environment.
The process according to the invention thus allows the creation of a plurality of key images to be automated, in particular to initialize or reset an augmented reality application using automatic real-time tracking of three-dimensional geometric objects, without a marker, in a video stream.
Depending on a particular feature, the acquisition of at least one second image representing the object is done by means of a tracking application.
According to another particular feature, the object is at least in part a real object.
According to another particular feature, the object is at least partially a virtual object.
According to a particular characteristic, the virtual object is a representation of a real object according to a virtual model.
Another particular feature is that the object includes at least part of the environment.
According to one embodiment, the viewpoints of the images belong to a set of predetermined points.
According to this feature, the creation of key images is carried out according to a defined field of vision.
According to a particular embodiment, the steps of the creation process are repeated for at least part of the object.
The invention also concerns a device for creating at least two key images each comprising an image representing at least one three-dimensional object in a three-dimensional environment and placing the object in that environment from the point of view of the associated image, this device being characterised by the following:
means for acquiring a first image representing the object in a given initial position;
means for creating a first key image from the first acquired image and the relative pose of the object in its environment;
means for acquiring at least one second image representing that object, the point of view of at least one of those images being different from that of the first image;
means for determining the relative pose of the object in its environment according to the difference in the points of view of the first image and at least one of those second images, each of those points of view being determined in relation to a position and orientation;
means for creating a second key image from the at least one second image acquired and the relative pose of that object in its environment.
This device has the same advantages as the process briefly described above; they are therefore not repeated here.
The present invention also relates to a storage medium, possibly partially or totally removable, which is readable by a computer or microprocessor and contains instructions for the execution of the steps of the process described above.
The present invention also concerns a computer program containing instructions for the implementation of each step of the process as described above.
Other advantages, purposes and features of the present invention are shown by the following detailed description, made as a non-limiting example, in the light of the attached drawings in which:
Figure 1 schematically represents the essential principles of the object tracking application developed by the École Polytechnique Fédérale de Lausanne;
Figure 2 illustrates some steps of the process for determining the pose of an object in an image of a video stream from key images and the previous image of the video stream;
Figure 3 represents the overall scheme for creating key images of a three-dimensional object of any geometry, in an environment, implementing the invention;
Figure 4 shows an example of a device for at least partially implementing the invention;
Figure 5 illustrates an example of a key image used for tracking, taken from a video stream, here of a vehicle;
Figure 6 illustrates the automatic initialization from key images created from real or virtual three-dimensional objects;
Figure 7 shows a state-transition automaton for changing the target object to be tracked;
Figure 8 illustrates the steps of a landscape tracking application;
Figure 9 illustrates an algorithm for automatically creating key images of a landscape using the tracking algorithm;
Figure 10 shows an example of a path for learning a landscape;
Figure 11 illustrates a tracking algorithm using the key images of a landscape.
The purpose of the process is in particular to create, notably automatically, key images of a three-dimensional object in an environment, in order to automate the initialization phase and the reset phases after stalling of the object tracking application on images from a video stream. A multitude of key images can allow the application to initialize for any type of relative pose between the object to be tracked and the camera.
As shown in Figure 3, creating key images of an object in an environment and running a tracking application (300) using these key images comprises four phases: a phase of creating a first key image (I), an automated phase of creating subsequent key images (II), a phase of initializing tracking using the previously created key image(s) (III) and an object tracking phase (IV), which corresponds to the application's permanent regime.
The first phase (I) consists mainly of acquiring a first image representing the three-dimensional object in an initial position. This acquisition is done, in particular, by means of an image capture device such as a video camera or a still camera. After acquiring the image of the three-dimensional object (step 305), a first key image is created (step 310) including, on the one hand, the first acquired image and, on the other hand, the relative pose of the object in the environment according to the point of view of the image.
According to the state of the art, to construct a key image, the three-dimensional mesh corresponding to the object must be placed manually on the object in the image. This step is tedious. However, knowledge of the type of application can reduce or simplify the creation of a key image (step 310).
In order to improve the robustness of the tracking algorithm, it is sometimes important to capture a series of key images corresponding to several relative positions between the camera and the object. In the creation phase of these next key images (II), a first step consists of acquiring a new image representing the object (step 315), this second image being different from the first image. Then the relative pose of the object in its environment is determined from the difference in the viewpoints of the images (step 320), each of these viewpoints being determined with respect to a position and orientation. This step can be done in several ways. First, if a textured three-dimensional virtual model of the object to be tracked is available, it is possible to create these new key images by varying the object's positioning parameters in front of the camera. It is also particularly interesting to use the tracking application (335) to generate new key images. Thus, the new key images created online can be reused to improve the quality of initialization of the tracking algorithm. Finally, from each new image and the relative pose of the object in its environment, a new key image is created (325).
The steps in this phase are repeated to create a plurality of key images.
In the initialization phase (III), from all the key frames created in phase I and phase II, the tracking application is initialized by searching for a key frame representing the object in the video stream containing the object to be tracked (step 330) and closest to the current configuration (relative position between the camera and the object).
When the pose of the object has been determined in the first image and the current key image selected (the key image determined during the initialization phase) (step 330), the tracking application can find the object (step IV) in successive images of the video stream according to a tracking mechanism (step 335). According to this mechanism, the object's movements (movement of the object in the scene or movement induced by camera movement) are followed from image to image and any drift is compensated using the information contained in the offline key image selected during initialization and, possibly, in online key images computed while the application is running, which can then serve as the current key image.
When the error measure becomes too large, the tracking stalls and a reset phase is required.
Figure 4 shows a schematic representation of an apparatus suitable for the implementation of the invention.
The communication bus allows communication and interoperability between the various elements included in or connected to the apparatus 400. The representation of the bus is not limited and, in particular, the central unit is capable of communicating instructions to any element of the apparatus 400 directly or through another element of the apparatus 400.
The executable code of each program enabling the programmable device to implement the processes of the invention may be stored, for example, on hard disk 420 or in memory 406.
In one variant, the executable code of the programs can be received via the communication network 428, via the interface 426, to be stored in the same way as described above.
Memory cards may be replaced by any information medium, such as a compact disc (CD-ROM or DVD). In general, memory cards may be replaced by information storage media, readable by a computer or microprocessor, whether or not integrated into the device, possibly removable, and suitable for storing one or more programs whose execution allows the implementation of the process of the invention.
In general, the program(s) may be loaded into one of the storage media of the apparatus 400 before being executed.
The central unit 404 will command and direct the execution of instructions or portions of software code from the program(s) of the invention, instructions which are stored in the hard disk 420 or in the memory 406 or in the other storage elements mentioned above. When powered on, the program(s) which are stored in nonvolatile memory, e.g. the hard disk 420 or memory 406, are transferred to the memory 408, which then contains the executable code of the program(s) of the invention, as well as registers to store the variables and parameters necessary for the implementation of the invention.
It should be noted that the communication device incorporating the device of the invention may also be a programmed device, which contains the code of the computer program(s), for example frozen in an application specific integrated circuit (ASIC).
Alternatively, the image from the video card 416 may be transmitted to the display or projector 418 through the communication interface 426 and the distributed communication network 428. Similarly, the camera 412 may be connected to a video acquisition card 410', separate from the apparatus 400, so that the images from the camera 412 are transmitted to the apparatus 400 through the distributed communication network 428 and the communication interface 426.
Due to the simplification of implementation brought by the process of the invention, key images can be created without recourse to a specialist. After the creation of a set of key images, a tracking application can be initialized from this set and used in a standard way to follow an object in a sequence of images from a video stream, for example to embed a video sequence on a scene object taking into account the position and orientation of this object, but also to determine the movement of a camera from the analysis of a scene object. In the latter case, the object is part of the scene model, and finding the pose of this object in the scene amounts to finding the pose of the camera relative to the scene.
In particular, according to a first embodiment, the application may consist of estimating the pose of a three-dimensional object, e.g. an engine in a vehicle, and adding information on this object to inform the user about it, e.g. about the assembly and disassembly of the engine.
To do this, the application requires learning several key images which will then allow the tracking application to initialize automatically in the image. Since the user's position is approximately known, the camera's position in relation to the three-dimensional object to be tracked is also approximately known. Thus, the creation of the key images (phases I and II) and the initialization (phase III) are made simpler by the fact that the user's position in relation to the three-dimensional object is known, and a small number of key images suffice to make automatic initialization possible.
In a first embodiment, to allow the automatic initialization of the 3D object tracking system, a learning phase is required to acquire a number of key images containing the 3D object in the user's shooting area.
The shooting area may be adapted to the use case.
In a second embodiment, the key image learning phase is performed by means of textured three-dimensional synthesis models of the object, e.g. the engine and the car. In this embodiment, the learning is performed by varying the angles θ and φ, corresponding to the vertical and horizontal angles of the camera (real or virtual), and the distance of the camera from the three-dimensional model of the object.
The three-dimensional rendering of the object from different angles is also associated with the known pose of the object, determined from the parameters θ, φ and the distance of the camera from the object. Thus, a set of key images is created in a totally automatic manner (phases I and II).
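The viewpoint enumeration described above can be sketched as follows; the angle steps and distances are hypothetical, and a renderer (not shown) would turn each viewpoint into a synthetic key image of the textured CAD model:

```python
import numpy as np


def sample_viewpoints(thetas, phis, distances):
    """Enumerate camera poses around the object by varying the vertical
    angle theta, the horizontal angle phi and the camera-object distance d.
    Each (theta, phi, d) tuple fully determines the pose associated with
    the rendered key image."""
    return [(t, p, d) for t in thetas for p in phis for d in distances]


views = sample_viewpoints(
    thetas=np.radians([-30, 0, 30]),       # vertical angle, 3 values
    phis=np.radians(range(0, 360, 45)),    # horizontal angle, 45-degree steps
    distances=[0.5, 1.0, 2.0],             # hypothetical distances
)
print(len(views))  # 3 * 8 * 3 = 72 candidate key-image viewpoints
```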
The learning phase is thus inexpensive and allows for preparation for maintenance of a product even before its manufacture, since the CAD (Computer Aided Design) model of the product is most often modelled before its actual construction.
In addition, the textured CAD model of an object used in different lighting conditions allows the approximation of the conditions of the real environment.
The next phase (phase III) is automatic initialization from the key images.
This initialization consists of a first estimate of the position of the three-dimensional object. In the example, the estimate of the position of the engine is made when the user opens the hood of the car.
Figure 6 illustrates the automatic initialization from key images created from real or virtual three-dimensional objects.
According to a second example of implementation, the application may consist of finding the placement of an object in a plurality of target objects, which can be activated for tracking.
Consider, for example, a machine tool.
In this example application, the pose of the whole machine is tracked from a distant camera position, whereas from close up only a very specific part of the machine tool, e.g. an electrical enclosure, is tracked, first from the outside and then from the inside.
According to this method, the use of state-transition automata allows the key images to be changed according to the current state and to follow the placement of different objects.
Figure 7 shows a transition state automaton for the example.
Transitions from one state to another state are triggered when the user approaches a new object to be followed and commands, for example with a button, a change of the target object to be followed.
The tracking application then switches to an intermediate unlocked mode, equivalent to an initialization mode, which requires the use of a new series of key images corresponding to the selected target object and associated with the new state of the automaton.
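As a hedged illustration of such an automaton (the state names, events and key image identifiers are invented for the machine tool and electrical enclosure example; the patent does not specify them):

```python
class TrackingAutomaton:
    """State-transition automaton in the spirit of Figure 7: each state
    selects the set of key images used by the tracker, and a user command
    (e.g. a button press) triggers a transition to a new target object."""

    def __init__(self, key_image_sets, transitions, start):
        self.key_image_sets = key_image_sets  # state -> list of key images
        self.transitions = transitions        # (state, event) -> next state
        self.state = start

    def on_event(self, event):
        """Switch target object; the tracker must then re-initialize with
        the key images associated with the new state."""
        self.state = self.transitions.get((self.state, event), self.state)
        return self.key_image_sets[self.state]


automaton = TrackingAutomaton(
    key_image_sets={"machine": ["km1"], "enclosure_out": ["ke1"],
                    "enclosure_in": ["ki1"]},
    transitions={("machine", "approach"): "enclosure_out",
                 ("enclosure_out", "open"): "enclosure_in"},
    start="machine",
)
# Approaching the enclosure switches to its series of key images
print(automaton.on_event("approach"))  # ['ke1']
```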
According to a third example of implementation, the application may consist of finding the position of the user's aiming axis. In an application such as a shooting simulator, according to this method of implementation, this amounts to finding the position of the gun's aiming axis, in particular by matching points of interest between the current image and key images of the landscape of the shooting simulator.
In this embodiment, the camera has a static position, since it is mounted on a fixed tripod, and the external conditions are constant.
According to this method of implementation, the application includes two main steps, as illustrated in Figure 8: a landscape learning step (step 800), consisting of the creation of a plurality of key images of the landscape (phases I and II), associated with values of the yaw, pitch and roll parameters determined by means of, for example, a sensor, and a tracking step in the landscape (step 805), in which the previously stored key images are used for matching and estimating the current pose of the camera.
According to one embodiment, the tracking application is coupled with a motion sensor, e.g. an MTi-type sensor from MTI Instrument or an InertiaCube3-type sensor from Intersense.
The learning phase consists of moving the camera around its vertical axis, notably defined by the tripod, and acquiring a number of key images depending on the orientation of the camera according to the yaw, pitch and roll parameters.
This phase is characterised by the use of the object tracking algorithm to create new key images (step 325).
This algorithm is illustrated in Figure 9.
The algorithm starts at position 0 by acquiring a first key image and finding points of interest in that first image. The camera then undergoes a rotation according to the yaw or pitch parameter. The tracking application then finds the correspondences between the points of interest of the first key image acquired at position 0 and the current image of the video stream.
When the number of matches becomes too small, a new key image is acquired at the current position according to the yaw and pitch parameters.
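The acquisition criterion described above can be sketched as follows; the threshold and the simulated matcher are hypothetical stand-ins for the real point of interest matching:

```python
MIN_MATCHES = 40  # hypothetical threshold on the number of matches


def learn_landscape(frames, match_count):
    """frames: list of (yaw, pitch) camera orientations, frames[0] being
    position 0. match_count(pose_a, pose_b) returns the number of point of
    interest matches between the two views (simulated below). A new key
    image is acquired whenever matching with the last key image drops
    below the threshold."""
    key_poses = [frames[0]]               # first key image at position 0
    for pose in frames[1:]:
        if match_count(key_poses[-1], pose) < MIN_MATCHES:
            key_poses.append(pose)        # acquire a new key image here
    return key_poses


def fake_matches(a, b):
    # Simulated matcher: matches fall off linearly with angular distance
    return max(0, 100 - 10 * (abs(a[0] - b[0]) + abs(a[1] - b[1])))


sweep = [(yaw, 0) for yaw in range(0, 21)]  # pan from 0 to 20 degrees
keys = learn_landscape(sweep, fake_matches)
print(keys)  # [(0, 0), (7, 0), (14, 0)]
```

With this simulated fall-off, a new key image is acquired every 7 degrees of yaw, illustrating how the density of key images adapts to how quickly matches are lost.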
To limit drift on this type of learning, the camera can be returned to its initial position periodically.
Figure 10 shows an example of a path for learning a landscape, which can be used to avoid drifting.
A tracking algorithm is now described according to this method of implementation, as shown in Figure 11.
When the tracking application is running, the previously learned information is used to retrieve the offline key image for automatic initialization of the tracking phase.
After this initialization step, the tracking application is executed (step 1105).
However, tracking of an object may be interrupted (step 1110), for example when the weather changes or when the user quickly turns the camera or the user leaves the learned landscape area.
The tracking algorithm then tries to re-engage, for example by performing an automatic initialization by means of the orientation parameters delivered by the sensor.
In this unhooked mode, tracking is therefore maintained by the use of the sensor, in particular the MTi sensor.
When the tracking application is running again, camera movements allow a new key image to be found for tracking, because the movements are tracked according to the camera's yaw and pitch orientation parameters.
Naturally, to meet specific needs, a person competent in the field of the invention may apply modifications to the previous description.
Claims (17)
- Method of automatic creation of at least two initialization key frames, each comprising an image representing at least one three-dimensional object in a three-dimensional environment and the pose of the object in that environment according to the associated image viewpoint, this method being characterized in that it comprises the following steps:
acquisition of a first image representing the object in a particular initial position;
creation of a first key frame from the first image acquired and the relative pose of the object in its environment;
acquisition of at least one second image representing said object, the viewpoint of said at least one second image being different from the viewpoint of said first image;
determination of the relative pose of the object in its environment according to the difference of the viewpoints of the first image and said at least one second image, each of said viewpoints being determined relative to position and orientation parameters, at least one of said parameters being determined independently of said images; and
creation of said at least one second key frame from the acquired at least one second image and the relative pose of the object in its environment.
- Creation method according to claim 1, wherein said at least one of said parameters is determined according to a predetermined position.
- Creation method according to claim 1 or claim 2, wherein the calculation of said at least one of said parameters is based on a value coming from an orientation sensor.
- Creation method according to one of the preceding claims, characterized in that the acquisition of at least one second image representing said object is effected by means of a tracking application.
- Creation method according to one of the preceding claims characterized in that said object is at least part of a real object.
- Creation method according to one of the preceding claims, characterized in that the object is at least part of a virtual object.
- Creation method according to claim 6, characterized in that the virtual object is a representation of a real object according to a virtual model.
- Creation method according to claim 6 or claim 7, characterized in that the object comprises at least part of the environment.
- Creation method according to one of the preceding claims, characterized in that the viewpoints of the images belong to a set of predetermined points.
- Creation method according to one of the preceding claims, characterized in that the steps of the creation method are repeated for at least one part of the object.
- Computer program including instructions adapted to execute each of the steps of the method according to one of the preceding claims.
- Information storage medium, removable or otherwise, partly or totally readable by a computer or a microprocessor, containing code instructions of a computer program for executing each of the steps of the method according to one of the claims 1 to 10.
- Device for automatic creation of at least two initialization key frames, each comprising an image representing at least one three-dimensional object in a three-dimensional environment and the pose of the object in that environment according to the associated image viewpoint, this device being characterized in that it comprises: means for acquisition of a first image representing the object in a particular initial position; means for creation of a first key frame from the acquired first image and the relative pose of the object in its environment; means for acquisition of at least one second image representing said object, the viewpoint of said at least one second image being different from the viewpoint of said first image; means for determination of the relative pose of the object in its environment according to the difference of the viewpoints of the first image and said at least one second image, each of said viewpoints being determined relative to position and orientation parameters, at least one of said parameters being determined independently of said images; and means for creation of said at least one second key frame from the acquired at least one second image and the relative pose of the object in its environment.
- Creation device according to claim 13 further comprising means for acquiring at least one value from an angular sensor, said value being used to calculate said at least one of said parameters.
- Creation device according to claim 13 or claim 14, characterized in that the means for acquisition of at least one second image representing said object are adapted to acquire at least one second image by means of a tracking application.
- Creation device according to one of claims 13 to 15, characterized in that the object is at least in part a real object or at least in part a virtual object.
- Creation device according to one of the claims 13 to 16, characterized in that the image viewpoints belong to a set of predetermined points.
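To make the claimed flow concrete, here is a minimal sketch (not part of the patent) of how two initialization key frames might be created, with the orientation parameter of the second viewpoint taken from an angular sensor independently of the images, as in claims 3 and 14. All names (`KeyFrame`, `second_pose`) and the orbiting-camera geometry are illustrative assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class KeyFrame:
    """Pairs a captured image with the object's relative pose
    (position and orientation) for the associated viewpoint."""
    image: list        # stand-in for the captured pixel data
    position: tuple    # (x, y, z) of the object relative to the camera
    yaw_deg: float     # viewpoint orientation about the vertical axis

def second_pose(first: KeyFrame, sensor_yaw_deg: float):
    """Derive the second viewpoint's relative pose from the difference
    of viewpoints: the orientation parameter comes from an angular
    sensor reading, not from analysing the images themselves."""
    delta = math.radians(sensor_yaw_deg - first.yaw_deg)
    # Assuming the camera orbits the object, the new relative position
    # follows by rotating the first position about the vertical axis.
    x, y, z = first.position
    new_x = x * math.cos(delta) - z * math.sin(delta)
    new_z = x * math.sin(delta) + z * math.cos(delta)
    return (new_x, y, new_z), sensor_yaw_deg

# First key frame: initial image at a known ("particular initial") pose.
kf1 = KeyFrame(image=[0], position=(0.0, 0.0, 2.0), yaw_deg=0.0)
# Second key frame: new image plus a 90-degree sensor reading.
pos2, yaw2 = second_pose(kf1, sensor_yaw_deg=90.0)
kf2 = KeyFrame(image=[1], position=pos2, yaw_deg=yaw2)
```

The point of the sketch is that the second pose is computed from the viewpoint difference alone, so no feature matching between the two images is required to bootstrap the key-frame set.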
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| FR0752810 | 2007-01-22 | | |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1138668A HK1138668A (en) | 2010-08-27 |
| HK1138668B true HK1138668B (en) | 2019-01-11 |
Similar Documents
| Publication | Title |
|---|---|
| US8675972B2 (en) | Method and device for determining the pose of a three-dimensional object in an image and method and device for creating at least one key image for object tracking |
| US8315432B2 (en) | Augmented reality method and devices using a real time automatic tracking of marker-free textured planar geometrical objects in a video stream |
| JP7488435B2 (en) | AR-Corresponding Labeling Using Aligned CAD Models |
| US8614705B2 (en) | Method and device for creating at least two key frames corresponding to a three-dimensional object |
| JP5442261B2 (en) | Automatic event detection method and system in sports stadium |
| US20030012410A1 (en) | Tracking and pose estimation for augmented reality using real features |
| CN111445526A (en) | Estimation method and estimation device for pose between image frames and storage medium |
| CN107990899A (en) | A kind of localization method and system based on SLAM |
| US20160210761A1 (en) | 3d reconstruction |
| Alvarez et al. | Providing guidance for maintenance operations using automatic markerless augmented reality system |
| JP6229041B2 (en) | Method for estimating the angular deviation of a moving element relative to a reference direction |
| CN113438469B (en) | Automatic testing method and system for security camera |
| JP2023065371A (en) | Manufacturing assistance system, method, and program |
| Vacchetti et al. | A stable real-time AR framework for training and planning in industrial environments |
| CN120356099B (en) | Methods for building and recognizing image libraries in large-scale, incomplete, multi-view, and multi-modal scenarios |
| CN113344981B (en) | Method, device and electronic equipment for processing posture data |
| HK1138668B (en) | Method and device for creating at least two key images corresponding to a three-dimensional object |
| CA2634933C (en) | Group tracking in motion capture |
| HK1138668A (en) | Method and device for creating at least two key images corresponding to a three-dimensional object |
| JP3668168B2 (en) | Moving image processing device |
| CN120569755A (en) | Computer-implemented method and apparatus for verifying component correctness |
| Becker et al. | Visual tracking for augmented reality: No universal solution but many powerful building blocks |
| Li et al. | Camera tracking in virtual studio |
| CN110850969A (en) | Data delay processing method and system based on linked list queue |
| Nair et al. | 3D Position based multiple human servoing by low-level-control of 6 DOF industrial robot |