
CN111024067A - Information processing method, device and equipment and computer storage medium - Google Patents

Information processing method, device and equipment and computer storage medium

Info

Publication number
CN111024067A
CN111024067A (application CN201911303348.7A; granted as CN111024067B)
Authority
CN
China
Prior art keywords
data
optical flow
imu
flow data
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911303348.7A
Other languages
Chinese (zh)
Other versions
CN111024067B (en)
Inventor
霍盈盈
褚文博
熊丽音
李庆建
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guoqi Beijing Intelligent Network Association Automotive Research Institute Co ltd
Original Assignee
Guoqi Beijing Intelligent Network Association Automotive Research Institute Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guoqi Beijing Intelligent Network Association Automotive Research Institute Co ltd filed Critical Guoqi Beijing Intelligent Network Association Automotive Research Institute Co ltd
Priority to CN201911303348.7A priority Critical patent/CN111024067B/en
Publication of CN111024067A publication Critical patent/CN111024067A/en
Application granted granted Critical
Publication of CN111024067B publication Critical patent/CN111024067B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration
    • G01C21/12Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation combined with non-inertial navigation instruments
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations


Abstract

The invention discloses an information processing method, an information processing device, information processing equipment and a computer storage medium. The method comprises the following steps: receiving optical flow data and Inertial Measurement Unit (IMU) data of a target object; judging whether the optical flow data and the IMU data are valid according to the optical flow data and the IMU data; and if the optical flow data and the IMU data are both valid, performing extended Kalman filter fusion on the optical flow data and the IMU data, and determining the state information of the target object. The state information of the target object after fusion calculation is obtained by performing extended Kalman filter fusion on the optical flow data and the IMU data of the target object, so that the stability of obtaining the state information of the target object is enhanced, and the safety of the target object during motion is improved.

Description

Information processing method, device and equipment and computer storage medium
Technical Field
The present invention relates to the field of information processing, and in particular, to an information processing method, apparatus, device, and computer storage medium.
Background
In many navigation systems, unmanned control systems, and mobile robot systems, identification and positioning of a target object are important for estimating state information of the target object, where the state information includes position information, velocity information, and attitude information.
The existing state information estimation methods mainly use multi-sensor fusion, where each sensor contributes its respective strengths so that the state information is more accurate. Currently, the mainstream sensor combination is a Global Positioning System (GPS) combined with an Inertial Measurement Unit (IMU). However, the combined GPS-IMU method depends on the state of the GPS: when the GPS fails or no GPS signal is available for a long time, the state information of the target object cannot be estimated stably and accurately, resulting in unstable identification and positioning of the target object, which in turn may cause huge losses of manpower and property.
Therefore, how to stably acquire the state information of the target object becomes a problem to be solved.
Disclosure of Invention
The embodiments of the invention provide an information processing method, device, equipment and computer storage medium, which perform extended Kalman filter fusion on optical flow data and IMU data of a target object to obtain state information after fusion calculation, so that the stability of obtaining the state information of the target object is enhanced, and the safety of the target object during motion is further improved.
In a first aspect, the present application provides an information processing method, including: receiving optical flow data and IMU data for a target object; judging whether the optical flow data and the IMU data are valid or not according to the optical flow data and the IMU data; and if the optical flow data and the IMU data are both effective, performing extended Kalman filtering fusion on the optical flow data and the IMU data, and determining the state information of the target object.
In one possible implementation, the determining whether the optical flow data is valid according to the optical flow data and the IMU data specifically includes: acquiring a receiving time difference between the receiving time of the optical flow data and the receiving time of the IMU data; and if the receiving time difference is smaller than the preset receiving time difference threshold value, the optical flow data is valid.
In one possible implementation, the determining whether the IMU data is valid according to the optical flow data and the IMU data specifically includes: determining an attitude matrix of the target object according to the optical flow data and the IMU data; determining a tilt angle parameter according to the attitude matrix; and if the tilt angle parameter is greater than a preset tilt angle parameter threshold, the IMU data is valid.
Using data that has passed these validity checks for the extended Kalman filter fusion ensures the stability of the Kalman filter.
In one possible implementation, before performing the extended Kalman filter fusion on the optical flow data and the IMU data, the method further includes: determining an observation matrix according to the optical flow data and the IMU data; determining an innovation variance according to the observation matrix and a preset noise value, wherein the innovation variance is used for checking whether the covariance matrix of the Kalman filter is normal; and performing the extended Kalman filter fusion on the optical flow data and the IMU data under the condition that the innovation variance is greater than or equal to the preset noise value.
In one possible implementation, before performing the extended Kalman filter fusion on the optical flow data and the IMU data, the method further includes: respectively acquiring a measured value of the optical flow data and a predicted value of the optical flow data, wherein the predicted value is calculated based on an optical flow model; and determining an optical flow consistency check rate according to the measured value and the predicted value. Performing the extended Kalman filter fusion on the optical flow data and the IMU data then specifically includes: performing the extended Kalman filter fusion on the optical flow data and the IMU data if the optical flow consistency check rate is smaller than a preset threshold value.
This effectively prevents erroneous results generated by fusing erroneous data and improves the stability of the data processing results.
In one possible implementation, prior to receiving the optical flow data and the Inertial Measurement Unit (IMU) data, the method further comprises: detecting whether positioning data collected by a Global Positioning System (GPS) receiver is received. Receiving the optical flow data and the IMU data then specifically includes: receiving the optical flow data and the IMU data in the event that the positioning data is not received.
In one possible implementation, performing the extended Kalman filter fusion on the optical flow data and the IMU data to determine the state information of the target object includes: determining the relative velocity of the target object with respect to the inertial frame according to the optical flow data and the IMU data; and determining the state information of the target object according to the relative velocity.
In one possible implementation, determining the relative velocity of the target object with respect to the inertial frame from the optical flow data and the IMU data includes: acquiring a transformation matrix of the coordinate system of the target object relative to the inertial coordinate system; determining an attitude matrix of the target object according to the optical flow data and the IMU data; and determining the relative velocity of the target object with respect to the inertial frame according to the attitude matrix and the transformation matrix.
In one possible implementation, determining the state information of the target object according to the relative speed includes: acquiring a distance value between a receiver of the optical flow data and the ground; determining an observation equation according to the distance value and the relative speed of the target object relative to the inertial system; determining an observation matrix according to an observation equation; and determining the state information of the target object according to the observation matrix.
In a second aspect, an embodiment of the present invention provides an information processing apparatus, including: a receiving module for receiving optical flow data and IMU data of a target object; the judging module is used for judging whether the optical flow data and the IMU data are valid or not according to the optical flow data and the IMU data; and the fusion module is used for performing extended Kalman filtering fusion on the optical flow data and the IMU data to determine the state information of the target object if the optical flow data and the IMU data are both effective.
In a third aspect, an embodiment of the present invention provides a computing device, where the device includes: a processor and a memory storing computer program instructions; the processor, when executing the computer program instructions, implements the processing methods as provided by embodiments of the present invention.
In a fourth aspect, an embodiment of the present invention provides a computer storage medium, where computer program instructions are stored, and when the computer program instructions are executed by a processor, the computer program instructions implement the processing method provided by the embodiment of the present invention.
The information processing method, the device, the equipment and the computer storage medium of the embodiment of the invention determine the state information of the target object by performing the extended Kalman filter fusion on the optical flow data and the IMU data of the target object. Compared with the GPS, the image sensor for collecting the optical flow data has higher stability, so that the stability of the determined state information of the target object can be improved based on the optical flow data and the IMU data, and the safety of the target object in the motion process is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required by the embodiments are briefly described below; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic flow chart of an information processing method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an information processing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic diagram of an exemplary hardware architecture provided by an embodiment of the present invention.
Detailed Description
Features and exemplary embodiments of various aspects of the present invention will be described in detail below, and in order to make objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described in detail below with reference to the accompanying drawings and specific embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not to be construed as limiting the invention. It will be apparent to one skilled in the art that the present invention may be practiced without some of these specific details. The following description of the embodiments is merely intended to provide a better understanding of the present invention by illustrating examples of the present invention.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
In many navigation systems, unmanned control systems, and mobile robot systems, identification and positioning of a target object are important for estimating state information of the target object, where the state information includes position information, velocity information, and attitude information. The conventional estimation method of position information, velocity information and attitude information mainly includes fusion by using GPS data, IMU data, wheel speed meter data or data of other sensors.
The current mainstream integrated navigation approach combines GPS data and IMU data. Because of the characteristics of the IMU, the errors of the predicted velocity information and position information accumulate as time increases, so that the obtained state information of the target object is not very accurate. GPS errors, by contrast, do not accumulate over time: the GPS can provide accurate position and velocity information, and the integrated values of the IMU are corrected at intervals, thereby achieving the purpose of integrated navigation.
However, GPS data is easily interfered with, and a method combining GPS and IMU strictly depends on the state of the GPS. Once the GPS data is interfered with, the system can still work normally for a short time, but without GPS data for a long time the navigation solution will diverge. When the GPS cannot be used or GPS data cannot be received for a long time, the target object cannot be identified and positioned stably, so normal operation of the target object cannot be guaranteed, which may have disastrous consequences and bring huge losses of manpower and property.
With the rapid development of camera technology in recent years, the performance of computers has been improved, and it is becoming a popular trend to perform calculations using images in the field of information processing.
In order to solve the problem that a target object cannot currently be identified and positioned stably, the optical flow data acquired by an image sensor can be fused with the IMU data. This gives full play to the advantages of the two sensors, yields state information with a high data rate and low drift, and can play a key role by continuing to work in environments where the GPS fails, thereby realizing accurate positioning in fields such as unmanned systems and map navigation. Based on this, an embodiment of the invention provides an information processing method.
The following describes an information processing method provided by an embodiment of the present invention.
Fig. 1 is a schematic flow chart illustrating an information processing method according to an embodiment of the present invention.
As shown in fig. 1, the information processing method may include S101-S105, and the method is applied to a server, and specifically as follows:
s101, receiving optical flow data and IMU data of a target object.
S102, judging whether the optical flow data is effective or not according to the optical flow data and the IMU data.
And S103, judging whether the IMU data is valid or not according to the optical flow data and the IMU data.
And S104, if the optical flow data and the IMU data are both effective, performing extended Kalman filtering fusion on the optical flow data and the IMU data.
S105, determining the state information of the target object.
According to the information processing method, the state information of the target object is determined by performing extended Kalman filter fusion on optical flow data and IMU data of the target object. Compared with the GPS, the image sensor for collecting the optical flow data has higher stability, so that the stability of the determined state information of the target object can be improved based on the optical flow data and the IMU data, and the safety of the target object in the motion process is improved.
The contents of S101-S105 are described below:
first, a specific implementation of S101 will be described.
In one embodiment, an image sensor and an IMU sensor are mounted on a target object, and optical flow data collected by the image sensor and IMU data collected by the IMU sensor are received.
The optical flow data in the embodiments of the present invention refers to the velocity of the target object obtained by converting the image information captured by an image sensor. Algorithms for converting image information obtained by an image sensor into optical flow data mainly include the LK optical flow method (Lucas-Kanade optical flow method), the block matching method, the Horn-Schunck method, and the like.
The optical flow methods mainly use the change of pixels in an image sequence in a time domain and the correlation between adjacent frames to find the corresponding relation between the previous frame and the current frame, so as to calculate the motion information of an object between the adjacent frames. The basic principle of detecting moving objects by an optical flow method is as follows: each pixel point in the image is endowed with a velocity vector, so that an image motion field is formed, at a specific moment of motion, the points on the image correspond to the points on the three-dimensional object one to one, the corresponding relation can be obtained by projection relation, and the dynamic analysis of the image is realized according to the velocity vector characteristics of each pixel point.
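To make this principle concrete, the following is a minimal single-patch Lucas-Kanade sketch on a synthetic image pair. It is an illustrative toy, not the patent's implementation: the intensity pattern, patch size and the known sub-pixel shift are all chosen for the example.

```python
import math

def make_frame(w, h, dx=0.0, dy=0.0):
    """Synthetic smooth intensity pattern, optionally shifted by (dx, dy)."""
    return [[math.sin(0.3 * (x - dx)) + math.cos(0.2 * (y - dy))
             for x in range(w)] for y in range(h)]

def lucas_kanade(f1, f2):
    """Estimate one translational flow vector for the whole patch by
    solving the 2x2 normal equations A v = b built from the spatial
    gradients (Ix, Iy) of f1 and the temporal difference It = f2 - f1."""
    h, w = len(f1), len(f1[0])
    a11 = a12 = a22 = b1 = b2 = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix = (f1[y][x + 1] - f1[y][x - 1]) / 2.0  # central difference in x
            iy = (f1[y + 1][x] - f1[y - 1][x]) / 2.0  # central difference in y
            it = f2[y][x] - f1[y][x]                  # temporal difference
            a11 += ix * ix
            a12 += ix * iy
            a22 += iy * iy
            b1 -= ix * it
            b2 -= iy * it
    det = a11 * a22 - a12 * a12
    u = (a22 * b1 - a12 * b2) / det
    v = (a11 * b2 - a12 * b1) / det
    return u, v

f1 = make_frame(32, 32)
f2 = make_frame(32, 32, dx=0.2, dy=0.1)  # known sub-pixel motion between frames
u, v = lucas_kanade(f1, f2)
print(round(u, 2), round(v, 2))
```

The recovered (u, v) closely matches the known shift, which is exactly the "motion information between adjacent frames" the text describes.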
In one embodiment, an image sensor is mounted on a target object such as a mobile robot or a cart, with a lens of the image sensor mounted downward.
The IMU data includes acceleration data and angular velocity data, and generally, an IMU includes three single-axis accelerometers and three single-axis gyroscopes, the accelerometers being used to detect acceleration data of an object in three independent axes of a target object coordinate system, and the gyroscopes being used to detect angular velocity data of the target object relative to a navigation coordinate system. The IMU is used to measure angular velocity and acceleration of an object in three-dimensional space.
In one embodiment, prior to receiving the optical flow data and the IMU data, the method further comprises: detecting whether positioning data collected by a Global Positioning System (GPS) receiver is received. Receiving the optical flow data and the IMU data then specifically includes: receiving the optical flow data and the IMU data in the event that the positioning data is not received.
If the positioning data collected by the GPS can be received, the positioning data provided by the GPS can be utilized to correct the numerical value of IMU integral at intervals so as to achieve the purpose of combined navigation. Due to the characteristics of the IMU, errors of predicted speed information and errors of predicted position information are accumulated along with the increase of time, so that the state information of the target object is not very accurate, and GPS errors are not accumulated along with time, so that the accumulated errors of the IMU can be corrected at intervals under the condition that the GPS state is good.
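The error accumulation described above can be illustrated with a toy dead-reckoning sketch. The bias value, step size and correction interval below are illustrative, not from the patent:

```python
# A constant accelerometer bias is double-integrated by dead reckoning;
# without correction the position error grows roughly quadratically with
# time, while periodic external fixes (e.g. GPS) keep it bounded.
bias, dt = 0.01, 0.1  # accelerometer bias (m/s^2) and step size (s)

def drift(seconds, correct_every=None):
    v_err = p_err = 0.0
    steps = int(seconds / dt)
    for k in range(1, steps + 1):
        v_err += bias * dt        # velocity error integrates the bias
        p_err += v_err * dt       # position error integrates the velocity error
        if correct_every and (k * dt) % correct_every < dt / 2:
            v_err = p_err = 0.0   # external fix resets the accumulated error
    return p_err

print(drift(60))                   # uncorrected: error on the order of metres
print(drift(60, correct_every=1))  # corrected every second: error stays tiny
```

The uncorrected run shows why IMU-only state estimates degrade over time, and the corrected run shows the role a second sensor plays.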
However, GPS data is susceptible to interference, is greatly affected by human factors, and can be blocked indoors or by buildings. When the GPS is damaged or GPS data cannot be received for a long time, normal operation of the system cannot be effectively guaranteed. In that case, upon determining that no positioning data is received, the optical flow data and the IMU data are received, and the extended Kalman filter fusion is then performed on the optical flow data and the IMU data to determine the state information of the target object.
The optical flow data and the IMU data are fused, so that the advantages of the image sensor and the IMU sensor can be fully exerted, the state information with high data rate and low drift can be obtained, and the method can work in the environment with failure of the GPS. The fusion positioning of the optical flow data and the IMU data can obtain the state information of the target object through fusion calculation of the optical flow data and the IMU data, and further assist an unmanned system or a map to realize accurate positioning.
Next, a specific implementation of S102 is described.
Before the optical flow data and the IMU data are fused, validity detection of the data is performed. This mainly covers two aspects: checking the quality of the optical flow data and checking the range of the tilt angle. The validity detection can effectively prevent erroneous data from entering the filter and producing erroneous results.
In one embodiment, determining whether the optical flow data is valid according to the optical flow data and the IMU data specifically includes: acquiring a receiving time difference between the receiving time of the optical flow data and the receiving time of the IMU data; and if the receiving time difference is smaller than the preset receiving time difference threshold value, the optical flow data is valid.
The sampling frequency of the image sensor differs from that of the IMU sensor, and the two are not aligned in time. Let the time at which the current image tracks valid data be time_imagflow, i.e., the reception time of the optical flow data, and let the current time of the IMU be time_imu, i.e., the reception time of the IMU data. The reception time difference is checked relative to the reception time of the IMU data.
A reception time difference between the reception time of the optical flow data and the reception time of the IMU data is determined and compared with a preset reception time difference threshold to determine whether the optical flow data is valid. For example, when time_imu - time_imagflow < 1 s, the condition is satisfied: the image data is received normally, the optical flow data tracked from the image data is valid, and the flag bit valid_imagflow may be set to 1.
If time_imu - time_imagflow >= 1 s, the image data has not been received for a long time, the optical flow data is considered unreliable and cannot satisfy the condition, and the flag bit valid_imagflow may be set to 0; the optical flow data is invalid.
Here, the validity of the optical flow data for filter calculation can be ensured.
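A minimal sketch of this timeliness check. The threshold value and flag semantics follow the text; the function itself is illustrative:

```python
RECV_TIME_DIFF_THRESHOLD = 1.0  # seconds, the example threshold from the text

def optical_flow_valid(time_imu, time_imagflow):
    """valid_imagflow flag: the optical flow data counts as valid only
    if it was tracked within the threshold of the latest IMU time."""
    return 1 if (time_imu - time_imagflow) < RECV_TIME_DIFF_THRESHOLD else 0

print(optical_flow_valid(10.0, 9.5))  # fresh optical flow
print(optical_flow_valid(10.0, 8.0))  # stale optical flow
```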
Then, a specific implementation of S103 is introduced.
In one embodiment, determining whether the IMU data is valid according to the optical flow data and the IMU data specifically includes: determining an attitude matrix of the target object according to the optical flow data and the IMU data; determining a tilt angle parameter according to the attitude matrix; and if the tilt angle parameter is greater than a preset tilt angle parameter threshold, the IMU data is valid.
The attitude matrix of the target object determined from the optical flow data and the IMU data can be written, for yaw angle ψ, pitch angle θ and roll angle γ, in the standard direction cosine form:

    | cosθcosψ   sinγsinθcosψ - cosγsinψ   cosγsinθcosψ + sinγsinψ |
    | cosθsinψ   sinγsinθsinψ + cosγcosψ   cosγsinθsinψ - sinγcosψ |
    | -sinθ      sinγcosθ                  cosγcosθ                |
from the attitude matrix, the tilt angle can be derived, i.e. the meaning of the state value represented by the third row, the third column cz of the attitude matrix is cos θ cos γ.
When cosθ·cosγ > δ, the tilt angle is within the standard range, where δ is a preset tilt angle parameter threshold that can be set according to the selected sensor. In this case the flag bit tilt_good may be set to 1, and the IMU data is valid.
When cosθ·cosγ < δ, the tilt angle is too large and is not within the standard range, the extended Kalman filter fusion cannot be performed, the flag bit tilt_good is set to 0, and the IMU data is invalid.
When the inclination angle is within the standard range, the image sensor works normally, and the output data is reliable. And the tested normal data is used for performing extended Kalman filtering fusion, so that the stability of the Kalman filter can be ensured.
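A minimal sketch of the tilt check; the threshold value δ used here is illustrative:

```python
import math

def tilt_good(pitch_rad, roll_rad, delta=0.5):
    """tilt_good flag: element (3,3) of the attitude matrix is
    cos(pitch) * cos(roll); the tilt is in the standard range when
    this value exceeds the threshold delta (illustrative value)."""
    cz = math.cos(pitch_rad) * math.cos(roll_rad)
    return 1 if cz > delta else 0

print(tilt_good(math.radians(10), math.radians(5)))  # near level: IMU data valid
print(tilt_good(math.radians(80), math.radians(0)))  # tilted far over: invalid
```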
In the case of valid_imagflow = 1 and tilt_good = 1, the optical flow data and the IMU data are both valid, and the optical flow data and the IMU data are then fused by the extended Kalman filter.
In one embodiment, prior to performing the extended Kalman filter fusion on the optical flow data and the IMU data, the method further comprises: determining an observation matrix according to the optical flow data and the IMU data; determining an innovation variance according to the observation matrix and a preset noise value, wherein the innovation variance is used for checking whether the covariance matrix of the Kalman filter is normal; and performing the extended Kalman filter fusion on the optical flow data and the IMU data under the condition that the innovation variance is greater than or equal to the preset noise value.
Before the extended Kalman filter fusion is carried out, the covariance matrix (P matrix) of the filtering system needs to be checked to determine whether it is non-negative definite. The P matrix is the filtering error covariance matrix, and a non-negative definite covariance matrix enables the filtering system to run stably and reliably. The check proceeds as follows:
First, the innovation variance S is calculated from the observation matrix H obtained in S104 below and the preset noise value R:

    S = H · P · H^T + R

Secondly, the innovation variance S is compared with the preset noise value R. If S is greater than or equal to R, the P matrix is positive, i.e., the P matrix is non-negative definite; otherwise, the P matrix has a problem and the fusion cannot be carried out. The specific causes of a problematic P matrix may vary, for example an increase in noise caused by some influence on the optical flow data.
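A toy numerical sketch of this P-matrix health check; the dimensions and values are illustrative:

```python
# If P is non-negative definite, the innovation variance
# S = H * P * H^T + R can never fall below the measurement noise R.
def innovation_variance(H, P, R):
    """Compute S = H P H^T + R for a row vector H and square matrix P."""
    n = len(H)
    HP = [sum(H[i] * P[i][j] for i in range(n)) for j in range(n)]
    return sum(HP[j] * H[j] for j in range(n)) + R

H = [1.0, 0.0]                  # observe the first state only
P = [[0.04, 0.0], [0.0, 0.09]]  # healthy (non-negative definite) covariance
R = 0.01                        # preset noise value
S = innovation_variance(H, P, R)
print(S >= R)                   # the P matrix passes the check
```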
In one embodiment, prior to performing the extended Kalman filter fusion on the optical flow data and the IMU data, the method further comprises: respectively acquiring a measured value of the optical flow data and a predicted value of the optical flow data, wherein the predicted value is calculated based on an optical flow model; and determining the optical flow consistency check rate according to the measured value and the predicted value. Performing the extended Kalman filter fusion on the optical flow data and the IMU data then specifically includes: performing the extended Kalman filter fusion on the optical flow data and the IMU data if the optical flow consistency check rate is smaller than a preset threshold value.
The P-matrix check ensures that the P matrix is normal and the filter does not diverge, but it cannot limit harmful noise in the image to the required range. In order for the optical flow data to reach a relatively ideal balanced state during fusion, an optical flow innovation consistency check needs to be set according to a preset threshold value.
First, the optical flow consistency check rate flowTestRatio is calculated from the measured value, the predicted value and an engineering-tuned gate parameter G, for example as the normalized squared innovation:

    flowTestRatio = (measured value - predicted value)^2 / (G^2 · S)
the parameter is a parameter adjusted according to actual engineering, and the outlier of the optical flow can be removed by setting the value of the parameter. The measured value is an output value of the actual optical flow, and the predicted value is a value calculated by an optical flow model.
Then, judging the consistency check rate, if flowtestRatio is less than 1, indicating that the optical flow data state in the image is healthy and can be fused; if flowtestRatio is more than 1, the optical flow data state in the image is unhealthy, and data fusion is not carried out.
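A minimal sketch of this consistency check, assuming the common gated form in which the squared innovation is normalized by the tuned parameter (here called `gate`) and the innovation variance; all numeric values are illustrative.

```python
# Sketch of the optical-flow innovation consistency check described above.
import numpy as np

def flow_test_ratio(measured, predicted, innov_var, gate=3.0):
    """Return the worst-axis consistency check rate; < 1 means healthy."""
    innovation = measured - predicted
    return float(np.max(innovation**2 / (gate**2 * innov_var)))

measured = np.array([0.31, -0.12])    # actual optical-flow output
predicted = np.array([0.30, -0.10])   # value from the optical-flow model
S = np.array([0.01, 0.01])            # innovation variance per axis (assumed)

ratio = flow_test_ratio(measured, predicted, S)
healthy = ratio < 1.0                  # < 1: fuse; otherwise skip fusion
```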
In this way, the validity of the optical flow data and the IMU data is judged, which effectively prevents erroneous results produced by fusing faulty data and improves the stability of the data processing result.
Next, a specific implementation of S104 will be described.
Namely, when time_IMU − time_imageflow < 1 s and cosθ·cosγ > Δ, both the optical flow data and the IMU data can be determined to be valid, and the extended Kalman filter fusion of the optical flow data and the IMU data can be performed to determine the state information of the target object.
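The two validity conditions can be sketched as follows. The threshold value `delta` and the sample timestamps are illustrative assumptions; only the form of the two tests follows the description above.

```python
# Sketch of the validity gate: optical flow is valid when it arrives within
# 1 s of the IMU sample, and the IMU when cos(theta)*cos(gamma) exceeds delta.
import math

def data_valid(time_imu, time_imageflow, theta, gamma, delta=0.5):
    flow_ok = abs(time_imu - time_imageflow) < 1.0          # seconds
    imu_ok = math.cos(theta) * math.cos(gamma) > delta      # tilt check
    return flow_ok and imu_ok

ok = data_valid(time_imu=10.20, time_imageflow=10.05,
                theta=math.radians(5), gamma=math.radians(3))   # both pass
not_ok = data_valid(time_imu=12.0, time_imageflow=10.0,
                    theta=0.0, gamma=0.0)                       # flow too old
```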
In one embodiment, performing extended Kalman filter fusion on the optical flow data and the IMU data to determine the state information of the target object includes: determining the relative velocity of the target object relative to the inertial system according to the optical flow data and the IMU data; and determining the state information of the target object according to the relative velocity.
Here, the calculation efficiency can be improved, and the stability of the result of determining the state information can be further improved.
In one embodiment, the relative velocity of the target object is determined from the optical flow data and the IMU data relative to an inertial frame, referred to herein as the earth coordinate system. For example, when the target object is an automobile, the relative velocity refers to the velocity of the automobile relative to the earth. And then calculating according to the relative speed to determine the state information of the target object.
Here, the calculation can be simplified, and the stability of the determined state information result can be further improved.
In one embodiment, determining the relative velocity of the target object with respect to the inertial frame from the optical flow data and the IMU data includes: acquiring a transformation matrix of a coordinate system where a target object is located relative to an inertial coordinate system; determining a pose matrix of the target object according to the optical flow data and the IMU data; and determining the relative speed of the target object relative to the inertial system according to the attitude matrix and the transformation matrix.
First, the transformation matrix C_b^n of the coordinate system in which the target object is located relative to the inertial coordinate system is obtained, wherein n denotes the inertial system and b denotes the coordinate system in which the target object is located.
Second, a pose matrix for the target object is determined from the optical flow data and the IMU data.
To determine the attitude matrix of the target object from the optical flow data and the IMU data, the dimension of the state quantity used for filtering is determined first; here, the 16-dimensional state quantity shown in Table 1 is set according to the requirements of the present embodiment. As shown in the table, the current position, velocity, and attitude (quaternion) of the target object can be obtained by pure inertial navigation integration of the IMU data.
Table 1: 16-dimensional state quantities (including the position, velocity (vn, ve, vd), and attitude quaternion of the target object, among other states)
Then, the attitude matrix C_b^n of the target object is determined from the 16-dimensional state quantity, which is itself computed from the optical flow data and the IMU data.
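Assuming the attitude matrix is built from the quaternion components of the state quantity, a sketch using the standard quaternion-to-direction-cosine-matrix formula follows (the patent's own expression for C_b^n is rendered as an image and is not reproduced here):

```python
# Sketch: attitude matrix C_b^n from a unit quaternion q = [q0, q1, q2, q3].
import numpy as np

def quat_to_dcm(q):
    """Standard direction cosine matrix for a (normalized) quaternion."""
    q0, q1, q2, q3 = q / np.linalg.norm(q)
    return np.array([
        [q0*q0 + q1*q1 - q2*q2 - q3*q3, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),             q0*q0 - q1*q1 + q2*q2 - q3*q3, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0*q0 - q1*q1 - q2*q2 + q3*q3],
    ])

C_bn = quat_to_dcm(np.array([1.0, 0.0, 0.0, 0.0]))  # identity attitude
```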
Then, the relative velocity of the target object relative to the inertial system is determined from the attitude matrix and the transformation matrix. The relative velocity is calculated as:

relvel = (C_b^n)ᵀ[vn, ve, vd]ᵀ (3)

wherein C_b^n is the transformation matrix of the coordinate system of the target object relative to the inertial coordinate system, and vn, ve and vd are the velocity components obtained from the 16-dimensional state quantity.
In one embodiment, determining the state information of the target object according to the relative velocity includes: acquiring a distance value between a receiver of the optical flow data and the ground; determining an observation equation according to the distance value and the relative speed of the target object relative to the inertial system; determining an observation matrix according to an observation equation; and determining the state information of the target object according to the observation matrix.
First, the distance value range between the receiver of the optical flow data and the ground is obtained; that is, range is the distance from the center of the image sensor mounted on the target object to the ground.
Then, the angular velocities (flowrateX, flowrateY) of the target object about the X axis and the Y axis can be obtained from the distance value range and the relative velocity relvel of the target object relative to the inertial system. The observation equations determined from the distance value and the relative velocity are shown in the following formulas:
flowrateX=relvel(2)/range; (4)
flowrateY=-relvel(1)/range; (5)
where relvel(1) and relvel(2) are the two horizontal components of relvel, i.e., the velocities in the X and Y directions, respectively.
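Equations (4) and (5) can be sketched directly; note that the 1-based indices relvel(1) and relvel(2) become 0-based indices 0 and 1 below, and the numeric values are illustrative assumptions.

```python
# Sketch of observation equations (4) and (5): predicted angular rates about
# the X and Y axes from the body-frame velocity and the height above ground.
import numpy as np

def predicted_flow_rates(relvel, range_m):
    flowrateX = relvel[1] / range_m    # relvel(2): Y-axis body velocity
    flowrateY = -relvel[0] / range_m   # relvel(1): X-axis body velocity
    return flowrateX, flowrateY

relvel = np.array([2.0, 0.5, 0.0])     # body-frame velocity (m/s)
range_m = 10.0                          # image-sensor height above ground (m)
fx, fy = predicted_flow_rates(relvel, range_m)
```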
Next, the observation matrix is determined from the observation equations: taking the partial derivatives of the observation equations with respect to all state quantities yields the Jacobian matrix of the X-axis observation and the Jacobian matrix of the Y-axis observation, respectively.
The X-axis observation matrix H_flowx is composed of the entries H_flowx(1) to H_flowx(7), the partial derivatives of flowrateX with respect to the corresponding state quantities, given by equations (6) to (12), together with:
H_flowx(8)~H_flowx(16)=0 (13)
The Y-axis observation matrix H_flowy is composed of the entries H_flowy(1) to H_flowy(7), the partial derivatives of flowrateY with respect to the corresponding state quantities, given by equations (14) to (20), together with:
H_flowy(8)~H_flowy(16)=0 (21)
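Since the analytic Jacobian entries are given by equations (6) to (20), a finite-difference sketch is one way to obtain (or cross-check) such an observation matrix. The state layout below (velocity in slots 4 to 6) is an illustrative assumption, not the patent's ordering.

```python
# Sketch: numerical Jacobian row of a scalar observation h at state x.
import numpy as np

def numeric_jacobian(h, x, eps=1e-6):
    """Central-difference row Jacobian of h with respect to the state x."""
    H = np.zeros(x.size)
    for i in range(x.size):
        dx = np.zeros(x.size)
        dx[i] = eps
        H[i] = (h(x + dx) - h(x - dx)) / (2 * eps)
    return H

range_m = 10.0
# Illustrative observation: flowrateX = relvel(2)/range with a level attitude,
# where the body Y velocity is assumed to sit in state slot 5.
h_flowx = lambda x: x[5] / range_m
x0 = np.zeros(16)
H_flowx = numeric_jacobian(h_flowx, x0)   # only slot 5 is non-zero here
```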
Thus the observation matrix is obtained. After the observation matrix is obtained, the preset noise value R of the sensor is set; it can be determined from the statistics of a large number of tests. The fusion gain K is then calculated according to the Kalman filter formula and used subsequently to determine the state information of the target object. The Kalman filter formula is as follows:
K(k) = P⁻(k)Hᵀ(HP⁻(k)Hᵀ + R)⁻¹ (22)
finally, a specific implementation of S105 is described.
Data fusion is performed according to the following formulas using the result calculated in step S104, completing one filtering update. The filtered state information of the target object is then output, including position information, velocity information, and attitude information.
K(k) = P⁻(k)Hᵀ(HP⁻(k)Hᵀ + R)⁻¹ (23)

x̂(k) = x̂⁻(k) + K(k)(Z(k) − Hx̂⁻(k)) (24)

P(k) = (I − K(k)H)P⁻(k) (25)

Wherein K is the fusion gain; H is the observation matrix, Hᵀ is the transpose of the observation matrix H, and R is a preset noise value; x̂(k) represents the corrected state; P(k) denotes the covariance matrix of the filtering error, used for the next round of calculation; Z(k) represents the state information of the target object, including position information, velocity information and attitude information, which in the actual calculation are output uniformly in matrix or vector form.
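The update step can be sketched as one filtering pass through the gain, state-correction, and covariance-update equations; the 4-dimensional toy state and the values of H, R, and z below are illustrative assumptions.

```python
# Sketch of one extended-Kalman-filter update: fusion gain, corrected state,
# and covariance update, on a toy 4-dimensional state.
import numpy as np

def ekf_update(x_prior, P_prior, H, R, z):
    S = H @ P_prior @ H.T + R                          # innovation variance
    K = P_prior @ H.T @ np.linalg.inv(S)               # fusion gain
    x = x_prior + K @ (z - H @ x_prior)                # corrected state
    P = (np.eye(P_prior.shape[0]) - K @ H) @ P_prior   # covariance update
    return x, P, K

x_prior = np.zeros(4)                  # predicted state
P_prior = np.eye(4)                    # predicted covariance
H = np.array([[1.0, 0.0, 0.0, 0.0]])   # observe first state only
R = np.array([[0.5]])                  # preset noise value
z = np.array([1.5])                    # measurement

x, P, K = ekf_update(x_prior, P_prior, H, R, z)
```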
In the subsequent information processing, the server continues to acquire optical flow data and IMU data of the target object in real time and repeats the methods and steps of S101 to S105.
According to the information processing method provided by the embodiment of the present invention, the optical flow data and the IMU data of the target object can be fused, and the fused state information of the target object is more stable and accurate. This highly stable state information can in turn assist an unmanned system or map navigation in achieving accurate positioning.
Fig. 2 shows a schematic configuration diagram of an information processing apparatus 200 according to an embodiment of the present invention. As shown in fig. 2, the apparatus 200 may include a calculation unit 21 and a control unit 22. The calculating unit 21 is mainly configured to obtain optical flow data and IMU data of the target object, and then perform extended kalman filter fusion calculation on the optical flow data and the IMU data to determine state information of the target object; the control unit 22 may be mainly used to determine whether to perform the extended kalman filter fusion on the optical flow data and the IMU data according to the data acquisition condition.
Further, the computing unit 21 may be specifically configured to acquire optical flow data and IMU data of the target object.
The calculation unit 21 may be specifically configured to acquire the reception time difference between the reception time of the optical flow data and the reception time of the IMU data and determine whether the optical flow data is valid based on the reception time difference; and to determine an attitude matrix of the target object according to the optical flow data and the IMU data, determine an inclination angle parameter according to the attitude matrix, and determine whether the IMU data is valid according to the inclination angle parameter.
The calculation unit 21 may be specifically configured to perform extended kalman filter fusion on the optical flow data and the IMU data to determine the state information of the target object.
Based on this, the control unit 22 may be specifically configured to control the calculation unit 21 to receive the optical flow data and the IMU data in a case where it is determined that the positioning data is not received.
The control unit 22 may be specifically configured to control the computing unit 21 not to perform the extended kalman filter fusion on the optical flow data and the IMU data when the optical flow data is invalid.
The control unit 22 may be specifically configured to control the computing unit 21 not to perform the extended kalman filter fusion on the optical flow data and the IMU data when the IMU data is invalid.
The information processing device 200 mainly fuses the optical flow data obtained by the image sensor with the IMU data; once the GPS-IMU fusion fails, it can switch to optical flow-IMU fusion in time, which enhances the stability of state information processing and improves the stability and adaptability of the unmanned control system, mobile robot system, or navigation system to which the state information of the target object is applied.
In addition, based on the information processing method, an embodiment of the present invention further provides an information processing apparatus, which is specifically described in detail with reference to fig. 3.
Fig. 3 is a block diagram of an apparatus according to an embodiment of the present invention.
As shown in fig. 3, the apparatus 300 may include:
a receiving module 310 for receiving optical flow data and IMU data of the target object.
A determining module 320, configured to determine whether the optical flow data and the IMU data are valid according to the optical flow data and the IMU data.
As an example, the determining module 320 is specifically configured to obtain a receiving time difference between a receiving time of the optical flow data and a receiving time of the IMU data; and if the receiving time difference is smaller than the preset receiving time difference threshold value, the optical flow data is valid.
As an example, the determining module 320 is specifically configured to determine a pose matrix of the target object according to the optical flow data and the IMU data; determine an inclination angle parameter according to the attitude matrix; and, if the inclination angle parameter is greater than the preset inclination angle parameter threshold value, determine that the IMU data is valid.
Performing the extended Kalman filter fusion only with data that has passed these checks ensures the stability of the Kalman filter.
The determining module 320 is further configured to, before the extended Kalman filter fusion of the optical flow data and the IMU data: determine an observation matrix according to the optical flow data and the IMU data; determine an innovation variance according to the observation matrix and a preset noise value, the innovation variance being used to check whether the covariance matrix of the Kalman filter is normal; and perform the extended Kalman filter fusion of the optical flow data and the IMU data if the innovation variance is greater than or equal to the preset noise value.
The determining module 320 is further configured to, before the extended Kalman filter fusion of the optical flow data and the IMU data: respectively acquire a measured value of the optical flow data and a predicted value of the optical flow data, the predicted value being calculated based on an optical flow model; and determine an optical flow consistency check rate according to the measured value and the predicted value. The extended Kalman filter fusion of the optical flow data and the IMU data is then performed only if the optical flow consistency check rate is smaller than a preset threshold value.
Therefore, the error result generated by fusion by using error data can be effectively prevented, and the stability of the data processing result is improved.
A fusion module 330, configured to perform extended kalman filter fusion on the optical flow data and the IMU data to determine state information of the target object if the optical flow data and the IMU data are both valid.
As an example, the fusion module 330 is specifically configured to perform extended Kalman filter fusion on the optical flow data and the IMU data to determine the state information of the target object, including: determining the relative velocity of the target object relative to the inertial system according to the optical flow data and the IMU data; and determining the state information of the target object according to the relative velocity.
As one example, the fusion module 330 is specifically configured to determine a relative velocity of the target object with respect to the inertial frame based on the optical flow data and the IMU data, and includes: acquiring a transformation matrix of a coordinate system where a target object is located relative to an inertial coordinate system; determining a pose matrix of the target object according to the optical flow data and the IMU data; and determining the relative speed of the target object relative to the inertial system according to the attitude matrix and the transformation matrix.
As an example, the fusion module 330 is specifically configured to determine the state information of the target object according to the relative speed, and includes: acquiring a distance value between a receiver of the optical flow data and the ground; determining an observation equation according to the distance value and the relative speed of the target object relative to the inertial system; determining an observation matrix according to an observation equation; and determining the state information of the target object according to the observation matrix.
In conclusion, the extended kalman filter fusion is performed on the optical flow data and the IMU data of the target object, and the state information of the target object is determined. Compared with the GPS, the image sensor for collecting the optical flow data has higher stability, so that the stability of the determined state information of the target object can be improved based on the optical flow data and the IMU data, and the safety of the target object in the motion process is improved.
Fig. 4 is a diagram illustrating an exemplary hardware architecture provided by an embodiment of the present invention.
The positioning device may comprise a processor 401 and a memory 402 storing computer program instructions.
Specifically, the processor 401 may include a central processing unit (CPU) or an application-specific integrated circuit (ASIC), or may be configured as one or more integrated circuits implementing embodiments of the present invention.
Memory 402 may include mass storage for data or instructions. By way of example, and not limitation, memory 402 may include a hard disk drive (HDD), a floppy disk drive, flash memory, an optical disk, a magneto-optical disk, tape, or a Universal Serial Bus (USB) drive, or a combination of two or more of these. Memory 402 may include removable or non-removable (or fixed) media, where appropriate. The memory 402 may be internal or external to the integrated gateway disaster recovery device, where appropriate. In a particular embodiment, the memory 402 is a non-volatile solid-state memory. In a particular embodiment, the memory 402 includes read-only memory (ROM). Where appropriate, the ROM may be mask-programmed ROM, programmable ROM (PROM), erasable PROM (EPROM), electrically erasable PROM (EEPROM), electrically alterable ROM (EAROM), or flash memory, or a combination of two or more of these.
The processor 401 realizes any one of the information processing methods in the above-described embodiments by reading and executing computer program instructions stored in the memory 402.
In one example, the positioning device may also include a communication interface 403 and a bus 410. As shown in fig. 4, the processor 401, the memory 402, and the communication interface 403 are connected via a bus 410 to complete communication therebetween.
The communication interface 403 is mainly used for implementing communication between modules, apparatuses, units and/or devices in the embodiments of the present invention.
Bus 410 includes hardware, software, or both to couple the components of the information processing device to each other. By way of example, and not limitation, a bus may include an Accelerated Graphics Port (AGP) or other graphics bus, an Enhanced Industry Standard Architecture (EISA) bus, a Front Side Bus (FSB), a HyperTransport (HT) interconnect, an Industry Standard Architecture (ISA) bus, an InfiniBand interconnect, a Low Pin Count (LPC) bus, a memory bus, a Micro Channel Architecture (MCA) bus, a Peripheral Component Interconnect (PCI) bus, a PCI-Express (PCIe) bus, a Serial Advanced Technology Attachment (SATA) bus, a Video Electronics Standards Association local bus (VLB), or other suitable bus, or a combination of two or more of these. Bus 410 may include one or more buses, where appropriate. Although specific buses have been described and shown in the embodiments of the invention, any suitable buses or interconnects are contemplated by the invention.
The processing device may execute the information processing method in the embodiment of the present invention, thereby implementing the information processing method described in conjunction with fig. 1.
In addition, in combination with the information processing method in the above embodiments, the embodiments of the present invention may be implemented by providing a computer storage medium. The computer storage medium having computer program instructions stored thereon; the computer program instructions, when executed by a processor, implement any of the information processing methods in the embodiments described above.
It is to be understood that the embodiments of the invention are not limited to the particular configurations and processes described above and shown in the drawings. A detailed description of known methods is omitted herein for the sake of brevity. In the above embodiments, several specific steps are described and shown as examples. However, the method processes of the embodiments of the present invention are not limited to the specific steps described and illustrated, and those skilled in the art can make various changes, modifications and additions or change the order between the steps after comprehending the spirit of the embodiments of the present invention.
The functional blocks shown in the above-described structural block diagrams may be implemented as software, and the elements of the embodiments of the present invention are programs or code segments used to perform desired tasks. The program or code segments may be stored in a machine-readable medium or transmitted by a data signal carried in a carrier wave over a transmission medium or a communication link. A "machine-readable medium" may include any medium that can store or transfer information. Examples of a machine-readable medium include circuits, semiconductor memory devices, ROM, flash memory, Erasable ROM (EROM), floppy disks, CD-ROMs, optical disks, hard disks, fiber optic media, Radio Frequency (RF) links, and so forth. The code segments may be downloaded via computer networks such as the internet, intranet, etc.
It should also be noted that the exemplary embodiments mentioned in this patent describe some methods or systems based on a series of steps or devices. However, the embodiments of the present invention are not limited to the order of the above steps, that is, the steps may be performed in the order mentioned in the embodiments, may be performed in an order different from the order in the embodiments, or may be performed simultaneously.
As described above, only the specific embodiments of the present invention are provided, and it can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the module and the unit described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again. It should be understood that the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the present invention, and these modifications or substitutions should be covered within the scope of the present invention.

Claims (12)

1. An information processing method, wherein the method comprises:
receiving optical flow data and Inertial Measurement Unit (IMU) data of a target object;
judging whether the optical flow data and the IMU data are valid or not according to the optical flow data and the IMU data;
and if the optical flow data and the IMU data are both effective, performing extended Kalman filtering fusion on the optical flow data and the IMU data, and determining the state information of the target object.
2. The method of claim 1, wherein determining whether the optical flow data is valid based on the optical flow data and the IMU data comprises:
acquiring a receiving time difference between the receiving time of the optical flow data and the receiving time of the IMU data;
and if the receiving time difference is smaller than a preset receiving time difference threshold value, the optical flow data is valid.
3. The method according to claim 1 or 2, wherein determining whether the IMU data is valid based on the optical flow data and the IMU data comprises:
determining a pose matrix for the target object from the optical flow data and the IMU data;
determining an inclination angle parameter according to the attitude matrix;
and if the inclination angle parameter is larger than a preset inclination angle parameter threshold value, the IMU data is valid.
4. The method of claim 1, wherein prior to said performing extended kalman filter fusion on said optical flow data and said IMU data, said method further comprises:
determining an observation matrix from the optical flow data and the IMU data;
determining innovation variance according to the observation matrix and a preset noise value, wherein the innovation variance is used for checking whether a covariance matrix of Kalman filtering is normal;
and performing extended Kalman filtering fusion on the optical flow data and the IMU data under the condition that the innovation variance is greater than or equal to the preset noise value.
5. The method of claim 1, wherein prior to said performing extended kalman filter fusion on said optical flow data and said IMU data, said method further comprises:
respectively acquiring a measured value of the optical flow data and a predicted value of the optical flow data, wherein the predicted value is calculated based on an optical flow model;
determining an optical flow consistency verification rate according to the measured value and the predicted value;
the performing extended kalman filter fusion on the optical flow data and the IMU data specifically includes:
and under the condition that the optical flow consistency check rate is smaller than a preset threshold value, performing extended Kalman filtering fusion on the optical flow data and the IMU data.
6. The method of claim 1, wherein prior to said receiving optical flow data and Inertial Measurement Unit (IMU) data, the method further comprises:
detecting whether positioning data collected by a Global Positioning System (GPS) device is received;
the performing extended kalman filter fusion on the optical flow data and the IMU data specifically includes:
receiving the optical flow data and IMU data without receiving the positioning data.
7. The method of claim 1, wherein performing extended kalman filter fusion on the optical flow data and the IMU data to determine the state information of the target object comprises:
determining a relative velocity of the target object with respect to an inertial frame from the optical flow data and the IMU data;
and determining the state information of the target object according to the relative speed.
8. The method of claim 7, wherein said determining a relative velocity of said target object with respect to an inertial frame from said optical flow data and said IMU data comprises:
acquiring a transformation matrix of a coordinate system where the target object is located relative to an inertial coordinate system;
determining a pose matrix for the target object from the optical flow data and the IMU data;
and determining the relative speed of the target object relative to an inertial system according to the attitude matrix and the transformation matrix.
9. The method of claim 7, wherein determining the state information of the target object from the relative velocity comprises:
acquiring a distance value between a receiver of the optical flow data and the ground;
determining an observation equation according to the distance value and the relative speed of the target object relative to an inertial system;
determining the observation matrix according to the observation equation;
and determining the state information of the target object according to the observation matrix.
10. An information processing apparatus characterized by comprising:
a receiving module for receiving optical flow data and IMU data of a target object;
the judging module is used for judging whether the optical flow data and the IMU data are valid or not according to the optical flow data and the IMU data;
and the fusion module is used for performing extended Kalman filtering fusion on the optical flow data and the IMU data to determine the state information of the target object if the optical flow data and the IMU data are both effective.
11. A computing device, the device comprising: a processor and a memory storing computer program instructions;
the processor, when executing the computer program instructions, implements the information processing method of any one of claims 1-9.
12. A computer storage medium having stored thereon computer program instructions which, when executed by a processor, implement the information processing method according to any one of claims 1 to 9.
CN201911303348.7A 2019-12-17 2019-12-17 Information processing method, device and equipment and computer storage medium Active CN111024067B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911303348.7A CN111024067B (en) 2019-12-17 2019-12-17 Information processing method, device and equipment and computer storage medium


Publications (2)

Publication Number Publication Date
CN111024067A true CN111024067A (en) 2020-04-17
CN111024067B CN111024067B (en) 2021-09-28

Family

ID=70210061


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112284380A (en) * 2020-09-23 2021-01-29 深圳市富临通实业股份有限公司 Nonlinear estimation method and system based on fusion of optical flow and IMU (inertial measurement Unit)
CN112669594A (en) * 2020-12-11 2021-04-16 国汽(北京)智能网联汽车研究院有限公司 Method, device, equipment and storage medium for predicting traffic road conditions
CN112887262A (en) * 2020-12-28 2021-06-01 北京航空航天大学 Automobile information safety protection method and device based on multi-source information fusion

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102314547A (en) * 2010-06-08 2012-01-11 霍尼韦尔国际公司 The system and method that is used for virtual submeter
CN102506892A (en) * 2011-11-08 2012-06-20 北京航空航天大学 Configuration method for information fusion of a plurality of optical flow sensors and inertial navigation device
US8946606B1 (en) * 2008-03-26 2015-02-03 Arete Associates Determining angular rate for line-of-sight to a moving object, with a body-fixed imaging sensor
CN106813662A (en) * 2016-06-08 2017-06-09 极翼机器人(上海)有限公司 A kind of air navigation aid based on light stream
CN107390704A (en) * 2017-07-28 2017-11-24 西安因诺航空科技有限公司 A kind of multi-rotor unmanned aerial vehicle light stream hovering method based on IMU pose compensations
US20180112985A1 (en) * 2016-10-26 2018-04-26 The Charles Stark Draper Laboratory, Inc. Vision-Inertial Navigation with Variable Contrast Tracking Residual
CN109520497A (en) * 2018-10-19 2019-03-26 天津大学 The unmanned plane autonomic positioning method of view-based access control model and imu
CN109540126A (en) * 2018-12-03 2019-03-29 哈尔滨工业大学 A kind of inertia visual combination air navigation aid based on optical flow method
CN110542414A (en) * 2018-05-28 2019-12-06 北京京东尚科信息技术有限公司 Navigation module management method and device of unmanned aerial vehicle

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8946606B1 (en) * 2008-03-26 2015-02-03 Arete Associates Determining angular rate for line-of-sight to a moving object, with a body-fixed imaging sensor
CN102314547A (en) * 2010-06-08 2012-01-11 Honeywell International Inc. System and method for a virtual submeter
CN102506892A (en) * 2011-11-08 2012-06-20 Beihang University Configuration method for information fusion of multiple optical flow sensors and an inertial navigation device
CN106813662A (en) * 2016-06-08 2017-06-09 Jiyi Robot (Shanghai) Co., Ltd. Navigation method based on optical flow
US20180112985A1 (en) * 2016-10-26 2018-04-26 The Charles Stark Draper Laboratory, Inc. Vision-Inertial Navigation with Variable Contrast Tracking Residual
CN107390704A (en) * 2017-07-28 2017-11-24 Xi'an Innno Aviation Technology Co., Ltd. Optical-flow hovering method for a multi-rotor UAV based on IMU attitude compensation
CN110542414A (en) * 2018-05-28 2019-12-06 Beijing Jingdong Shangke Information Technology Co., Ltd. Navigation module management method and device for an unmanned aerial vehicle
CN109520497A (en) * 2018-10-19 2019-03-26 Tianjin University Autonomous positioning method for unmanned aerial vehicles based on vision and IMU
CN109540126A (en) * 2018-12-03 2019-03-29 Harbin Institute of Technology Inertial-visual integrated navigation method based on optical flow

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Volker Grabe et al.: "A Comparison of Scale Estimation Schemes for a Quadrotor UAV based on Optical Flow and IMU Measurements", 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) *
Zhou Liqing et al.: "Attitude Acquisition for an Aerial-Photography Gimbal Based on a Hardware Kalman Filter", Application of Electronic Technique *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112284380A (en) * 2020-09-23 2021-01-29 Shenzhen Fulintong Industrial Co., Ltd. Nonlinear estimation method and system based on fusion of optical flow and IMU (inertial measurement unit)
CN112669594A (en) * 2020-12-11 2021-04-16 Guoqi (Beijing) Intelligent Network Association Automotive Research Institute Co., Ltd. Method, device, equipment and storage medium for predicting traffic road conditions
CN112887262A (en) * 2020-12-28 2021-06-01 Beihang University Automobile information security protection method and device based on multi-source information fusion

Also Published As

Publication number Publication date
CN111024067B (en) 2021-09-28

Similar Documents

Publication Publication Date Title
CN111854740B (en) Inertial navigation system capable of dead reckoning in a vehicle
CN113147738A (en) Automatic parking positioning method and device
US11035915B2 (en) Method and system for magnetic fingerprinting
CN110057356B (en) Method and device for locating vehicle in tunnel
CN111024067B (en) Information processing method, device and equipment and computer storage medium
CN106123900B (en) Magnetic heading computation method for indoor pedestrian navigation based on improved complementary filtering
CN109143304B (en) Method and device for determining pose of unmanned vehicle
CN107490803A (en) Robot localization and orientation method using GPS and an inertial navigation system
Mu et al. A GNSS/INS-integrated system for an arbitrarily mounted land vehicle navigation device
CN115060275A (en) A method for selecting optimal navigation information for multiple inertial navigation devices
WO2016198009A1 (en) Heading checking method and apparatus
CN111026081B (en) Error calculation method, device, equipment and storage medium
TW201711011A (en) Positioning and directing data analysis system and method thereof
CN114119744A (en) Method, device and equipment for constructing point cloud map and storage medium
CN109827595B (en) Indoor inertial navigation instrument direction calibration method, indoor navigation device and electronic equipment
CN117346785A (en) A multi-sensor fusion positioning device and method based on radar and integrated navigation
US10466054B2 (en) Method and system for estimating relative angle between headings
CN117760464A (en) Position correction method, device, storage medium and vehicle for navigation system
CN114910095B (en) Method, device, equipment and storage medium for determining centering rod error
CN114964217B (en) A method and device for estimating state information
US10274317B2 (en) Method and apparatus for determination of misalignment between device and vessel using radius of rotation
CN109710594B (en) Map data validity judging method and device and readable storage medium
CN115407370B (en) Reliability verification method and device for GNSS signals, electronic equipment and medium
CN116625359A (en) Visual-inertial positioning method and device adaptively fusing single-frequency RTK
CN103047955A (en) Method for obtaining position of portable device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Information processing method, device, equipment and computer storage medium

Effective date of registration: 20221223

Granted publication date: 20210928

Pledgee: China Construction Bank Corporation Beijing Economic and Technological Development Zone sub branch

Pledgor: GUOQI (BEIJING) INTELLIGENT NETWORK ASSOCIATION AUTOMOTIVE RESEARCH INSTITUTE Co.,Ltd.

Registration number: Y2022110000347