
CN111814752A - Indoor positioning implementation method, server, intelligent mobile device and storage medium - Google Patents


Info

Publication number
CN111814752A
CN111814752A (application number CN202010817857.8A)
Authority
CN
China
Prior art keywords
intelligent mobile
candidate
positioning
mobile device
mobile equipment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010817857.8A
Other languages
Chinese (zh)
Other versions
CN111814752B (en)
Inventor
张干
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Mumu Jucong Robot Technology Co ltd
Original Assignee
Shanghai Mumu Jucong Robot Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Mumu Jucong Robot Technology Co ltd filed Critical Shanghai Mumu Jucong Robot Technology Co ltd
Priority to CN202010817857.8A
Publication of CN111814752A
Application granted
Publication of CN111814752B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/35 Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V20/38 Outdoor scenes
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/20 Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
    • G06F16/29 Geographical information databases
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Remote Sensing (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides an indoor positioning implementation method, a server, an intelligent mobile device, and a storage medium. The method comprises the following steps: obtaining an identified-position list of intelligent mobile devices from video image data acquired from each monitoring device in a preset space; using the identified-position list and the known spatial positions of the already-positioned intelligent mobile devices to find the devices still to be positioned and obtain a corresponding candidate-position list; and sending the candidate-position list to each corresponding device to be positioned, then completing its positioning according to the matching-score result it sends back. The invention achieves indoor positioning of intelligent mobile devices, avoids the loss of positioning caused when a device is manually carried away, and improves the accuracy and effectiveness of indoor positioning.

Description

Indoor positioning implementation method, server, intelligent mobile device and storage medium
Technical Field
The invention relates to the technical field of indoor positioning, in particular to an indoor positioning implementation method, a server, intelligent mobile equipment and a storage medium.
Background
With the rapid development of science and technology, intelligent mobile devices such as mobile robots and unmanned vehicles are applied in ever more fields, including industry, agriculture, and healthcare. As these devices become widely used, greater intelligence becomes an important direction of their development, and one key aspect of that intelligence is navigation and obstacle avoidance while moving. For a computer-controlled intelligent mobile device, an essential link in the motion process is that the computer must know where the device is; this is the positioning problem of the intelligent mobile device.
Generally, an intelligent mobile device relying on laser SLAM or visual SLAM faces the start-up self-positioning problem and the "kidnapping" problem: after the device is powered on, or after it is moved by a person, its position is lost. At present, start-up self-positioning is assisted by means such as WiFi positioning, UWB positioning, two-dimensional-code positioning, or starting from a fixed position. On the one hand, these methods require additional hardware, which increases the deployment cost of the intelligent mobile device; on the other hand, they make the device more difficult to use.
Disclosure of Invention
The invention aims to provide an indoor positioning implementation method, a server, an intelligent mobile device, and a storage medium that achieve indoor positioning of intelligent mobile devices, avoid the loss of positioning caused when a device is manually carried away, and improve the accuracy and effectiveness of indoor positioning.
The technical scheme provided by the invention is as follows:
the invention provides an indoor positioning implementation method, which is characterized in that the method is applied to a server and comprises the following steps:
acquiring an identified position list of the intelligent mobile equipment according to video image data acquired from each monitoring equipment in a preset space;
according to the identification position list and the known spatial position of the intelligent mobile equipment which is positioned, searching out the intelligent mobile equipment to be positioned and acquiring a corresponding candidate position list;
and sending the candidate position list to the corresponding intelligent mobile equipment to be positioned, and completing the positioning of the intelligent mobile equipment to be positioned according to the matching scoring result sent by the intelligent mobile equipment to be positioned.
Further, the step of obtaining the identified-position list of intelligent mobile devices according to the video image data acquired from each monitoring device in the preset space comprises:
acquiring video image data from all monitoring devices deployed in the preset space, performing image recognition on the video image data, and screening out the target monitoring devices in which an intelligent mobile device is recognized;
acquiring the attitude information corresponding to each target monitoring device;
and calculating the position coordinates of each recognized intelligent mobile device from the attitude information of the corresponding target monitoring device, and summarizing all the position coordinates into the identified-position list.
Further, the step of sending the candidate-position list to the corresponding intelligent mobile device to be positioned and completing its positioning according to the matching-score result it sends back comprises:
sending the candidate-position lists to the corresponding intelligent mobile devices to be positioned;
receiving the preliminary matching-score result sent by each device to be positioned, the result being obtained by the device performing a matching calculation against each candidate entry in the candidate-position list, where each candidate entry comprises a candidate floor map and the candidate position coordinates belonging to it;
and determining the candidate floor map and candidate position coordinates with the maximum score as the positioning result of the device to be positioned, the positioning result comprising the device's floor and position.
Further, after the step of determining the candidate floor map and candidate position coordinates with the maximum score as the positioning result of the intelligent mobile device to be positioned, the positioning result comprising the device's floor and position, the method comprises:
obtaining the verification matching-score result that the device sends again after moving a preset distance;
if the difference between the verification matching-score result and the preliminary matching-score result exceeds a threshold, determining that the positioning is wrong and performing repositioning;
where both the preliminary and the verification matching-score results express the similarity between the environment sensing data acquired by the device and the corresponding candidate floor map.
The invention also provides an indoor positioning implementation method applied to an intelligent mobile device, comprising the steps of:
receiving the candidate-position list sent by a server, the list having been obtained by the server by screening the identified-position list of recognized intelligent mobile devices against the known spatial positions of the already-positioned devices, where the identified-position list is built from video image data acquired from each monitoring device in a preset space;
if the device's own positioning is not complete, scanning the surrounding environment to acquire environment sensing data, and loading the candidate entries of the candidate-position list in turn to perform a matching calculation against each, obtaining a preliminary matching-score result;
and determining, from the preliminary matching-score result, the candidate floor map and candidate position coordinates with the maximum score as the device's own positioning result, thereby completing self-positioning; the positioning result comprises the device's floor and position.
Further, the step of scanning the surrounding environment to acquire environment sensing data if the device's own positioning is not complete, and loading the candidate entries of the candidate-position list in turn to obtain the preliminary matching-score result, comprises:
scanning the current surrounding environment with an environment-sensing sensor to acquire the environment sensing data and extract environmental features, the environment sensing data including image observation data and laser observation data;
and performing a matching calculation between the environmental features and the structural features of the candidate floor map in each candidate entry to obtain the preliminary matching-score result.
Further, after the step of determining the candidate floor map and candidate position coordinates with the maximum score as the device's own positioning result and completing self-positioning, the method comprises:
after moving a preset distance, scanning again to acquire environment sensing data and calculating a verification matching-score result;
if the difference between the verification matching-score result and the preliminary matching-score result exceeds a threshold, determining that the device is back in the to-be-positioned state and performing repositioning;
where both the preliminary and the verification matching-score results express the similarity between the environment sensing data acquired by the device and the corresponding candidate floor map.
The invention also provides a server comprising a processor, a memory, and a computer program stored in the memory and runnable on the processor, the processor executing the computer program to perform the operations of the indoor positioning implementation method described above.
The invention also provides a storage medium storing at least one instruction, the instruction being loaded and executed by a processor to perform the operations of the indoor positioning implementation method described above.
The invention also provides an intelligent mobile device comprising a processor, a memory, and a computer program stored in the memory and runnable on the processor, the processor executing the computer program to perform the operations of the indoor positioning implementation method described above.
The indoor positioning implementation method, server, intelligent mobile device, and storage medium of the invention achieve indoor positioning of intelligent mobile devices, avoid the loss of positioning caused when a device is manually carried away, and improve the accuracy and effectiveness of indoor positioning.
Drawings
The above features, advantages, and implementations of the indoor positioning implementation method, the server, the intelligent mobile device, and the storage medium will be further described below through preferred embodiments, in a clearly understandable manner, with reference to the accompanying drawings.
FIG. 1 is a flowchart of one embodiment of an indoor positioning implementation method of the present invention;
FIG. 2 is a flowchart of another embodiment of an indoor positioning implementation method of the present invention;
FIG. 3 is a schematic diagram of the transformation between the camera coordinate system, the world coordinate system, and the imaging-plane coordinate system;
FIG. 4 is a flowchart of another embodiment of an indoor positioning implementation method of the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
For the sake of simplicity, the drawings only schematically show the parts relevant to the present invention and do not represent the actual structure of a product. In addition, to keep the drawings concise and understandable, components with the same structure or function are in some drawings only schematically illustrated, or only one of them is labeled. In this document, "one" means not only "exactly one" but can also cover "more than one".
It should be further understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
In addition, in the description of the present application, the terms "first", "second", and the like are used only for distinguishing the description, and are not intended to indicate or imply relative importance.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the following description will be made with reference to the accompanying drawings. It is obvious that the drawings in the following description are only some examples of the invention, and that for a person skilled in the art, other drawings and embodiments can be derived from them without inventive effort.
An embodiment of the present invention, as shown in fig. 1, is an indoor positioning implementation method applied to a server, including the steps of:
s110, acquiring an identified position list of the intelligent mobile device according to video image data acquired from each monitoring device in a preset space;
specifically, the preset spaces of hospitals, office buildings, markets and the like are activity areas of the intelligent mobile device, and monitoring devices such as monitoring cameras are laid in all corners of a common site according to requirements in order to better protect property safety and provide a good video evidence-taking restoration site when disputes or divergent events occur.
A wireless communication chip can be installed on each monitoring device and a WiFi network deployed on site, so that the monitoring devices can send video image data to the server over a wireless connection; alternatively, the server can be wired to each monitoring device over an RS-485 bus. Either way, the server can obtain the video image data collected by the monitoring devices. The server then splits the video image data into image frames and performs image recognition on them with deep-learning techniques, judging whether an intelligent mobile device is detected; if so, the frames in which a device is recognized are marked as target image frames. From each target image frame the server computes the position coordinates of the recognized device, and because the installation position of the monitoring device that produced the frame is known, the floor on which the recognized device is located can also be obtained. The recognized device's position coordinates are then bound to the corresponding floor map to form a list element; the identified-position list comprises a number of such elements, each containing a pair of position coordinates and the floor map they belong to.
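The assembly of the identified-position list just described can be sketched as follows. The deep-learning recogniser is outside the scope of the sketch, so `detect_devices` is a hypothetical stand-in, and the camera-to-floor mapping and frame layout are illustrative assumptions:

```python
# Sketch: bind each recognised device position to the floor map of the
# camera that saw it, accumulating the identified-position list.
# `detect_devices` stands in for the deep-learning recogniser.

def detect_devices(frame):
    # Hypothetical recogniser: returns position coordinates of devices
    # already projected into the floor's coordinate frame.
    return frame.get("detections", [])

def build_identified_position_list(frames, camera_floor):
    """Each list element pairs position coordinates with a floor map,
    as the identified-position list in the text does."""
    identified = []
    for frame in frames:
        floor_map = camera_floor[frame["camera_id"]]
        for position in detect_devices(frame):
            identified.append({"position": position, "floor_map": floor_map})
    return identified

camera_floor = {"cam_A": "floor1_map", "cam_B": "floor2_map"}
frames = [
    {"camera_id": "cam_A", "detections": [(3.0, 4.0)]},
    {"camera_id": "cam_B", "detections": [(1.0, 2.0), (5.0, 6.0)]},
]
print(build_identified_position_list(frames, camera_floor))
```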
S120, according to the identification position list and the known spatial position of the intelligent mobile device which is positioned, searching out the intelligent mobile device to be positioned and acquiring a corresponding candidate position list;
specifically, the positioning of the intelligent mobile device is completed by determining the spatial position of the intelligent mobile device at the current time, that is, the spatial positions of the positioning of the intelligent mobile device at the same time are one. The candidate position list comprises a plurality of candidate information, and each candidate information comprises a candidate position and a candidate floor map corresponding to the candidate position.
An intelligent mobile device in a fixed area (for example, at a charging post or in a preset parking area) can accurately know its own known spatial position (a known spatial position comprises a known floor and a known location). Alternatively, a staff member can enter the device's known spatial position on its interactive interface, after which the device reports this position to the server as a completed positioning. Likewise, when a device passes through a fixed area equipped with an infrared sensor, the server can estimate the device's movement trajectory from the motion sensors mounted on it and thereby obtain its known spatial position. The positioning result obtained by the method of this embodiment can, of course, also serve as one of the ways a device becomes positioned and acquires a known spatial position.
After the server obtains the identified-position list and the known spatial positions of the already-positioned devices as described above, it finds, for each known spatial position, the position coordinate in the identified-position list that matches the known floor and location best, takes it as a target coordinate, and deletes it from the list. Once all target coordinates have been deleted in this way, the devices that did not report a known spatial position but instead reported an un-positioned state are the devices to be positioned. The position coordinates remaining in the identified-position list are then taken as the candidate positions of each device to be positioned, and these candidate positions, together with the candidate floor maps they belong to, are gathered into the candidate-position list.
For example, suppose monitoring devices A, B, and C at a first-floor elevator hall capture video of three robots a, b, and c. Through image recognition the server obtains three position coordinates, D1, D2, and D3. Robot a informs the server that it is already positioned and reports its known spatial position, while robots b and c inform the server that they are to be positioned. The server compares the coordinates with robot a's reported position, finds that D1 matches it best, and deletes D1 from the identified-position list. The remaining coordinates D2 and D3 become candidate positions, the first-floor map they belong to becomes the candidate floor map, and the candidate-position list is composed of a first candidate entry (position D2 + first-floor map) and a second candidate entry (position D3 + first-floor map).
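The screening in this worked example can be sketched directly: the identified coordinate closest to an already-positioned robot's reported position is removed, and whatever remains forms the candidate-position list. Names and the nearest-neighbour matching rule are illustrative assumptions:

```python
# Sketch of the candidate screening from the worked example: for each
# known (already-positioned) device, remove the best-matching identified
# coordinate; the survivors become candidate entries for the same floor.

import math

def screen_candidates(identified, known_positions, floor_map):
    remaining = list(identified)
    for known in known_positions:
        # delete the identified coordinate closest to the known one
        closest = min(remaining, key=lambda p: math.dist(p, known))
        remaining.remove(closest)
    return [{"position": p, "floor_map": floor_map} for p in remaining]

# Three robots seen at the first-floor elevator hall (D1, D2, D3);
# robot a reported the known position (1.1, 0.9), nearest to D1.
identified = [(1.0, 1.0), (4.0, 2.0), (6.0, 3.0)]
candidates = screen_candidates(identified, [(1.1, 0.9)], "floor1_map")
print(candidates)   # D2 and D3 survive as candidate entries
```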
S130, the candidate position list is sent to the corresponding intelligent mobile equipment to be positioned, and the positioning of the intelligent mobile equipment to be positioned is completed according to the matching scoring result sent by the intelligent mobile equipment to be positioned.
Specifically, after the server obtains the candidate-position list it sends the list to each intelligent mobile device to be positioned. A device to be positioned can sense the environment around its current location on its own to acquire environment sensing data, and after receiving the candidate-position list it performs a matching evaluation of the environment sensing data against each candidate position and its floor map to obtain the preliminary matching-score result. Equally, the device may acquire the environment sensing data only after receiving the candidate-position list and then perform the matching evaluation; the order in which the device acquires the sensing data and receives the list is not limited here, and both orders fall within the scope of the invention. Having obtained the preliminary matching-score result in this way, the device sends it to the server, and the server determines the device's positioning result from it.
In this embodiment the method works with the monitoring devices already installed indoors: object recognition yields the approximate position of every intelligent mobile device on site, and combined with the devices' own global positioning capability this achieves indoor positioning, avoids the loss of positioning caused when a device is manually carried away, and makes the indoor positioning result more accurate. In addition, the rough positions recognized by the existing monitoring devices allow a preliminary screening: already-positioned devices are filtered out and only the devices still to be positioned are matched, which greatly narrows the matching range and improves the overall positioning efficiency of all devices on site.
An embodiment of the present invention, as shown in fig. 2, is an indoor positioning implementation method applied to a server, including the steps of:
s111, acquiring video image data of all monitoring devices distributed in a preset space, performing image recognition on the video image data, and screening out target monitoring devices of which the intelligent mobile devices are recognized;
specifically, each monitoring device collects video image data in a monitoring area of the monitoring device, each video image data includes unique identification information of the monitoring device obtained by shooting, and the unique identification information includes, but is not limited to, deployment information of the monitoring device, a monitoring device code, and a device serial number. After the server acquires the video image data, the server performs framing processing on the video image data to obtain an image frame, wherein the image frame has shooting time information and unique identification information. Then, the server performs image preprocessing such as graying and binarization on each image frame respectively to obtain images to be identified, wherein each image to be identified has shooting time information and unique identification information, that is, each image to be identified is bound and associated with the shooting time information and the unique identification information, so that the corresponding unique identification information can be found out subsequently according to the images to be identified, here, the unique identification information can correspond to a plurality of images to be identified, and one image to be identified corresponds to only one unique identification information.
After obtaining the images to be recognized, the server runs each of them through a neural-network model trained in advance and judges whether an intelligent mobile device appears in it; a device can be recognized not only from its full outline but also from a partial outline. One image may contain one or at least two devices, and the same device may appear in several images. If a device is recognized in the current image, the monitoring device identified by the unique identification information bound to that image is determined to be a target monitoring device, and by analogy all the target monitoring devices in which an intelligent mobile device was recognized are screened out.
S112, acquiring attitude information corresponding to each target monitoring device;
specifically, the attitude information includes the installation position and the shooting angle of the monitoring device. The installation position of each monitoring device is known, that is, the world coordinates (X, Y, Z) of each monitoring device can be obtained, wherein X is the X-axis coordinate of the monitoring device relative to the preset origin, Y is the Y-axis coordinate of the monitoring device relative to the preset origin, H is the Z-axis coordinate of the monitoring device relative to the preset origin, that is, the height value, and the installation position of each monitoring device relative to the preset origin can be determined according to the world coordinates (X, Y, Z) of each monitoring device. If the monitoring devices are fixedly installed and the shooting visual field range is not rotatably adjusted, the shooting angle of each monitoring device relative to a preset origin can be determined according to the world coordinates (X, Y, Z) of each monitoring device, wherein the shooting angle comprises a pitch angle and a shooting direction. If the monitoring equipment is fixedly installed and the shooting visual field range can be adjusted in a rotating mode, then the pitch angle alpha of the monitoring equipment and the rotating angle beta of the monitoring equipment can be obtained through calculation according to an acceleration sensor or a gyroscope arranged on the monitoring equipment, the shooting angle of each monitoring equipment can be obtained through calculation according to world coordinates (X, Y, Z), the pitch angle alpha and the rotating angle beta of the monitoring equipment, and therefore the posture information corresponding to each target monitoring equipment is obtained.
S113, calculating the position coordinates of the intelligent mobile device according to the attitude information corresponding to the target monitoring device, and summarizing all the position coordinates to obtain an identification position list;
specifically, a preset feature point of the intelligent mobile device is selected (the central point of the intelligent mobile device, the central point of its camera, the central point of its head, or another preselected point), and the pixel coordinates of the projection of the preset feature point onto the imaging plane are obtained.
The coordinate origin of the world coordinate system is set as required; any point of the preset space can be used as the origin, and the world coordinate system represents the spatial coordinates of objects within the preset space. As shown in fig. 3, the camera coordinate system (Xc, Yc, Zc) takes the optical center Fc of the monitoring device as its origin; its z-axis coincides with the optical axis OA, i.e. points to the front of the monitoring device C, and the positive directions of its x-axis and y-axis are parallel to those of the world coordinate system. The imaging plane coordinate system (u, v) represents positions on the imaging plane, and its coordinate origin is the intersection of the optical axis OA of the monitoring device with the imaging plane. The coordinate origin of the pixel coordinate system is at the upper left corner of the image. According to the conversion relations among the pixel coordinate system, the world coordinate system, the camera coordinate system (Xc, Yc, Zc) and the imaging plane coordinate system (u, v), together with the attitude information corresponding to the target monitoring device, the position coordinates of the preset feature point P of the intelligent mobile device can be calculated, and these coordinates can then be taken as the position coordinates of the intelligent mobile device. The conversion relations among the pixel coordinate system, the world coordinate system, the camera coordinate system (Xc, Yc, Zc) and the imaging plane coordinate system (u, v) are prior art and are not described in detail here.
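One common way to realize the coordinate conversion above is to back-project the pixel through a pinhole camera model and intersect the resulting ray with the floor plane. The sketch below assumes this approach; the intrinsic matrix K, the rotation R_wc (camera-to-world), and the ground plane Z = 0 are illustrative assumptions, not the patent's specified method.

```python
import numpy as np

def pixel_to_ground(u, v, K, R_wc, cam_pos, ground_z=0.0):
    """Back-project pixel (u, v) onto the horizontal plane Z = ground_z.

    K is the 3x3 intrinsic matrix, R_wc rotates camera-frame directions
    into the world frame, and cam_pos is the camera's world position.
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray, camera frame
    ray_world = R_wc @ ray_cam                           # viewing ray, world frame
    t = (ground_z - cam_pos[2]) / ray_world[2]           # scale factor to reach the plane
    return cam_pos + t * ray_world                       # world point on Z = ground_z

# Example: camera mounted 2.5 m high, looking straight down
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R_wc = np.array([[1.0, 0, 0], [0, -1.0, 0], [0, 0, -1.0]])  # camera z-axis -> world -Z
point = pixel_to_ground(320, 240, K, R_wc, np.array([3.0, 4.0, 2.5]))
```

The principal-point pixel (320, 240) maps to the point on the floor directly below the camera.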
In addition, because the installation position of each monitoring device is known, the installation floor of the monitoring device can be obtained from its installation position; the floor on which an intelligent mobile device identified in that monitoring device's video image data is located is the same as the installation floor of the monitoring device, so the server can obtain the floor map on which each identified intelligent mobile device is located according to the target image frame. Each position coordinate corresponds to the floor map of the floor on which it lies, and each floor map may correspond to a plurality of position coordinates. After the position coordinates of the identified intelligent mobile devices and the corresponding floor maps are obtained as described above, the position coordinates of each identified intelligent mobile device are bound to the corresponding floor map to obtain the identification position list.
The floor map and the position coordinates are taken together as one element of the identification position list. Continuing with the above example, video image data of the three robots a, b and c are captured by the monitoring devices A, B and C at the first-floor elevator hall. The server acquires three position coordinates, and since the monitoring devices are all installed on the first floor, the three position coordinates all correspond to the floor map of the first floor; the three coordinates are therefore each bound to the floor map of the first floor to obtain the identification position list corresponding to the first floor. By analogy, the identification position list of each floor is obtained, which is not described in detail here.
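The binding of position coordinates to floor maps can be sketched as a simple grouping step; the data shapes and names below are illustrative assumptions only.

```python
from collections import defaultdict

def build_identified_list(detections):
    """Bind each recognized position to the floor map of its camera's floor.

    `detections` is a list of (floor, (x, y)) pairs produced by the
    recognition step; the structure is illustrative.
    """
    identified = defaultdict(list)
    for floor, coord in detections:
        identified[floor].append(coord)   # one floor map -> many position coordinates
    return dict(identified)

# Three robots seen by cameras at the first-floor elevator hall
detections = [("F1", (1.2, 3.4)), ("F1", (2.0, 3.1)), ("F1", (2.8, 3.0))]
identified_list = build_identified_list(detections)
```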
S120, according to the identification position list and the known spatial position of the intelligent mobile device which is positioned, searching out the intelligent mobile device to be positioned and acquiring a corresponding candidate position list;
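The screening in S120 can be sketched as follows: identified coordinates close to an already-positioned device are discarded, and whatever remains becomes the candidate position list. The matching tolerance, data shapes and names are illustrative assumptions, not the patent's specification.

```python
def screen_candidates(identified_list, known_positions, tol=0.5):
    """Remove identified coordinates that match an already-positioned device.

    A coordinate within `tol` metres of a known (floor, x, y) position is
    treated as belonging to an already-positioned device; the rest form
    the candidate position list for the devices still to be positioned.
    """
    candidates = []
    for floor, coords in identified_list.items():
        for (x, y) in coords:
            matched = any(kf == floor and (kx - x) ** 2 + (ky - y) ** 2 <= tol ** 2
                          for kf, kx, ky in known_positions)
            if not matched:
                candidates.append({"floor_map": floor, "position": (x, y)})
    return candidates

known = [("F1", 1.2, 3.4)]                        # one robot already positioned
identified = {"F1": [(1.2, 3.4), (2.0, 3.1)]}
candidate_list = screen_candidates(identified, known)
```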
S131, respectively sending the candidate position lists to the corresponding intelligent mobile devices to be positioned;
S132, receiving a preliminary matching scoring result sent by the intelligent mobile device to be positioned; the preliminary matching scoring result is obtained by the intelligent mobile device to be positioned performing matching calculation on each piece of candidate information in the candidate position list; the candidate information comprises candidate floor maps and the candidate position coordinates corresponding to the candidate floor maps;
specifically, a floor map has known structural features and color features; the structural features include but are not limited to straight line segments, corners, points and vertical lines, corresponding for example to walls, wall corners, convex corners and doors. The environmental features comprise geometric feature information and color feature information; the geometric feature information likewise includes but is not limited to straight line segments, corners, points and vertical lines, corresponding for example to walls, wall corners, convex corners and doors.
After the server obtains the candidate position list, it sends the list to each intelligent mobile device to be positioned. Each intelligent mobile device to be positioned acquires image observation data through a vision sensor, or laser observation data through a laser sensor, performs feature extraction on the observation data, and obtains the environmental features around the position where the device itself is located at the current moment. Because the candidate position list contains the corresponding candidate floor maps, the structural features of the candidate floor maps can be retrieved; the intelligent mobile device to be positioned then matches the environmental features against the structural features corresponding to each candidate floor map in the candidate position list, obtains the similarity between the environmental features and each candidate floor map to form a preliminary matching scoring result, and sends the preliminary matching scoring result to the server.
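The per-candidate similarity scoring can be sketched as follows. Features are reduced to 2-D points purely for illustration; a real matcher would compare richer geometry (line segments, corners, colors), and all names are assumptions.

```python
def match_score(env_features, map_features, tol=0.3):
    """Fraction of observed features that find a close structural match."""
    if not env_features:
        return 0.0
    hits = sum(
        any((ex - mx) ** 2 + (ey - my) ** 2 <= tol ** 2 for mx, my in map_features)
        for ex, ey in env_features)
    return hits / len(env_features)

def preliminary_scores(env_features, candidate_list):
    """One similarity score per candidate in the candidate position list."""
    return [{"candidate": c, "score": match_score(env_features, c["features"])}
            for c in candidate_list]

candidates = [{"floor_map": "F1", "features": [(0.0, 0.0), (1.0, 0.0)]},
              {"floor_map": "F2", "features": [(5.0, 5.0)]}]
scores = preliminary_scores([(0.1, 0.0), (1.0, 0.1)], candidates)
```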
The process of obtaining environmental features by feature extraction from laser observation data is described below. The laser observation data are acquired and subjected to region segmentation, after which the geometric feature information contained in the data is extracted through a corner detection algorithm and a straight-line fitting algorithm; extracting geometric feature information from laser observation data is prior art and is not described in detail here. The geometric feature information represents the environmental features corresponding to the position at which the intelligent mobile device acquired the laser observation data; the geometric feature information obtained by the intelligent mobile device to be positioned scanning at its current position (or start-up position, which may be any position in the preset space) is taken as the environmental features.
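One classical way to segment a 2-D scan into straight-line features is the split step of the split-and-merge algorithm, sketched below under illustrative assumptions (the tolerance value and data shapes are not from the patent).

```python
import math

def farthest_from_chord(points):
    """Index and distance of the point farthest from the first-last chord."""
    (x1, y1), (x2, y2) = points[0], points[-1]
    norm = math.hypot(x2 - x1, y2 - y1)
    best_i, best_d = 0, 0.0
    for i, (x, y) in enumerate(points):
        d = abs((x2 - x1) * (y1 - y) - (x1 - x) * (y2 - y1)) / norm
        if d > best_d:
            best_i, best_d = i, d
    return best_i, best_d

def split_into_segments(points, tol=0.05):
    """Recursively split at the farthest outlier until every run fits a line."""
    i, d = farthest_from_chord(points)
    if d <= tol or len(points) < 3:
        return [(points[0], points[-1])]          # one straight-line segment
    return (split_into_segments(points[:i + 1], tol)
            + split_into_segments(points[i:], tol))

# An L-shaped wall scan with a corner at (1, 0)
scan = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.0), (1.0, 0.5), (1.0, 1.0)]
segments = split_into_segments(scan)
```

The corner is recovered as the shared endpoint of the two extracted segments.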
The process of obtaining environmental features by feature extraction from image observation data is described below. The image observation data are acquired, the visual observation data, i.e. the captured image, are subjected to gray-scale processing and binarization, and the geometric feature information contained in the image observation data is extracted by a feature or edge detection algorithm such as the SIFT algorithm, the Sobel operator or the Prewitt operator; extracting geometric feature information from an image is prior art and is not described in detail here. The geometric feature information represents one of the environmental features corresponding to the position at which the intelligent mobile device acquired the image observation data. In addition, if the camera installed on the intelligent mobile device is a depth camera, the intelligent mobile device may further take, through an image recognition algorithm, the color attribute corresponding to each piece of geometric feature information in the captured image as one of the environmental features.
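As a minimal illustration of the Sobel operator mentioned above, the sketch below computes the gradient magnitude of a grayscale image in pure NumPy; a real pipeline would typically use an image-processing library, and border handling is omitted.

```python
import numpy as np

def sobel_magnitude(gray):
    """Gradient magnitude of a grayscale image with the 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = gray.shape
    out = np.zeros((h, w))
    for i in range(1, h - 1):          # interior pixels only; borders stay zero
        for j in range(1, w - 1):
            patch = gray[i - 1:i + 2, j - 1:j + 2]
            gx, gy = np.sum(patch * kx), np.sum(patch * ky)
            out[i, j] = np.hypot(gx, gy)
    return out

# A vertical step edge produces strong responses along the boundary column
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = sobel_magnitude(img)
```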
S133, determining the candidate floor map with the maximum score value and its candidate position coordinates as the positioning result corresponding to the intelligent mobile device to be positioned; the positioning result comprises the floor and the position of the intelligent mobile device to be positioned.
Specifically, after receiving the preliminary matching scoring result sent by each intelligent mobile device to be positioned, the server compares the multiple similarities in the preliminary matching scoring result and determines the candidate floor map with the maximum similarity and the corresponding candidate position coordinates to obtain the positioning result of the intelligent mobile device to be positioned; the positioning result comprises the floor and the position of the intelligent mobile device to be positioned.
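The server-side selection of the maximum-score candidate reduces to an argmax; the sketch below uses illustrative names and data shapes.

```python
def best_positioning(score_list):
    """Pick the candidate with the largest preliminary matching score;
    its floor map and position coordinates become the positioning result."""
    best = max(score_list, key=lambda s: s["score"])
    return {"floor": best["floor_map"], "position": best["position"]}

scores = [{"floor_map": "F1", "position": (2.0, 3.1), "score": 0.9},
          {"floor_map": "F2", "position": (2.1, 3.0), "score": 0.4}]
result = best_positioning(scores)
```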
In this embodiment, the indoor positioning method is combined with the existing indoor monitoring equipment: the rough positions of the intelligent mobile devices in the site are first obtained through an object recognition technology for preliminary screening, and the intelligent mobile devices then collect environment sensing data, i.e. laser observation data from a laser sensor and/or image observation data from a vision sensor, and perform quick matching against each candidate floor map according to these data to realize preliminary indoor positioning. This avoids the problem of positioning loss caused by manually moving an intelligent mobile device, makes the indoor positioning result more accurate, and improves the accuracy of indoor positioning.
Secondly, because the rough positions of the intelligent mobile devices in the site are first identified through the existing indoor monitoring equipment to realize preliminary screening, the intelligent mobile devices that have already been positioned are screened out and only the intelligent mobile devices to be positioned are positioned, which greatly narrows the positioning matching scope and helps improve the overall positioning efficiency of all intelligent mobile devices in the site.
Finally, after an intelligent mobile device is started, the candidate floor map with the maximum similarity value and the corresponding position coordinates are found by enumeration matching according to the laser observation data collected by the laser sensor or the image observation data collected by the vision sensor, and taken as the positioning result of the intelligent mobile device to be positioned, thereby completing the positioning of the initial position. This embodiment requires no modification of the environment, unlike traditional methods that rely on attaching landmarks, reflective strips and the like, and therefore has wide applicability. Moreover, after the initial position is determined, the movement track of the intelligent mobile device is monitored using its movement data, so the position of the intelligent mobile device during movement can be tracked and acquired in real time, greatly improving the accuracy and reliability of indoor positioning.
In this method, the video image data collected by the monitoring devices deployed in advance in the preset space are recognized to obtain the identification position list; the identification position list is then matched against the known spatial positions of the intelligent mobile devices that have already been positioned, so as to screen out all the intelligent devices to be positioned together with a candidate position list; the laser scanning equipment (lidar, millimeter-wave radar and the like) or visual scanning equipment (camera, depth camera, binocular camera and the like) installed on each intelligent mobile device then scans the surrounding environment of the device's current position, extracts features, and performs matching and positioning against the structural features corresponding to the candidate floor maps in the candidate position list.
In this way, the intelligent mobile device has global positioning and positioning recovery capability, and the real-time performance of positioning recovery is well guaranteed. After the environmental features are obtained, similarity matching is carried out between the environmental features and the structural features, and the candidate floor map whose structural features have the maximum similarity is determined as the one corresponding to the environment where the intelligent mobile device to be positioned is located. Because indoor environments contain many straight-line features, extracting the straight lines when such features are obvious greatly reduces the number of matching points, and the algorithm converges quickly when positioning the floor and the position of the intelligent mobile device according to these matching points, which improves algorithm efficiency and allows the positioning result of the intelligent mobile device to be positioned to be obtained quickly.
S140, acquiring a verification matching scoring result sent again after the intelligent mobile device moves a preset distance;
S150, if the variation between the verification matching scoring result and the preliminary matching scoring result exceeds a threshold value, determining that the positioning is wrong, so as to reposition;
both the preliminary matching scoring result and the verification matching scoring result are similarities between the environment sensing data acquired by the intelligent mobile device and the corresponding candidate floor map.
Specifically, after the positioning result of the intelligent mobile device to be positioned is obtained from the preliminary matching scoring result in the above manner, the intelligent mobile device moves a preset distance (for example 0.5 m or 1 m) or turns in place to change direction. Following the same procedure used to obtain the preliminary matching scoring result, the intelligent mobile device again acquires image observation data through the vision sensor or laser observation data through the laser sensor, and obtains its own verification matching scoring result by matching those data. The server receives the verification matching scoring result sent by the intelligent mobile device and compares it with the preliminary matching scoring result. If the variation between the verification matching scoring result and the preliminary matching scoring result does not exceed the threshold, the positioning result is determined to be correct; if it exceeds the threshold, the positioning result is determined to be wrong, and the mis-positioned intelligent mobile device is returned to the set of intelligent mobile devices that have not been positioned, to be positioned again.
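The verification decision can be sketched as a simple threshold test; the threshold value and names are illustrative assumptions.

```python
def verify_positioning(preliminary_score, verification_score, threshold=0.2):
    """Compare the score re-computed after moving a preset distance with
    the preliminary score; a change above `threshold` marks the positioning
    as wrong, so the device is returned to the to-be-positioned set."""
    if abs(verification_score - preliminary_score) > threshold:
        return "relocate"      # positioning error: reposition from scratch
    return "confirmed"         # positioning result accepted

status = verify_positioning(preliminary_score=0.9, verification_score=0.85)
```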
In this embodiment, the positioning result obtained by preliminarily positioning the intelligent mobile device is verified by comparing the verification matching scoring result with the preliminary matching scoring result, which improves the positioning accuracy and reliability of the intelligent mobile device in a building activity scene.
An embodiment of the present invention, as shown in fig. 4, is an indoor positioning implementation method applied to an intelligent mobile device, including:
S210, receiving a candidate position list sent by a server; the candidate position list is obtained by the server, after acquiring the identification position list of the recognized intelligent mobile devices according to the video image data, by matching and screening the identification position list against the known spatial positions of the intelligent mobile devices that have already been positioned; the video image data are acquired from each monitoring device in a preset space;
specifically, how to obtain the identified identification position list of the intelligent mobile device after the server obtains the video image data may refer to the embodiment corresponding to fig. 2, which is not described in detail herein.
S220, if the positioning is not completed, scanning the surrounding environment to acquire environment sensing data, and sequentially loading candidate information of the candidate position list to respectively perform matching calculation to obtain a preliminary matching scoring result;
specifically, the intelligent mobile device may query its own log data; if no positioning result for the current time exists in the log data, it determines that its own positioning is not completed, and if a positioning result for the current time exists, it determines that its own positioning is completed.
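The log query above can be sketched as a lookup; the log structure and names are illustrative assumptions only.

```python
def positioning_completed(log_records, now):
    """A device is positioned if its log already holds a result for `now`.

    `log_records` maps timestamps to positioning results; the structure
    is illustrative of the log query described above.
    """
    return now in log_records

log = {"2020-08-13 10:00": {"floor": "F1", "position": (2.0, 3.1)}}
done = positioning_completed(log, "2020-08-13 10:00")
not_done = positioning_completed(log, "2020-08-13 10:05")
```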
In this embodiment, each intelligent mobile device that has completed positioning reports its known spatial position to the server, and each intelligent mobile device to be positioned reports that its positioning is not completed, so the server can send the candidate position list directly to the intelligent mobile devices to be positioned. Alternatively, after obtaining the candidate position list, the server may send it to every intelligent mobile device; each intelligent mobile device then judges whether it has completed positioning, ignores the candidate position list if it has, and, if it has not, detects and acquires the environment sensing data about the surroundings of its current position and, after receiving the candidate position list, performs matching evaluation according to the candidate position list, the floor map corresponding to each floor and the environment sensing data to obtain a preliminary matching scoring result.
Of course, an intelligent mobile device that has determined its own positioning is not completed may also first detect and acquire the environment sensing data about the surroundings of its current position after receiving the candidate position list, and then perform matching evaluation according to the candidate position list and the environment sensing data to obtain the preliminary matching scoring result. The order in which the intelligent mobile device acquires the environment sensing data and receives the candidate position list is not limited here, and both orders fall within the scope of the present invention.
S230, determining, according to the preliminary matching scoring result, the candidate floor map with the maximum score value and its candidate position coordinates as the positioning result, thereby completing self-positioning; the positioning result comprises the floor and the position of the intelligent mobile device.
Specifically, the intelligent mobile device compares the multiple similarities in the preliminary matching scoring result and determines the candidate floor map with the maximum similarity and the corresponding candidate position coordinates to obtain the positioning result of the intelligent mobile device to be positioned; the positioning result comprises the floor and the position of the not-yet-positioned intelligent mobile device, i.e. the intelligent mobile device to be positioned.
In this embodiment, the method is combined with the existing indoor monitoring equipment: the approximate position of each intelligent mobile device in the site is identified through an object recognition technology, and the global positioning capability of the intelligent mobile device is used to realize its indoor positioning, which avoids the problem of positioning loss caused by manually moving an intelligent mobile device away, makes the indoor positioning result more accurate, and improves indoor positioning accuracy. In addition, because the rough positions of the intelligent mobile devices in the site are first identified through the existing indoor monitoring equipment for preliminary screening, the already-positioned intelligent mobile devices are screened out and only the intelligent mobile devices to be positioned are positioned, which greatly narrows the positioning matching scope and improves the overall positioning efficiency of all intelligent mobile devices in the site.
One embodiment of the present invention provides an indoor positioning implementation method, which is applied to an intelligent mobile device, and includes:
S210, receiving a candidate position list sent by a server; the candidate position list is obtained by the server, after acquiring the identification position list of the recognized intelligent mobile devices according to the video image data, by matching and screening the identification position list against the known spatial positions of the intelligent mobile devices that have already been positioned; the video image data are acquired from each monitoring device in a preset space;
S221, scanning the current surrounding environment through an environment sensing sensor, acquiring environment sensing data and extracting environmental features; the environment sensing data comprise image observation data and laser observation data;
specifically, a floor map has known structural features and color features; the structural features include but are not limited to straight line segments, corners, points and vertical lines, corresponding for example to walls, wall corners, convex corners and doors. The environmental features comprise geometric feature information and color feature information; the geometric feature information likewise includes but is not limited to straight line segments, corners, points and vertical lines, corresponding for example to walls, wall corners, convex corners and doors.
After the server obtains the candidate position list, it sends the list to each intelligent mobile device to be positioned. Each intelligent mobile device to be positioned acquires image observation data through a vision sensor, or laser observation data through a laser sensor, performs feature extraction on the observation data, and obtains the environmental features around the position where the device itself is located at the current moment.
The process of obtaining environmental features by feature extraction from laser observation data is described below. The laser observation data are acquired and subjected to region segmentation, after which the geometric feature information contained in the data is extracted through a corner detection algorithm and a straight-line fitting algorithm; extracting geometric feature information from laser observation data is prior art and is not described in detail here. The geometric feature information represents the environmental features corresponding to the position at which the intelligent mobile device acquired the laser observation data; the geometric feature information obtained by the intelligent mobile device to be positioned scanning at its current position (or start-up position, which may be any position in the preset space) is taken as the environmental features.
The process of obtaining environmental features by feature extraction from image observation data is described below. The image observation data are acquired, the visual observation data, i.e. the captured image, are subjected to gray-scale processing and binarization, and the geometric feature information contained in the image observation data is extracted by a feature or edge detection algorithm such as the SIFT algorithm, the Sobel operator or the Prewitt operator; extracting geometric feature information from an image is prior art and is not described in detail here. The geometric feature information represents one of the environmental features corresponding to the position at which the intelligent mobile device acquired the image observation data. In addition, if the camera installed on the intelligent mobile device is a depth camera, the intelligent mobile device may further take, through an image recognition algorithm, the color attribute corresponding to each piece of geometric feature information in the captured image as one of the environmental features.
S222, respectively carrying out matching calculation on the structural features corresponding to the candidate floor maps in the candidate information and the environmental features to obtain a preliminary matching scoring result;
specifically, the positioning of an intelligent mobile device is completed by determining its spatial position at the current time; at any one time, an intelligent mobile device has exactly one spatial position. The candidate position list comprises a plurality of pieces of candidate information, and each piece of candidate information comprises a candidate position and the candidate floor map corresponding to that candidate position.
Because the candidate position list comprises candidate positions and the candidate floor maps corresponding to them, the intelligent mobile device can obtain the structural features of the candidate floor maps; it then matches the environmental features against the structural features corresponding to each candidate floor map in the candidate position list and obtains the similarity between the environmental features and each candidate floor map to form the preliminary matching scoring result. The intelligent mobile device that has not completed positioning, i.e. the intelligent mobile device to be positioned, then compares the multiple similarities in the preliminary matching scoring result and determines the candidate floor map with the maximum similarity and the corresponding candidate position coordinates to obtain its own positioning result, which comprises the floor and the position of the intelligent mobile device to be positioned.
In this embodiment, the indoor positioning method is combined with the existing indoor monitoring equipment: the rough positions of the intelligent mobile devices in the site are first obtained through an object recognition technology for preliminary screening, and the intelligent mobile devices then collect environment sensing data, i.e. laser observation data from a laser sensor and/or image observation data from a vision sensor, and perform quick matching against each candidate floor map according to these data to realize preliminary indoor positioning. This avoids the problem of positioning loss caused by manually moving an intelligent mobile device, makes the indoor positioning result more accurate, and improves the accuracy of indoor positioning.
Secondly, because the rough positions of the intelligent mobile devices in the site are first identified through the existing indoor monitoring equipment to realize preliminary screening, the intelligent mobile devices that have already been positioned are screened out and only the intelligent mobile devices to be positioned are positioned, which greatly narrows the positioning matching scope and helps improve the overall positioning efficiency of all intelligent mobile devices in the site.
Finally, after an intelligent mobile device is started, the candidate floor map with the maximum similarity value and the corresponding position coordinates are found by enumeration matching according to the laser observation data collected by the laser sensor or the image observation data collected by the vision sensor, and taken as the positioning result of the intelligent mobile device to be positioned, thereby completing the positioning of the initial position. This embodiment requires no modification of the environment, unlike traditional methods that rely on attaching landmarks, reflective strips and the like, and therefore has wide applicability. Moreover, after the initial position is determined, the movement track of the intelligent mobile device is monitored using its movement data, so the position of the intelligent mobile device during movement can be tracked and acquired in real time, greatly improving the accuracy and reliability of indoor positioning.
In this method, the video image data collected by the monitoring devices deployed in advance in the preset space are recognized to obtain the identification position list; the identification position list is then matched against the known spatial positions of the intelligent mobile devices that have already been positioned, so as to screen out all the intelligent devices to be positioned together with a candidate position list; the laser scanning equipment (lidar, millimeter-wave radar and the like) or visual scanning equipment (camera, depth camera, binocular camera and the like) installed on each intelligent mobile device then scans the surrounding environment of the device's current position, extracts features, and performs matching and positioning against the structural features corresponding to the candidate floor maps in the candidate position list.
In this way, the intelligent mobile device gains global positioning and positioning-recovery capability, while the real-time performance of positioning recovery is largely guaranteed. After the environmental features are obtained, they are matched for similarity against the structural features, and the structural feature with the maximum similarity determines the candidate floor map corresponding to the environment of the intelligent mobile device to be positioned. Because indoor environments are rich in straight-line features, extracting straight lines greatly reduces the number of matching points wherever such features are prominent; the algorithm then converges quickly when determining the floor and position of the intelligent mobile device from these matching points, improving efficiency and yielding the positioning result for the device quickly.
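The line-feature matching described above might look like the following sketch. The representation is an assumption for illustration: each extracted line is reduced to an (angle, distance) pair in a common frame, and similarity is the fraction of scan lines that find a close counterpart among the map's lines — far fewer comparisons than raw point matching.

```python
def line_similarity(scan_lines, map_lines, ang_tol=0.1, dist_tol=0.3):
    """Fraction of scan lines matched by some map line within tolerance.

    Lines are (angle_rad, distance_m) pairs; comparing a handful of lines
    instead of thousands of scan points is what speeds up convergence.
    """
    if not scan_lines:
        return 0.0
    matched = sum(
        1 for a, d in scan_lines
        if any(abs(a - ma) <= ang_tol and abs(d - md) <= dist_tol
               for ma, md in map_lines)
    )
    return matched / len(scan_lines)

scan = [(0.0, 2.0), (1.57, 3.0), (0.8, 5.0)]
floor_map = [(0.02, 2.1), (1.55, 3.2)]
print(line_similarity(scan, floor_map))  # two of the three scan lines match
```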
S230, determining, according to the preliminary matching scoring results, the candidate floor map with the maximum score value and its candidate position coordinates as the positioning result, thereby completing self-positioning; the positioning result comprises the floor and position of the device;
and S240, after moving a preset distance, scanning and acquiring the environment sensing data again to calculate a verification matching scoring result.
In this embodiment, the indoor positioning method is combined with existing indoor monitoring equipment. The rough positions of intelligent mobile devices in the site are first obtained by object recognition for preliminary screening; the intelligent mobile device then collects environment sensing data (laser observation data collected by a laser sensor and/or image observation data collected by a visual sensor) and performs fast matching against each candidate floor map to achieve preliminary indoor positioning. This avoids the loss of positioning caused by manually moving the intelligent mobile device, makes the indoor positioning result more accurate, and improves the accuracy of the device's indoor self-positioning.
In addition, after the intelligent mobile device is started, the candidate floor map with the maximum score value and the corresponding position coordinates are found by enumeration matching, based on laser observation data collected by a laser sensor or image observation data collected by a visual sensor, and taken as the positioning result of the intelligent mobile device, completing initial positioning. In this embodiment the environment does not need to be modified: unlike conventional methods, no landmarks, reflective strips or similar markers need to be affixed, so the method is widely applicable. Moreover, after the initial position is determined, the motion trajectory of the intelligent mobile device is monitored using its motion data, so that its position can be tracked in real time while it moves, greatly improving the accuracy and reliability of its indoor self-positioning.
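The enumeration matching described above — scoring every candidate (floor map, position) pair and keeping the best — can be reduced to an argmax loop like the one below. The `score` callable is a hypothetical placeholder standing in for whatever laser/image similarity measure is used; it is not the patent's actual scoring function.

```python
def enumerate_match(observation, candidates, score):
    """Return the (floor_map, position, score) of the best-scoring candidate.

    candidates: iterable of (floor_map, position) pairs
    score:      callable(observation, floor_map, position) -> float similarity
    """
    best = None
    for floor_map, position in candidates:
        s = score(observation, floor_map, position)
        if best is None or s > best[2]:
            best = (floor_map, position, s)
    return best  # None when the candidate list is empty

# Toy usage with a made-up scoring function.
cands = [("floor2", (5.0, 3.0)), ("floor3", (10.0, 4.0))]
fake_score = lambda obs, fm, pos: 0.9 if fm == "floor3" else 0.4
print(enumerate_match(None, cands, fake_score))  # ('floor3', (10.0, 4.0), 0.9)
```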
S250, if the variation between the verification matching scoring result and the preliminary matching scoring result exceeds a threshold value, determining that the device is back in a to-be-positioned state and repositioning;
where both the preliminary matching scoring result and the verification matching scoring result are the similarity between the environment sensing data collected by the intelligent mobile device and the corresponding candidate floor map.
Specifically, after the positioning result of the intelligent mobile device to be positioned has been obtained from the preliminary matching scoring result as described above, the device moves a certain distance, i.e. advances a preset distance (e.g. 0.5 m or 1 m) or turns in place to change its orientation. Following the same procedure used to obtain the preliminary matching scoring result in the above embodiment, the device again acquires image observation data through its visual sensor, or laser observation data through its laser sensor, and performs matching on this data to obtain a verification matching scoring result. The verification result is then compared with the preliminary result: if the variation between them does not exceed the threshold value, the positioning result is determined to be correct; if it exceeds the threshold value, the positioning result is determined to be wrong, and the device is controlled to perform self-positioning again.
In this embodiment, the positioning result obtained by preliminary positioning is verified by comparing the verification matching scoring result with the preliminary matching scoring result, which improves the accuracy and reliability of the intelligent mobile device's self-positioning in building environments.
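The verification step above — re-scoring after a short move and repositioning when the score shifts too much — amounts to a comparison like the following. The threshold value here is an assumed illustration, not a value from the patent.

```python
def verify_positioning(preliminary_score, verification_score, threshold=0.2):
    """Return True if the positioning result still holds after the device
    has moved a preset distance and re-scored against the same floor map."""
    return abs(verification_score - preliminary_score) <= threshold

# Scores stay close after moving: positioning confirmed.
print(verify_positioning(0.85, 0.80))  # True
# Score collapses after moving: back to the to-be-positioned state.
print(verify_positioning(0.85, 0.30))  # False
```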
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above division of program modules is illustrated; in practical applications, the functions may be distributed among different program modules as required, that is, the internal structure of the apparatus may be divided into different program units or modules to perform all or part of the functions described above. The program modules in the embodiments may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one processing unit; the integrated unit may be implemented in the form of hardware or in the form of a software program unit. In addition, the specific names of the program modules are only for distinguishing them from one another and do not limit the protection scope of the present application.
In one embodiment of the invention, a server comprises a processor and a memory, wherein the memory is used for storing a computer program, and the processor is configured to execute the computer program stored in the memory to implement the indoor positioning implementation method of any one of the method embodiments corresponding to fig. 1-2.
The server may be a desktop computer, a notebook, a palmtop computer, a tablet computer, a human-machine interaction screen or other such device.
In one embodiment of the invention, the intelligent mobile device comprises a device body, a moving mechanism, a processor and a memory, wherein the moving mechanism is arranged at the lower part of the device body and comprises a plurality of travelling wheels connected with the lower part of the device body so as to move the device body; the memory is used for storing a computer program; and the processor is configured to execute the computer program stored in the memory to implement the indoor positioning implementation method of any one of the method embodiments corresponding to fig. 4-5.
The server or intelligent mobile device may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that the above is merely an example of a server or intelligent mobile device and does not constitute a limitation; more or fewer components may be included, certain components may be combined, or different components may be used. For example, the server or intelligent mobile device may also include input/output interfaces, display devices, network access devices, a communication bus, communication interfaces and the like, with the processor, the memory, the input/output interface and the communication interface communicating with one another through the communication bus. The memory stores a computer program, and the processor executes the computer program stored in the memory to implement the indoor positioning implementation method of the corresponding method embodiment.
The Processor may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory may be an internal storage unit of the server or intelligent mobile device, such as a hard disk or memory of the terminal device. The memory may also be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card equipped on the terminal device. Further, the memory may include both an internal storage unit and an external storage device of the server or intelligent mobile device. The memory is used to store the computer program and other programs and data required by the server or intelligent mobile device, and may also be used to temporarily store data that has been output or is to be output.
A communication bus is a circuit that connects the described elements and enables transmission between them. For example, the processor receives commands from other elements through the communication bus, decrypts the received commands, and performs calculations or data processing according to the decrypted commands. The memory may include program modules such as a kernel, middleware, an Application Programming Interface (API), and applications. The program modules may be composed of software, firmware or hardware, or at least two of these. The input/output interface forwards commands or data entered by a user via an input/output device (e.g., a sensor, keyboard or touch screen). The communication interface connects the server or intelligent mobile device with other network devices, user equipment and networks. For example, the communication interface may be connected to a network by wire or wirelessly to reach other external network devices or user devices. The wireless communication may include at least one of: wireless fidelity (WiFi), Bluetooth (BT), near field communication (NFC), the Global Positioning System (GPS) and cellular communication, among others. The wired communication may include at least one of: Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), Recommended Standard 232 (RS-232), and the like. The network may be a telecommunications network or a communications network, such as a computer network, the Internet of Things, or a telephone network. The server or intelligent mobile device may connect to the network through the communication interface, and the protocol by which it communicates with other network devices may be supported by at least one of an application, an Application Programming Interface (API), middleware, a kernel, and the communication interface.
In an embodiment of the present invention, a storage medium stores at least one instruction, and the instruction is loaded and executed by a processor to implement the operations performed by the corresponding embodiment of any one of the indoor positioning implementation methods in fig. 1-2. For example, the storage medium may be a read-only memory (ROM), a Random Access Memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
They may be implemented in program code executable by a computing device, so that they can be executed by the computing device, or implemented separately as individual integrated circuit modules, or with multiple modules or steps among them implemented as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or recited in detail in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units may be stored in a storage medium if they are implemented in the form of software functional units and sold or used as separate products. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by sending instructions to relevant hardware through a computer program, where the computer program may be stored in a storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program may be in source code form, object code form, an executable file or some intermediate form, etc. The storage medium may include: any entity or device capable of carrying the computer program, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signal, telecommunication signal, software distribution medium, etc. It should be noted that the content of the storage medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in the jurisdiction, for example: in certain jurisdictions, in accordance with legislation and patent practice, computer-readable storage media do not include electrical carrier signals and telecommunications signals.
It should be noted that the above embodiments can be freely combined as necessary. The foregoing is only a preferred embodiment of the present invention; it should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also be regarded as falling within the protection scope of the present invention.

Claims (10)

1. An indoor positioning implementation method is applied to a server and comprises the following steps:
acquiring an identified position list of the intelligent mobile equipment according to video image data acquired from each monitoring equipment in a preset space;
according to the identification position list and the known spatial position of the intelligent mobile equipment which is positioned, searching out the intelligent mobile equipment to be positioned and acquiring a corresponding candidate position list;
and sending the candidate position list to the corresponding intelligent mobile equipment to be positioned, and completing the positioning of the intelligent mobile equipment to be positioned according to the matching scoring result sent by the intelligent mobile equipment to be positioned.
2. The method for implementing indoor positioning according to claim 1, wherein the step of obtaining the list of the identified locations of the identified smart mobile devices according to the video image data obtained from each monitoring device in the preset space comprises:
acquiring video image data of all monitoring devices deployed in the preset space, performing image recognition on the video image data, and screening out the target monitoring devices in which an intelligent mobile device is recognized;
acquiring pose information corresponding to each target monitoring device;
and calculating position coordinates of the intelligent mobile device according to the pose information corresponding to the target monitoring devices, and aggregating all position coordinates to obtain the identified position list.
3. The indoor positioning implementation method according to claim 1 or 2, wherein the step of sending the candidate position list to the corresponding intelligent mobile device to be positioned, and completing positioning of the intelligent mobile device to be positioned according to the matching score result sent by the intelligent mobile device to be positioned includes:
respectively sending the candidate position lists to corresponding intelligent mobile equipment to be positioned;
receiving a preliminary matching scoring result sent by the intelligent mobile equipment to be positioned; the preliminary matching scoring result is obtained by respectively performing matching calculation on the intelligent mobile equipment to be positioned according to candidate information in the candidate position list; the candidate information comprises candidate floor maps and candidate position coordinates corresponding to the candidate floor maps;
determining a candidate floor map with the maximum score value and candidate position coordinates as a positioning result corresponding to the intelligent mobile equipment to be positioned; the positioning result comprises the floor and the position of the intelligent mobile equipment to be positioned.
4. The indoor positioning implementation method of claim 3, wherein after determining the candidate floor map with the maximum score value and the candidate position coordinates as the positioning result corresponding to the intelligent mobile device to be positioned, the positioning result comprising the floor and position of the intelligent mobile device to be positioned, the method further comprises:
obtaining a verification matching scoring result which is sent again after the intelligent mobile equipment moves a preset distance;
if the variation of the verification matching scoring result and the preliminary matching scoring result exceeds a threshold value, determining that positioning is wrong for repositioning;
and the preliminary matching scoring result and the verification matching scoring result are the similarity between the environment sensing data acquired by the intelligent mobile device and the corresponding candidate floor map.
5. An indoor positioning implementation method is applied to intelligent mobile equipment and comprises the following steps:
receiving a candidate position list sent by a server; the candidate position list being obtained by the server by acquiring an identified position list of recognized intelligent mobile devices according to video image data, and then matching and screening the identified position list against the known spatial positions of intelligent mobile devices that have completed positioning; the video image data being acquired from each monitoring device in a preset space;
if the positioning is not completed, scanning the surrounding environment to acquire environment sensing data, and sequentially loading the candidate information of the candidate position list to respectively perform matching calculation to obtain a preliminary matching scoring result;
determining, according to the preliminary matching scoring results, the candidate floor map with the maximum score value and its candidate position coordinates as the positioning result, thereby completing self-positioning; the positioning result comprising the floor and position of the device.
6. The indoor positioning implementation method of claim 5, wherein the step of, if positioning is not completed, scanning the surrounding environment to acquire environment sensing data and sequentially loading the candidate information of the candidate position list to perform matching calculation to obtain the preliminary matching scoring results comprises:
scanning the current surrounding environment with an environment sensing sensor to acquire the environment sensing data and extract environment features; the environment sensing data comprising image observation data and laser observation data;
and performing matching calculation between the structural features corresponding to the candidate floor map in each piece of candidate information and the environment features, respectively, to obtain the preliminary matching scoring results.
7. The indoor positioning implementation method according to claim 5 or 6, wherein after determining, according to the preliminary matching scoring results, the candidate floor map with the maximum score value and its candidate position coordinates as the positioning result to complete self-positioning, the positioning result comprising the floor and position of the device, the method further comprises:
scanning again to acquire environment sensing data after moving a preset distance, so as to calculate a verification matching scoring result;
if the variation between the verification matching scoring result and the preliminary matching scoring result exceeds a threshold value, determining that the device is in a to-be-positioned state and repositioning;
where both the preliminary matching scoring result and the verification matching scoring result are the similarity between the environment sensing data collected by the intelligent mobile device and the corresponding candidate floor map.
8. A server, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor is configured to execute the computer program stored in the memory to perform the operations performed by the indoor positioning implementation method according to any one of claims 1 to 4.
9. A storage medium having stored therein at least one instruction, which is loaded and executed by a processor to perform operations performed by an indoor positioning implementation method according to any one of claims 1 to 4.
10. An intelligent mobile device, comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor is configured to execute the computer program stored in the memory to perform the operations performed by the indoor positioning implementation method according to any one of claims 5 to 7.
CN202010817857.8A 2020-08-14 2020-08-14 Indoor positioning realization method, server, intelligent mobile device and storage medium Active CN111814752B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010817857.8A CN111814752B (en) 2020-08-14 2020-08-14 Indoor positioning realization method, server, intelligent mobile device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010817857.8A CN111814752B (en) 2020-08-14 2020-08-14 Indoor positioning realization method, server, intelligent mobile device and storage medium

Publications (2)

Publication Number Publication Date
CN111814752A true CN111814752A (en) 2020-10-23
CN111814752B CN111814752B (en) 2024-03-12

Family

ID=72859047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010817857.8A Active CN111814752B (en) 2020-08-14 2020-08-14 Indoor positioning realization method, server, intelligent mobile device and storage medium

Country Status (1)

Country Link
CN (1) CN111814752B (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112850388A (en) * 2020-12-31 2021-05-28 济宁市海富电子科技有限公司 Method and device for service robot to enter elevator
CN113177054A (en) * 2021-05-28 2021-07-27 广州南方卫星导航仪器有限公司 Equipment position updating method and device, electronic equipment and storage medium
CN113587917A (en) * 2021-07-28 2021-11-02 北京百度网讯科技有限公司 Indoor positioning method, device, equipment, storage medium and computer program product
CN114413903A (en) * 2021-12-08 2022-04-29 上海擎朗智能科技有限公司 Positioning method for multiple robots, robot distribution system, and computer-readable storage medium
CN114582476A (en) * 2022-02-21 2022-06-03 北京融威众邦电子技术有限公司 Intelligent assessment triage system and method based on ANN
WO2022121606A1 (en) * 2020-12-08 2022-06-16 北京外号信息技术有限公司 Method and system for obtaining identification information of device or user thereof in scenario
CN114663491A (en) * 2020-12-08 2022-06-24 北京外号信息技术有限公司 Method and system for providing information to a user in a scene
CN115484342A (en) * 2021-06-15 2022-12-16 南宁富联富桂精密工业有限公司 Indoor positioning method, mobile terminal and computer-readable storage medium
CN117191021A (en) * 2023-08-21 2023-12-08 深圳市晅夏机器人有限公司 Indoor vision line-following navigation method, device, equipment and storage medium

Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101646067A (en) * 2009-05-26 2010-02-10 华中师范大学 Digital full-space intelligent monitoring system and method
US20110320116A1 (en) * 2010-06-25 2011-12-29 Microsoft Corporation Providing an improved view of a location in a spatial environment
CN105246039A (en) * 2015-10-20 2016-01-13 深圳大学 An indoor positioning method and system based on image processing
CN106455050A (en) * 2016-09-23 2017-02-22 微梦创科网络科技(中国)有限公司 Bluetooth and Wifi-based indoor positioning method, apparatus and system
EP3299917A1 (en) * 2015-05-18 2018-03-28 TLV Co., Ltd. Device management system and device management method
CN107920386A (en) * 2017-10-10 2018-04-17 深圳数位传媒科技有限公司 Sparse independent positioning method, server, system and computer-readable recording medium
CN108573268A (en) * 2017-03-10 2018-09-25 北京旷视科技有限公司 Image-recognizing method and device, image processing method and device and storage medium
CN108717710A (en) * 2018-05-18 2018-10-30 京东方科技集团股份有限公司 Localization method, apparatus and system under indoor environment
CN110428449A (en) * 2019-07-31 2019-11-08 腾讯科技(深圳)有限公司 Target detection tracking method, device, equipment and storage medium
CN110579215A (en) * 2019-10-22 2019-12-17 上海木木机器人技术有限公司 Localization method, mobile robot and storage medium based on environment feature description
WO2020052319A1 (en) * 2018-09-14 2020-03-19 腾讯科技(深圳)有限公司 Target tracking method, apparatus, medium, and device
CN111145223A (en) * 2019-12-16 2020-05-12 盐城吉大智能终端产业研究院有限公司 Multi-camera personnel behavior track identification analysis method
CN111339363A (en) * 2020-02-28 2020-06-26 钱秀华 Image recognition method and device and server

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101646067A (en) * 2009-05-26 2010-02-10 华中师范大学 Digital full-space intelligent monitoring system and method
US20110320116A1 (en) * 2010-06-25 2011-12-29 Microsoft Corporation Providing an improved view of a location in a spatial environment
EP3299917A1 (en) * 2015-05-18 2018-03-28 TLV Co., Ltd. Device management system and device management method
CN105246039A (en) * 2015-10-20 2016-01-13 深圳大学 An indoor positioning method and system based on image processing
CN106455050A (en) * 2016-09-23 2017-02-22 微梦创科网络科技(中国)有限公司 Bluetooth and Wifi-based indoor positioning method, apparatus and system
CN108573268A (en) * 2017-03-10 2018-09-25 北京旷视科技有限公司 Image-recognizing method and device, image processing method and device and storage medium
CN107920386A (en) * 2017-10-10 2018-04-17 深圳数位传媒科技有限公司 Sparse independent positioning method, server, system and computer-readable recording medium
CN108717710A (en) * 2018-05-18 2018-10-30 京东方科技集团股份有限公司 Localization method, apparatus and system under indoor environment
US20200226782A1 (en) * 2018-05-18 2020-07-16 Boe Technology Group Co., Ltd. Positioning method, positioning apparatus, positioning system, storage medium, and method for constructing offline map database
WO2020052319A1 (en) * 2018-09-14 2020-03-19 腾讯科技(深圳)有限公司 Target tracking method, apparatus, medium, and device
CN110428449A (en) * 2019-07-31 2019-11-08 腾讯科技(深圳)有限公司 Target detection tracking method, device, equipment and storage medium
CN110579215A (en) * 2019-10-22 2019-12-17 上海木木机器人技术有限公司 Localization method, mobile robot and storage medium based on environment feature description
CN111145223A (en) * 2019-12-16 2020-05-12 盐城吉大智能终端产业研究院有限公司 Multi-camera personnel behavior track identification analysis method
CN111339363A (en) * 2020-02-28 2020-06-26 钱秀华 Image recognition method and device and server

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李承; 胡钊政; 胡月志; 吴华伟: "High-precision positioning algorithm for intelligent vehicles based on GPS and image fusion" (基于GPS与图像融合的智能车辆高精度定位算法), 交通运输系统工程与信息 (Journal of Transportation Systems Engineering and Information Technology), no. 03 *
李承; 胡钊政; 胡月志; 吴华伟: "High-precision positioning algorithm for intelligent vehicles based on GPS and image fusion" (基于GPS与图像融合的智能车辆高精度定位算法), 交通运输系统工程与信息 (Journal of Transportation Systems Engineering and Information Technology), no. 03, 15 June 2017 (2017-06-15) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022121606A1 (en) * 2020-12-08 2022-06-16 北京外号信息技术有限公司 Method and system for obtaining identification information of device or user thereof in scenario
CN114663491A (en) * 2020-12-08 2022-06-24 北京外号信息技术有限公司 Method and system for providing information to a user in a scene
TWI800113B (en) * 2020-12-08 2023-04-21 大陸商北京外號信息技術有限公司 Method and system for obtaining identification information of a device or its user in a scene
CN112850388A (en) * 2020-12-31 2021-05-28 济宁市海富电子科技有限公司 Method and device for service robot to enter elevator
CN113177054A (en) * 2021-05-28 2021-07-27 广州南方卫星导航仪器有限公司 Equipment position updating method and device, electronic equipment and storage medium
CN115484342A (en) * 2021-06-15 2022-12-16 南宁富联富桂精密工业有限公司 Indoor positioning method, mobile terminal and computer-readable storage medium
CN113587917A (en) * 2021-07-28 2021-11-02 北京百度网讯科技有限公司 Indoor positioning method, device, equipment, storage medium and computer program product
CN114413903A (en) * 2021-12-08 2022-04-29 上海擎朗智能科技有限公司 Positioning method for multiple robots, robot distribution system, and computer-readable storage medium
CN114582476A (en) * 2022-02-21 2022-06-03 北京融威众邦电子技术有限公司 Intelligent assessment triage system and method based on ANN
CN117191021A (en) * 2023-08-21 2023-12-08 深圳市晅夏机器人有限公司 Indoor vision line-following navigation method, device, equipment and storage medium
CN117191021B (en) * 2023-08-21 2024-06-04 深圳市晅夏机器人有限公司 Indoor vision line-following navigation method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN111814752B (en) 2024-03-12

Similar Documents

Publication Publication Date Title
CN111814752A (en) Indoor positioning implementation method, server, intelligent mobile device and storage medium
US12227169B2 (en) Method and apparatus for detecting parking space and direction and angle thereof, device and medium
US11422261B2 (en) Robot relocalization method and apparatus and robot using the same
CN111081064B (en) Automatic parking system and automatic passenger-replacing parking method of vehicle-mounted Ethernet
EP3974778B1 (en) Method and apparatus for updating working map of mobile robot, and storage medium
CN110587597B (en) SLAM closed loop detection method and detection system based on laser radar
US7321386B2 (en) Robust stereo-driven video-based surveillance
CA2950791C (en) Binocular visual navigation system and method based on power robot
EP3317691A1 (en) System and method for laser depth map sampling
US20200219281A1 (en) Vehicle external recognition apparatus
CN115223135B (en) Parking space tracking method, device, vehicle and storage medium
CN111935641B (en) Indoor self-positioning realization method, intelligent mobile device and storage medium
CN114155557B (en) Positioning method, positioning device, robot and computer-readable storage medium
CN115601435B (en) Vehicle attitude detection method, device, vehicle and storage medium
CN107527368A (en) Three-dimensional attitude localization method and device based on Quick Response Code
CN113971697A (en) Air-ground cooperative vehicle positioning and orienting method
CN114299146A (en) Parking assisting method, device, computer equipment and computer readable storage medium
CN111144415B (en) A Detection Method for Tiny Pedestrian Targets
CN110673607B (en) Feature point extraction method and device under dynamic scene and terminal equipment
CN115294004A (en) Return control method and device, readable medium and self-moving equipment
CN113673288A (en) Idle parking space detection method and device, computer equipment and storage medium
CN116958935B (en) Multi-view-based target positioning method, device, equipment and medium
CN113298044B (en) Obstacle detection method, system, device and storage medium based on positioning compensation
CN111860084A (en) Image feature matching and positioning method and device and positioning system
JP2007200364A (en) Stereo calibration device and stereo image monitoring device using the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant