
WO2019019819A1 - Mobile electronic device and method for processing tasks in a task area - Google Patents


Info

Publication number
WO2019019819A1
WO2019019819A1 (PCT/CN2018/090579, CN2018090579W)
Authority
WO
WIPO (PCT)
Prior art keywords
electronic device
mobile electronic
wireless signal
camera
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2018/090579
Other languages
English (en)
French (fr)
Inventor
潘景良
陈灼
李腾
陈嘉宏
高鲁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vestorch Technology Ltd
Original Assignee
Vestorch Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vestorch Technology Ltd filed Critical Vestorch Technology Ltd
Publication of WO2019019819A1

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/0011 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots, associated with a remote control arrangement
    • G05D 1/0022 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots, associated with a remote control arrangement characterised by the communication link
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D 1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D 1/02 Control of position or course in two dimensions
    • G05D 1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D 1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D 1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means

Definitions

  • the present invention relates to the field of electronic devices.
  • the invention relates to the field of intelligent robot systems.
  • the traditional sweeping robot either moves autonomously according to a scanned map with self-positioning, or moves randomly by collision, sweeping the floor as it goes. Because its mapping and positioning technology is immature or inaccurate, the traditional sweeping robot cannot fully judge complex floor conditions during operation and easily loses its position and heading.
  • some models have no positioning ability at all and can change direction only by the physical principle of collision rebound, which may damage household goods or the robot itself, or even cause personal injury, and is a nuisance to the user.
  • the present invention proposes a technique in which the user delineates a target operation area (for example, an area to be cleaned) in a mobile phone APP and sends an instruction to the robot, which then automatically completes the operation (cleaning) in the delineated area.
  • three ways to establish an indoor environment map are proposed.
  • a path planning algorithm is also implemented that accurately reaches the area circled in the user's mobile phone APP and effectively covers that area to complete cleaning.
  • One embodiment of the present invention discloses a mobile electronic device for processing a task of a task area, including a first wireless signal transceiver, an image processor, a positioning module, a path planning module, and a motion module. The first wireless signal transceiver is communicably coupled to a second mobile electronic device and configured to acquire a photo of the task site taken by the user of the second mobile electronic device, together with a selected area on the photo. The image processor is communicably coupled to the first wireless signal transceiver and configured to extract feature information from the photo containing the selected area and, by comparing the extracted feature information with stored feature information of an image map containing location information, determine the actual coordinate range corresponding to the selected area in the photo. The positioning module is communicably coupled to the image processor and configured to record the distance between the current location of the mobile electronic device and the actual coordinate range corresponding to the selected area. The path planning module is communicably coupled to the image processor and configured to generate a path planning scheme according to the actual coordinate range corresponding to the selected area. The motion module is communicably coupled to the path planning module and configured to move according to the path planning scheme.
  • Another embodiment of the present invention discloses a method for processing a task of a task area in a mobile electronic device, the mobile electronic device including a first wireless signal transceiver, an image processor, a positioning module, a path planning module, and a motion module. The method comprises: obtaining, by the first wireless signal transceiver communicably connected to a second mobile electronic device, a photo of the task site taken by the user of the second mobile electronic device and a selected area on the photo; extracting, by the image processor communicably coupled to the first wireless signal transceiver, feature information from the photo containing the selected area, and determining the actual coordinate range corresponding to the selected area in the photo by comparing the extracted feature information with stored feature information of an image map containing location information; recording, by the positioning module communicably coupled to the image processor, the distance between the current location of the mobile electronic device and the actual coordinate range corresponding to the selected area; generating, by the path planning module communicably coupled to the image processor, a path planning scheme based on the actual coordinate range corresponding to the selected area; and moving, by the motion module communicably coupled to the path planning module, according to the path planning scheme.
  • FIG. 1 shows a schematic diagram of a system in which a mobile electronic device is located, in accordance with one embodiment of the present invention.
  • FIGS. 2A and 2B respectively illustrate a task area photographed by a second camera of the second mobile electronic device and a delineation of the task area by the second mobile electronic device, in accordance with one embodiment of the present invention.
  • FIG. 3 shows a schematic diagram of a system in which a mobile electronic device and a second mobile electronic device are located, in accordance with one embodiment of the present invention.
  • FIG. 4 shows a flow chart of a method in a mobile electronic device in accordance with one embodiment of the present invention.
  • FIG. 5 shows a checkerboard of black and white rectangles displayed on the display screen of the mobile electronic device 100.
  • FIG. 1 shows a schematic diagram of a system in which a mobile electronic device is located, in accordance with one embodiment of the present invention.
  • the mobile electronic device 100 includes, but is not limited to, a cleaning robot, an industrial automation robot, a service robot, a disaster relief robot, an underwater robot, a space robot, a drone, and the like. It can be understood that the mobile electronic device 100 can also be referred to as the first mobile electronic device 100 in order to distinguish it from the following second mobile electronic device 140.
  • the second mobile electronic device 140 includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a remote controller, and the like.
  • the mobile electronic device optionally includes an operator interface.
  • the second mobile electronic device is a mobile phone, and the operation interface is a mobile phone APP.
  • the signal transmission manner between the mobile electronic device 100 and the charging station 160 includes but is not limited to Bluetooth, WIFI, ZigBee, infrared, ultrasonic, ultra-wideband (UWB), etc.; in this embodiment, WIFI is taken as an example for description.
  • the mission area represents the venue where the mobile electronic device 100 performs its task. For example, when the mobile electronic device 100 is a cleaning robot, the task area is the area the cleaning robot needs to clean. For another example, when the mobile electronic device 100 is a disaster relief robot, the task area is the area where rescue is needed.
  • a mission site represents a venue that contains the entire mission area.
  • the mobile electronic device for processing tasks of the task area includes a first wireless signal transceiver 102, an image processor 104, a positioning module 106, a path planning module 108, and a motion module 110.
  • the first wireless signal transceiver 102 can be communicatively coupled to the second mobile electronic device 140, configured to acquire a photo taken by the user of the second mobile electronic device 140 to the mission location and a selected area on the photo.
  • FIGS. 2A and 2B respectively illustrate a task area photographed by the second camera 144 of the second mobile electronic device 140, and the user of the second mobile electronic device 140 delineating the selected area, in accordance with an embodiment of the present invention.
  • in this example, the second mobile electronic device 140 is a mobile phone, and the task area is a cleaning area.
  • the user of the second mobile electronic device 140 uses the mobile phone APP and the second camera 144 on the device to take a photo of the location to be cleaned (as shown in FIG. 2A) and circles the target cleaning area in the photo (as shown in FIG. 2B).
  • the photo (including the delineated target cleaning area) is transmitted to the mobile electronic device 100 via a local wireless communication network (WIFI, etc.) and stored in the memory 116.
  • the image processor 104 is communicably coupled to the first wireless signal transceiver 102, configured to extract feature information of a photo containing the selected region, and by comparing the extracted feature information with the stored feature information of the image map including the location information, Determine the actual coordinate range that corresponds to the selected area in the photo.
  • the location information refers to the location information of the image feature points in the image map during the process of establishing the map, that is, the actual coordinate position.
  • the location information includes, for example, the location of the charging post 180 and/or the location of the mobile electronic device 100 itself.
  • the image processor 104 can use the position of the charging post 180 as a coordinate origin.
  • the memory 116 of the mobile electronic device 100 stores an image map established during the first use of the indoor environment map, such as indoor image map information, including image feature points and their location information.
  • the image processor 104 extracts feature information and position information from the captured photos and performs a fast comparison analysis against the indoor image map (including position information) in the memory 116, using an image feature point matching algorithm such as SIFT or SURF.
  • the image processor 104 in the mobile electronic device 100 determines, by comparing image feature points against those in the indoor image map, the coordinate range of the indoor actual area corresponding to the area selected by the user of the second mobile electronic device 140 in the photo.
  • the indoor actual area corresponding to the user-selected area may be determined as follows: for example, the image processor 104 may appropriately enlarge the area beyond the user's original selection, such as the circled area indicated by the finger pattern in FIG. 2B.
  • for example, the area is enlarged by a 10% margin to ensure that the selected area indicated by the finger pattern lies within the actual cleaning range, thereby determining the actual coordinate range.
  • the image processor 104 may offset the original area outward by a certain distance to determine the actual coordinate range.
  • the image processor 104 may approximately construct a standard shape that encloses the actual coordinate range.
  • the selected range indicated by the finger pattern in the figure is an irregular, approximately rectangular shape.
  • the image processor 104 can convert this approximate rectangle into the actual coordinate range of a true rectangle, making it easier for the mobile electronic device to clean and complete the task.
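As a sketch of how this enlargement and regularisation might be coded, not taken from the patent itself: the function names and the exact 10% factor below are illustrative assumptions. The circled polygon is scaled about its centroid and then replaced by its axis-aligned bounding rectangle.

```python
def inflate_polygon(points, factor=1.10):
    """Scale a polygon (list of (x, y) floor coordinates) about its centroid."""
    n = len(points)
    cx = sum(x for x, _ in points) / n
    cy = sum(y for _, y in points) / n
    return [(cx + (x - cx) * factor, cy + (y - cy) * factor) for x, y in points]

def bounding_rectangle(points):
    """Replace an irregular outline with its axis-aligned bounding rectangle."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))

# A roughly rectangular circled area, enlarged and regularised:
area = [(1.0, 1.0), (3.0, 1.2), (2.8, 2.9), (1.1, 3.0)]
rect = bounding_rectangle(inflate_polygon(area))
```

The bounding rectangle plays the role of the "standard shape" mentioned above; a real implementation could equally fit a rotated rectangle.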
  • the image feature points may be identified by a Scale Invariant Feature Transform (SIFT) algorithm or a Speeded Up Robust Features (SURF) algorithm.
  • the image processor 104 first identifies the key points of objects in the reference images stored in the memory 116 and extracts their SIFT features, then compares the stored SIFT features of each key point with the SIFT features of the newly acquired image, matching features based on the K-Nearest Neighbor (KNN) algorithm.
  • the SURF algorithm is based on approximate 2D Haar wavelet responses, uses integral images for image convolution, employs a Hessian-matrix-based measure for the detector, and uses a distribution-based descriptor.
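The KNN matching step can be illustrated with a toy stand-in. A real system would use OpenCV's SIFT or SURF descriptors; here the descriptors are plain vectors, and a match is accepted only when the nearest neighbour is clearly closer than the second-nearest (Lowe's ratio test). All names and the 0.75 ratio are illustrative assumptions.

```python
import math

def l2(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_ratio_match(photo_desc, map_desc, ratio=0.75):
    """Return (photo_idx, map_idx) pairs that pass Lowe's ratio test."""
    matches = []
    for qi, q in enumerate(photo_desc):
        # distances to all stored map descriptors, best two first
        dists = sorted((l2(q, m), mi) for mi, m in enumerate(map_desc))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches

photo = [(1.0, 0.0), (0.0, 5.0)]            # descriptors from the user's photo
stored = [(0.9, 0.1), (0.0, 5.1), (10.0, 10.0)]  # descriptors in the image map
print(knn_ratio_match(photo, stored))        # [(0, 0), (1, 1)]
```

The ratio test is what makes KNN matching robust: an ambiguous feature whose two nearest map descriptors are similar is simply discarded.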
  • the coordinate range of the indoor real area corresponding to the area selected by the user of the second mobile electronic device 140 in the photo may be determined by coordinate mapping conversion.
  • the feature points in the image from the second mobile electronic device 140 are matched against the image feature points in the image map; that is, the actual coordinate positions of the feature points in that image can be determined.
  • the coordinate system conversion relationship of the camera coordinate system where the image captured by the user camera is located with respect to the actual world coordinate system where the charging pile is located can be calculated.
  • the boundary line of the delineated area in the image can be discretized into a boundary line composed of points.
  • using the positional information of the discretized boundary points relative to the image feature points, the actual coordinate positions of those feature points, and the coordinate system conversion relationship, the actual coordinate positions of the discretized boundary points in the world coordinate system (that is, the charging pile coordinate system) can be calculated; this yields the coordinate range of the actual indoor area corresponding to the boundary line.
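For points lying on a flat floor, the camera-to-world conversion described above reduces to a plane-to-plane homography. Below is a minimal NumPy sketch of that idea; the function names and the toy correspondences are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def fit_homography(img_pts, world_pts):
    """Direct Linear Transform: homography mapping image (u, v) -> floor (x, y).

    Needs at least 4 correspondences (e.g. matched feature points whose
    world coordinates are known from the image map)."""
    rows = []
    for (u, v), (x, y) in zip(img_pts, world_pts):
        rows.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        rows.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    return vt[-1].reshape(3, 3)          # null vector of the DLT system

def image_to_world(H, pts):
    """Map discretized boundary points from the photo onto the floor."""
    hom = np.array([[u, v, 1.0] for u, v in pts]).T
    w = H @ hom
    return (w[:2] / w[2]).T              # dehomogenise

# Toy correspondences where the floor plane is world = 2 * image + (1, 1):
img = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
world = [(1.0, 1.0), (3.0, 1.0), (3.0, 3.0), (1.0, 3.0)]
H = fit_homography(img, world)
```

Once H is known, every discretized point on the circled boundary can be pushed through `image_to_world` to obtain its position in the charging pile coordinate system.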
  • the positioning module 106 is communicably coupled to the image processor 104 and is configured to record the distance between the current location of the mobile electronic device 100 and the actual coordinate range corresponding to the selected region. For example, the positioning module 106 sets the location of the charging post 180 as the coordinate origin, and each point in the image corresponds to a coordinate value (X, Y). Together with the encoder, the positioning module 106 enables the mobile electronic device 100 to know its current location.
  • the positioning module 106 calculates the indoor position of the first electronic device 100. The first electronic device 100 needs to know its indoor location at all times during operation, and this is implemented by the positioning module 106.
  • the path planning module 108 is communicably coupled to the image processor 104 and configured to generate a path planning scheme based on actual coordinate ranges corresponding to the selected regions.
  • the path planning module 108 is further configured to perform path planning on the selected area by using a grid-based spanning tree path planning algorithm.
  • the path planning module 108 optimizes the cleaning path for the coordinate range according to the generated corresponding region coordinate range (the user-defined target cleaning region).
  • Grid-based Spanning Tree Coverage path planning is used to plan the cleaning path for the selected target cleaning area. The method grids the corresponding coordinate region, builds tree nodes over the grid cells to generate a spanning tree, and then uses a Hamiltonian path surrounding the spanning tree as the optimized cleaning path for the region.
  • the mobile electronic device 100 is located at the smart charging station 180.
  • when first used, the path planning module 108 reads the path the mobile electronic device 100 followed to reach the region (if the mobile electronic device 100 used the follow mode), or adopts the walking path recorded during the mapping process by the user of the second mobile electronic device 140 (if the mobile electronic device 100 did not follow the user the first time), and combines this path with the cleaning path of the selected area into a complete cleaning task path.
  • the synthesis can simply connect the two paths in sequence: the first path reaches the target cleaning area, and the second path optimally covers the circled cleaning area to complete the cleaning task.
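The gridding and spanning-tree half of this algorithm can be sketched as follows. This is an illustrative fragment, not the patent's code: the grid size and the DFS order are arbitrary choices, and the final Hamiltonian walk that circumnavigates the tree (visiting every sub-cell once) is omitted.

```python
def grid_spanning_tree(cells):
    """Grow a DFS spanning tree over a set of (row, col) grid cells.

    The returned edges form the tree that the coverage path later
    circumnavigates; a tree over n connected cells has n - 1 edges."""
    cells = set(cells)
    start = min(cells)
    seen, stack, edges = {start}, [start], []
    while stack:
        r, c = stack.pop()
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nb in cells and nb not in seen:
                seen.add(nb)
                edges.append(((r, c), nb))
                stack.append(nb)
    return edges

# Rasterise a toy 3x3 target cleaning region into cells and build the tree:
region = [(r, c) for r in range(3) for c in range(3)]
tree = grid_spanning_tree(region)
print(len(tree))  # 8 edges for 9 connected cells
```

In the full algorithm each cell is subdivided into four sub-cells and the robot walks around the tree, which yields the Hamiltonian coverage path mentioned above.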
  • the motion module 110 can be communicatively coupled to the path planning module 108, configured to perform motion in accordance with a path planning scheme.
  • in the first mode, the mobile electronic device 100 (for example, a robot) includes a camera, and the user of the second mobile electronic device 140 wears a positioning receiver.
  • the mobile electronic device 100 further includes a first camera 112, and the second mobile electronic device 140 further includes a second wireless signal transceiver 142; both devices are configured to operate in a map-building mode.
  • the first wireless signal transceiver 102 and the second wireless signal transceiver 142 are each communicably coupled to a plurality of reference wireless signal sources and configured to determine the locations of the mobile electronic device 100 and the second mobile electronic device 140 based on the signal strengths obtained from those sources.
  • the signals received from the reference wireless signal sources can be converted into distance information by any method known in the art, including but not limited to Time of Flight (ToF), Angle of Arrival (AoA), Time Difference of Arrival (TDOA), and Received Signal Strength (RSS).
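As an illustration of the RSS branch, a common approach (an assumption here, not specified by the patent) is a log-distance path-loss model to turn signal strength into distance, followed by linearised trilateration over three reference sources. All constants below are illustrative.

```python
def rss_to_distance(rss_dbm, p0=-40.0, n=2.0):
    """Log-distance path-loss model.

    p0 is the RSS at 1 m and n the path-loss exponent; both would be
    calibrated for the actual UWB/WIFI hardware."""
    return 10 ** ((p0 - rss_dbm) / (10 * n))

def trilaterate(anchors, dists):
    """2-D position from distances to three fixed reference signal sources.

    Subtracting the first circle equation from the other two gives a
    linear 2x2 system, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

With more than three sources the same linearisation becomes an overdetermined least-squares problem, which averages out RSS noise.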
  • the motion module 110 is configured to follow the motion of the second mobile electronic device 140 in accordance with the location of the mobile electronic device 100 and the second mobile electronic device 140.
  • the mobile electronic device 100 includes a monocular camera 112, and the user of the second mobile electronic device 140 wears a wireless positioning receiver wristband or holds a mobile phone equipped with a wireless positioning receiver peripheral.
  • using the monocular camera 112 reduces hardware cost and computational cost while achieving the same effect as a depth camera; image depth information may not be needed.
  • distance (depth) information is instead sensed by the ultrasonic sensor and the laser sensor.
  • a monocular camera is taken as an example for description.
  • in this mode, the mobile electronic device 100 follows the user by means of the user's wireless positioning receiver. For example, on first use, the user of the second mobile electronic device 140 interacts with the mobile electronic device 100 through the mobile phone APP to complete the indoor map building.
  • using a wireless signal transmitter group placed at fixed indoor positions as reference points (for example, UWB), the mobile APP of the second mobile electronic device 140 and the wireless signal module in the mobile electronic device 100 each read the signal strength (RSS) of every signal source.
  • the location of the user of the second mobile electronic device 140 and the mobile electronic device 100 within the room is determined.
  • the motion module 110 of the mobile electronic device 100 completes user tracking according to real-time location information (mobile phone and robot location) transmitted by the smart charging station.
  • the first camera 112 is configured to capture a plurality of images when the motion module 110 is in motion, the plurality of images including feature information and corresponding photographing position information.
  • the follow-up process is completed by the robot's monocular camera.
  • the mobile electronic device 100 captures the entire indoor layout using the first camera 112, such as a monocular camera, and transmits the captured feature-rich images, together with the corresponding shooting position information of the mobile electronic device 100, to the memory 116 in real time via a local wireless communication network (WIFI, Bluetooth, ZigBee, etc.).
  • in FIG. 1, the memory 116 is shown as included in the mobile electronic device 100.
  • alternatively, the memory 116 may be included in the smart charging station 180 or in the cloud.
  • the image processing module 104 is communicably coupled to the first camera 112 and configured to generate an image map by stitching the plurality of images and extracting their feature information and shooting location point information.
  • according to the height and the internal and external parameters of the first camera 112 of the mobile electronic device 100, the image processor 104 stitches the large number of images captured by the first camera 112 into a map, performs feature selection and extraction (e.g., with the SIFT or SURF algorithm), adds feature point position information to generate indoor image map information (containing a large number of image feature points), and stores the processed image map information in the memory 116.
  • the internal parameters of the camera are parameters related to the camera's own characteristics, such as the lens focal length and pixel size; the external parameters are its parameters in the world coordinate system (the actual coordinate system anchored at the charging pile), such as the camera's position, rotation, and angle.
  • photos taken by the camera are expressed in the camera's own coordinate system, so the camera's internal and external parameters are required to convert between coordinate systems.
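A minimal sketch of that conversion, assuming a standard pinhole model. None of the numbers below come from the patent: the intrinsics K (focal length, principal point) and extrinsics (R, t) are made-up values for illustration.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],   # fx, skew, cx  (internal parameters)
              [0.0, 800.0, 240.0],   # fy, cy
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # external parameters: camera aligned
t = np.array([0.0, 0.0, 2.0])        # with world axes, origin 2 m ahead

def project(p_world):
    """Pixel coordinates (u, v) of a 3-D point in the charging-pile frame."""
    p_cam = R @ np.asarray(p_world, dtype=float) + t   # world -> camera frame
    u, v, w = K @ p_cam                                # camera frame -> pixels
    return u / w, v / w

print(project([0.0, 0.0, 0.0]))  # world origin lands on the principal point
```

Inverting this mapping for floor points (known height) is exactly what lets the device turn pixels of the circled area into world coordinates.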
  • in the second mode, the mobile electronic device 100 (robot) includes a camera and a display that can show a black-and-white camera-calibration checkerboard, and the user of the second mobile electronic device 140 does not need to wear a positioning receiver.
  • the mobile electronic device 100 further includes a display screen 118 and is configured to operate in a map-building mode, and the second mobile electronic device 140 includes a second camera 144. The first wireless signal transceiver 102 is communicably coupled to a plurality of reference wireless signal sources and configured to determine the location of the mobile electronic device 100 based on the signal strengths obtained from those sources.
  • the first camera 112 is configured to detect the location of the second mobile electronic device 140.
  • the mobile electronic device 100 further includes an ultrasonic sensor and a laser sensor, which can detect the distance between the mobile electronic device 100 and the second mobile electronic device 140.
  • the motion module 110 is configured to follow the motion of the second mobile electronic device 140 in accordance with the location of the mobile electronic device 100 and the second mobile electronic device 140.
  • the user of the second mobile electronic device 140 implements user interaction with the mobile electronic device 100 through the mobile phone APP to complete the indoor establishment of the map.
  • using a wireless signal transmitter group (UWB or the like) placed at fixed indoor positions as reference points, the first wireless signal transceiver 102 in the mobile electronic device 100 reads the signal strength (RSS) of each signal source to determine the indoor location of the mobile electronic device 100.
  • target positioning and following of the user of the second mobile electronic device 140 is achieved by the first camera 112 of the mobile electronic device 100, such as a monocular camera, together with the ultrasonic sensor and the laser sensor 114.
  • the user of the second mobile electronic device 140 can set the following distance through the mobile phone APP, so that the mobile electronic device 100 adjusts the distance and angle between itself and the second mobile electronic device 140 according to that following distance and the distance and angle measured in real time.
  • the mobile electronic device 100 transmits the following path coordinates to the smart charging post 180 in real time.
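One plausible way to realise this distance-and-angle adjustment is a simple proportional controller; the patent does not specify the control law, so the gains and target distance below are illustrative assumptions only.

```python
def follow_step(measured_dist, measured_angle, target_dist=1.0,
                k_v=0.8, k_w=1.5):
    """Proportional follow control.

    measured_dist: distance to the user (m), from ultrasonic/laser sensing.
    measured_angle: bearing of the user relative to the camera axis (rad).
    Returns (forward speed m/s, turn rate rad/s)."""
    v = k_v * (measured_dist - target_dist)   # close the distance error
    w = k_w * measured_angle                  # turn toward the user
    return v, w
```

At the target distance and zero bearing the commands vanish, so the robot holds the user-set following distance.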
  • display 118 of mobile electronic device 100 is configured to display, for example, a black and white checkerboard.
  • the image processor 104 is communicably coupled to the second camera 144 and is configured to receive a plurality of images taken from the second camera 144 as the motion module 110 moves.
  • image processor 104 may receive a plurality of images taken from second camera 144 via first wireless signal transceiver 102 and second wireless signal transceiver 142.
  • the plurality of images includes an image of the display 118 of the mobile electronic device 100 that is displayed as a black and white checkerboard.
  • the image processor 104 is further configured to generate an image map by splicing a plurality of images, extracting feature information in the plurality of images, and capturing position point information.
  • the calibration picture is a checkerboard composed of black and white rectangles, as shown in Figure 5.
  • the mobile electronic device 100, i.e., the robot, includes a first camera 112, such as a monocular camera, and a display 118 that can show a black-and-white camera-calibration checkerboard.
  • the user does not need to wear the wireless positioning receiver bracelet, and the user does not need to hold the mobile phone equipped with the wireless positioning receiver peripheral.
  • in this mode, the mobile electronic device 100 follows the user visually, and the user of the second mobile electronic device 140 uses the mobile phone APP to complete the mapping. For example, upon reaching each room, the user of the second mobile electronic device 140 launches the room map-building program via the mobile phone APP, at which time the liquid crystal display 118 of the mobile electronic device 100 displays a classic black-and-white checkerboard for correcting the camera.
  • the mobile electronic device 100 simultaneously transmits its own coordinate and direction information to the positioning module 106.
  • the user of the second mobile electronic device 140 photographs the room environment using the mobile phone APP, and the photograph taken needs to include a black and white checkerboard in the liquid crystal display of the mobile electronic device 100.
  • the user of the second mobile electronic device 140 takes multiple photos according to the layout of the room (each photo must capture the black-and-white checkerboard on the robot's LCD screen), and the mobile phone APP transfers the photos of the room environment and of the mobile electronic device 100, for example the robot 100, to the memory 116 via a local wireless communication network (WIFI, Bluetooth, ZigBee, etc.).
  • the image processor 104 stitches the large number of images taken by the user of the second mobile electronic device 140 into a map, performs feature selection and extraction, adds feature point position information, generates indoor image feature point map information, and stores the processed image map information in the memory 116.
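In practice, checkerboard detection and camera correction would use a library routine such as OpenCV's findChessboardCorners/calibrateCamera. The toy NumPy sketch below (all names are assumptions) only illustrates why a black-and-white checkerboard is convenient: its interior corners, where four alternating squares meet, are trivially localisable.

```python
import numpy as np

def checkerboard(rows, cols, sq=20):
    """Synthetic black/white board like the one shown on display 118."""
    tile = (np.indices((rows, cols)).sum(axis=0) % 2).astype(np.uint8)
    return np.kron(tile, np.ones((sq, sq), dtype=np.uint8)) * 255

def count_interior_corners(img):
    """Count pixels where both a horizontal and a vertical colour change meet.

    An R x C board has (R - 1) * (C - 1) interior corners, the points a
    calibration routine uses as known reference geometry."""
    img = img.astype(int)
    dx = np.diff(img, axis=1) != 0   # vertical colour boundaries
    dy = np.diff(img, axis=0) != 0   # horizontal colour boundaries
    return int((dx[:-1, :] & dy[:, :-1]).sum())

print(count_interior_corners(checkerboard(3, 4)))  # 2 * 3 = 6 corners
```

Because the physical geometry of those corners is known exactly, photos that contain the board let the phone camera's pose relative to the robot be recovered.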
  • Mode 3: the mobile electronic device 100 (robot) does not include a camera, and the user of the second mobile electronic device 140 wears a positioning receiver.
  • the second mobile electronic device 140 further includes a second wireless signal transceiver 142 and a second camera 144.
  • a second wireless signal transceiver 142 is communicably coupled to the plurality of reference wireless signal sources, configured to determine a location of the second mobile electronic device 140 based on signal strengths obtained from the plurality of reference wireless signal sources.
  • the second camera 144 is configured to capture a plurality of images of the mission location.
  • the image processor 104 is communicably coupled to the second camera 144 and configured to generate an image map by stitching the plurality of images and extracting their feature information and shooting position point information.
  • the mobile electronic device 100 such as a robot, does not include a monocular camera and the robot does not follow the user of the second mobile electronic device 140.
  • the user of the second mobile electronic device 140 wears a wireless positioning receiver wristband, or the user holds a mobile phone equipped with a wireless positioning receiver peripheral, and uses the mobile phone APP to complete the indoor drawing.
  • the user of the second mobile electronic device 140 accomplishes the indoor map building through the mobile phone APP together with the wireless positioning receiver wristband the user wears or the wireless positioning receiver peripheral attached to the mobile phone.
  • using fixed-position reference wireless signal sources (UWB or the like) placed indoors as reference points, the wireless signal transceiver 142 in the second mobile electronic device 140 reads the received signal strength (RSS) of each reference source to determine the indoor location of the user of the second mobile electronic device 140. Upon reaching each room, the user of the second mobile electronic device 140 initiates the room map-building program via the mobile APP and photographs the room environment with the mobile phone APP; for example, multiple photos can be taken according to the layout of the room.
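The RSS-based positioning described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes a log-distance path-loss model for converting RSS to range (the `tx_power_dbm` and `path_loss_exp` values are hypothetical and would need calibration) and a linearized least-squares fix from three or more fixed reference sources.

```python
import numpy as np

def rss_to_distance(rss_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    """Estimate range from a received signal strength reading using the
    log-distance path-loss model.  tx_power_dbm is the expected RSS at
    1 m; both parameters are illustrative and need per-site calibration.
    """
    return 10.0 ** ((tx_power_dbm - rss_dbm) / (10.0 * path_loss_exp))

def trilaterate(anchors, distances):
    """Least-squares 2-D position fix from >= 3 fixed reference sources.

    anchors: (n, 2) array of known reference-source coordinates.
    distances: (n,) array of estimated ranges to each source.
    Subtracting the first circle equation from the others linearizes the
    system, which is then solved by least squares.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (anchors[1:] - anchors[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1)
         - np.sum(anchors[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos
```

With anchors at (0,0), (4,0), (0,3) and ranges measured from the point (1,1), `trilaterate` recovers approximately (1, 1).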
  • for each shot, the mobile APP of the second mobile electronic device 140 records the pose information of the second camera 144, along with the information recorded by the second wireless signal transceiver 142 for the second mobile electronic device 140, such as the height of the phone above the ground and its indoor location; this is transmitted to the memory 116 via a local wireless communication network (WIFI, Bluetooth, ZigBee, etc.).
  • based on the intrinsic and extrinsic parameters of the second camera 144 and the pose, height, and location information at shooting time, the image processor 104 stitches the large number of captured images into a map, performs feature selection and extraction, and adds feature-point position information, generating an indoor image feature-point map that is stored in the memory 116.
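The feature matching used against the stored feature-point map (SIFT/SURF descriptors matched via K-nearest-neighbour search, as the description notes) can be illustrated with a minimal ratio-test matcher over precomputed descriptor vectors. This is a hedged sketch: a real system would match 128-D SIFT descriptors with an OpenCV matcher; the tiny 2-D descriptors below are purely illustrative.

```python
import numpy as np

def ratio_test_matches(desc_query, desc_map, ratio=0.75):
    """Match query descriptors against map descriptors with Lowe's
    ratio test: keep a match only when the nearest map descriptor is
    clearly closer than the second nearest, rejecting ambiguous matches.

    Returns a list of (query_index, map_index) pairs.
    """
    matches = []
    for qi, q in enumerate(desc_query):
        dists = np.linalg.norm(desc_map - q, axis=1)  # distance to every map descriptor
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((qi, int(best)))
    return matches
```

A query descriptor near one map descriptor and far from the rest passes the test; a descriptor equidistant from two map descriptors is rejected as ambiguous.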
  • the mobile electronic device 100, for example the robot 100, further includes an encoder and an inertial measurement module (IMU) to assist the first camera 112 in acquiring the position and attitude of the mobile electronic device 100, such as a robot.
  • for example, when the robot is occluded and out of the first camera 112's line of sight, both the encoder and the IMU can still provide the position and attitude of the robot.
  • the encoder can be used as an odometer: by recording the rotation of the robot's wheels, it computes the trajectory the robot has travelled.
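The encoder-as-odometer idea reduces to standard differential-drive dead reckoning. A minimal sketch, assuming the two wheel-travel distances per sampling interval have already been derived from encoder ticks (tick counts, wheel radius, and wheel base are hypothetical calibration inputs, not values from the patent):

```python
import math

def dead_reckon(pose, d_left, d_right, wheel_base):
    """Update a differential-drive pose (x, y, heading) from the
    distances travelled by the left and right wheels over one sampling
    interval, as reported by the wheel encoders.
    """
    x, y, theta = pose
    d_center = (d_left + d_right) / 2.0        # forward travel of the robot center
    d_theta = (d_right - d_left) / wheel_base  # heading change
    # Integrate using the midpoint heading for better accuracy on arcs.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    return (x, y, theta + d_theta)
```

Equal wheel travel moves the robot straight ahead; opposite wheel travel rotates it in place without translation.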
  • the mobile electronic device 100 may further include a sensor 114 that transmits obstacle information around the mobile electronic device 100 to the motion module 110.
  • the motion module 110 is also configured to adjust the motion orientation of the mobile electronic device 100 to avoid obstacles. It can be understood that, because the first camera 112 and the sensor 114 are mounted at different heights on the mobile electronic device 100, the obstacle information captured by the first camera 112 may differ from the obstacles detected by the sensor 114, since occlusion may occur.
  • the first camera 112 can change the visual direction by means of rotation, pitch, etc. to obtain a wider visual range.
  • the sensor 114 can be mounted at a relatively low horizontal position, which may lie in a blind spot of the first camera 112; objects there do not appear in the first camera 112's field of view, so these conventional sensors 114 must be relied upon to avoid such obstacles.
  • the camera 112 may acquire obstacle information in conjunction with the ultrasonic and laser sensor 114 readings: the image obtained by the monocular camera 112 is used for object recognition, while the ultrasonic and laser sensors 114 provide ranging.
  • the sensor 114 includes an ultrasonic sensor and/or a laser sensor.
  • the first camera 112 and the sensor 114 can assist each other. For example, where there is occlusion, the mobile electronic device 100 must rely on its own laser sensor, ultrasonic sensor 114, and the like to avoid obstacles in the occluded region.
  • the laser sensor and the ultrasonic sensor mounted on the mobile electronic device 100 detect the static and dynamic environment around the mobile electronic device 100, helping it avoid static and dynamic obstacles and adjust to the optimal path.
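A minimal sketch of the range-based avoidance decision described above, assuming the ultrasonic/laser readings have already been bucketed into angular sectors (the sector keys, units, and safe distance are illustrative assumptions, not values from the patent):

```python
def avoidance_heading(sector_ranges, safe_distance=0.5):
    """Pick a heading offset from range readings per angular sector.

    sector_ranges maps heading offsets in degrees (0 = straight ahead)
    to the nearest obstacle distance in that sector.  Keep the current
    heading while the path ahead is clear; otherwise steer toward the
    sector with the most clearance.
    """
    if sector_ranges.get(0, float("inf")) > safe_distance:
        return 0  # path ahead is clear, no course change
    return max(sector_ranges, key=sector_ranges.get)
```

With a clear path ahead the robot keeps its heading; with an obstacle dead ahead it turns toward whichever flank reports the largest free range.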
  • the mobile electronic device 300 further includes a charging post 380, which may include an image processor 386, a path planning module 388, a memory 384 (e.g., a memory data module), a first wireless signal transmitter 381 (e.g., UWB), and a second wireless signal receiver 382 (e.g., implemented by WIFI).
  • the body of the mobile electronic device 300, e.g., a robot, may include a first wireless signal receiver 302 (e.g., implemented by UWB), a first camera 310, a map positioning module 304, an obstacle avoidance module 306, a motion module 308, sensors 314 and 316, an encoder 318, and a second wireless signal transmitter 320 (e.g., implemented by WIFI).
  • the second mobile electronic device 340 such as a mobile phone, also includes a mobile phone APP and a second camera.
  • at least one of the image processor 386, the path planning module 388, and the memory 384 may also be included in the body of the mobile electronic device 300.
  • as shown in FIG. 3, the first wireless signal transmitter 381 in the smart charging post 380 is communicably coupled to the first wireless signal receiver 302 in the cleaning robot 300, and the second wireless signal transmitter 320 in the cleaning robot 300 is communicably coupled to the second wireless signal receiver 382 in the smart charging post 380.
  • the path planning module 388 in the smart charging post 380 is communicatively coupled to the motion module 308 in the cleaning robot 300.
  • the handset 340, i.e., the second mobile electronic device, is communicably coupled to the second wireless signal receiver 382 in the smart charging post 380.
  • Path planning module 388 sends the generated path to motion module 308 for execution.
  • the first wireless signal receiver 302 is communicably coupled to the map location module 304.
  • the map positioning module 304 is in turn communicatively coupled to the second wireless signal transmitter 320 and the motion module 308.
  • the first camera 310 is communicably coupled to the second wireless signal transmitter 320.
  • Ultrasonic sensor 314, laser sensor 316, and encoder 318 are communicatively coupled to obstacle avoidance module 306.
  • the obstacle avoidance module 306 is communicatively coupled to the motion module 308. There is also information interaction between the positioning module 304 and the motion module 308.
  • the motion module 308 requires the position information input by the positioning module 304 when executing a planned path.
  • inside the smart charging post 380, the second wireless signal receiver 382 is communicably coupled to the memory 384.
  • Memory 384 is communicably coupled to image processor 386.
  • Image processor 386 is communicatively coupled to path planning module 388.
  • FIG. 4 shows a flow diagram of a method 400 in a mobile electronic device in accordance with one embodiment of the present invention.
  • the mobile electronic device includes a first wireless signal transceiver, an image processor, a positioning module, a path planning module, and a motion module.
  • the method 400 includes, in block 410, acquiring, through the first wireless signal transceiver communicably connected to the second mobile electronic device, a photo of the mission location taken by a user of the second mobile electronic device, together with an area selected on the photo;
  • in block 420, extracting, through the image processor communicably coupled to the first wireless signal transceiver, feature information of the photo containing the selected region, and determining the actual coordinate range corresponding to the selected area in the photo by comparing the extracted feature information with the feature information of the stored image map containing location information; in block 430, recording, through the positioning module communicably coupled to the image processor, the distance range between the current location of the mobile electronic device and the actual coordinate range corresponding to the selected area; in block 440, generating a path planning scheme, through the path planning module communicatively coupled to the image processor, based on the actual coordinate range corresponding to the selected region; and in block 450, performing motion according to the path planning scheme through the motion module communicably connected to the path planning module.
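One standard way to realize the pixel-to-world conversion behind block 420 — the patent describes a camera-to-world coordinate transform but not this specific construction — is a planar homography, assuming the selected region lies on the floor plane and a 3x3 matrix `H` from pixel to world coordinates was estimated during map building:

```python
import numpy as np

def pixels_to_world(H, pixel_points):
    """Map pixel coordinates of points on the floor plane to world
    coordinates via a 3x3 planar homography H (assumed already
    estimated during map building / calibration).
    """
    pts = np.asarray(pixel_points, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    world = (H @ homog.T).T
    return world[:, :2] / world[:, 2:3]               # perspective divide
```

Applying this to the discretized boundary points of the user-circled region yields the actual world-coordinate range of the task area.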
  • optionally, the mobile electronic device further comprises a first camera, the second mobile electronic device further comprises a second wireless signal transceiver, and the mobile electronic device is configured to operate in map-building mode.
  • the method 400 then further comprises (not shown): determining the positions of the mobile electronic device and the second mobile electronic device through the first and second wireless signal transceivers, each communicably connected to the plurality of reference wireless signal sources, based on signal strengths obtained from those sources; following the movement of the second mobile electronic device, through the motion module, according to the positions of the two devices; capturing, through the first camera, multiple images during the movement of the motion module, the images including feature information and corresponding shooting-position information; and generating the image map through the image processor communicably connected to the first camera, by stitching the plurality of images and extracting their feature information and shooting-position information.
  • optionally, the mobile electronic device further includes a display screen and is configured to operate in map-building mode, and the second mobile electronic device includes a second camera.
  • the method 400 then further includes (not shown): determining the location of the mobile electronic device through the first wireless signal transceiver communicably coupled to the plurality of reference wireless signal sources, based on the signal strengths obtained from them; following the movement of the second mobile electronic device through the motion module, according to the positions of the mobile electronic device and the second mobile electronic device; displaying a black-and-white checkerboard on the display screen of the mobile electronic device; receiving, through the image processor communicably connected to the second camera, a plurality of images captured by the second camera while the motion module moves, wherein the plurality of images include images of the mobile electronic device's display screen showing the black-and-white checkerboard; and generating the image map by stitching the plurality of images through the image processor and extracting their feature information and shooting-position information.
  • optionally, where the second mobile electronic device further includes a second wireless signal transceiver and a second camera, the method 400 further comprises (not shown): determining the position of the second mobile electronic device through the second wireless signal transceiver communicably connected to the plurality of reference wireless signal sources, according to the signal strengths acquired from them; capturing a plurality of images of the mission site through the second camera; and generating the image map through the image processor communicably connected to the second camera, by stitching the plurality of images and extracting their feature information and shooting-position information.
  • the method 400 further includes (not shown) performing path planning for the selected area through the path planning module, using a grid-based spanning-tree path planning algorithm.
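The grid-based spanning-tree path planning named here follows the Spanning Tree Coverage idea: the target region is gridded, a spanning tree is built over the free cells, and the coverage path circumnavigates the tree. The sketch below is a simplified assumption-laden version that builds only the depth-first spanning tree plus a walk that retraces edges when backtracking; a full implementation would instead follow a Hamiltonian loop around the tree at half-cell resolution so no cell is revisited.

```python
def spanning_tree_coverage(free_cells, start):
    """Depth-first spanning tree over the free cells of a grid, plus a
    walk that traverses it.  free_cells is a set of (row, col) tuples;
    the walk visits every free cell and returns to the start, retracing
    tree edges on backtracking.
    """
    tree, walk, seen = [], [start], {start}

    def dfs(cell):
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nxt in free_cells and nxt not in seen:
                seen.add(nxt)
                tree.append((cell, nxt))  # spanning-tree edge
                walk.append(nxt)
                dfs(nxt)
                walk.append(cell)         # backtrack along the tree edge

    dfs(start)
    return tree, walk
```

On a 2x2 block of free cells the tree has three edges and the walk covers all four cells before returning to the start.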
  • the method 400 further includes (not shown) assisting the first camera, through an encoder and an inertial measurement module communicably coupled to the image processor, in acquiring the position and attitude of the mobile electronic device.
  • the mobile electronic device further includes a charging post, wherein the charging post includes an image processor, a path planning module, and a positioning module.
  • the sensor comprises an ultrasonic sensor and/or a laser sensor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

A mobile electronic device (100) comprises a first wireless signal transceiver (102), an image processor (104), a positioning module (106), a path planning module (108), and a motion module (110). The first wireless signal transceiver (102) acquires a photo of a mission location taken by a user of a second mobile electronic device (140) and an area selected on the photo; the image processor (104) extracts feature information of the photo containing the selected area and, by comparing the extracted feature information with feature information of a stored image map containing location information, determines the actual coordinate range of the selected area in the photo; the positioning module (106) records the distance range between the current location of the mobile electronic device (100) and the actual coordinate range of the task area; the path planning module (108) generates a path planning scheme according to the actual coordinate range of the selected area; and the motion module (110) moves according to the path planning scheme.

Description

Mobile electronic device and method for processing a task of a task area — Technical Field
The present invention relates to the field of electronic devices, and in particular to the field of intelligent robot systems.
Background Art
A conventional cleaning robot localizes and moves autonomously according to a scanned map, or walks randomly by changing direction on collision and rebound, while cleaning the floor. Because its mapping and localization technology is immature or imprecise, a conventional cleaning robot cannot fully judge complex floor conditions during operation and easily loses track of its position and orientation. In addition, some models, lacking any localization capability, can only change direction through the physics of collision and rebound, which may damage household items or the robot itself, or even cause personal injury, and disturbs the user.
Summary of the Invention
The present invention proposes a technique in which a user delimits a target operation (cleaning) area using a mobile phone APP and sends an instruction to a robot so that it automatically completes the operation (cleaning) within the delimited area. To realize the delimitation function, three ways of building an indoor environment map are proposed. In addition, a path planning algorithm is realized that accurately reaches the area delimited in the user's mobile phone APP and effectively covers it to complete the cleaning.
本发明的一个实施例公开了一种用于处理任务区域的任务的移动电子设备,包括第一无线信号收发器、图像处理器、定位模块、路径规划模块以及运动模块,其中:所述第一无线信号收发器可通信地连接到第二移动电子设备,配置为获取由所述第二移动电子设备的用户对任务场所所拍摄的照片和在所述照片上选定区域;所述图像处理器可通信地连接至所述 第一无线信号收发器,配置为提取包含所述选定区域的照片的特征信息,并通过比较提取的特征信息和存储的包含位置信息的图像地图的特征信息,确定与所述照片中的所述选定区域相对应的实际坐标范围;所述定位模块可通信地连接至所述图像处理器,配置为记录所述移动电子设备的当前所在位置和与所述选定区域相对应的实际坐标范围之间的距离范围;所述路径规划模块可通信地连接至所述图像处理器,配置为根据与所述选定区域相对应的实际坐标范围,生成路径规划方案;所述运动模块可通信地连接至所述路径规划模块,配置为根据所述路径规划方案,进行运动。
本发明的另一个实施例公开了一种在移动电子设备中用于处理任务区域的任务的方法,所述移动电子设备包括第一无线信号收发器、图像处理器、定位模块、路径规划模块以及运动模块,所述方法包括:通过可通信地连接到第二移动电子设备所述第一无线信号收发器,获取由所述第二移动电子设备的用户对任务场所所拍摄的照片和在所述照片上选定区域;通过可通信地连接至所述第一无线信号收发器的所述图像处理器,提取包含所述选定区域的照片的特征信息,并通过比较提取的特征信息和存储的包含位置信息的图像地图的特征信息,确定与所述照片中的所述选定区域相对应的实际坐标范围;通过可通信地连接至所述图像处理器的所述定位模块,记录所述移动电子设备的当前所在位置和与所述选定区域相对应的实际坐标范围之间的距离范围;通过可通信地连接至所述图像处理器的所述路径规划模块,根据与所述选定区域相对应的实际坐标范围,生成路径规划方案;通过可通信地连接至所述路径规划模块的所述运动模块,根据所述路径规划方案,进行运动。
附图说明
本发明的更完整的理解通过参照关联附图描述的详细的说明书所获得,在附图中相似的附图标记指代相似的部分。
图1示出根据本发明的一个实施例的移动电子设备所在系统的示意图。
图2A和图2B分别示出了根据本发明的一个实施例的利用第二移动电子设备的第二摄像头所拍摄的任务区域,以及第二移动电子设备对任务区域的圈定。
图3示出根据本发明的一个实施例的移动电子设备、第二移动电子设备所在系统的示意图。
图4示出了根据本发明的一个实施例的在移动电子设备中的方法流程图。
图5示出了移动电子设备100上的显示屏所显示的黑白相间的矩形构成的棋盘图的示意图。
具体实施方式
图1示出根据本发明的一个实施例的移动电子设备所在系统的示意图。
参照图1,移动电子设备100包括但不限于扫地机器人、工业自动化机器人、服务型机器人、排险救灾机器人、水下机器人、空间机器人、无人机等。可以理解,为了与以下的第二移动电子设备140相区别,移动电子设备100也可以称为第一移动电子设备100。
第二移动电子设备140包括但不限于:手机、平板电脑、笔记本电脑、遥控器等。移动电子设备可选地包含操作界面。在一个可选的实施方式中,第二移动电子设备是手机,操作界面是手机APP。
移动电子设备100与充电桩160之间的信号传输方式包括但不限于:蓝牙、WIFI、ZigBee、红外、超声波、超宽带(Ultra-wide Bandwidth,UWB)等,在本实施例中以信号传输方式是WIFI为例进行描述。
任务区域表示移动电子设备100执行任务的场地。例如,当移动电子设备100的任务为扫地机器人时,任务区域表示扫地机器人需要清扫的区域。又例如,当移动电子设备100的任务为排险救灾机器人时,任 务区域表示该排险救灾机器人需要抢险的区域。任务场所表示包含整个任务区域的场地。
如图1所示,用于处理任务区域的任务的移动电子设备,包括第一无线信号收发器102、图像处理器104、定位模块106、路径规划模块108以及运动模块110。第一无线信号收发器可102通信地连接到第二移动电子设备140,配置为获取由第二移动电子设备140的用户对任务场所所拍摄的照片和在照片上选定区域。
图2A和图2B分别示出了根据本发明的一个实施例的利用第二移动电子设备140的第二摄像头144所拍摄的任务区域,以及第二移动电子设备140的用户对选定区域的圈定。
以第二移动电子设备140为手机,并且任务区域为清洁区域为例进行说明。如图2A和2B所示,当启动清扫任务时,第二移动电子设备140的用户通过手机APP,使用第二移动电子设备140上的第二摄像头144对需要清扫的位置拍照(如图2A所示),并在照片中圈定目标清扫区域(如图2B所示)。该照片(包含圈定的目标清扫区域)通过本地无线通信网络(WIFI等)传送至移动电子设备100并存储于存储器116中。
图像处理器104可通信地连接至第一无线信号收发器102,配置为提取包含选定区域的照片的特征信息,并通过比较提取的特征信息和存储的包含位置信息的图像地图的特征信息,确定与照片中的选定区域相对应的实际坐标范围。该位置信息指在建立地图过程中,图像地图中图像特征点的定位信息,即实际坐标位置。该位置信息例如包括充电桩180的位置和/或移动电子设备100本身的位置。例如,图像处理器104可以将充电桩180的位置作为坐标原点。
移动电子设备100的存储器116中存储了首次使用时建立室内环境地图过程中所建立的图像地图,例如室内图像地图信息,其中包括图像特征点及其位置信息。此外,图像处理器104还提取所拍摄的照片中的特征 信息和位置信息,并进一步利用图像特征点匹配算法(如SIFT,SURF等)与存储器116中的室内图像地图(含位置信息)进行快速比对分析。根据第二移动电子设备140的用户在手机APP中选定区域的像素特征及相对位置,移动电子设备100中的图像处理器104通过比对室内图像地图中的图像特征点,确定与照片中第二移动电子设备140的用户选定区域相对应的室内实际区域的坐标范围。与用户选定区域相对应的室内实际区域范围的确定方式如下:例如,图像处理器104可以在用户原始选定区域,例如图2B中由手指图案指示的圈中区域的基础上适当增加相应的百分比范围,例如,增加10%的范围,从而保证手指图案指示的选定区域在实际清扫范围之内,来确定实际坐标范围。可选地,图像处理器104可以将原有的区域向外偏移一定的距离来确定实际坐标范围。可选地,图像处理器104可以模糊构建包括实际坐标范围的标准图形。例如,图中由手指图案指示的选定范围为不规则的近似矩形的图形,图像处理器104可以将该近似矩形转换为对应为矩形的实际坐标范围,从而便于移动电子设备进行清扫,完成清扫任务。图像特征点可以采用基于尺度不变特征变换(Scale Invariant Feature Transform,SIFT)算法或加速稳健特征(Speeded Up Robust Features,SURF)算法识别上述特征。采用SIFT算法,需要在存储器116中存储参考图像。图像处理器1040首先识别存储在存储器110中的参考图像的对象的关键点,提取SIFT特征,然后通过比较存储器110中的各个关键点SIFT特征与新采集的图像的SIFT特征,再基于K最邻近算法(K-Nearest Neighbor KNN)的匹配特征,来识别新图像中的对象。SURF算法是基于近似的2D哈尔小波(Haar wavelet)响应,并利用积分图像(integral images)进行图像卷积,使用了基于Hessian矩阵的测度去构造检测子(Hessian matrix-based measure for the detector),并使用了基于分布的描述子(a distribution-based descriptor)。
可选地或者附加地,确定与照片中第二移动电子设备140的用户选定区域相对应的室内实际区域的坐标范围可以通过坐标映射转换确定任务 区域的实际坐标范围。第二移动电子设备140中的图像中的特征点将与图像地图中的图像特征点匹配,即可确定第二移动电子设备140中的图像中的特征点的实际坐标位置。同时,匹配后可以计算出用户相机拍摄到图像所在的相机坐标系相对充电桩所在的实际世界坐标系的坐标系转换关系。图像中圈定区域边界线可以离散化为由点组成的边界线。图像中边界线上离散化的点相对图像特征点的位置信息,图像特征点的实际坐标位置,以及坐标系转换关系,可以用于计算边界线上离散化点在实际世界坐标系(即充电桩坐标系)中的实际坐标位置,即边界线对应的室内实际区域的坐标范围。
定位模块106可通信地连接至图像处理器104,配置为记录移动电子设备100的当前所在位置和与选定区域相对应的实际坐标范围之间的距离范围。例如,定位模块106将充电桩180所在处设为坐标原点,图像中的每一个点对应一个坐标值(X,Y)。定位模块106和编码器使得移动电子设备100知道自己目前的位置。定位模块106是计算第一电子设备100在室内位置的模块。第一电子设备100在工作时都都要时刻知道自己的室内位置,都通过定位模块106来实现。
路径规划模块108可通信地连接至图像处理器104,配置为根据与选定区域相对应的实际坐标范围,生成路径规划方案。可选地,路径规划模块108还用于采用基于网格的生成树路径规划算法,对选定区域进行路径规划。例如,路径规划模块108根据生成的对应区域坐标范围(用户圈定的目标清扫区域),对该坐标范围规划优化清扫路径。采用基于网格的生成树路径规划算法(Grid-based Spanning Tree Path Planning)实现对选定目标清扫区域的清扫路径规划。该方法将对应坐标区域采用网格化处理,对网格建立树节点并生成树,然后使用包围生成树的哈密尔顿回路(Hamiltonian path)作为清扫该区域的优化清扫路径。
此外,初始时,移动电子设备100位于智能充电桩180处。对于移动电子设备100如何从智能充电桩180到达选定区域的坐标范围区域,路 径规划模块108将读取首次使用时移动电子设备100跟随到达该区域的路径(如果移动电子设备100采用跟随模式),或者采用第二移动电子设备140的用户建图过程中的行走路径作为到达该区域的路径(如果首次使用时,移动电子设备100不跟随用户的情况),并将该路径与选定区域优化清扫路径合成清扫任务路径。该合成可以将两段路径做简单顺序连接,第一段路径实现到达目标清扫区域,第二段路径实现对圈定清扫区域的优化覆盖,完成清洁任务。
然后,上述任务被发送至移动电子设备100自动执行。例如,运动模块110可通信地连接至路径规划模块108,配置为根据路径规划方案,进行运动。
以下分别描述移动设备100首次使用时建立室内环境地图的多种方式。
方式一:移动电子设备100(例如机器人)包含摄像头,且第二移动电子设备140的用户佩戴定位接收器
可选地,移动电子设备100还包括第一摄像头112,其中,第二移动电子设备140还包括第二无线信号收发器142,移动电子设备100被配置为工作在建立地图模式。第一无线信号收发器102和第二无线信号收发器142分别可通信地连接到多个参考无线信号源,配置为根据从多个参考无线信号源获取的信号强度,确定移动电子设备100和第二移动电子设备140的位置。例如,可通过本领域已知的任何方法将从参考无线信号源处接收到的信号转换为距离信息,上述方法包括但不限于:飞行时间算法(Time of Flight,ToF)、到达角度算法(Angle of Arrival,AoA)、到达时间差算法(Time Difference of Arrival,TDOA)和接收信号强度算法(Received Signal Strengh,RSS)。
运动模块110被配置为根据移动电子设备100和第二移动电子设备140的位置,跟随第二移动电子设备140的运动。例如,移动电子设备100包含单目摄像头112,第二移动电子设备140的用户佩戴有无线定位 接收器手环,或用户手持装备有无线定位接收器外设的手机。使用单目摄像头112可以降低硬件成本与计算代价,采用单目摄像头实现与采用深度摄像头一样的效果。可以不需要图像深度信息。距离深度信息通过超声波传感器与激光传感器感知。本实施例中,以单目摄像头为例进行说明,本领域技术人员应能理解,也可以采用深度摄像头等作为移动电子设备100的摄像头。移动电子设备100通过自身的无线定位接收器跟随用户。例如,首次使用,第二移动电子设备140的用户通过手机APP实现与移动电子设备100的交互完成室内建立地图。通过室内放置的固定位置的无线信号发射组作为参考点,例如,UWB,第二移动电子设备140的手机APP和移动电子设备100中的无线信号模块读取对各个信号源的信号强度(RSS),来确定第二移动电子设备140的用户和移动电子设备100在室内的位置。并且,移动电子设备100的运动模块110根据智能充电桩发送的实时位置信息(手机和机器人位置),完成用户跟随。
第一摄像头112被配置为在运动模块110运动时拍摄多个图像,该多个图像包含特征信息和对应的拍摄位置信息。例如,跟随过程中由机器人的单目摄像头完成建图。在跟随的过程中,移动电子设备100利用第一摄像头112,例如单目摄像头,对整个室内布局进行拍摄,并将拍下的含大量特征的图像及其对应的拍摄位置信息以及移动电子设备100跟随路径坐标,经过本地无线通信网络(WIFI、蓝牙、ZigBee等)实时传送至存储器116中。在图1中,存储器116被显示包含在移动电子设备100中。可选地,存储器116也可以包含在智能充电桩180中,也即云端。
图像处理模器104可通信地连接到第一摄像头112,被配置为通过对该多个图像进行拼接,提取该多个图像中的特征信息和拍摄位置点信息,生成图像地图。例如,图像处理模器104根据移动电子设备100的第一摄像头112的高度和内外参数,经由图像处理器104对第一摄像头112所拍摄的大量图像进行地图拼接创建,特征选择提取(例如SIFT、SURF算法等),特征点位置信息添加,进而生成室内图像地图信息(含大量图像特 征点),再将处理后的图像地图信息存储在存储器116中。相机(摄像头)的内参数指与相机自身特性相关的参数,比如相机的镜头焦距、像素大小等;相机的外参数是在世界坐标系中(充电桩室内实际坐标系)的参数,比如相机的位置、旋转方向、角度等。相机拍摄的照片有自己的相机坐标系,故需要相机内外参数实现坐标系的转换。
方式二:移动电子设备100(机器人)包含摄像头以及可显示相机校正黑白棋盘,第二移动电子设备140的用户无需佩戴定位接收器。
可选地,在另一实施例中,移动电子设备100还包括显示屏118,移动电子设备100被配置为工作在建立地图模式,第二移动电子设备140包括第二摄像头144,第一无线信号收发器142可通信地连接到多个参考无线信号源,配置为根据从多个参考无线信号源获取的信号强度,确定移动电子设备100的位置。
第一摄像头112被配置为检测第二移动电子设备140的位置。可选地,移动电子设备100还包括超声波传感器及激光传感器,可以检测移动电子设备100与第二移动电子设备140之间的距离。
运动模块110被配置为根据移动电子设备100和第二移动电子设备140的位置,跟随第二移动电子设备140的运动。例如,首次使用,第二移动电子设备140的用户通过手机APP实现与移动电子设备100的用户交互来完成室内建立地图。通过室内放置的固定位置的无线信号发射组(UWB等)作为参考点,移动电子设备100中的第一无线信号收发器102读取对各个信号源的信号强度(RSS),来确定移动电子设备100在室内的位置。通过移动电子设备100的第一摄像头112,例如单目摄像头、超声波传感器及激光传感器114实现对第二移动电子设备100的用户的目标定位与跟随。例如,第二移动电子设备140的用户可以通过手机APP设定跟随距离,从而移动电子设备100根据该跟随距离和实时测量的与第二移动电子设备140之间的角度,调整与第二移动电子设备140之间的距离和 角度。跟随过程中移动电子设备100实时发送跟随路径坐标至智能充电桩180。
此外,移动电子设备100的显示屏118被配置为显示例如黑白色棋盘。图像处理器104可通信地连接到第二摄像头144,被配置为接收来自第二摄像头144在运动模块110运动时拍摄的多个图像。例如,图像处理器104可通过第一无线信号收发器102和第二无线信号收发器142,接收来自第二摄像头144所拍摄的多个图像。其中,多个图像包括移动电子设备100的显示为黑白色棋盘的显示屏118的图像。图像处理器104还被配置为通过对多个图像进行拼接,提取多个图像中的特征信息和拍摄位置点信息,生成图像地图。在该方式中第二移动电子设备140的用户无需佩戴定位接收器,故第二移动设备140,例如手机相机的外参数需要通过标定图进行相机标定。标定图片是黑白相间的矩形构成的棋盘图,如图5所示。
例如,移动电子设备100,也即机器人包含第一摄像头112,例如为单目摄像头,及可显示黑白色相机校正棋盘的显示屏118。用户无需佩戴无线定位接收器手环,也无需用户手持装备无线定位接收器外设的手机,移动电子设备100通过视觉跟随用户,第二移动电子设备140的用户使用手机APP拍照完成建图。例如,每到达一个房间,第二移动电子设备140的用户通过手机APP启动房间建图应用,此时的移动电子设备100的液晶显示屏118显示用于校正相机的经典黑白色棋盘。移动电子设备100同时将此时自身的坐标及方向信息发送至定位模块106。此时,第二移动电子设备140的用户使用手机APP拍摄该房间环境,拍摄的照片需要包括移动电子设备100的液晶显示屏中的黑白色棋盘。第二移动电子设备140的用户根据房间布局情况拍摄多张照片(照片均需要拍到机器人液晶屏中的黑白色棋盘),并通过手机APP将拍下的含房间环境及移动电子设备100,例如机器人100的图像经过本地无线通信网络(WIFI、蓝牙、ZigBee等)传送至存储器116中。根据移动电子设备100,例如机器人当时的位置和方向信息、摄像头112的高度和内外参数,经由图像处理器104对第二移 动电子设备140的用户拍摄的大量图像进行地图拼接创建,特征选择提取,特征点位置信息添加,生成室内图像特征点地图信息,再将处理后的图像地图信息存储在存储器116中。
方式三:移动电子设备100(机器人)不包含摄像头,第二移动电子设备140的用户佩戴定位接收器。
可选地,在另一个实施例中,第二移动电子设备140还包括第二无线信号收发器142和第二摄像头144。第二无线信号收发器142可通信地连接到多个参考无线信号源,配置为根据从多个参考无线信号源获取的信号强度,确定第二移动电子设备140的位置。第二摄像头144被配置为拍摄任务场所的多个图像。图像处理器104可通信地连接到第二摄像头140,被配置为通过对多个图像进行拼接,提取多个图像中的特征信息和拍摄位置点信息,生成图像地图。
例如,在该实施例中,移动电子设备100,例如机器人不包含单目摄像头且机器人不跟随第二移动电子设备140的用户。第二移动电子设备140的用户佩戴无线定位接收器手环,或用户手持装备有无线定位接收器外设的手机,使用手机APP完成室内建图。例如,首次使用,第二移动电子设备140的用户通过手机APP或用户佩戴的无线定位接收器手环或手机装备的无线定位接收器外设实现室内建立地图。通过室内放置的固定位置的参考无线信号源(UWB等)作为参考点,第二移动电子设备140中的无线信号收发器142读取对各个参考无线信号源的信号强度(Received Signal Strength,RSS)来确定该第二移动电子设备140的用户在室内的位置。每到达一个房间,第二移动电子设备140的用户通过手机APP启动房间建图程序。第二移动电子设备140的用户使用手机APP拍摄该房间环境,例如,根据房间布局情况可拍摄多张照片。第二移动电子设备140的手机APP将记录每次拍摄的第二摄像头144的位姿信息以及第二无线信号收发器142记录的第二移动电子设备140,例如手机相对地面的高度信息和其在室内的位置信息,并通过本地无线通信网络(WIFI、蓝牙、ZigBee等) 传送至存储器116中。根据第二摄像头144的内外参数信息以及拍摄时的位姿信息、高度信息及位置信息,经由图像处理器104对拍摄的大量图像进行地图拼接创建,特征选择提取,特征点位置信息添加,生成室内图像特征点地图信息,再将处理后的图像地图信息存储在存储器116中。
可选地或者附加地,移动电子设备100,例如,机器人100还包括编码器和惯性测量模块(IMU),以辅助第一摄像头112获取移动电子设备100,例如机器人的位置和姿态。例如当机器人被遮蔽,不在第一摄像头112的视线中时,编码器和IMU都还能提供机器人的位置和姿态。例如,编码器可以作为里程计,通过记录机器人轮子的转动信息,来计算机器人走过的轨迹。
可选地或者附加地,移动电子设备100还可包含传感器114,传感器114将移动电子设备100周围的障碍物信息发送至运动模块110。运动模110还配置为调整移动电子设备100的运动方位以避开障碍物。可以理解,因为安装的高度不同,安装在移动电子设备100上的第一摄像头112与安装在移动电子设备100上的传感器114的高度不同,因此第一摄像头112所拍摄的障碍物信息与传感器所拍摄的障碍物可能不同,因为可能存在遮蔽。第一摄像头112可以通过旋转,俯仰等方式改变视觉方向,以获取更广的视觉范围。此外,传感器114可以安装在比较低的水平位置,而这个位置有可能是第一摄像头112的盲区,物体不出现在第一摄像头112的视角中,那么就得依靠这些传统传感器112来避障。可选地,摄像头112可以获取障碍物信息,并结合超声和激光传感器114的信息。单目摄像头112获得的图像做物体识别,超声和激光传感器114测距。
可选地或者可替代地,传感器114包括超声波传感器和/或激光传感器。第一摄像头112和传感器114可以相互辅助。例如,如有遮蔽时,在被遮蔽的局部,移动电子设备100需要靠自身的激光传感器、超声波传感器114等来进行避障。
例如,移动电子设备100搭载的激光传感器、超声波传感器对移动电子设备100周围静态、动态环境进行检测,辅助避开静态、动态障碍物以及调整最优路径。
图3示出根据本发明的一个实施例的移动电子设备、第二移动电子设备所在系统的示意图。可选地或者可替代地,移动电子设备300还包括充电桩380,充电桩380可以包括图像处理器386、路径规划模块388、存储器384,例如内存数据模块、第一无线发射机381(例如UWB)和第二无线信号接收机382,例如由WIFI实现。移动电子设备300的本体,例如,机器人可以包括第一无线信号接收机302,例如由UWB实现、第一摄像头310、地图定位模块304、避障模块306、运动模块308和传感器314和316,以及编码器318、第二无线信号接收机320,例如由WIFI实现。第二移动电子设备340,例如手机,还包括手机APP和第二摄像头。可选地,图像处理器386、路径规划模块388、存储器384中的至少一个也可以包含在移动电子设备300的本体中。如图3所示,智能充电桩380中的第一无线发射机381与扫地机器人300中的第一无线信号接收机302可通信地连接,扫地机器人300中的第二无线信号发射机320与智能充电桩380中的第五无线信号接收机382可通信地连接。智能充电桩380中的路径规划模块388与扫地机器人300中的运动模块308可通信地连接。第二电子移动设备的手机340与智能充电桩380中的第二无线信号接收机382可通信地连接。路径规划模块388将生成的路径发送至运动模块308执行。
在扫地机器人300内部,中的第一无线信号接收机302与地图定位模块304可通信地连接。地图定位模块304通信又与第二无线信号发射机320和运动模块308可通信地连接。第一摄像头310与第二无线信号发射机320可通信地连接。超声波传感器314、激光传感器316和编码器318与避障模块306可通信地连接。避障模块306与运动模块308可通信地连 接。定位模块304与运动模块308之间亦有信息交互,运动模块308在执行规划路径时需要定位模块304输入的位置信息。
在智能充电桩380内部,第二无线信号接收机382可通信地连接与存储器384可通信地连接。存储器384与图像处理器386可通信地连接。图像处理器386与路径规划模块388可通信地连接。
图4示出了根据本发明的一个实施例的在移动电子设备中的方法400的流程图。在移动电子设备中用于处理任务区域的任务的方法400。其中移动电子设备包括第一无线信号收发器、图像处理器、定位模块、路径规划模块以及运动模块。方法400包括在块410中,通过可通信地连接到第二移动电子设备第一无线信号收发器,获取由第二移动电子设备的用户对任务场所所拍摄的照片和在照片上选定区域;在块420中,通过可通信地连接至第一无线信号收发器的图像处理器,提取包含选定区域的照片的特征信息,并通过比较提取的特征信息和存储的包含位置信息的图像地图的特征信息,确定与照片中的选定区域相对应的实际坐标范围;在块430中,通过可通信地连接至图像处理器的定位模块,记录移动电子设备的当前所在位置和与选定区域相对应的实际坐标范围之间的距离范围;在块440中,通过可通信地连接至图像处理器的路径规划模块,根据与选定区域相对应的实际坐标范围,生成路径规划方案;在块450中,通过可通信地连接至路径规划模块的运动模块,根据路径规划方案,进行运动。
可选地或者可替代地,移动电子设备还包括第一摄像头,第二移动电子设备还包括第二无线信号收发器,移动电子设备被配置为工作在建立地图模式,方法400还包括(图中未示出)分别通过可通信地连接到多个参考无线信号源的第一无线信号收发器和第二无线信号收发器,根据从多个参考无线信号源获取的信号强度,确定移动电子设备和第二移动电子设备的位置;通过运动模块,根据移动电子设备和第二移动电子设备的位置,跟随第二移动电子设备的运动;通过第一摄像头,在运动模块运动时拍摄多个图像,多个图像包含特征信息和对应的拍摄位置信息;通过可通信地 连接到第一摄像头的图像处理模器,通过对多个图像进行拼接,提取多个图像中的特征信息和拍摄位置点信息,生成图像地图。
可选地或者可替代地,移动电子设备还包括显示屏,移动电子设备被配置为工作在建立地图模式,第二移动电子设备包括第二摄像头,方法400还包括(图中未示出)通过可通信地连接到多个参考无线信号源的第一无线信号收发器,根据从多个参考无线信号源获取的信号强度,确定移动电子设备的位置;通过运动模块,根据移动电子设备和第二移动电子设备的位置,跟随第二移动电子设备的运动;通过移动电子设备的显示屏,显示黑白色棋盘;通过可通信地连接到第二摄像头的图像处理器,接收来自第二摄像头在运动模块运动时拍摄的多个图像,其中,多个图像包括移动电子设备的显示为黑白色棋盘的显示屏的图像,通过图像处理器,通过对多个图像进行拼接,提取多个图像中的特征信息和拍摄位置点信息,生成图像地图。
可选地或者可替代地,其中第二移动电子设备还包括第二无线信号收发器和第二摄像头,方法400还包括(图中未示出)通过可通信地连接到多个参考无线信号源的第二无线信号收发器,根据从多个参考无线信号源获取的信号强度,确定第二移动电子设备的位置;通过第二摄像头,拍摄任务场所的多个图像,通过可通信地连接到第二摄像头的图像处理器,通过对多个图像进行拼接,提取多个图像中的特征信息和拍摄位置点信息,生成图像地图。
可选地或者可替代地,方法400还包括(图中未示出)通过路径规划模块,采用基于网格的生成树路径规划算法,对选定区域进行路径规划。
可选地或者可替代地,方法400还包括(图中未示出)还包括通过可通信地连接到图像处理器的编码器和惯性测量模块,为辅助第一摄像头获取移动电子设备的位置和姿态。
可选地或者可替代地,移动电子设备还包括充电桩,其中充电桩包括图像处理器、路径规划模块和定位模块。
可选地或者可替代地,移动电子设备还可包含传感器,方法400还包括(图中未示出)通过传感器,将移动电子设备周围的障碍物信息发送至运动模块,通过运动模块,调整移动电子设备的运动方位以避开障碍物。
可选地或者可替代地,传感器包括超声波传感器和/或激光传感器。
在前面的描述中,已经参考具体示例性实施例描述了本发明;然而,应当理解,在不脱离本文所阐述的本发明的范围的情况下,可以进行各种修改和变化。说明书和附图应以示例性的方式来看待,而不是限制性的,并且所有这些修改旨在被包括在本发明的范围内。因此,本发明的范围应由本文的一般实施例及其合法等效物、而不是仅由上述具体实施例来确定。例如,任何方法或过程实施例中的步骤可以任何顺序执行,并且不限于在具体实施例中呈现的明确顺序。另外,在任何装置实施例中的部件和/或元件可以各种排列组装或以其他方式操作地配置,以产生与本发明基本相同的结果,因此不限于具体实施例中的具体配置。
以上已经关于具体实施例描述了益处、其他优点和问题的解决方案;然而,任何益处、优点或问题的解决方案,或可引起任何特定益处、优点或方案发生或变得更明显的任何元件不应被解释为关键的、必需的或基本的特征或部件。
如本文所使用的,术语“包括”、“包含”或其任何变型旨在引用非排他性的包含,使得包括元件列表的过程、方法、物品、组合物或装置不仅包括所述的那些元件,而且也可以包括未明确列出的或固有的主要的过程、方法、物品、组合物或装置。除了未具体叙述的那些之外,在本发明的实践中使用的上述结构、布局、应用、比例、元件、材料或部件的其它组合和/或修改可以被改变,或者以其他方式特别适用于特定的环境、制造规格、设计参数或其他操作要求,而不脱离其大体原则。
虽然本文已经参考某些优选实施例描述了本发明,但是本领域技术人员将容易理解,在不脱离本发明的精神和范围的情况下,其他应用可以替代本文所阐述的那些。因此,本发明仅由下述权利要求书限定。

Claims (18)

  1. 一种用于处理任务区域的任务的移动电子设备,包括第一无线信号收发器、图像处理器、定位模块、路径规划模块以及运动模块,其中:
    所述第一无线信号收发器可通信地连接到第二移动电子设备,配置为获取由所述第二移动电子设备的用户对任务场所所拍摄的照片和在所述照片上选定区域;
    所述图像处理器可通信地连接至所述第一无线信号收发器,配置为提取包含所述选定区域的照片的特征信息,并通过比较提取的特征信息和存储的包含位置信息的图像地图的特征信息,确定与所述照片中的所述选定区域相对应的实际坐标范围;
    所述定位模块可通信地连接至所述图像处理器,配置为记录所述移动电子设备的当前所在位置与所述选定区域相对应的实际坐标范围之间的距离范围;
    所述路径规划模块可通信地连接至所述图像处理器,配置为根据与所述选定区域相对应的实际坐标范围,生成路径规划方案;
    所述运动模块可通信地连接至所述路径规划模块,配置为根据所述路径规划方案,进行运动。
  2. 根据权利要求1所述的移动电子设备,还包括第一摄像头,其中,所述第二移动电子设备还包括第二无线信号收发器,所述移动电子设备被配置为工作在建立地图模式,
    所述第一无线信号收发器和所述第二无线信号收发器分别可通信地连接到多个参考无线信号源,配置为根据从所述多个参考无线信号源获取的信号强度,确定所述移动电子设备和所述第二移动电子设备的位置;
    所述运动模块被配置为根据所述移动电子设备和所述第二移动电子设备的位置,跟随所述第二移动电子设备的运动;
    所述第一摄像头被配置为在所述运动模块运动时拍摄多个图像,所述多个图像包含特征信息和对应的拍摄位置信息,
    所述图像处理模器可通信地连接到所述第一摄像头,被配置为通过对所述多个图像进行拼接,提取所述多个图像中的特征信息和拍摄位置点信息,生成所述图像地图。
  3. 根据权利要求1所述的移动电子设备,还包括显示屏,其中,所述移动电子设备被配置为工作在建立地图模式,所述第一移动电子设备包括第一摄像头,所述第二移动电子设备包括第二摄像头,
    所述第一无线信号收发器可通信地连接到多个参考无线信号源,配置为根据从所述多个参考无线信号源获取的信号强度,确定所述移动电子设备的位置;
    所述第一摄像头被配置为检测所述第二移动电子设备的位置;
    所述运动模块被配置为根据所述移动电子设备和所述第二移动电子设备的位置,跟随所述第二移动电子设备的运动;
    所述移动电子设备的显示屏被配置为显示黑白色棋盘;
    所述图像处理器可通信地连接到所述第二摄像头,被配置为接收来自所述第二摄像头在所述运动模块运动时拍摄的多个图像,其中,所述多个图像包括所述移动电子设备的显示为所述黑白色棋盘的显示屏的图像,所述图像处理器还被配置为通过对所述多个图像进行拼接,提取所述多个图像中的特征信息和拍摄位置点信息,生成所述图像地图。
  4. 根据权利要求1所述的移动电子设备,其中所述第二移动电子设备还包括第二无线信号收发器和第二摄像头,
    其中所述第二无线信号收发器可通信地连接到多个参考无线信号源,配置为根据从所述多个参考无线信号源获取的信号强度,确定所述第二移动电子设备的位置;
    所述第二摄像头被配置为拍摄所述任务场所的多个图像,
    所述图像处理器可通信地连接到所述第二摄像头,被配置为通过对 所述多个图像进行拼接,提取所述多个图像中的特征信息和拍摄位置点信息,生成所述图像地图。
  5. 根据权利要求1所述的移动电子设备,其中所述路径规划模块还用于采用基于网格的生成树路径规划算法,对所述选定区域进行路径规划。
  6. 根据权利要求2所述的移动电子设备,还包括可通信地连接到所述图像处理器的编码器和惯性测量模块,配置为辅助所述摄像头获取所述移动电子设备的位置和姿态。
  7. 根据权利要求1-6中任一项所述的移动电子设备,还包括充电桩,其中所述充电桩包括所述图像处理器、所述路径规划模块和所述定位模块。
  8. 根据权利要求1-6中任一项所述的移动电子设备,还可包含传感器,所述传感器将所述移动电子设备周围的障碍物信息发送至所述运动模块,所述运动模块还配置为调整所述移动电子设备的运动方位以避开所述障碍物。
  9. 根据权利要求6所述的移动电子设备,其中所述传感器包括超声波传感器和/或激光传感器。
  10. 一种在移动电子设备中用于处理任务区域的任务的方法,所述移动电子设备包括第一无线信号收发器、图像处理器、定位模块、路径规划模块以及运动模块,所述方法包括:
    通过可通信地连接到第二移动电子设备所述第一无线信号收发器,获取由所述第二移动电子设备的用户对任务场所所拍摄的照片和在所述照片上选定区域;
    通过可通信地连接至所述第一无线信号收发器的所述图像处理器,提取包含所述选定区域的照片的特征信息,并通过比较提取的特征信息和存储的包含位置信息的图像地图的特征信息,确定与所述照片中的所述选定区域相对应的实际坐标范围;
    通过可通信地连接至所述图像处理器的所述定位模块,记录所述移动电子设备的当前所在位置和与所述选定区域相对应的实际坐标范围之间 的距离范围;
    通过可通信地连接至所述图像处理器的所述路径规划模块,根据与所述选定区域相对应的实际坐标范围,生成路径规划方案;
    通过可通信地连接至所述路径规划模块的所述运动模块,根据所述路径规划方案,进行运动。
  11. 根据权利要求10所述的方法,其中所述移动电子设备还包括第一摄像头,所述第二移动电子设备还包括第二无线信号收发器,所述移动电子设备被配置为工作在建立地图模式,所述方法还包括:
    分别通过可通信地连接到多个参考无线信号源的所述第一无线信号收发器和所述第二无线信号收发器,根据从所述多个参考无线信号源获取的信号强度,确定所述移动电子设备和所述第二移动电子设备的位置;
    通过所述运动模块,根据所述移动电子设备和所述第二移动电子设备的位置,跟随所述第二移动电子设备的运动;
    通过所述第一摄像头,在所述运动模块运动时拍摄多个图像,所述多个图像包含特征信息和对应的拍摄位置信息,
    通过可通信地连接到所述第一摄像头的所述图像处理模器,通过对所述多个图像进行拼接,提取所述多个图像中的特征信息和拍摄位置点信息,生成所述图像地图。
  12. 根据权利要求10所述的方法,其中所述移动电子设备还包括显示屏,所述移动电子设备被配置为工作在建立地图模式,所述第一移动电子设备包括第一摄像头,所述第二移动电子设备包括第二摄像头,
    通过可通信地连接到多个参考无线信号源的所述第一无线信号收发器,根据从所述多个参考无线信号源获取的信号强度,确定所述移动电子设备的位置;
    所述第一摄像头被配置为检测所述第二移动电子设备的位置;
    通过所述运动模块,根据所述移动电子设备和所述第二移动电子设备的位置,跟随所述第二移动电子设备的运动;
    通过所述移动电子设备的显示屏,显示黑白色棋盘;
    通过可通信地连接到所述第二摄像头的所述图像处理器,接收来自所述第二摄像头在所述运动模块运动时拍摄的多个图像,其中,所述多个图像包括所述移动电子设备的显示为所述黑白色棋盘的显示屏的图像,
    通过所述图像处理器,通过对所述多个图像进行拼接,提取所述多个图像中的特征信息和拍摄位置点信息,生成所述图像地图。
  13. 根据权利要求10所述的方法,其中所述第二移动电子设备还包括第二无线信号收发器和第二摄像头,
    通过可通信地连接到多个参考无线信号源的所述第二无线信号收发器,根据从所述多个参考无线信号源获取的信号强度,确定所述第二移动电子设备的位置;
    通过所述第二摄像头,拍摄所述任务场所的多个图像,
    通过可通信地连接到所述第二摄像头的所述图像处理器,通过对所述多个图像进行拼接,提取所述多个图像中的特征信息和拍摄位置点信息,生成所述图像地图。
  14. 根据权利要求10所述的方法,还包括
    通过所述路径规划模块,采用基于网格的生成树路径规划算法,对所述选定区域进行路径规划。
  15. 根据权利要求11所述的方法,还包括
    通过可通信地连接到所述图像处理器的编码器和惯性测量模块,为辅助所述第一摄像头获取所述移动电子设备的位置和姿态。
  16. 根据权利要求10-15中任一项所述的方法,所述移动电子设备还包括充电桩,其中所述充电桩包括所述图像处理器、所述路径规划模块和所述定位模块。
  17. 根据权利要求10-15中任一项所述的方法,其中所述移动电子设备还可包含传感器,所述方法还包括
    通过所述传感器,将所述移动电子设备周围的障碍物信息发送至所 述运动模块,
    通过所述运动模块,调整所述移动电子设备的运动方位以避开所述障碍物。
  18. 根据权利要求15所述的方法,其中所述传感器包括超声波传感器和/或激光传感器。
PCT/CN2018/090579 2017-07-26 2018-06-11 一种用于处理任务区域的任务的移动电子设备以及方法 Ceased WO2019019819A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710620792.6 2017-07-26
CN201710620792.6A CN108459597B (zh) 2017-07-26 2017-07-26 一种用于处理任务区域的任务的移动电子设备以及方法

Publications (1)

Publication Number Publication Date
WO2019019819A1 true WO2019019819A1 (zh) 2019-01-31

Family

ID=63220256

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/090579 Ceased WO2019019819A1 (zh) 2017-07-26 2018-06-11 一种用于处理任务区域的任务的移动电子设备以及方法

Country Status (2)

Country Link
CN (1) CN108459597B (zh)
WO (1) WO2019019819A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115060262A (zh) * 2019-05-09 2022-09-16 深圳阿科伯特机器人有限公司 在地图上定位设备的方法、服务端及移动机器人
CN115705349A (zh) * 2021-08-05 2023-02-17 华为技术有限公司 地图构建的方法及其装置

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109376262B (zh) * 2018-11-12 2022-02-11 万瞳(南京)科技有限公司 一种基于大数据处理的景点离线图像识别方法及其装置
CN109460040A (zh) * 2018-12-28 2019-03-12 珠海凯浩电子有限公司 一种通过手机拍摄照片识别地板建立地图系统及方法
CN110285799B (zh) * 2019-01-17 2021-07-30 杭州志远科技有限公司 一种带有三维可视化技术的导航系统
CN111481111A (zh) * 2019-01-29 2020-08-04 北京奇虎科技有限公司 扫地机内存的使用方法及装置
CN111667531B (zh) * 2019-03-06 2023-11-24 西安远智电子科技有限公司 定位方法及装置
CN113296495B (zh) * 2020-02-19 2023-10-20 苏州宝时得电动工具有限公司 自移动设备的路径形成方法、装置和自动工作系统
CN111784797A (zh) * 2020-06-29 2020-10-16 济南浪潮高新科技投资发展有限公司 一种基于ar的机器人物联网交互方法、装置及介质
CN112256345A (zh) * 2020-10-10 2021-01-22 深圳供电局有限公司 一种基于最先适应算法和遗传算法的计算任务卸载方法
CN116965745A (zh) * 2022-04-22 2023-10-31 追觅创新科技(苏州)有限公司 坐标重定位方法、系统及清洁机器人

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6667592B2 (en) * 2001-08-13 2003-12-23 Intellibot, L.L.C. Mapped robot system
CN101655369A (zh) * 2008-08-22 2010-02-24 环达电脑(上海)有限公司 利用图像识别技术实现定位导航的系统及方法
CN104750008A (zh) * 2015-04-14 2015-07-01 西北农林科技大学 一种ZigBee网络中的农业机器人无线遥控系统
CN105259898A (zh) * 2015-10-13 2016-01-20 江苏拓新天机器人科技有限公司 一种智能手机控制的扫地机器人
CN106725119A (zh) * 2016-12-02 2017-05-31 西安丰登农业科技有限公司 一种基于三维模型定位的扫地机器人导航系统
CN207115193U (zh) * 2017-07-26 2018-03-16 炬大科技有限公司 一种用于处理任务区域的任务的移动电子设备

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101333496B1 (ko) * 2010-11-30 2013-11-28 주식회사 유진로봇 Apparatus and method for controlling a mobile robot based on past map data
CN102866706B (zh) * 2012-09-13 2015-03-25 深圳市银星智能科技股份有限公司 Cleaning robot navigated by a smartphone and its navigation-based cleaning method
US20140309925A1 (en) * 2013-04-14 2014-10-16 Pablo Garcia MORATO Visual positioning system
CN103439973B (zh) * 2013-08-12 2016-06-29 桂林电子科技大学 Self-mapping household cleaning robot and cleaning method
CN205068153U (zh) * 2015-08-07 2016-03-02 浙江海洋学院 Distributed visual positioning system based on a walking robot
CN105352508A (zh) * 2015-10-22 2016-02-24 深圳创想未来机器人有限公司 Robot positioning and navigation method and device
KR20170058612A (ko) * 2015-11-19 2017-05-29 (주)예사싱크 Image-based indoor positioning method and system
CN106292697B (zh) * 2016-07-26 2019-06-14 北京工业大学 Indoor path planning and navigation method for a mobile device
CN106382930B (zh) * 2016-08-18 2019-03-29 广东工业大学 Indoor AGV wireless navigation method and device
CN106444750A (zh) * 2016-09-13 2017-02-22 哈尔滨工业大学深圳研究生院 Intelligent warehousing mobile robot system based on QR-code positioning
CN106647766A (zh) * 2017-01-13 2017-05-10 广东工业大学 Robot cruising method and system based on UWB-vision interaction in complex environments

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN115060262A (zh) * 2019-05-09 2022-09-16 深圳阿科伯特机器人有限公司 Method for locating a device on a map, server, and mobile robot
CN115705349A (zh) * 2021-08-05 2023-02-17 华为技术有限公司 Map construction method and device

Also Published As

Publication number Publication date
CN108459597B (zh) 2024-02-23
CN108459597A (zh) 2018-08-28

Similar Documents

Publication Publication Date Title
WO2019019819A1 (zh) Mobile electronic device and method for processing tasks in a task area
JP7236565B2 (ja) Position and orientation determination method and device, electronic apparatus, storage medium, and computer program
CN207115193U (zh) Mobile electronic device for processing tasks in a task area
WO2019001237A1 (zh) Mobile electronic device and method in the mobile electronic device
US9224208B2 (en) Image-based surface tracking
CN112567201A (zh) Distance measurement method and device
WO2022077296A1 (zh) Three-dimensional reconstruction method, gimbal payload, movable platform, and computer-readable storage medium
US20160253814A1 (en) Photogrammetric methods and devices related thereto
US20180190014A1 (en) Collaborative multi sensor system for site exploitation
WO2018140107A1 (en) System for 3d image filtering
US11982783B2 (en) Metal detector capable of visualizing the target shape
CN207488823U (zh) Mobile electronic device
Ye et al. 6-DOF pose estimation of a robotic navigation aid by tracking visual and geometric features
WO2018228258A1 (zh) Mobile electronic device and method in the mobile electronic device
JP2016085602A (ja) Sensor information integration method and device
KR102190743B1 (ko) Apparatus and method for providing an augmented reality service for interacting with a robot
CN207067803U (zh) Mobile electronic device for processing tasks in a task area
WO2019012803A1 (ja) Designation device and designation program
KR20120108256A (ko) Robotic fish position recognition system and method
JP7437930B2 (ja) Mobile body and imaging system
CN117257170A (zh) Cleaning method, cleaning display method, cleaning device, and storage medium
CN112672134B (zh) Mobile-terminal-based three-dimensional information acquisition and control device and method
CN206833252U (zh) Mobile electronic device
JP7317684B2 (ja) Mobile body, information processing device, and imaging system
WO2019037517A1 (zh) Mobile electronic device and method for processing tasks in a task area

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 18838419

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 18838419

Country of ref document: EP

Kind code of ref document: A1