
WO2019153345A1 - Environment information determining method, apparatus, robot, and storage medium - Google Patents

Environment information determining method, apparatus, robot, and storage medium Download PDF

Info

Publication number
WO2019153345A1
Authority
WO
WIPO (PCT)
Prior art keywords
robot
image
video device
surrounding environment
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2018/076503
Other languages
French (fr)
Chinese (zh)
Inventor
骆磊
于智远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shenzhen Robotics Systems Co Ltd
Original Assignee
Cloudminds Shenzhen Robotics Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Shenzhen Robotics Systems Co Ltd filed Critical Cloudminds Shenzhen Robotics Systems Co Ltd
Priority to CN201880001148.3A priority Critical patent/CN108781258B/en
Priority to PCT/CN2018/076503 priority patent/WO2019153345A1/en
Publication of WO2019153345A1 publication Critical patent/WO2019153345A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/698 Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture

Definitions

  • the present application relates to the field of robot technology, and in particular, to a method, device, robot, and storage medium for determining environmental information.
  • robot vision is often limited to a certain angle of view, and robots with a 360° angle of view are still relatively rare.
  • the inventor found that even if a robot itself has a 360° angle of view, its effective viewing angle is still limited when the robot is occluded, which in some scenarios may leave the robot's capability impaired or not fully exploited. It can be seen that how to enhance the visual ability of the robot is a problem worth considering.
  • the technical problem to be solved by some embodiments of the present application is how to enhance the visual ability of the robot.
  • An embodiment of the present application provides an environment information determining method, including: acquiring an image of the surrounding environment captured by at least one video device, where the distance between the video device and the robot does not exceed a preset value; and expanding the surrounding environment information of the robot according to the image of the surrounding environment captured by each video device.
  • An embodiment of the present application further provides an environment information determining apparatus, including an acquiring module and a processing module, where the acquiring module is configured to acquire an image of the surrounding environment captured by at least one video device, the distance between the video device and the robot not exceeding a preset value, and the processing module is configured to expand the surrounding environment information of the robot according to the image of the surrounding environment captured by each video device.
  • An embodiment of the present application further provides a robot, including at least one processor and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the environment information determining method in the above embodiment.
  • An embodiment of the present application further provides a computer readable storage medium storing a computer program that, when executed by a processor, implements the environment information determining method in any of the above embodiments.
  • the surrounding environment information of the robot is expanded based on the images captured by nearby video devices, thereby expanding the effective viewing angle of the robot and enhancing its visual ability, so that the robot can obtain more comprehensive surrounding environment information.
  • FIG. 1 is a flowchart of the environment information determining method in the first embodiment of the present application;
  • FIG. 2 is a flowchart of the method by which the robot processes the image of the surrounding environment captured by each video device in the first embodiment of the present application;
  • FIG. 3 is a schematic diagram of the positional relationship between the robot and the video device in the first embodiment of the present application;
  • FIG. 4 is a top view of the imaging angle of view of the camera of the video device in the first embodiment of the present application;
  • FIG. 5 is a front view of each frame captured by the camera of the video device in the first embodiment of the present application;
  • FIG. 6 is a flowchart of the environment information determining method in the second embodiment of the present application;
  • FIG. 7 is a schematic diagram of the positions of the robot, a natural person, and other objects in the second embodiment of the present application;
  • FIG. 8 is a schematic structural diagram of the environment information determining apparatus in the third embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of the environment information determining apparatus in the fourth embodiment of the present application;
  • FIG. 10 is a schematic structural diagram of the robot in the fifth embodiment of the present application.
  • the method for determining environment information provided by the following embodiments utilizes the robot's powerful processing capability, such as processing multiple sets of image information in parallel, together with the robot's strong interconnection capability, so that the robot can obtain viewing angles beyond the limits of its own shooting capability. This greatly extends the robot's own visual ability, allows the robot to handle various scenes with ease under the support of more data, and maximizes the robot's advantages over humans so as to compensate for human shortcomings and provide a better user experience.
  • the video device referred to in the following embodiments of the present application may be any device with image acquisition capability, such as a robot with visual capability, a surveillance monitor, and the like.
  • the first embodiment of the present application relates to an environment information determining method
  • the execution body of the environment information determining method may be a robot or other device that establishes a communication connection with the robot.
  • here, the robot is an intelligent device with the capability of autonomous behavior. In this embodiment, the robot itself is taken as the execution body by way of example.
  • the specific process of the environment information determining method is as shown in FIG. 1 and includes the following steps:
  • Step 101 Acquire an image of a surrounding environment captured by at least one video device.
  • the distance between the video device and the robot does not exceed a preset value.
  • the preset value may be determined according to the image processing capability of the robot, or may be determined according to other information such as the communication capability of the robot.
  • the robot can directly establish a communication connection with at least one video device and obtain the images of the surrounding environment it captures, or can establish a connection with at least one video device through the cloud or another video device.
  • This embodiment does not limit the manner in which the robot acquires an image of the surrounding environment captured by at least one video device.
  • Step 102 Expand the surrounding environment information of the robot according to the image of the surrounding environment captured by each of the video devices.
  • in a specific implementation, the surrounding environment information obtained by the robot through its own shooting is expanded; the specific processing procedure is shown in FIG. 2.
  • Step 201: process the image of the surrounding environment captured by each video device separately: determine the position of the robot in the image of the surrounding environment captured by the video device, use the determined position as the image positioning information of the robot, and acquire, according to the image positioning information, the surrounding environment information of the robot monitored by the video device.
  • the video device can open the permissions of some or all of the cameras when there are multiple cameras.
  • when there is no height difference between the video device and the robot and the video device is in a level (head-up) state, the parameters of the video device's open-permission camera acquired by the robot include: the direction of the horizontal plane of the camera's optical axis and the lateral viewing angle of the camera. After acquiring these parameters, the robot determines, according to its own physical position, the physical position of the video device, and the horizontal-plane direction of the optical axis of the open-permission camera, the value of its off-axis angle in the horizontal plane within that camera's angle of view.
  • for example, the robot can follow the formula β = arcsin(y / √(x² + y²)) − α to determine the value of its off-axis angle in the horizontal plane within the angle of view of the video device's open-permission camera, where α represents the angle between the horizontal-plane direction of the optical axis of the open-permission camera and the abscissa axis, x represents the difference between the abscissa of the physical position of the video device and the abscissa of the physical position of the robot, y represents the difference between the ordinate of the physical position of the video device and the ordinate of the physical position of the robot, and β represents the value of the off-axis angle in the horizontal plane.
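  • as an illustrative aside (not part of the patent text), the relation above can be sketched in a few lines of Python; the function name, the radian units, and the east/north coordinate convention are assumptions of this sketch:

```python
import math

def off_axis_angle(robot_pos, camera_pos, alpha):
    """Horizontal off-axis angle beta of the robot in the camera's view.

    Sketch of the relation sin(alpha + beta) = y/d with d = sqrt(x^2 + y^2),
    assuming a flat 2-D layout; robot_pos and camera_pos are (x, y) pairs in
    a shared east/north frame, and alpha is the optical-axis angle in radians.
    """
    x = camera_pos[0] - robot_pos[0]  # abscissa difference
    y = camera_pos[1] - robot_pos[1]  # ordinate difference
    d = math.hypot(x, y)              # distance between robot and camera
    return math.asin(y / d) - alpha   # beta = arcsin(y/d) - alpha
```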
  • the robot then determines, according to the value of the off-axis angle in the horizontal plane and the lateral viewing angle of the video device's open-permission camera, its first position reference information in the image of the surrounding environment captured by that camera. The first position reference information is the ratio of the lateral offset from the center of the image, to the left or to the right, to the image width. For example, the robot determines it according to the formula M = tan β / (2 × tan(γ/2)), where β represents the value of the off-axis angle in the horizontal plane, γ represents the lateral viewing angle of the open-permission camera, and M represents the first position reference information.
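  • continuing the sketch, the ratio M follows directly from β and γ (again a hypothetical helper; treating M as signed, with the left/right convention left to the caller, is an assumption):

```python
import math

def lateral_ratio(beta, gamma):
    """First position reference information M = tan(beta) / (2 * tan(gamma/2)).

    beta:  off-axis angle of the robot in the horizontal plane (radians).
    gamma: lateral viewing angle of the open-permission camera (radians).
    Returns the offset from the image center as a fraction of image width.
    """
    return math.tan(beta) / (2.0 * math.tan(gamma / 2.0))
```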
  • finally, the robot determines its position in the image of the surrounding environment captured by the video device's open-permission camera based on the first position reference information. For example, if the first position reference information is the ratio of a leftward lateral offset from the center of the image, the robot offsets laterally to the left from the center line of the image according to the first position reference information to obtain a first reference line; if it is the ratio of a rightward offset, the robot offsets laterally to the right from the center line to obtain the first reference line. Taking the first reference line as the center, the robot then offsets laterally to the left by a preset value to obtain a second reference line, and to the right by the preset value to obtain a third reference line. In the image area defined by the second reference line and the third reference line, the features of the robot are detected, and the position of the robot in the image is determined according to the detected features.
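  • a minimal sketch of the reference-line construction just described, assuming M is signed (negative meaning left of center) and expressed as a fraction of the image width:

```python
def reference_columns(m_ratio, image_width, margin_px):
    """Pixel columns of the three reference lines described above.

    m_ratio:   signed first position reference information (fraction of the
               image width; negative taken here to mean left of center).
    margin_px: the preset lateral offset, in pixels.
    Returns (first, second, third) reference-line x coordinates; the robot's
    features are then searched between the second and third lines.
    """
    first = image_width / 2.0 + m_ratio * image_width
    second = first - margin_px
    third = first + margin_px
    return first, second, third
```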
  • for example, in the process of locating robot A, in order to avoid recognition errors caused by many other robots similar to robot A being nearby, robot A can be identified quickly and accurately in the following ways:
  • Mode a: robot A deliberately performs a specific motion, and within the image area defined by the second reference line and the third reference line, the robot matching that specific motion is recognized as robot A.
  • Mode b: robot A flashes or sequentially lights the signal lights on its head or body according to a random light/dark sequence and/or color sequence, and within the image area defined by the second reference line and the third reference line, the robot with the same lighting pattern is recognized as robot A (see the sketch after mode c). For example, robot A lights red for 0.2 s, goes dark for 0.1 s, lights red for 0.1 s, and goes dark for 0.5 s; or lights red for 0.1 s, green for 0.15 s, orange for 0.1 s, and so on. There are infinitely many possible pattern settings, as long as robot A can be distinguished.
  • Mode c: a combination of mode a and mode b.
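  • mode b amounts to comparing an observed blink sequence against the one robot A announced; the following sketch assumes the announced pattern is shared over the existing connection, and the names and timing tolerance are illustrative:

```python
# One lighting step: (color, duration in seconds); "off" marks a dark phase.
Pattern = list[tuple[str, float]]

def matches_pattern(observed: Pattern, announced: Pattern,
                    tol_s: float = 0.05) -> bool:
    """True if an observed blink sequence matches the announced one.

    Durations are compared with a tolerance to absorb camera frame timing.
    """
    if len(observed) != len(announced):
        return False
    return all(obs_color == ann_color and abs(obs_dur - ann_dur) <= tol_s
               for (obs_color, obs_dur), (ann_color, ann_dur)
               in zip(observed, announced))

# Example: the first lighting pattern mentioned above.
announced = [("red", 0.2), ("off", 0.1), ("red", 0.1), ("off", 0.5)]
```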
  • the physical position of the robot or of the video device can be determined by the robot's or the video device's own Global Positioning System (GPS) or BeiDou receiver. In determining a physical position, the robot or video device may also incorporate base station positioning information or Wireless Fidelity (WiFi) positioning information. This embodiment does not limit the manner in which a robot or video device acquires its physical position.
  • the following describes, in combination with an actual scenario, the process of determining the positioning information of the robot using the first implementation manner. The positional relationship between the robot and the video device is shown in FIG. 3, a top view of the imaging angle of view of the video device's open-permission camera is shown in FIG. 4, and a front view of each frame captured by that camera is shown in FIG. 5. In FIG. 3, FIG. 4, and FIG. 5, the robot is denoted by the letter A and the video device by the letter B.
  • in FIG. 3, robot A takes its own physical position as the coordinate origin O, with due east as the positive direction of the abscissa axis (X) and due north as the positive direction of the ordinate axis (Y). Robot A obtains the physical position (x, y) of video device B, and the distance between robot A and video device B is d. The angle between the horizontal-plane direction of the optical axis of video device B's open-permission camera and the abscissa axis is α, and the off-axis angle of robot A in the horizontal plane within that camera's angle of view is β. From FIG. 3, α, β, y, and d satisfy sin(α + β) = y/d, and d, x, and y satisfy d = √(x² + y²); combining the two gives β = arcsin(y / √(x² + y²)) − α, which determines robot A's off-axis angle. In FIG. 4, in the imaging picture of video device B, the straight line l1 passing through robot A along the lateral direction of the picture intersects the horizontal-plane direction of the optical axis at point P, and Q is the intersection of l1 with a longitudinal boundary of the imaging picture. Then tan β = PA/PB and tan(γ/2) = PQ/PB, which give PA/(2 × PQ) = tan β / (2 × tan(γ/2)); PA/(2 × PQ) is exactly the first position reference information M, i.e., the ratio of the lateral offset from the center of the image to the image width, so M = tan β / (2 × tan(γ/2)).
  • after obtaining the first position reference information, robot A laterally shifts from the center line l2 of the image to the left (or right) according to the first position reference information to obtain the first reference line l3; taking l3 as the center, it then shifts laterally to the left by a preset value to obtain the second reference line l4, and to the right by the preset value to obtain the third reference line l5, as shown in FIG. 5. Here, l6 to l9 are the boundaries of a front view of one frame captured by the video device's open-permission camera. In the image area defined by the second reference line l4 and the third reference line l5, the features of robot A are detected, and the position of robot A in the image is determined according to the detected features.
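  • putting the sketches above together for a scenario like this one, a hypothetical run might look as follows (the camera position, α, γ, image width, and margin are all made-up numbers):

```python
import math

# Camera B at (3, 4) m in robot A's east/north frame, optical axis at
# alpha = 40 deg, lateral viewing angle gamma = 60 deg, 1280-px-wide image.
beta = off_axis_angle((0.0, 0.0), (3.0, 4.0), math.radians(40.0))
m = lateral_ratio(beta, math.radians(60.0))
first, second, third = reference_columns(m, image_width=1280, margin_px=80)
print(f"beta = {math.degrees(beta):.1f} deg, M = {m:+.3f}, "
      f"search columns [{second:.0f}, {third:.0f}]")
# -> beta ≈ 13.1 deg, M ≈ +0.202, search columns ≈ [819, 979]
```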
  • it should be noted that when there is a height difference between the robot and the video device, and/or the video device is not in a level state (for example, it is tilted downward or upward), the robot may use one or more of the depression angle information, elevation angle information, longitudinal viewing angle information, and height information of the video device's open-permission camera and determine, by a similar method, the longitudinal offset of the robot in the image of the surrounding environment captured by the video device, or determine that longitudinal offset in combination with ranging or the like; the lateral offset and the longitudinal offset are then combined to determine the position of the robot's image in the picture, which is not repeated here.
  • second, the robot detects its own features in the image of the surrounding environment captured by the video device, and uses the location of those features as its image positioning information. Specifically, the robot locks its position in the image of the surrounding environment captured by the video device's open-permission camera through image tracking technology, and uses that position as its image positioning information.
  • the features of the robot may be any one or any combination of the robot's contour features, the robot's motion features (such as a current deliberate action like raising its head), and the light/dark features of the robot's signal lights (such as the signal lights flashing in a random light/dark sequence and/or a color sequence, and/or lighting up sequentially), or may be other features.
  • if the robot's processing capability is limited, it can be set to acquire only those images captured by the video device that contain its own image; if the robot has strong processing capability and wants to obtain more surrounding environment information, it can be set to acquire all images captured by the video device. This embodiment does not restrict the type of images the robot acquires from the video device.
  • Step 202 Expand the surrounding environment information of the robot according to the surrounding environment information of the robot monitored by each video device.
  • the surrounding environment information of the robot may be obtained by the robot itself directly capturing an image of the surrounding environment and using that image directly as the surrounding environment information, or by extracting specific environmental parameters from the image and using the extracted parameters as the surrounding environment information.
  • the surrounding environment information of the robot may be obtained by other means than the shooting by the robot, such as information recorded in an electronic map.
  • the image of the surrounding environment of the corresponding location may be selectively retrieved, thereby obtaining the surrounding environment information of the robot.
  • for example, environment information that is occluded from the robot by an obstacle is acquired according to the position of the robot in the image of the surrounding environment captured by the video device.
  • in this way, the present embodiment expands the surrounding environment information of the robot based on the images captured by at least one video device around the robot, thereby expanding the effective viewing angle of the robot and enhancing the robot's visual ability, enabling the robot to obtain more comprehensive surrounding environment information.
  • the second embodiment of the present application relates to a method for determining environmental information.
  • This embodiment is a further improvement of the first embodiment.
  • the specific improvement is that other related steps are added before step 101 and step 102, respectively.
  • the embodiment includes steps 301 to 307, wherein steps 304 and 306 are substantially the same as steps 101 and 102 in the first embodiment, and are not described in detail herein.
  • Step 301 Determine that the line of sight of the robot is blocked by the obstacle.
  • in this embodiment, the environment information determining method is executed only after it is determined that the robot's line of sight is blocked by an obstacle, thereby avoiding the waste of resources caused by executing the method when the robot can obtain sufficient surrounding environment information by virtue of its own capability.
  • occlusion of the robot's line of sight by the obstacle is only a specific triggering method.
  • other triggering modes may also be adopted, for example, triggering a subsequent process according to a user operation.
  • Step 302 Establish a connection with each video device separately, and obtain a physical location of each video device.
  • specifically, the robot may be set so that, during travel, whenever its distance to a video device is less than or equal to the preset value, it establishes a point-to-point connection with the video device directly, or establishes a connection with the video device through the cloud, and reads the physical position of the connected video device (determined by means of GPS, BeiDou, or the like) and the parameters of its open-permission camera.
  • the physical location of the video device can also be determined by the robot itself, for example, by using base station positioning, WIFI positioning, and the like.
  • in one implementation, each robot or video device with video capture capability has a unified interface through which other robots or video devices can read some or all of its visual information, subject to its own conditions and permissions.
  • Step 303 Determine that the robot is within the line of sight of the video device.
  • specifically, based on the physical position of the video device and the parameters of its open-permission camera, such as the field of view (FOV), the optical axis direction, and the viewing angle, it is determined whether the robot is within the line of sight of the video device's open-permission camera; if so, the robot is determined to be within the field of view of the video device.
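  • one plausible form of this check, sketched under assumed conventions (the bearing is measured from the abscissa axis like α, and occlusion is deliberately ignored here, which is exactly why step 305 below re-checks that the robot actually appears in the image):

```python
import math

def in_camera_fov(robot_pos, camera_pos, alpha, gamma, max_range):
    """Rough check that the robot lies inside the camera's horizontal FOV.

    alpha: optical-axis direction and gamma: lateral viewing angle, both in
    radians; max_range caps visibility in meters. Occlusion is not modeled.
    """
    dx = robot_pos[0] - camera_pos[0]
    dy = robot_pos[1] - camera_pos[1]
    if math.hypot(dx, dy) > max_range:
        return False
    bearing = math.atan2(dy, dx)  # direction from camera to robot
    off_axis = (bearing - alpha + math.pi) % (2 * math.pi) - math.pi
    return abs(off_axis) <= gamma / 2.0
```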
  • Step 304 Acquire an image of a surrounding environment captured by at least one video device.
  • Step 305 Determine that the image of the surrounding environment captured by the video device includes an image of the robot.
  • the line of sight of the video device may also be blocked, resulting in no image of the robot in the image of the surrounding environment captured by the video device, so that the robot cannot expand the surrounding environment information based on the image of the surrounding environment.
  • specifically, the robot can detect its own features in the image of the surrounding environment captured by the video device and thereby determine whether it appears in that image.
  • the robot may determine whether the image of the surrounding environment captured by the video device includes its own image in other manners. This embodiment does not limit the specific determining manner.
  • if it was determined in step 303 that the image captured by the video device should include the image of the robot, but the robot's image is not found in the image, it may be concluded that the line of sight of the video device is blocked.
  • in this case, the following processing may be adopted: continue to obtain images of the surrounding environment from the video device for a period of time and check whether the robot's image appears, until an image containing the robot is obtained; or, if the robot's image is still not recognized in the images captured by the video device after a preset time, stop acquiring images from that video device and discard the images already acquired from it.
  • the robot only processes the image of the surrounding environment containing its own image, which reduces the amount of calculation.
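  • the retry-then-give-up behavior described above could look roughly like this; video_dev.get_frame() and detect_self() are hypothetical stand-ins for the real camera interface and feature detector:

```python
import time

def await_self_in_view(video_dev, detect_self, timeout_s=5.0, poll_s=0.2):
    """Poll a video device until a frame contains the robot, else give up.

    Returns the first frame containing the robot, or None on timeout, in
    which case the frames acquired so far are discarded.
    """
    acquired = []
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        frame = video_dev.get_frame()
        acquired.append(frame)
        if detect_self(frame):   # robot's own features recognized
            return frame
        time.sleep(poll_s)
    acquired.clear()             # discard images already acquired
    return None
```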
  • Step 306 Expand the surrounding environment information of the robot according to the image of the surrounding environment captured by each of the video devices.
  • Step 307 Adopt matching processing measures according to the extended surrounding environment information.
  • the robot follows a natural person and performs an instruction to detect surrounding risk factors.
  • the position diagram of the robot, the natural person and other objects is as shown in Fig. 7, and the vehicle D is directly in front of the robot A and the natural person C.
  • however, another object F in front of robot A blocks robot A's line of sight, so the surrounding environment information obtained from the images captured by robot A itself cannot reveal the danger that vehicle D is approaching from the front. Since robot A can obtain, through the above steps, images of the surrounding environment captured by other video devices (shown as E in the figure) that do capture vehicle D, robot A can determine the danger ahead based on the expanded surrounding environment information and adopt matching processing measures.
  • the robot can also perform different processing measures according to other instructions (such as determining whether there are risk factors, searching for suspects, etc.), and will not be repeated here.
  • it should be noted that step 301, step 302, step 303, step 305, and step 307 are not mandatory steps; any one of them, or any combination thereof, may be selectively performed.
  • compared with the first embodiment, this embodiment executes the environment information determining method only when the robot determines that its view of the surrounding environment is occluded, thereby avoiding the waste of resources caused by executing the method when the robot can obtain sufficient surrounding environment information by virtue of its own capability. Before images of the surrounding environment captured by the at least one video device are acquired, it is confirmed that the robot is within the line of sight of the video device, which prevents the robot from receiving too much useless information and wasting storage and computing resources. Moreover, only images containing the robot are processed, which reduces the processing load.
  • it should be noted that the robot and the video device may be configured to cooperate and share captured images. That is, while acquiring images from the video device, the robot may also send the images it has captured itself to the video device, so as to expand the viewing angle of the video device; the process of expanding the viewing angle of the video device is the same as the process of expanding the viewing angle of the robot described above, and is not repeated here.
  • the third embodiment of the present application relates to an environment information determining apparatus, as shown in FIG. 8, including an obtaining module 401 and a processing module 402.
  • the obtaining module 401 is configured to acquire an image of a surrounding environment captured by the at least one video device, where a distance between the video device and the robot does not exceed a preset value.
  • the processing module 402 is configured to expand the surrounding environment information of the robot according to the image of the surrounding environment captured by each of the video devices.
  • the present embodiment is an apparatus embodiment corresponding to the first embodiment, and this embodiment can be implemented in cooperation with the first embodiment.
  • the related technical details mentioned in the first embodiment are still effective in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related art details mentioned in the embodiment can also be applied to the first embodiment.
  • the fourth embodiment of the present application relates to an environment information determining apparatus. As shown in FIG. 9, this embodiment is a further improvement on the basis of the third embodiment, the specific improvement being that a determining module, a communication module, and an operation module are added.
  • the determining module 403 is configured to: before the acquiring module 401 acquires an image of the surrounding environment captured by the at least one video device, determine that the line of sight of the robot is blocked by the obstacle, and determine that the robot is within the line of sight of the video device;
  • the processing module 402 is further configured to: before the peripheral environment information of the robot is expanded according to an image of a surrounding environment captured by each of the video devices, determine that an image of the surrounding environment captured by the video device includes an image of the robot;
  • the communication module 404 is configured to establish a connection with each video device and obtain a physical location of each video device before acquiring the image of the surrounding environment captured by the at least one video device.
  • the operation module 405 is configured to adopt a matching processing measure according to the extended surrounding environment information.
  • this embodiment is an apparatus embodiment corresponding to the second embodiment, and this embodiment can be implemented in cooperation with the second embodiment.
  • the related technical details mentioned in the second embodiment are still effective in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related art details mentioned in the embodiment can also be applied to the second embodiment.
  • it is worth mentioning that each module involved in the third embodiment and the fourth embodiment is a logical module. In practical applications, a logical unit may be a physical unit, a part of a physical unit, or a combination of multiple physical units. In addition, units less closely related to solving the technical problem proposed by the present application are not introduced in the third and fourth embodiments, but this does not mean that no other units exist in those embodiments.
  • a fifth embodiment of the present application relates to a robot, as shown in FIG. 10, including at least one processor 501 and a memory 502 communicably connected to the at least one processor 501, where the memory 502 stores instructions executable by the at least one processor 501, and the instructions are executed by the at least one processor 501 to enable the at least one processor 501 to perform the environment information determining method in the above embodiments.
  • the robot may further include a communication component.
  • the communication component receives and/or transmits data, such as an image of the surrounding environment captured by the video device, under the control of the processor 501.
  • here, the processor 501 is exemplified by a central processing unit (CPU), and the memory 502 is exemplified by a readable and writable random access memory (RAM).
  • the processor 501 and the memory 502 may be connected by a bus or other means, and the connection by a bus is taken as an example in FIG.
  • as a non-volatile computer-readable storage medium, the memory 502 can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules; the program implementing the environment information determining method in the embodiments of the present application is stored in the memory 502.
  • the processor 501 implements the above-described environmental information determining method by executing non-volatile software programs, instructions, and modules stored in the memory 502, thereby performing various functional applications and data processing of the device.
  • the memory 502 can include a storage program area and a storage data area, wherein the storage program area can store an operating system, an application required for at least one function; the storage data area can store a list of options, and the like. Further, the memory may include a high speed random access memory, and may also include a nonvolatile memory such as at least one magnetic disk storage device, flash memory device, or other nonvolatile solid state storage device. In some embodiments, the memory 502 can optionally include memory remotely located relative to the processor 501 that can be connected to the external device over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • One or more modules are stored in the memory 502, and when executed by the one or more processors 501, perform the environmental information determination method in any of the above method embodiments.
  • a sixth embodiment of the present application is directed to a computer readable storage medium storing a computer program.
  • the environmental information determining method described in any of the above embodiments is implemented when the computer program is executed by the processor.
  • the foregoing storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The present application relates to the technical field of robotics, and particularly relates to an environment information determining method, apparatus, robot, and storage medium. The environment information determining method comprises: obtaining an image of an ambient environment captured by at least one video device, the distance between said video device and a robot not exceeding a preset value; according to images of the ambient environment captured by each video device, expanding the ambient environment information of the robot.

Description

Environment information determining method, apparatus, robot, and storage medium

Technical field

The present application relates to the field of robot technology, and in particular, to an environment information determining method, apparatus, robot, and storage medium.

Background technique

At present, most mobile robots have visual capability: they can move on their own, distinguish objects, find paths, and so on, and can even react and give an early warning immediately when facing possible danger. However, robot vision is often limited to a certain angle of view, and robots with a 360° angle of view are still relatively rare.

In the process of implementing the present application, the inventor found that even if a robot itself has a 360° angle of view, its effective viewing angle is still limited when the robot is occluded, which in some scenarios may leave the robot's capability impaired or not fully exploited. It can be seen that how to enhance the visual ability of the robot is a problem to be considered.

Summary of the invention

The technical problem to be solved by some embodiments of the present application is how to enhance the visual ability of the robot.

An embodiment of the present application provides an environment information determining method, including: acquiring an image of the surrounding environment captured by at least one video device, where the distance between the video device and the robot does not exceed a preset value; and expanding the surrounding environment information of the robot according to the image of the surrounding environment captured by each video device.

An embodiment of the present application further provides an environment information determining apparatus, including an acquiring module and a processing module, where the acquiring module is configured to acquire an image of the surrounding environment captured by at least one video device, the distance between the video device and the robot not exceeding a preset value, and the processing module is configured to expand the surrounding environment information of the robot according to the image of the surrounding environment captured by each video device.

An embodiment of the present application further provides a robot, including at least one processor and a memory communicatively connected to the at least one processor, where the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the environment information determining method in the above embodiment.

An embodiment of the present application further provides a computer-readable storage medium storing a computer program that, when executed by a processor, implements the environment information determining method in any of the above embodiments.

Compared with the prior art, in the environment information determining method provided by some embodiments of the present application, images captured by at least one video device around the robot are acquired, and the surrounding environment information of the robot is expanded based on those images, thereby expanding the effective viewing angle of the robot and enhancing the robot's visual ability, so that the robot can obtain more comprehensive surrounding environment information.

Drawings

One or more embodiments are exemplarily described with reference to the corresponding accompanying drawings; these exemplary descriptions do not constitute a limitation on the embodiments. Elements with the same reference numerals in the drawings denote similar elements, and unless otherwise stated, the figures in the drawings do not constitute a scale limitation.

FIG. 1 is a flowchart of the environment information determining method in the first embodiment of the present application;

FIG. 2 is a flowchart of the method by which the robot processes the image of the surrounding environment captured by each video device in the first embodiment of the present application;

FIG. 3 is a schematic diagram of the positional relationship between the robot and the video device in the first embodiment of the present application;

FIG. 4 is a top view of the imaging angle of view of the camera of the video device in the first embodiment of the present application;

FIG. 5 is a front view of each frame captured by the camera of the video device in the first embodiment of the present application;

FIG. 6 is a flowchart of the environment information determining method in the second embodiment of the present application;

FIG. 7 is a schematic diagram of the positions of the robot, a natural person, and other objects in the second embodiment of the present application;

FIG. 8 is a schematic structural diagram of the environment information determining apparatus in the third embodiment of the present application;

FIG. 9 is a schematic structural diagram of the environment information determining apparatus in the fourth embodiment of the present application;

FIG. 10 is a schematic structural diagram of the robot in the fifth embodiment of the present application.

Specific embodiments

In order to make the objectives, technical solutions, and advantages of the present application clearer, some embodiments of the present application are described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the application and are not intended to limit the application. Those of ordinary skill in the art will appreciate that, in the various embodiments of the present application, numerous technical details are set forth to provide the reader with a better understanding of the application; the technical solutions claimed in the present application can nevertheless be implemented without these technical details and with various changes and modifications based on the following embodiments.

The environment information determining method provided by the following embodiments of the present application utilizes the robot's powerful processing capability, such as processing multiple sets of image information in parallel, together with the robot's strong interconnection capability, so that the robot can obtain viewing angles beyond the limits of its own shooting capability. This greatly extends the robot's own visual ability, allows the robot to handle various scenes with ease under the support of more data, and maximizes the robot's advantages over humans so as to compensate for human shortcomings and provide a better user experience.

The video device referred to in the following embodiments of the present application may be any device with image acquisition capability, such as a robot with visual capability, a surveillance monitor, and the like.

The first embodiment of the present application relates to an environment information determining method. The execution body of the method may be a robot, or another device that establishes a communication connection with the robot, where the robot is an intelligent device with the capability of autonomous behavior. In this embodiment, the robot itself is taken as the execution body by way of example. The specific flow of the environment information determining method is shown in FIG. 1 and includes the following steps:

Step 101: Acquire an image of the surrounding environment captured by at least one video device.

In a specific implementation, the distance between the video device and the robot does not exceed a preset value. The preset value may be determined according to the image processing capability of the robot, or according to other information such as the communication capability of the robot.

It should be noted that the robot may directly establish a communication connection with at least one video device and obtain the images of the surrounding environment it captures, or may establish a connection with at least one video device through the cloud or another video device. This embodiment does not limit the manner in which the robot acquires images of the surrounding environment captured by at least one video device.

Step 102: Expand the surrounding environment information of the robot according to the image of the surrounding environment captured by each video device.

In a specific implementation, the surrounding environment information obtained by the robot through its own shooting is expanded; the specific processing procedure is shown in FIG. 2.

Step 201: Process the image of the surrounding environment captured by each video device separately: determine the position of the robot in the image of the surrounding environment captured by the video device, use the determined position as the image positioning information of the robot, and acquire, according to the image positioning information, the surrounding environment information of the robot monitored by the video device.

In a specific implementation, there are various ways to determine the positioning information of the robot, including but not limited to the following two specific implementations:

First, the position of the robot in the image of the surrounding environment captured by the video device's open-permission camera is determined according to the physical position of the robot, the physical position of the video device, and the parameters of that camera, and the determined position is used as the image positioning information of the robot.

It should be noted that, when a video device has multiple cameras, it may open the permissions of some or all of the cameras.

Specifically, when there is no height difference between the video device and the robot and the video device is in a level state, the parameters of the video device's open-permission camera acquired by the robot include: the direction of the horizontal plane of the camera's optical axis and the lateral viewing angle of the camera. After acquiring these parameters, the robot determines, according to its own physical position, the physical position of the video device, and the horizontal-plane direction of the optical axis of the open-permission camera, the value of its off-axis angle in the horizontal plane within that camera's angle of view.

For example, the robot can follow the formula

β = arcsin(y / √(x² + y²)) − α

to determine the value of its off-axis angle in the horizontal plane within the angle of view of the video device's open-permission camera, where α represents the angle between the horizontal-plane direction of the optical axis of the open-permission camera and the abscissa axis, x represents the difference between the abscissa of the physical position of the video device and the abscissa of the physical position of the robot, y represents the difference between the ordinate of the physical position of the video device and the ordinate of the physical position of the robot, and β represents the value of the off-axis angle in the horizontal plane.

Then, according to the value of the off-axis angle in the horizontal plane and the lateral viewing angle of the video device's open-permission camera, the robot determines its first position reference information in the image of the surrounding environment captured by that camera. The first position reference information is the ratio of the lateral offset from the center of the image, to the left or to the right, to the image width.

For example, the robot determines the first position reference information in the image of the surrounding environment captured by the video device's open-permission camera according to the formula M = tan β / (2 × tan(γ/2)), where β represents the value of the off-axis angle in the horizontal plane, γ represents the lateral viewing angle of the open-permission camera, and M represents the first position reference information.

Finally, the robot determines its position in the image of the surrounding environment captured by the video device's open-permission camera based on the first position reference information. For example, if the first position reference information is the ratio of a leftward lateral offset from the center of the image, the robot offsets laterally to the left from the center line of the image according to the first position reference information to obtain a first reference line; if it is the ratio of a rightward offset, the robot offsets laterally to the right from the center line to obtain the first reference line. Taking the first reference line as the center, the robot then offsets laterally to the left by a preset value to obtain a second reference line, and to the right by the preset value to obtain a third reference line. In the image area defined by the second reference line and the third reference line, the features of the robot are detected, and the position of the robot in the image is determined according to the detected features.

For example, in the process of locating robot A, in order to avoid recognition errors caused by many other robots similar to robot A being nearby, robot A can be identified quickly and accurately in the following ways:

Mode a: robot A deliberately performs a specific motion, and within the image area defined by the second reference line and the third reference line, the robot matching that specific motion is recognized as robot A.

Mode b: robot A flashes or sequentially lights the signal lights on its head or body according to a random light/dark sequence and/or color sequence, and within the image area defined by the second reference line and the third reference line, the robot with the same lighting pattern is recognized as robot A. For example, robot A lights red for 0.2 s, goes dark for 0.1 s, lights red for 0.1 s, and goes dark for 0.5 s; or lights red for 0.1 s, green for 0.15 s, orange for 0.1 s, and so on. There are infinitely many possible pattern settings, as long as robot A can be distinguished.

Mode c: a combination of mode a and mode b.

It should be noted that the physical position of the robot or of the video device can be determined by the robot's or the video device's own Global Positioning System (GPS) or BeiDou receiver. In determining a physical position, the robot or video device may also incorporate base station positioning information or Wireless Fidelity (WiFi) positioning information. This embodiment does not limit the manner in which a robot or video device acquires its physical position.

The following describes, in combination with an actual scenario, the process of determining the positioning information of the robot using the first implementation manner.

The positional relationship between the robot and the video device is shown in Fig. 3; a top view of the imaging angle of the camera to which the video device has opened access is shown in Fig. 4; and a front view of each frame captured by that camera is shown in Fig. 5. In Figs. 3, 4 and 5, the robot is denoted A and the video device is denoted B. In Fig. 3, robot A takes its own physical location as the coordinate origin O, with due east as the positive direction of the X axis and due north as the positive direction of the Y axis. Robot A obtains the physical location of video device B as (x, y), and the distance between robot A and video device B is d. The angle between the horizontal direction of the optical axis of the camera opened by video device B and the X axis is α, and the deviation angle of robot A from the optical axis in the horizontal plane, within that camera's viewing angle, is β. From Fig. 3, α, β, y and d satisfy sin(α + β) = y/d, and d, x and y satisfy d = √(x² + y²). Combining the two relations gives β = arcsin(y/√(x² + y²)) − α, which determines the deviation angle of robot A from the optical axis in the horizontal plane within the viewing angle of the camera opened by video device B.
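The derivation above reduces to a few lines of code. The sketch below assumes robot A's local east-north frame with the robot at the origin, exactly as in Fig. 3; the sample coordinates are illustrative.

```python
import math

# Sketch of the derivation above. (x, y) is video device B's position in
# robot A's local frame (east = +X, north = +Y, robot at the origin);
# alpha is the horizontal bearing of the camera's optical axis from the
# +X axis. All angles are in radians.

def deviation_angle(x, y, alpha):
    """Horizontal deviation beta of the robot from the optical axis,
    per the configuration of Fig. 3."""
    d = math.hypot(x, y)             # d = sqrt(x^2 + y^2)
    return math.asin(y / d) - alpha  # beta = arcsin(y/d) - alpha

beta = deviation_angle(x=3.0, y=4.0, alpha=math.radians(30))
print(math.degrees(beta))  # ~23.13 degrees for this illustrative geometry
```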

In Fig. 4, in the imaging plane of video device B, the line l1 that passes through robot A along the lateral direction of the frame intersects the horizontal direction of the optical axis at point P, and Q is the intersection of l1 with the longitudinal boundary of the frame. β, PA and PB satisfy tan β = PA/PB, and γ, PB and PQ satisfy tan(γ/2) = PQ/PB, where γ is the camera's lateral viewing angle. Combining the two relations gives PA/(2·PQ) = tan β / [2·tan(γ/2)]. PA/(2·PQ) is exactly the first position reference information M, i.e., the lateral offset from the center of the image to the left (or right) as a fraction of the image width, so M = tan β / (2·tan(γ/2)). Having obtained the first position reference information, robot A offsets laterally from the image center line l2 to the left (or right) according to the first position reference information to obtain the first reference line l3, then offsets a preset value laterally to the left of l3 to obtain the second reference line l4, and the same preset value laterally to the right to obtain the third reference line l5, as shown in Fig. 5, where l6 to l9 are the boundaries of a front view of one frame captured by the camera opened by the video device. Within the image region bounded by the second reference line l4 and the third reference line l5, robot A's features are detected, and robot A's position in the image is determined from the detected features.
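A minimal sketch of this computation follows; the margin that generates the second and third reference lines is an assumed tuning value, and the sign of β determines whether the offset falls to the left or to the right of center.

```python
import math

# Sketch of the lateral-offset computation above. gamma is the camera's
# lateral (horizontal) viewing angle; img_w is the frame width in pixels;
# margin_px (half the search-band width) is an assumed tuning value.

def lateral_offset_fraction(beta, gamma):
    """M = tan(beta) / (2 * tan(gamma / 2)): offset from the image
    center as a fraction of the full image width."""
    return math.tan(beta) / (2.0 * math.tan(gamma / 2.0))

def reference_lines(beta, gamma, img_w, margin_px=40):
    m = lateral_offset_fraction(beta, gamma)
    first = img_w / 2.0 + m * img_w              # first reference line (l3)
    return first - margin_px, first + margin_px  # second (l4) and third (l5)

beta = math.radians(10)
gamma = math.radians(60)
print(reference_lines(beta, gamma, img_w=1920))  # pixel columns of l4 and l5
```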

It should be noted that when there is a height difference between the robot and the video device, and/or the video device is not level (e.g., it is pitched down or up), the robot can use a similar method with one or more of the opened camera's depression-angle, elevation-angle, longitudinal viewing-angle and height information to determine its longitudinal offset in the image of the surroundings captured by the video device, or determine that longitudinal offset with the help of ranging or similar means, and then combine the lateral and longitudinal offsets to determine the position of the robot's image in the frame; the details are not repeated here.
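The text leaves this pitched-camera case to "a similar method"; the sketch below is one plausible reading, with the camera height, robot height and pitch treated as assumed inputs rather than values prescribed by the patent.

```python
import math

# Illustrative sketch only: longitudinal (vertical) offset of the robot
# in the frame from the camera's pitch, the height difference, and the
# longitudinal viewing angle gamma_v. pitch > 0 means looking down.

def vertical_offset_fraction(cam_height, robot_height, dist, pitch, gamma_v):
    """Fraction of the image height by which the robot sits below
    (positive) or above (negative) the image center."""
    drop = math.atan2(cam_height - robot_height, dist)  # angle below horizontal
    delta = drop - pitch                                # angle off the optical axis
    return math.tan(delta) / (2.0 * math.tan(gamma_v / 2.0))

f = vertical_offset_fraction(cam_height=3.0, robot_height=1.0, dist=10.0,
                             pitch=math.radians(5), gamma_v=math.radians(40))
print(f)  # ~0.15: about 15% of the frame height below center
```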

Second: the robot detects its own features in the image of the surroundings captured by the video device and takes the location of those features as its image positioning information. Specifically, the robot uses image tracking to lock onto the position of its features in the image of the surroundings captured by the camera opened by the video device, and uses that position as its image positioning information.
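The patent does not prescribe a particular tracking algorithm; as one hypothetical realization, OpenCV template matching can locate a stored appearance template of the robot in the frame (the file names and the confidence threshold below are assumptions):

```python
import cv2

# Minimal sketch, assuming OpenCV and a stored appearance template of
# the robot. Image files and the 0.8 threshold are illustrative.

frame = cv2.imread("surroundings.png")       # frame from the video device
template = cv2.imread("robot_template.png")  # known appearance of the robot
assert frame is not None and template is not None, "example images not found"

result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
if max_val > 0.8:   # assumed confidence threshold
    x, y = max_loc  # top-left corner of best match: image positioning info
    print("robot found at", (x, y), "score", max_val)
```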

It should be noted that the robot's features may be any one or any combination of its contour features, its motion features (e.g., its current motion such as raising its head, or a deliberately struck pose), and the on/off characteristics of its signal lamps (e.g., lamps flashing in a random on/off sequence and/or a color sequence, and/or lamps lighting up in turn); other features may also be used.

It should be noted that, in practice, if the robot's processing capability is limited, it can be configured to fetch only those images captured by the video device that contain the robot itself; if its processing capability is strong and richer surrounding environment information is desired, it can be configured to fetch all images captured by the video device. This embodiment does not restrict the type of images the robot fetches.

Step 202: extend the robot's surrounding environment information according to the surrounding environment information of the robot monitored by each video device.

Here, the robot's surrounding environment information may be the image of the surroundings the robot captured itself, used directly as the environment information, or specific environmental parameters extracted from that image and used as the environment information. Of course, the robot's surrounding environment information may also be obtained by means other than its own capture, such as information recorded in an electronic map.

Specifically, once the robot's position in the image of the surroundings captured by the video device has been determined, the image of the surroundings at the corresponding position can be selectively retrieved to obtain the robot's surrounding environment information (a cropping sketch follows the example below).

For example, when the robot's line of sight is blocked by an obstacle, the environment information hidden from the robot by the obstacle is obtained according to the robot's position in the image of the surroundings captured by the video device.
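A minimal sketch of the selective retrieval described above — cropping the device's frame around the robot's image position — might look as follows; the box half-size is an assumed parameter.

```python
import numpy as np

# Sketch only: retrieve the image region around the robot's position.
# half (the box half-size in pixels) is an assumed tuning value.

def crop_around(frame, cx, cy, half=200):
    """frame: H x W x C array; (cx, cy): robot's image position."""
    h, w = frame.shape[:2]
    x0, x1 = max(0, cx - half), min(w, cx + half)
    y0, y1 = max(0, cy - half), min(h, cy + half)
    return frame[y0:y1, x0:x1]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # placeholder frame
region = crop_around(frame, cx=1253, cy=540)
print(region.shape)  # (400, 400, 3)
```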

Compared with the prior art, this embodiment obtains images captured by at least one video device near the robot and extends the robot's surrounding environment information on the basis of those images, thereby enlarging the robot's effective viewing angle and enhancing its visual capability, so that the robot can obtain more complete surrounding environment information.

A second embodiment of the present application relates to an environment information determining method. This embodiment is a further refinement of the first embodiment; specifically, additional related steps are added before step 101 and before step 102, respectively.

As shown in Fig. 6, this embodiment includes steps 301 to 307, of which steps 304 and 306 are substantially the same as steps 101 and 102 of the first embodiment and are not detailed again here; the differences are described below.

Step 301: determine that the robot's line of sight is blocked by an obstacle.

It is worth mentioning that executing the environment information determining method only once the robot has determined that its view of the surroundings is blocked avoids the waste of resources that would result from running the method when the robot can already obtain sufficient surrounding environment information on its own.

It should be noted that the robot's line of sight being blocked by an obstacle is only one specific trigger; in practice, other triggers may also be used, for example triggering the subsequent process by a user operation.

Step 302: establish a connection with each video device and obtain each video device's physical location.

It should be noted that if the trigger condition of step 301 is not required, the robot can be configured so that, while traveling, it establishes a point-to-point connection with a video device directly (or connects to it via the cloud) whenever the detected distance to that device is less than or equal to a preset value, and reads the connected device's physical location (determined by GPS, BeiDou or the like) and the parameters of its opened camera. Of course, the video device's physical location can also be determined by the robot itself, for example by base-station positioning or Wi-Fi positioning.
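As a sketch of the proximity trigger (not a prescribed implementation), the great-circle distance between the two GPS fixes can be compared against the preset value; connect() here is a hypothetical stand-in for the point-to-point or cloud handshake, and the 50 m preset is assumed.

```python
import math

# Sketch: connect when the great-circle distance to a video device drops
# below the preset value. Positions are (latitude, longitude) in degrees.

def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

PRESET_M = 50.0  # assumed preset value

def maybe_connect(robot_pos, device_pos, connect):
    if haversine_m(*robot_pos, *device_pos) <= PRESET_M:
        connect()  # hypothetical point-to-point or cloud handshake

print(haversine_m(39.9042, 116.4074, 39.9043, 116.4075))  # ~14 m
```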

Each robot or video device with video-capture capability exposes a unified interface through which, where its own situation and permissions allow, other robots or video devices can read part or all of its visual information.

Step 303: determine that the robot is within the video device's line of sight.

Specifically, whether the robot is within the line of sight of the camera to which the video device has opened access is determined according to the video device's physical location and the parameters of that camera.

For example, the robot is determined to be within the video device's field of view according to the opened camera's optical-axis direction and field-of-view (FOV) information. Here it is only necessary to establish that the robot lies within the device's field of view; effects on the actual line of sight from obstacle occlusion, height differences and the like are not considered.
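A sketch of this field-of-view test, under the same simplification (occlusion and height differences ignored):

```python
import math

# Sketch: the robot is "in view" if the bearing from the camera to the
# robot lies within half the horizontal FOV of the optical-axis bearing.
# Positions are planar (x, y) coordinates; angles are in radians.

def in_fov(cam_xy, robot_xy, axis_bearing, fov):
    dx, dy = robot_xy[0] - cam_xy[0], robot_xy[1] - cam_xy[1]
    bearing = math.atan2(dy, dx)
    diff = math.atan2(math.sin(bearing - axis_bearing),
                      math.cos(bearing - axis_bearing))  # wrap to [-pi, pi]
    return abs(diff) <= fov / 2.0

print(in_fov((0, 0), (10, 2), axis_bearing=0.0, fov=math.radians(60)))  # True
```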

It is worth mentioning that determining, before fetching images of the surroundings captured by a video device, that the robot is within that device's line of sight allows image information from devices that cannot see the robot to be discarded, preventing the robot from receiving too much useless information and occupying storage space.

Step 304: obtain an image of the surroundings captured by at least one video device.

It should be noted that, to ensure the robot maintains real-time awareness of its surroundings, the images of the surroundings captured by the at least one video device need to be obtained in real time.

Step 305: determine that the image of the surroundings captured by the video device contains the robot's image.

Specifically, the video device's own line of sight may also be blocked, in which case the image of the surroundings it captures contains no image of the robot, and the robot cannot extend its surrounding environment information from that image.

In a specific implementation, the robot can detect its own features in the image of the surroundings captured by the video device to determine whether it appears in that image.

It should be noted that, in practice, the robot may also determine in other ways whether the image of the surroundings captured by the video device contains its own image; this embodiment does not restrict the specific determination method.

In application, if step 303 determines that the images captured by the video device should contain the robot's image but no such image is found in them, it can be concluded that the device's line of sight is occluded. In that case, either of the following can be done: keep fetching the device's images of the surroundings for a period of time and checking whether they contain the robot's image, until they do; or, if the robot's image still has not been recognized in the device's images after a preset duration, stop fetching images from that device and discard the images already fetched from it.
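Either fallback can be expressed as a single polling loop; in the sketch below, get_frame() and robot_visible() are hypothetical stand-ins for the device interface and the feature detector, and the timeout and polling period are assumed values.

```python
import time

# Sketch of the fallback above: poll the device's frames until the
# robot's image appears, and give up after an assumed timeout.

def acquire_with_timeout(get_frame, robot_visible, timeout_s=5.0, period_s=0.2):
    deadline = time.monotonic() + timeout_s
    frames = []
    while time.monotonic() < deadline:
        frame = get_frame()
        if robot_visible(frame):
            return frame       # robot found: use this device's view
        frames.append(frame)
        time.sleep(period_s)
    frames.clear()             # timeout: discard frames already fetched
    return None                # and stop acquiring from this device
```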

It is worth mentioning that the robot processes only those images of the surroundings that contain its own image, which reduces the computational load.

Step 306: extend the robot's surrounding environment information according to the image of the surroundings captured by each video device.

Step 307: take processing measures matched to the extended surrounding environment information.

For example, the robot accompanies a natural person on a trip and executes an instruction to detect surrounding hazards. The positions of the robot, the natural person and other objects are shown schematically in Fig. 7: a vehicle D is approaching directly ahead of robot A and natural person C, while another object F in front of robot A blocks its line of sight, so the surrounding environment information obtained from the images robot A captures itself cannot reveal the hazard of the approaching vehicle D. However, because robot A can obtain, through the steps above, images of the surroundings captured by another video device (E in the figure), and that device did capture vehicle D, robot A can determine from the extended surrounding environment information that there is a hazard ahead (vehicle D approaching) and convey that judgment to the natural person through an alarm or other behavior.

It should be noted that the robot can also take different processing measures according to other instructions (such as judging whether a hazard exists or searching for a suspect); these are not enumerated here.

It should be noted that steps 301, 302, 303, 305 and 307 are not mandatory; the above steps, or any one or any combination of them, may be performed selectively.

Compared with the prior art, this embodiment executes the environment information determining method only once the robot has determined that its view of the surroundings is blocked, avoiding the waste of resources incurred by running the method when the robot can already obtain sufficient surrounding environment information on its own. Determining, before fetching images of the surroundings captured by a video device, that the robot is within that device's line of sight prevents the robot from receiving too much useless information and wasting storage and computing resources. Moreover, processing only the images that contain the robot reduces the processing burden.

In one specific implementation of the above two embodiments, the robot and the video device can be configured to cooperate by sharing captured images: while fetching the video device's captures, the robot can also send the images it has captured itself to the video device, so as to enlarge the video device's viewing angle. The process by which the video device enlarges its own viewing angle is the same as the process of enlarging the robot's viewing angle described above and is not repeated here.

A third embodiment of the present application relates to an environment information determining apparatus which, as shown in Fig. 8, includes an acquisition module 401 and a processing module 402.

The acquisition module 401 is configured to obtain an image of the surroundings captured by at least one video device, where the distance between the video device and the robot does not exceed a preset value.

The processing module 402 is configured to extend the robot's surrounding environment information according to the image of the surroundings captured by each video device.

It is readily seen that this embodiment is the apparatus embodiment corresponding to the first embodiment and can be implemented in cooperation with it. The related technical details mentioned in the first embodiment remain valid in this embodiment and, to reduce repetition, are not restated here; correspondingly, the related technical details mentioned in this embodiment can also be applied in the first embodiment.

A fourth embodiment of the present application relates to an environment information determining apparatus. As shown in Fig. 9, this embodiment is a further refinement of the third embodiment; specifically, a determining module, a communication module and an operation module are added.

The determining module 403 is configured to determine, before the acquisition module 401 obtains an image of the surroundings captured by at least one video device, that the robot's line of sight is blocked by an obstacle, and to determine that the robot is within the video device's line of sight.

The processing module 402 is further configured to determine, before extending the robot's surrounding environment information according to the image of the surroundings captured by each video device, that the image of the surroundings captured by the video device contains the robot's image.

The communication module 404 is configured to establish a connection with each video device and obtain each video device's physical location before the acquisition module 401 obtains an image of the surroundings captured by at least one video device.

The operation module 405 is configured to take processing measures matched to the extended surrounding environment information.

It is readily seen that this embodiment is the apparatus embodiment corresponding to the second embodiment and can be implemented in cooperation with it. The related technical details mentioned in the second embodiment remain valid in this embodiment and, to reduce repetition, are not restated here; correspondingly, the related technical details mentioned in this embodiment can also be applied in the second embodiment.

It is worth mentioning that the modules involved in the third and fourth embodiments are all logical modules; in practice, a logical unit may be a physical unit, part of a physical unit, or a combination of multiple physical units. Moreover, to highlight the innovative part of the present invention, units less closely related to solving the technical problem posed by the present invention have not been introduced in the third and fourth embodiments, which does not mean that no other units exist in those embodiments.

A fifth embodiment of the present application relates to a robot which, as shown in Fig. 10, includes at least one processor 501 and a memory 502 communicatively connected to the at least one processor 501, where the memory 502 stores instructions executable by the at least one processor 501, the instructions being executed by the at least one processor 501 to enable the at least one processor 501 to perform the environment information determining method of the above embodiments.

It should be noted that, in a specific implementation, the robot may further include a communication component, which receives and/or sends data, such as images of the surroundings captured by a video device, under the control of the processor 501.

In this embodiment, the processor 501 is exemplified by a central processing unit (CPU) and the memory 502 by a random access memory (RAM). The processor 501 and the memory 502 may be connected by a bus or otherwise; Fig. 10 takes a bus connection as an example. As a non-volatile computer-readable storage medium, the memory 502 can store non-volatile software programs, non-volatile computer-executable programs and modules; the program implementing the environment information determining method of the embodiments of the present application is stored in the memory 502. By running the non-volatile software programs, instructions and modules stored in the memory 502, the processor 501 executes the device's various functional applications and data processing, i.e., implements the environment information determining method above.

The memory 502 may include a program storage area and a data storage area, where the program storage area can store the operating system and the applications required for at least one function, and the data storage area can store option lists and the like. In addition, the memory may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device or other non-volatile solid-state storage device. In some embodiments, the memory 502 optionally includes memory set remotely relative to the processor 501, which may be connected to the external device over a network; examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks and combinations thereof.

One or more modules are stored in the memory 502 and, when executed by the one or more processors 501, perform the environment information determining method of any of the above method embodiments.

The above product can execute the method provided by the embodiments of the present application and has the functional modules and beneficial effects corresponding to executing the method; for technical details not exhaustively described in this embodiment, refer to the method provided by the embodiments of the present application.

A sixth embodiment of the present application relates to a computer-readable storage medium storing a computer program which, when executed by a processor, implements the environment information determining method described in any of the above embodiments.

That is, those skilled in the art will understand that all or part of the steps of the methods of the above embodiments can be completed by a program instructing the relevant hardware, the program being stored in a storage medium and including several instructions to cause a device (which may be a microcontroller, a chip or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.

Those of ordinary skill in the art will understand that the above embodiments are specific embodiments implementing the present application, and that in practice various changes in form and detail may be made to them without departing from the spirit and scope of the present application.

Claims (15)

1. An environment information determining method, comprising: obtaining an image of a surrounding environment captured by at least one video device, wherein a distance between the video device and a robot does not exceed a preset value; and extending surrounding environment information of the robot according to the image of the surrounding environment captured by each video device.

2. The environment information determining method of claim 1, wherein before obtaining the image of the surrounding environment captured by the at least one video device, the method further comprises: determining that the robot is within a line of sight of the video device.

3. The environment information determining method of claim 2, wherein determining that the robot is within the line of sight of the video device comprises: determining, according to a physical location of the video device and parameters of a camera to which the video device has opened access, that the robot is within the line of sight of that camera.

4. The environment information determining method of any one of claims 1 to 3, wherein extending the surrounding environment information of the robot according to the image of the surrounding environment captured by each video device comprises: performing the following for each video device: determining a position of the robot in the image of the surrounding environment captured by the video device, taking the determined position as image positioning information of the robot, and obtaining, according to the image positioning information, the surrounding environment information of the robot monitored by the video device; and extending the surrounding environment information of the robot according to the surrounding environment information of the robot monitored by each video device.
5. The environment information determining method of claim 4, wherein determining the position of the robot in the image of the surrounding environment captured by the video device and taking the determined position as the image positioning information of the robot comprises: determining, according to a physical location of the robot, the physical location of the video device and the parameters of the camera to which the video device has opened access, the position of the robot in the image of the surrounding environment captured by that camera, and taking the determined position as the image positioning information of the robot; or detecting features of the robot in the image of the surrounding environment captured by the video device, and determining the location of the features of the robot as the image positioning information of the robot.

6. The environment information determining method of claim 5, wherein determining, according to the physical location of the robot, the physical location of the video device and the parameters of the camera to which the video device has opened access, the position of the robot in the image of the surrounding environment captured by that camera comprises: determining, according to the physical location of the robot, the physical location of the video device and the horizontal direction of the optical axis of the camera, a value of the deviation angle of the robot from the optical axis in the horizontal plane within the camera's viewing angle; determining, according to the value of the deviation angle in the horizontal plane and lateral viewing-angle information of the camera, first position reference information of the robot in the image of the surrounding environment captured by the camera, the first position reference information being the amount of lateral offset from the center of the image to the left or right as a fraction of the image width; and determining, according to the first position reference information, the position of the robot in the image of the surrounding environment captured by the camera.
7. The environment information determining method of claim 6, wherein determining, according to the first position reference information, the position of the robot in the image of the surrounding environment captured by the camera comprises: offsetting laterally from the center line of the image according to the first position reference information to obtain a first reference line; offsetting a preset value laterally to the left of the first reference line to obtain a second reference line, and offsetting the preset value laterally to the right to obtain a third reference line; detecting features of the robot in the image region bounded by the second reference line and the third reference line; and determining the position of the robot in the image according to the detected features of the robot.

8. The environment information determining method of claim 7, wherein the features of the robot comprise contour features of the robot, and/or motion features of the robot, and/or on/off characteristics of signal lamps of the robot.

9. The environment information determining method of any one of claims 1 to 8, wherein before extending the surrounding environment information of the robot according to the image of the surrounding environment captured by each video device, the method further comprises: determining that the image of the surrounding environment captured by the video device contains an image of the robot.

10. The environment information determining method of any one of claims 1 to 9, wherein after extending the surrounding environment information of the robot according to the image of the surrounding environment captured by each video device, the method further comprises: taking processing measures matched to the extended surrounding environment information.

11. The environment information determining method of any one of claims 1 to 10, wherein before obtaining the image of the surrounding environment captured by the at least one video device, the method further comprises: determining that the robot's line of sight is blocked by an obstacle.

12. The environment information determining method of claim 3, wherein before obtaining the image of the surrounding environment captured by the at least one video device, the method further comprises: establishing a connection with each video device, and obtaining the physical location of each video device.
13. An environment information determining apparatus, comprising: an acquisition module configured to obtain an image of a surrounding environment captured by at least one video device, wherein a distance between the video device and a robot does not exceed a preset value; and a processing module configured to extend surrounding environment information of the robot according to the image of the surrounding environment captured by each video device.

14. A robot, comprising at least one processor and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, the instructions being executed by the at least one processor to enable the at least one processor to perform the environment information determining method of any one of claims 1 to 12.

15. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the environment information determining method of any one of claims 1 to 12.