
WO2019153345A1 - Method for determining environment information, apparatus, robot and storage medium - Google Patents

Method for determining environment information, apparatus, robot and storage medium

Info

Publication number
WO2019153345A1
WO2019153345A1 (PCT/CN2018/076503, CN2018076503W)
Authority
WO
WIPO (PCT)
Prior art keywords
robot
image
video device
surrounding environment
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2018/076503
Other languages
English (en)
Chinese (zh)
Inventor
骆磊
于智远
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cloudminds Shenzhen Robotics Systems Co Ltd
Original Assignee
Cloudminds Shenzhen Robotics Systems Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cloudminds Shenzhen Robotics Systems Co Ltd filed Critical Cloudminds Shenzhen Robotics Systems Co Ltd
Priority to CN201880001148.3A priority Critical patent/CN108781258B/zh
Priority to PCT/CN2018/076503 priority patent/WO2019153345A1/fr
Publication of WO2019153345A1 publication Critical patent/WO2019153345A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 — Television systems
    • H04N 7/18 — Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/183 — Closed-circuit television [CCTV] systems for receiving images from a single remote source
    • H04N 23/00 — Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 — Control of cameras or camera modules
    • H04N 23/698 — Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture

Definitions

  • the present application relates to the field of robot technology, and in particular, to a method, apparatus, robot, and storage medium for determining environment information.
  • robot vision is often limited to a certain angle of view, and robots with a 360° angle of view are rare.
  • the inventor found that even if the robot itself has a 360° angle of view, the effective viewing angle is limited when the view is occluded, and in some scenarios the robot's capability may be impaired or not fully exploited. It can be seen that how to enhance the visual ability of the robot is a problem worth considering.
  • the technical problem to be solved by some embodiments of the present application is how to enhance the visual ability of the robot.
  • An embodiment of the present application provides an environment information determining method, including: acquiring an image of a surrounding environment captured by at least one video device, wherein a distance between the video device and the robot does not exceed a preset value; and expanding the surrounding environment information of the robot according to the images of the surrounding environment captured by each of the video devices.
  • An embodiment of the present application further provides an environment information determining apparatus, including an acquiring module and a processing module; the acquiring module is configured to acquire an image of the surrounding environment captured by at least one video device, where the distance between the video device and the robot does not exceed a preset value; the processing module is configured to expand the surrounding environment information of the robot according to the images of the surrounding environment captured by each of the video devices.
  • An embodiment of the present application also provides a robot comprising at least one processor and a memory communicatively coupled to the at least one processor; the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the environment information determining method of the above embodiment.
  • An embodiment of the present application further provides a computer readable storage medium storing a computer program that, when executed by a processor, implements the environment information determining method in any of the above embodiments.
  • the surrounding environment information of the robot is expanded based on the images captured by the video devices, thereby expanding the effective viewing angle of the robot and enhancing its visual ability, so that the robot can obtain more comprehensive surrounding environment information.
  • FIG. 1 is a flowchart of a method for determining environment information in a first embodiment of the present application;
  • FIG. 2 is a flowchart of a method for the robot to process the images of the surrounding environment captured by each video device in the first embodiment of the present application;
  • FIG. 3 is a schematic diagram showing a positional relationship between a robot and a video device in the first embodiment of the present application;
  • FIG. 4 is a top view of the imaging angle of view of a camera of the video device in the first embodiment of the present application;
  • FIG. 5 is a front view of each frame of a picture taken by a camera of the video device in the first embodiment of the present application;
  • FIG. 6 is a flowchart of a method for determining environment information in a second embodiment of the present application;
  • FIG. 7 is a schematic view showing the positions of a robot, a natural person, and other objects in the second embodiment of the present application;
  • FIG. 8 is a schematic structural diagram of an environment information determining apparatus in a third embodiment of the present application;
  • FIG. 9 is a schematic structural diagram of an environment information determining apparatus in a fourth embodiment of the present application;
  • FIG. 10 is a schematic structural view of a robot in a fifth embodiment of the present application.
  • the method for determining environment information utilizes the powerful processing capability of the robot, such as processing multiple sets of image information in parallel, combined with the robot's powerful interconnection capability, so that the robot can obtain perspectives beyond the limitations of its own shooting capability. This greatly expands the visual ability of the robot itself, allows the robot to cope with the challenges of various scenes with the support of more data, and maximizes the advantages of robots relative to human beings, so as to make up for human shortcomings and obtain a better user experience.
  • the video device referred to in the following embodiments of the present application may be any device that has certain image acquisition capabilities, such as a robot with visual capabilities, a surveillance camera, and the like.
  • the first embodiment of the present application relates to an environment information determining method
  • the execution body of the environment information determining method may be a robot or other device that establishes a communication connection with the robot.
  • the robot is an intelligent device with autonomous behavior.
  • the following description takes the robot itself as the execution body as an example.
  • the specific process of the environment information determining method is as shown in FIG. 1 and includes the following steps:
  • Step 101 Acquire an image of a surrounding environment captured by at least one video device.
  • the distance between the video device and the robot does not exceed a preset value.
  • the preset value may be determined according to the image processing capability of the robot, or may be determined according to other information such as the communication capability of the robot.
  • the robot can directly establish a communication connection with at least one video device to obtain the images of the surrounding environment captured by its camera, or can establish a connection with at least one video device through the cloud or another video device.
  • This embodiment does not limit the manner in which the robot acquires an image of the surrounding environment captured by at least one video device.
  • Step 102 Expand the surrounding environment information of the robot according to the image of the surrounding environment captured by each of the video devices.
  • the surrounding environment information obtained by the robot through its own shooting is expanded; the specific processing procedure is shown in FIG. 2.
  • Step 201: process the image of the surrounding environment captured by each video device respectively: determine the position of the robot in the image of the surrounding environment captured by the video device, use the determined position as the image positioning information of the robot, and acquire, according to the image positioning information, the surrounding environment information of the robot monitored by the video device.
  • the video device can open the permissions of some or all of the cameras when there are multiple cameras.
  • the parameters that the robot acquires for the open-permission camera of the video device include: the horizontal direction of the camera's optical axis and the lateral viewing angle of the camera.
  • the robot determines, according to the physical position of the robot, the physical position of the video device, and the horizontal direction of the optical axis of the video device's open-permission camera, the value of the robot's off-angle in the horizontal plane with respect to the optical axis within the view of that camera.
  • since the direction from the video device toward the robot makes an angle atan2(−y, −x) with the abscissa axis, the robot can determine the value of the off-angle in the horizontal plane with respect to the optical axis, within the view of the video device's open-permission camera, according to the formula β = atan2(−y, −x) − α, where α represents the angle between the horizontal direction of the optical axis of the video device's open-permission camera and the abscissa axis, x represents the difference between the abscissa of the physical position of the video device and the abscissa of the physical position of the robot, y represents the difference between the ordinate of the physical position of the video device and the ordinate of the physical position of the robot, and β represents the off-angle in the horizontal plane with respect to the optical axis.
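  • a minimal Python sketch of this computation follows (illustrative only: the function name, the east-= +X / north = +Y coordinate convention, and the normalization of the result to [−180°, 180°) are assumptions, not part of the patent text):

```python
import math

def off_angle_deg(robot_xy, device_xy, optical_axis_deg):
    """Off-angle beta of the robot relative to the camera's optical axis.

    robot_xy, device_xy: physical positions in a shared horizontal frame
    (east = +X, north = +Y); optical_axis_deg: angle alpha between the
    optical axis' horizontal direction and the abscissa axis, in degrees.
    """
    # x, y as defined above: video device position minus robot position.
    x = device_xy[0] - robot_xy[0]
    y = device_xy[1] - robot_xy[1]
    # Direction from the video device toward the robot, from the X axis.
    bearing = math.degrees(math.atan2(-y, -x))
    # beta = atan2(-y, -x) - alpha, wrapped into [-180, 180).
    return (bearing - optical_axis_deg + 180.0) % 360.0 - 180.0

# Example: device at (3, 4) with optical axis at 210 degrees; robot at O.
beta = off_angle_deg((0.0, 0.0), (3.0, 4.0), 210.0)  # about 23.1 degrees
```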
  • the robot determines the first position reference information of the robot in the image of the surrounding environment captured by the open-permission camera according to the value of the off-angle in the horizontal plane and the lateral viewing angle of the video device's open-permission camera.
  • the first position reference information is: a ratio of the amount laterally shifted from the center of the image to the left or right in the horizontal direction of the image; for an ideal pinhole camera this ratio is M = tan β / tan(θ/2), where β represents the value of the off-angle in the horizontal plane, θ represents the lateral viewing angle of the video device's open-permission camera, and M represents the first position reference information, expressed as a fraction of half the image width.
  • the robot determines the position of the robot in the image of the surrounding environment captured by the video device's open-permission camera based on the first position reference information: the robot shifts laterally from the center line of the image to the left or right according to the first position reference information to obtain a first reference line; then, taking the first reference line as a center, it shifts laterally by a preset value to the left to obtain a second reference line and by the preset value to the right to obtain a third reference line.
  • within the image area defined by the second reference line and the third reference line, the features of the robot are detected, and the position of the robot in the image is determined according to the detected features.
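  • the mapping from the off-angle to the reference lines can be sketched as follows (a hedged illustration: the pinhole relation M = tan β / tan(θ/2), the pixel sign convention, and the band half-width are assumptions):

```python
import math

def reference_columns(beta_deg, lateral_fov_deg, image_width_px, band_px):
    """Pixel columns of the first, second, and third reference lines.

    beta_deg: signed off-angle of the robot from the optical axis;
    lateral_fov_deg: lateral viewing angle theta of the camera;
    band_px: preset lateral offset defining the search band.
    """
    # First position reference information M: shift from the image center
    # as a fraction of half the image width (ideal pinhole model).
    m = math.tan(math.radians(beta_deg)) / math.tan(math.radians(lateral_fov_deg) / 2.0)
    half = image_width_px / 2.0
    first = half + m * half                               # first reference line
    second = max(0.0, first - band_px)                    # second reference line
    third = min(float(image_width_px), first + band_px)   # third reference line
    return first, second, third

# Example: beta = 23.1 deg, 90 deg lateral FOV, 1920 px frame, 80 px band.
l3, l4, l5 = reference_columns(23.1, 90.0, 1920, 80)
```

Feature detection for the robot then runs only between the second and third reference columns, which is what keeps the recognition cheap.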
  • in order to avoid recognition errors caused by other robots similar to robot A in the vicinity of robot A, robot A can be quickly and accurately identified in the following manners:
  • Mode a: robot A deliberately performs a specific motion, and within the image area defined by the second reference line and the third reference line, the robot performing that specific motion is recognized as robot A.
  • Mode b: robot A flashes or sequentially lights its head or body signal lights according to a random light-and-dark sequence and/or color sequence, and within the image area defined by the second reference line and the third reference line, the robot whose lighting pattern matches is recognized as robot A. For example, the robot lights 0.2 s of red light, goes dark for 0.1 s, lights 0.1 s of red light, and goes dark for 0.5 s; or lights 0.1 s of red light, 0.15 s of green light, 0.1 s of orange light, and so on. There are infinite possibilities for the pattern setting, as long as robot A can be distinguished.
  • Mode c: a combination of mode a and mode b.
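  • as a hedged sketch of mode b (the encoding of the pattern as (color, duration) pairs, the color set, and the matching tolerance are assumptions):

```python
import random

COLORS = ("red", "green", "orange", "off")

def make_blink_pattern(steps=6, durations=(0.1, 0.15, 0.2, 0.5)):
    """Random light-and-dark/color pattern for robot A to emit."""
    return [(random.choice(COLORS), random.choice(durations)) for _ in range(steps)]

def pattern_matches(expected, observed, time_tol=0.05):
    """True if a candidate's observed lighting matches the emitted pattern."""
    if len(expected) != len(observed):
        return False
    return all(ec == oc and abs(ed - od) <= time_tol
               for (ec, ed), (oc, od) in zip(expected, observed))

# Robot A emits `pattern`; among candidates between the second and third
# reference lines, only the one whose lights reproduce it is robot A.
pattern = make_blink_pattern()
```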
  • the physical location of the robot or of the video device can be determined by the Global Positioning System (GPS) or BeiDou module of the robot or the video device itself.
  • the robot or video device may also incorporate base station location information or Wireless-Fidelity (WIFI) location information. This embodiment does not limit the manner in which a robot or video device acquires a physical location.
  • the following describes the process of determining the positioning information of the robot by using the first implementation manner in combination with the actual scenario.
  • the positional relationship between the robot and the video device is shown in Fig. 3.
  • the top view of the imaging angle of the camera with the open permission of the video device is shown in Fig. 4, and the front view of each frame of the camera with the open permission of the video device is as shown in Fig. 5.
  • the robot is represented by the letter A and the video device is represented by the letter B.
  • robot A takes its physical position as the coordinate origin O, with due east as the positive direction of the abscissa axis (X) and due north as the positive direction of the ordinate axis (Y); robot A obtains the physical position (x, y) of video device B, and the distance between robot A and video device B is d.
  • the angle between the horizontal direction of the optical axis of the camera of the current video device B and the abscissa axis is α, and the value of the off-angle of robot A, in the view of video device B's open-permission camera, in the horizontal plane with respect to the optical axis is β, as can be seen from Fig. 4.
  • a straight line l1 passing through robot A in the lateral direction of the imaging picture intersects the horizontal projection of the optical axis at a point P; from the right triangle formed by l1, the optical axis, and the longitudinal boundary of the imaging picture, the first position reference information M is obtained from β and the lateral viewing angle.
  • robot A shifts laterally from the center line l2 of the image to the left (or right) according to the first position reference information to obtain the first reference line l3; taking the first reference line l3 as a center, the second reference line l4 is obtained by shifting a preset value laterally to the left, and the third reference line l5 is obtained by shifting the preset value laterally to the right, as shown in FIG. 5; l6 to l9 are the boundaries of a front view of a frame taken by the camera with open permission of the video device.
  • within the image area defined by the second reference line l4 and the third reference line l5, the features of robot A are detected, and based on the detected features of robot A, the position of robot A in the image is determined.
  • the robot may also, according to one or more of the elevation angle information, longitudinal viewing angle information, height information, and the like of the video device's open-permission camera, use a similar method to determine the longitudinal offset of the robot in the image of the surrounding environment captured by the video device, or determine that longitudinal offset in combination with ranging or the like; the lateral offset and the longitudinal offset are then combined to determine the position of the robot in the image, which will not be described here.
  • the robot detects the characteristics of the robot in the image of the surrounding environment captured by the video device, and determines the location of the feature of the robot as the image positioning information of the robot. Specifically, the robot locks the position of the robot in the image of the surrounding environment captured by the camera with the open authority of the video device through the image tracking technology, and uses the position as the image positioning information of the robot.
  • the features of the robot may be any one or any combination of: the contour features of the robot; the motion features of the robot (such as its current motion, e.g. looking up, or a deliberate action); and the light-and-dark features of the robot's signal lights (such as the signal lights flashing according to a random light-and-dark sequence, and/or a color sequence, and/or lighting up sequentially); other features may also be used.
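  • the patent does not name a specific image tracking technique; purely as an illustration, OpenCV's CSRT tracker (from opencv-contrib-python) could lock the position once the robot has been detected in the search band:

```python
import cv2  # requires opencv-contrib-python for the CSRT tracker

def lock_robot_position(frames, initial_bbox):
    """Track the robot across frames after its initial detection.

    frames: iterable of BGR frames from the open-permission camera;
    initial_bbox: (x, y, w, h) of the robot found in the search band.
    Returns the bounding boxes observed until the track is lost.
    """
    frames = iter(frames)
    tracker = cv2.TrackerCSRT_create()
    tracker.init(next(frames), initial_bbox)
    positions = [initial_bbox]
    for frame in frames:
        ok, bbox = tracker.update(frame)
        if not ok:
            break  # track lost: fall back to re-detection in the band
        positions.append(tuple(int(v) for v in bbox))
    return positions
```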
  • the robot can be set to acquire only those images captured by the video device that contain its own image; if the robot has strong processing capability and wants to obtain more surrounding environment information, the robot can be set to acquire all images captured by the video device. This embodiment does not restrict the type of images that the robot acquires from the video device.
  • Step 202 Expand the surrounding environment information of the robot according to the surrounding environment information of the robot monitored by each video device.
  • the surrounding environment information of the robot may be obtained by the robot itself directly capturing an image of the surrounding environment and using that image as the surrounding environment information, or by extracting specific environmental parameters from the image and using the extracted parameters as the surrounding environment information.
  • the surrounding environment information of the robot may be obtained by other means than the shooting by the robot, such as information recorded in an electronic map.
  • the image of the surrounding environment of the corresponding location may be selectively retrieved, thereby obtaining the surrounding environment information of the robot.
  • the environment information occluded from the robot by the obstacle is acquired according to the position of the robot in the image of the surrounding environment captured by the video device.
  • the present embodiment expands the surrounding environment information of the robot based on the images captured by at least one video device around the robot, thereby expanding the effective viewing angle of the robot and enhancing the robot's visual ability, so that the robot can obtain more comprehensive surrounding environment information.
  • the second embodiment of the present application relates to a method for determining environmental information.
  • This embodiment is a further improvement of the first embodiment.
  • the specific improvement is that other related steps are added before step 101 and step 102, respectively.
  • the embodiment includes steps 301 to 307, wherein steps 304 and 306 are substantially the same as steps 101 and 102 in the first embodiment, and are not described in detail herein.
  • Step 301 Determine that the line of sight of the robot is blocked by the obstacle.
  • only in this case is the method for determining the environment information executed, thereby avoiding the waste of resources caused by executing the method when the robot can obtain sufficient surrounding environment information by virtue of its own capability.
  • occlusion of the robot's line of sight by the obstacle is only a specific triggering method.
  • other triggering modes may also be adopted, for example, triggering a subsequent process according to a user operation.
  • Step 302 Establish a connection with each video device separately, and obtain a physical location of each video device.
  • the robot may be set to establish a point-to-point connection with the video device directly if, during travel, the distance to the video device is less than or equal to the preset value, or to establish a connection with the video device through the cloud, and to read the physical location of the connected video device (determined by means of GPS, BeiDou, or the like) and the parameters of its open-permission camera.
  • the physical location of the video device can also be determined by the robot itself, for example, by using base station positioning, WIFI positioning, and the like.
  • each robot or video device with video capture capability has a unified interface through which other robots or video devices can read some or all of its visual information, subject to its own situation and permissions.
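  • the patent does not specify this interface further; a minimal sketch of what such an interface might expose, with all names here being assumptions:

```python
from dataclasses import dataclass
from typing import Optional, Protocol

import numpy as np

@dataclass
class CameraParams:
    optical_axis_deg: float   # horizontal direction of the optical axis
    lateral_fov_deg: float    # lateral (horizontal) viewing angle
    elevation_deg: float      # elevation angle, for the longitudinal offset
    height_m: float           # mounting height

class VisionSource(Protocol):
    """Unified read interface of a robot or video device."""

    def physical_location(self) -> tuple:
        """(x, y) position from GPS/BeiDou, base-station, or WiFi fixes."""
        ...

    def open_camera_params(self) -> list:
        """CameraParams of each camera whose permission has been opened."""
        ...

    def latest_frame(self, camera_index: int = 0) -> Optional[np.ndarray]:
        """Most recent surrounding-environment image, if permitted."""
        ...
```

A protocol of this shape keeps the robot-side code independent of whether the source is another robot, a surveillance camera, or a cloud relay.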
  • Step 303 Determine that the robot is within the line of sight of the video device.
  • the robot determines whether it is within the line of sight of the video device's open-permission camera based on the physical location of the video device and the parameters of that camera, for example by checking whether it falls within the camera's field of view (FOV) given the optical axis direction and the viewing angle.
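  • a minimal sketch of this visibility check, reusing off_angle_deg from the earlier sketch (the symmetric horizontal FOV and the preset distance value are illustrative assumptions):

```python
import math

def robot_in_fov(robot_xy, device_xy, optical_axis_deg,
                 lateral_fov_deg, preset_distance):
    """True if the robot lies within the camera's range and horizontal FOV."""
    dx = device_xy[0] - robot_xy[0]
    dy = device_xy[1] - robot_xy[1]
    if math.hypot(dx, dy) > preset_distance:
        return False  # video device farther away than the preset value
    beta = off_angle_deg(robot_xy, device_xy, optical_axis_deg)
    return abs(beta) <= lateral_fov_deg / 2.0
```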
  • Step 304 Acquire an image of a surrounding environment captured by at least one video device.
  • Step 305 Determine that the image of the surrounding environment captured by the video device includes an image of the robot.
  • the line of sight of the video device may also be blocked, resulting in no image of the robot in the image of the surrounding environment captured by the video device, so that the robot cannot expand the surrounding environment information based on the image of the surrounding environment.
  • the robot can detect its own features in the image of the surrounding environment captured by the video device to determine whether it appears in that image.
  • the robot may determine whether the image of the surrounding environment captured by the video device includes its own image in other manners. This embodiment does not limit the specific determining manner.
  • if it is determined in step 303 that the image captured by the video device should include the image of the robot, but the image of the robot is not found in the image, it may be determined that the line of sight of the video device is blocked.
  • in this case, the following processing methods may be adopted: continue to obtain images of the surrounding environment from the video device for a period of time, checking whether the image of the robot is included, until an image containing the robot is obtained; or, if the image of the robot still cannot be recognized in the images of the surrounding environment captured by the video device after a preset time, stop acquiring images from that video device and discard the images already acquired from it.
  • the robot only processes the image of the surrounding environment containing its own image, which reduces the amount of calculation.
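  • this wait-or-discard policy can be sketched as follows (the frame source, the detector callable, and the timeout are placeholders):

```python
import time

def await_robot_in_view(get_frame, robot_visible, timeout_s=10.0, poll_s=0.5):
    """Poll a video device until the robot appears in its frames.

    get_frame: callable returning the device's latest frame (or None);
    robot_visible: callable frame -> bool (feature detection in the band).
    Returns frames worth keeping, or [] after the deadline (discard all).
    """
    kept = []
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        frame = get_frame()
        if frame is not None:
            kept.append(frame)
            if robot_visible(frame):
                return kept       # robot found: keep these frames
        time.sleep(poll_s)
    return []                     # timed out: stop and discard
```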
  • Step 306 Expand the surrounding environment information of the robot according to the image of the surrounding environment captured by each of the video devices.
  • Step 307 Adopt matching processing measures according to the extended surrounding environment information.
  • the robot follows a natural person and performs an instruction to detect surrounding risk factors.
  • the position diagram of the robot, the natural person and other objects is as shown in Fig. 7, and the vehicle D is directly in front of the robot A and the natural person C.
  • another object F in front of robot A blocks robot A's line of sight, and the surrounding environment information obtained from the images captured by robot A itself cannot reveal the danger that vehicle D is approaching from the front.
  • robot A can obtain, through the above steps, images of the surrounding environment photographed by other video devices (shown as E in the figure) that capture vehicle D, so that robot A can determine the danger ahead based on the expanded surrounding environment information.
  • the robot can also perform different processing measures according to other instructions (such as determining whether there are risk factors, searching for suspects, etc.), and will not be repeated here.
  • step 301, step 302, step 303, step 305, and step 307 are not mandatory; any one of the above steps, or any combination thereof, may be selectively performed.
  • this embodiment performs the environment information determining method only when the robot determines that its view of the surrounding environment is occluded, thereby avoiding the waste of resources caused by executing the method when the robot can obtain sufficient surrounding environment information by virtue of its own capability. Before images of the surrounding environment captured by the at least one video device are acquired, it is determined that the robot is within the line of sight of the video device, which prevents the robot from receiving too much useless information and wasting storage and computing resources. Moreover, only images including the robot are processed, which reduces the processing load.
  • the robot and the video device may be configured to cooperate to share the captured image.
  • while acquiring images from the video device, the robot may also send the images obtained by the robot itself to the video device, so as to expand the viewing angle of the video device; the process of expanding the viewing angle of the video device is the same as the process of expanding the viewing angle of the robot described above, and details are not described herein again.
  • the third embodiment of the present application relates to an environment information determining apparatus, as shown in FIG. 8, including an obtaining module 401 and a processing module 402.
  • the obtaining module 401 is configured to acquire an image of a surrounding environment captured by the at least one video device, where a distance between the video device and the robot does not exceed a preset value.
  • the processing module 402 is configured to expand the surrounding environment information of the robot according to the image of the surrounding environment captured by each of the video devices.
  • the present embodiment is an apparatus embodiment corresponding to the first embodiment, and this embodiment can be implemented in cooperation with the first embodiment.
  • the related technical details mentioned in the first embodiment are still effective in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related art details mentioned in the embodiment can also be applied to the first embodiment.
  • the fourth embodiment of the present application relates to an environment information determining apparatus. As shown in FIG. 9, this embodiment is a further improvement on the basis of the third embodiment; the specific improvement is that a determining module, a communication module, and an operating module are added.
  • the determining module 403 is configured to: before the acquiring module 401 acquires an image of the surrounding environment captured by the at least one video device, determine that the line of sight of the robot is blocked by the obstacle, and determine that the robot is within the line of sight of the video device;
  • the processing module 402 is further configured to: before the peripheral environment information of the robot is expanded according to an image of a surrounding environment captured by each of the video devices, determine that an image of the surrounding environment captured by the video device includes an image of the robot;
  • the communication module 404 is configured to establish a connection with each video device and obtain a physical location of each video device before acquiring the image of the surrounding environment captured by the at least one video device.
  • the operation module 405 is configured to adopt a matching processing measure according to the extended surrounding environment information.
  • this embodiment is an apparatus embodiment corresponding to the second embodiment, and this embodiment can be implemented in cooperation with the second embodiment.
  • the related technical details mentioned in the second embodiment are still effective in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related art details mentioned in the embodiment can also be applied to the second embodiment.
  • each module involved in the third embodiment and the fourth embodiment is a logic module.
  • a logic unit may be a physical unit, or a part of a physical unit, or may be implemented as a combination of multiple physical units.
  • the third embodiment and the fourth embodiment do not introduce units that are less closely related to solving the technical problem proposed by the present application, but this does not indicate that there are no other units in the third embodiment and the fourth embodiment.
  • a fifth embodiment of the present application relates to a robot, as shown in FIG. 10, including at least one processor 501 and a memory 502 communicably coupled to the at least one processor 501; the memory 502 stores instructions executable by the at least one processor 501, and the instructions are executed by the at least one processor 501 to enable the at least one processor 501 to perform the environment information determining method in the above embodiments.
  • the robot may further include a communication component.
  • the communication component receives and/or transmits data, such as an image of the surrounding environment captured by the video device, under the control of the processor 501.
  • the processor 501 is exemplified by a central processing unit (CPU), and the memory 502 is exemplified by a random access memory (RAM).
  • the processor 501 and the memory 502 may be connected by a bus or by other means; connection by a bus is taken as an example in FIG. 10.
  • the memory 502 is a non-volatile computer readable storage medium that can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules; for example, the program implementing the environment information determining method in the embodiments of the present application is stored in the memory 502.
  • the processor 501 implements the above-described environmental information determining method by executing non-volatile software programs, instructions, and modules stored in the memory 502, thereby performing various functional applications and data processing of the device.
  • the memory 502 can include a storage program area and a storage data area, wherein the storage program area can store an operating system, an application required for at least one function; the storage data area can store a list of options, and the like. Further, the memory may include a high speed random access memory, and may also include a nonvolatile memory such as at least one magnetic disk storage device, flash memory device, or other nonvolatile solid state storage device. In some embodiments, the memory 502 can optionally include memory remotely located relative to the processor 501 that can be connected to the external device over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
  • One or more modules are stored in the memory 502, and when executed by the one or more processors 501, perform the environmental information determination method in any of the above method embodiments.
  • a sixth embodiment of the present application is directed to a computer readable storage medium storing a computer program.
  • the environmental information determining method described in any of the above embodiments is implemented when the computer program is executed by the processor.
  • the foregoing storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Manipulator (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the technical field of robotics, and in particular to a method for determining environment information, an apparatus, a robot, and a storage medium. The method for determining environment information comprises the following steps: obtaining an image of a surrounding environment captured by at least one video device, where the distance between said video device and a robot does not exceed a preset value; and expanding the surrounding environment information of the robot according to the images of the surrounding environment captured by each video device.
PCT/CN2018/076503 2018-02-12 2018-02-12 Method for determining environment information, apparatus, robot and storage medium Ceased WO2019153345A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880001148.3A 2018-02-12 2018-02-12 Environment information determination method, apparatus, robot and storage medium
PCT/CN2018/076503 WO2019153345A1 (fr) 2018-02-12 2018-02-12 Method for determining environment information, apparatus, robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/076503 WO2019153345A1 (fr) 2018-02-12 2018-02-12 Method for determining environment information, apparatus, robot and storage medium

Publications (1)

Publication Number Publication Date
WO2019153345A1 true WO2019153345A1 (fr) 2019-08-15

Family

ID=64029058

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/076503 Ceased WO2019153345A1 (fr) 2018-02-12 2018-02-12 Method for determining environment information, apparatus, robot and storage medium

Country Status (2)

Country Link
CN (1) CN108781258B (fr)
WO (1) WO2019153345A1 (fr)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114666476B (zh) * 2022-03-15 2024-04-16 北京云迹科技股份有限公司 Robot intelligent video recording method, apparatus, device, and storage medium


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3841621B2 (ja) * 2000-07-13 2006-11-01 シャープ株式会社 Omnidirectional visual sensor
JP4933354B2 (ja) * 2007-06-08 2012-05-16 キヤノン株式会社 Information processing apparatus and information processing method
CN107076557A (zh) * 2016-06-07 2017-08-18 深圳市大疆创新科技有限公司 Movable robot recognition and positioning method, apparatus, and system, and movable robot

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102914303A (zh) * 2012-10-11 2013-02-06 江苏科技大学 Intelligent space system for multiple mobile robots and navigation information acquisition method
CN103389699A (zh) * 2013-05-09 2013-11-13 浙江大学 Operating method of a robot monitoring and autonomous mobile system based on distributed intelligent monitoring and control nodes
JP6187499B2 (ja) * 2015-02-19 2017-08-30 Jfeスチール株式会社 Self-position estimation method for an autonomous mobile robot, autonomous mobile robot, and landmark for self-position estimation
CN105307115A (zh) * 2015-08-07 2016-02-03 浙江海洋学院 Distributed vision positioning system and method based on mobile robots
CN107368074A (zh) * 2017-07-27 2017-11-21 南京理工大学 Autonomous robot navigation method based on video surveillance

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494848A (zh) * 2021-12-21 2022-05-13 重庆特斯联智慧科技股份有限公司 Robot line-of-sight path determination method and apparatus
CN114494848B (zh) * 2021-12-21 2024-04-16 重庆特斯联智慧科技股份有限公司 Robot line-of-sight path determination method and apparatus

Also Published As

Publication number Publication date
CN108781258B (zh) 2021-05-28
CN108781258A (zh) 2018-11-09

Similar Documents

Publication Publication Date Title
US10652452B2 (en) Method for automatic focus and PTZ camera
CN108986164B Image-based position detection method, apparatus, device, and storage medium
TWI709110B Camera calibration method and apparatus, and electronic device
WO2021023106A1 Target recognition method and apparatus, and photographing device
CN103726879B Method for using a camera to automatically capture mine seismic collapse and record and raise an alarm in time
CN110022444B Panoramic photographing method for an unmanned aerial vehicle and unmanned aerial vehicle using the same
WO2018098824A1 Photographing control method and apparatus, and control device
US9986155B2 (en) Image capturing method, panorama image generating method and electronic apparatus
US20180001480A1 (en) Robot control using gestures
CN110660186A Method and apparatus for identifying a target object in a video image based on radar signals
WO2020237565A1 Target tracking method and device, movable platform, and storage medium
WO2019019819A1 Mobile electronic device and method for processing tasks in a task region
KR102186875B1 Motion tracking system and method
WO2018233217A1 Image processing method and device, and augmented reality apparatus
CN112911249B Target object tracking method and apparatus, storage medium, and electronic apparatus
WO2021065265A1 Size estimation device, size estimation method, and recording medium
CN107527368A Two-dimensional-code-based three-dimensional space attitude positioning method and apparatus
KR20190131320A Method, system, and non-transitory computer-readable recording medium for calculating spatial coordinates of a region of interest
JP2017063287A Information processing apparatus, information processing method, and program therefor
TWI774543B Obstacle detection method
CN110602376A Snapshot method and apparatus, and camera
WO2019153345A1 Method for determining environment information, apparatus, robot, and storage medium
KR101348681B1 Multi-sensor image alignment method for an image detection system and multi-sensor image alignment apparatus using the same
CN114794992B Charging dock, robot recharging method, and sweeping robot
WO2019100216A1 3D modeling method, electronic device, storage medium, and program product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18905313

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18905313

Country of ref document: EP

Kind code of ref document: A1