WO2019240208A1 - Robot, robot control method, and program
- Publication number
- WO2019240208A1 (PCT/JP2019/023433)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- marker
- robot
- unit
- event
- movement
- Prior art date
- Legal status
- Ceased
Classifications
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
Definitions
- the present invention relates to a robot, a control method thereof, and a program.
- there is known a robot that captures images with a camera while moving autonomously inside a house, recognizes the indoor space from the captured images, and sets a movement route based on the recognized space in order to move indoors.
- the robot movement route is set by a user creating a map that defines the route along which the robot moves.
- the robot can move along a route determined based on the created map (see, for example, Patent Document 1).
- because an autonomous behavior robot can move freely within the space, the robot may, for example, enter a range that is dangerous for the user or for the robot itself as the user or the robot moves.
- the present invention has been made in view of the above circumstances, and an object of one embodiment is to provide a robot, a control method thereof, and a program that allow a user to control the movement of the robot.
- the robot according to the embodiment includes a moving mechanism, a photographing unit that photographs a surrounding space, a marker recognition unit that recognizes a predetermined marker included in a photographed image photographed by the photographing unit, and a movement control unit that controls movement by the moving mechanism based on the recognized marker.
- the movement control unit prohibits entry by the movement based on the recognized marker.
- the movement control unit limits the speed of the movement based on the recognized marker.
- the movement control unit controls the movement based on the recognized installation position of the marker.
- the movement control unit sets a limit range based on the installation position, and limits the movement in the limit range.
- the movement control unit sets a predetermined range on the back side of the installation position or around the installation position as the limit range.
- the movement control unit restricts the movement based on the plurality of recognized installation positions.
- the movement control unit restricts the movement based on a line segment connecting the recognized first marker installation position and the recognized second marker installation position.
- the movement control unit controls the movement based on the recognized type of the marker.
- the movement control unit controls the movement based on the recorded marker.
- the movement control unit controls the movement based on the recorded marker when the marker is not recognized in the captured image.
- a spatial data generation unit that generates spatial data recognizing the space, and based on the generated spatial data
- a visualization data generation unit that generates visualization data that visualizes the spatial elements included in the space
- a visualization data provision unit that provides the generated visualization data to a user terminal.
- the robot further includes a designation acquisition unit that acquires, from the user terminal, designation of an area included in the provided visualization data, and the spatial data generation unit re-recognizes the space based on a photographed image re-photographed in the area related to the acquired designation.
- the robot further includes a state information acquisition unit that acquires state information indicating a state of a movement destination in the movement, and the movement control unit further controls the movement based on the state information.
- the robot includes a marker information storage unit that stores the position of the marker, a first event detection unit that detects a first event, a second event detection unit that detects a second event, and an action execution unit that executes an action; when the first event is detected, the robot moves to the vicinity of the stored position of the marker, and when the second event is detected, the robot executes an action corresponding to at least one of the marker, the first event, and the second event.
- the robot control method includes a shooting step of shooting a surrounding space, a marker recognition step of recognizing a predetermined marker included in the image shot in the shooting step, and a movement control step of controlling movement by a movement mechanism based on the recognized marker.
- the robot control program realizes a photographing function for photographing a surrounding space, a marker recognition function for recognizing a predetermined marker included in a photographed image photographed by the photographing function, and a movement control function for controlling movement by a movement mechanism based on the recognized marker.
- FIG. 3 is a block diagram illustrating an example of a software configuration of the autonomous behavior robot in the first embodiment.
- FIG. 2 is a block diagram illustrating an example of a hardware configuration of the autonomous behavior robot in the first embodiment.
- FIG. 6 is a flowchart illustrating an example of an operation of the autonomous behavior robot control program according to the first embodiment.
- It is a flowchart illustrating another example of the operation.
- It is a diagram illustrating an example of the display of the user terminal in the first embodiment.
- It is a diagram illustrating an example of the display of the user terminal in the first embodiment.
- It is a diagram illustrating an example of the display of the user terminal in the first embodiment.
- FIG. 12A is a flowchart illustrating a processing procedure in the marker registration phase of the second embodiment.
- FIG. 12B is a flowchart illustrating a processing procedure in the action phase of the second embodiment.
- FIG. 13A is a flowchart illustrating a processing procedure in the marker registration phase of the first embodiment.
- FIG. 13B is a flowchart illustrating a processing procedure in the action phase according to the first embodiment.
- FIG. 14A is a flowchart illustrating a processing procedure in the marker registration phase of the second embodiment.
- FIG. 14B is a flowchart illustrating a processing procedure in the action phase of the second embodiment.
- FIG. 15A is a flowchart illustrating a processing procedure in the marker registration phase of the third embodiment.
- FIG. 15B is a flowchart illustrating a processing procedure in the action phase of the third embodiment.
- FIG. 16A is a flowchart illustrating a processing procedure in the marker registration phase of the fourth embodiment.
- FIG. 16B is a flowchart illustrating a processing procedure in the action phase of the fourth embodiment.
- FIG. 17A is a flowchart illustrating a processing procedure in the marker registration phase of the fifth embodiment.
- FIG. 17B is a flowchart illustrating a processing procedure in the action phase of the fifth embodiment.
- FIG. 18A is a flowchart illustrating a processing procedure in the marker registration phase of the sixth embodiment.
- FIG. 18B is a flowchart illustrating a processing procedure in the action phase of the sixth embodiment.
- FIG. 1 is a block diagram illustrating an example of a software configuration of the autonomous behavior robot 1 according to the embodiment.
- the autonomous behavior type robot 1 includes a data providing device 10 and a robot 2.
- the data providing apparatus 10 and the robot 2 are connected by communication and function as the autonomous behavior type robot 1.
- the robot 2 is a mobile robot having each of the imaging unit 21, the marker recognition unit 22, the movement control unit 23, the state information acquisition unit 24, and the movement mechanism 29.
- the data providing apparatus 10 includes functional units such as a first communication control unit 11, a point cloud data generation unit 12, a spatial data generation unit 13, a visualization data generation unit 14, an imaging target recognition unit 15, and a second communication control unit 16.
- the first communication control unit 11 includes functional units such as a captured image acquisition unit 111, a spatial data providing unit 112, and an instruction unit 113.
- the second communication control unit 16 includes functional units such as a visualization data providing unit 161 and a designation acquiring unit 162.
- Each functional unit of the data providing device 10 of the autonomous behavior robot 1 in the present embodiment will be described as a functional module realized by a data providing program (software) that controls the data providing device 10.
- similarly, the functional units of the marker recognition unit 22, the movement control unit 23, and the state information acquisition unit 24 of the robot 2 are assumed to be functional modules realized by a program for controlling the robot 2 in the autonomous behavior robot 1.
- the data providing apparatus 10 is an apparatus that can execute a part of the functions of the autonomous behavior robot 1.
- for example, the data providing apparatus 10 is an edge server that is installed in a place physically close to the robot 2, communicates with the robot 2, and shares the processing load.
- in the present embodiment, the autonomous behavior robot 1 is described as being configured by the data providing device 10 and the robot 2; however, the functions of the data providing device 10 may be included in the functions of the robot 2.
- the robot 2 is a robot that can move based on the spatial data, and is a mode of the robot in which the movement range is determined based on the spatial data.
- the data providing apparatus 10 may be configured with one casing or may be configured with a plurality of casings.
- the first communication control unit 11 controls a communication function with the robot 2.
- the communication method with the robot 2 is arbitrary, and for example, wireless LAN (Local Area Network), Bluetooth (registered trademark), near field communication such as infrared communication, wired communication, or the like can be used.
- Each function of the captured image acquisition unit 111, the spatial data providing unit 112, and the instruction unit 113 included in the first communication control unit 11 communicates with the robot 2 using a communication function controlled by the first communication control unit 11.
- the captured image acquisition unit 111 acquires a captured image captured by the imaging unit 21 of the robot 2.
- the imaging unit 21 is provided in the robot 2 and can change the imaging range as the robot 2 moves.
- the imaging unit 21, the marker recognition unit 22, the movement control unit 23, the state information acquisition unit 24, and the movement mechanism 29 of the robot 2 will be described.
- the photographing unit 21 can be composed of one or a plurality of cameras.
- the photographing unit 21 can three-dimensionally photograph a spatial element that is a photographing target from different photographing angles.
- the imaging unit 21 is a video camera using an image sensor such as a CCD (Charge-Coupled Device) sensor or a CMOS (Complementary Metal Oxide Semiconductor) sensor.
- the shape of the spatial element can be measured by photographing the spatial element with two cameras (stereo cameras).
- the photographing unit 21 may be a camera using ToF (Time of Flight) technology.
- the shape of the spatial element can be measured by irradiating the spatial element with modulated infrared light and measuring the distance to the spatial element.
- the photographing unit 21 may be a camera using a structured light.
- structured light is a technique that projects light in a stripe or lattice pattern onto a spatial element.
- the imaging unit 21 can measure the shape of the spatial element from the distortion of the projected pattern by imaging the spatial element from a different angle from the structured light.
- the imaging unit 21 may be any one of these cameras or a combination of two or more.
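- As a rough, non-authoritative illustration of how a stereo pair can recover the shape of a spatial element, the sketch below converts a disparity map to depth under an assumed pinhole model (the focal length, baseline, and disparity values are hypothetical, not parameters defined in this document):

```python
# Minimal sketch: depth from stereo disparity (assumed pinhole model).
import numpy as np

def depth_from_disparity(disparity_px: np.ndarray,
                         focal_px: float = 700.0,
                         baseline_m: float = 0.06) -> np.ndarray:
    """Convert a disparity map (pixels) to a depth map (meters)."""
    depth = np.full_like(disparity_px, np.inf, dtype=float)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

# Example: a 2x2 disparity map from the left/right cameras.
disp = np.array([[35.0, 30.0], [0.0, 14.0]])
print(depth_from_disparity(disp))  # depth in meters; zero disparity -> inf
```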
- the photographing unit 21 is attached to the robot 2 and moves in accordance with the movement of the robot 2.
- the photographing unit 21 may be installed separately from the robot 2.
- the captured image captured by the capturing unit 21 is provided to the captured image acquisition unit 111 in a communication method corresponding to the first communication control unit 11.
- the captured image is temporarily stored in the storage unit of the robot 2, and the captured image acquisition unit 111 acquires the captured image temporarily stored in real time or at a predetermined communication interval.
- the marker recognition unit 22 recognizes a predetermined marker included in the photographed image photographed by the photographing unit 21.
- the marker is a spatial element that indicates a restriction on movement of the robot 2.
- the marker is a shape, pattern or color of an article recognizable from a captured image, a character or a figure attached to the article, or a combination thereof.
- the marker may be a planar article or a three-dimensional article.
- the marker is, for example, a seal or paper on which a two-dimensional code or a specific color combination or shape is printed.
- the marker may be a figurine or a rug having a specific color or shape.
- the movement of the robot can be restricted by the user's intention without impairing the atmosphere of the room.
- the movement restriction range can be grasped intuitively, and the restriction range can be easily changed.
- the marker is set by the user, for example, by being affixed to a wall or furniture, or placed on the floor.
- the marker recognizing unit 22 can recognize that the movement of the robot 2 is restricted by recognizing the marker image included in the captured image.
- when the marker is flat, it can be attached to a wall or furniture, so it can be installed in a space-saving manner.
- when the marker is planar and its plane is photographed from an oblique direction (when the photographing angle is small), the marker in the photographed image is distorted, making recognition difficult.
- conversely, when the plane of the marker is photographed head-on (when the photographing angle is large), the marker is easily recognized. Therefore, for example, when a marker is attached in a hallway, the photographing angle is small at positions far from the marker, so the robot 2 does not recognize it; when the robot moves along the corridor and approaches the marker, the photographing angle increases and the marker is recognized.
- in this way, the position at which the robot recognizes the marker is close to the marker installation position (described later), so the robot can accurately grasp the installation position.
- when the marker is three-dimensional, it can easily be installed in the middle of a room or the like, and it can be recognized from various photographing angles. Therefore, by installing a three-dimensional marker, the robot 2 can be made to recognize the marker even at a position far from the marker installation position.
- the marker recognition unit 22 stores the visual characteristics of the marker in advance.
- the marker recognition unit 22 stores in advance a two-dimensional code or a three-dimensional object to be recognized as a marker.
- the marker recognizing unit 22 may recognize an object registered in advance by the user as a marker. For example, when the user registers a flower pot photographed with the camera of the user terminal 3 as a marker, the flower pot installed in the hallway or the like can be recognized as the marker. The user can therefore use, as a marker, an object that does not look out of place where it is installed.
- the marker recognizing unit 22 may recognize a spatial element other than an object as a marker.
- the marker recognizing unit 22 may recognize a user's gesture as a marker, such as the user crossing their arms in front of the body.
- the marker recognizing unit 22 recognizes a position where the user has made a gesture as a marker installation position.
- the marker recognition unit 22 recognizes the position where the marker is attached or installed (hereinafter referred to as “installation position”).
- the installation position is a position in the space where the marker in the spatial data is installed.
- the installation position can be recognized, for example, based on the distance between the current position of the robot 2 and the photographed marker in the spatial data recognized by the robot 2. For example, when the size of the marker is known in advance, the marker recognition unit 22 calculates the distance between the robot 2 and the marker from the size of the marker image included in the captured image, and can recognize the marker installation position based on that distance, the current position of the robot 2, and the imaging direction (orientation) (not shown).
- the installation position may also be recognized from the relative position of the marker to a spatial element whose position in the space is already known.
- the marker recognition unit 22 may recognize the installation position from the relative position of the marker and the door.
- the installation position can be recognized based on the photographing depth of the marker photographed by the depth camera.
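- A minimal sketch of the size-based distance estimate described above, assuming a pinhole camera and a marker of known physical size (the focal length and all numeric values are illustrative assumptions):

```python
import math

def estimate_marker_position(robot_xy, heading_rad,
                             marker_size_m, marker_size_px,
                             focal_px=700.0):
    """Estimate the marker installation position in map coordinates.

    distance ~ focal_px * real_size / apparent_size (pinhole approximation);
    the position is then projected along the robot's shooting direction.
    """
    distance = focal_px * marker_size_m / marker_size_px
    x = robot_xy[0] + distance * math.cos(heading_rad)
    y = robot_xy[1] + distance * math.sin(heading_rad)
    return (x, y)

# Robot at (1.0, 2.0) facing +x, marker 0.10 m wide appearing 50 px wide.
print(estimate_marker_position((1.0, 2.0), 0.0, 0.10, 50.0))  # -> (2.4, 2.0)
```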
- the marker recognition unit 22 may recognize a plurality of markers included in the captured image. For example, when it is desired to set a range for restricting movement in a straight line, the user can install a marker composed of a pair of two markers including a first marker and a second marker.
- the marker recognizing unit 22 may recognize the position of a line segment (straight line or curve) connecting the start point and the end point by recognizing the installation position (start point) of the first marker and the installation position (end point) of the second marker.
- the marker recognition unit 22 can recognize the position of the line segment in the spatial data by mapping the positions of the first marker and the second marker to the spatial data.
- the user can easily set a line segment that restricts movement by installing a marker at a predetermined position.
- Three or more markers may be installed. For example, when there are three or more markers, the marker recognition unit 22 can recognize a polygonal line or a polygon (area) based on the installation positions of the respective markers.
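- One way to treat a pair of markers as a virtual boundary is to test whether a planned move crosses the segment between them, as in the 2D geometry sketch below (an assumed helper, not the claimed implementation):

```python
def _ccw(a, b, c):
    """True if points a, b, c are in counter-clockwise order."""
    return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

def crosses_boundary(move_from, move_to, marker1, marker2):
    """True if the move from move_from to move_to crosses the segment
    connecting the first and second marker installation positions."""
    return (_ccw(move_from, marker1, marker2) != _ccw(move_to, marker1, marker2)
            and _ccw(move_from, move_to, marker1) != _ccw(move_from, move_to, marker2))

# Markers placed across a corridor at (0, 0) and (0, 2).
print(crosses_boundary((-1.0, 1.0), (1.0, 1.0), (0.0, 0.0), (0.0, 2.0)))   # True
print(crosses_boundary((-1.0, 1.0), (-0.5, 1.0), (0.0, 0.0), (0.0, 2.0)))  # False
```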
- the movement control unit 23 restricts movement based on the marker installation position recognized by the marker recognition unit 22.
- the movement control unit 23 includes a restriction range setting unit 231 that sets a restriction range for restricting movement according to the recognized marker installation position.
- the movement control unit 23 restricts the movement of the robot 2 with respect to the restriction range set in the restriction range setting unit 231.
- the marker installation position is a point, line, surface, or space set based on the installation position of one or more markers.
- the restriction range setting unit 231 can set the restriction range by recognizing the marker installation position as coordinate data in the spatial data, for example.
- the restriction range setting unit 231 may set a restriction range based on the installation position and restrict movement in the restriction range.
- based on the installation position of a single marker, the restriction range setting unit 231 can set, as a restriction range that restricts movement, a line segment that divides a space element such as a corridor, or the area of a circle or a sphere centered on the marker. That is, the restriction range setting unit 231 sets the restriction range by placing a geometrically determined range such as a rectangle, a circle, or a straight line in the space based on the marker installation position. For example, if the range is circular, the restriction range setting unit 231 may set a circular range having a predetermined radius around the marker installation position as the restriction range.
- the limit range setting unit 231 may determine the rectangular limit range by arranging the marker installation position so as to be at the center of one side of the rectangle.
- the limited range is, for example, about 1 to 3 m from the marker, and is a range narrower than the range in which the marker recognition unit 22 can recognize the marker.
- the limit range may be determined in advance for each marker, or may be arbitrarily adjusted by the user by using an application described later.
- the limited range setting unit 231 may also set a line, a surface, or a space defined by a plurality of markers as the limited range. For example, with reference to the position of the robot 2 at the time the marker recognition unit 22 recognizes the marker, the limited range setting unit 231 may set a predetermined range on the far side of the marker installation position, or around the installation position, as the limited range.
- for example, when the limit range setting unit 231 sets the limit range as a line, the movement control unit 23 restricts the movement of the robot 2 so that it does not cross that line.
- the limit range setting unit 231 may set the limit range based on a rule determined in advance with the marker installation position as a reference.
- the restriction range setting unit 231 may recognize a spatial feature around the marker and set a restriction range according to the spatial feature.
- the limit range setting unit 231 may recognize the floor plan and set the limit range according to the floor plan. For example, if the marker is near the entrance of the passage (within a predetermined range), the restriction range setting unit 231 may set the passage as the restriction range.
- the limited range setting unit 231 may set a circular range centered on the marker as the limited range.
- if the marker is attached to a wall and there is no door in the vicinity, the limit range setting unit 231 may set a predetermined range from the wall as the limit range.
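- As a simple illustration of placing a geometrically determined range around the marker installation position and keeping the robot outside it, the sketch below uses a circular limit range; the 1.5 m radius is only an assumed example consistent with the 1 to 3 m figure above:

```python
import math

class CircularLimitRange:
    """Circular restriction range centered on a marker installation position."""

    def __init__(self, center_xy, radius_m=1.5):
        self.center = center_xy
        self.radius = radius_m

    def contains(self, point_xy):
        return math.dist(self.center, point_xy) <= self.radius

def is_move_allowed(destination_xy, limit_ranges):
    """A destination is allowed only if it lies outside every limit range."""
    return not any(r.contains(destination_xy) for r in limit_ranges)

ranges = [CircularLimitRange((3.0, 0.0))]
print(is_move_allowed((3.5, 0.5), ranges))  # False: inside the 1.5 m circle
print(is_move_allowed((6.0, 0.0), ranges))  # True
```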
- a type may be set for the marker. For example, marker types may include a marker that restricts movement only while it can be visually recognized (referred to as a "temporary marker") and a marker whose position is stored so that movement remains restricted even if the marker can no longer be visually recognized (referred to as a "permanent marker").
- when a permanent marker is visually recognized, the robot 2 stores the position of the marker in a storage unit (not shown), and even when the marker later disappears from that location, movement is restricted based on the stored marker position. In contrast, when a temporary marker is visually recognized, the robot 2 does not store its position, so the restriction range is canceled once the temporary marker is removed.
- the marker recognition unit 22 recognizes the set marker type.
- the types of markers can be determined in advance by, for example, marker shapes, patterns, colors, characters or figures, or a combination thereof. In addition, the types of markers may be classified according to the number of markers installed, the marker installation method (for example, installation by changing the vertical direction of the markers), or the like.
- for example, when the marker includes a two-dimensional code, information specifying the type of the marker can be written into the two-dimensional code.
- the marker recognizing unit 22 can identify the temporary marker or the permanent marker by reading the two-dimensional code.
- identification information (referred to as “marker identification information”) for identifying a marker may be written in the two-dimensional code.
- the marker recognizing unit 22 reads the marker identification information from the two-dimensional code, refers to a table prepared in advance, and identifies the type of marker associated with the marker identification information.
- when incidental information is attached to the marker itself, as with a two-dimensional code, the marker recognizing unit 22 may be configured to read that information directly from the marker.
- the marker recognizing unit 22 may be configured to read the marker identification information from the marker and read the incidental information by referring to the table using the marker identification information as a key.
- in the following description, the marker recognizing unit 22 has a marker information storage unit (not shown) that holds the incidental information of each marker in association with its marker identification information, and is configured to acquire the incidental information for each marker by referring to the marker information storage unit.
- the marker recognition unit 22 may read the marker identification information from the two-dimensional code, or may acquire the marker identification information by specifying the marker by general object recognition.
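- A sketch of the table-lookup idea described above, where marker identification information read from the two-dimensional code keys into a marker information store (the field names and values are hypothetical):

```python
# Hypothetical marker information storage: marker identification information
# maps to incidental information such as the marker type.
MARKER_INFO = {
    "M001": {"type": "permanent", "limit_radius_m": 2.0},
    "M002": {"type": "temporary", "limit_radius_m": 1.0},
}

def lookup_marker(marker_id: str) -> dict:
    """Return the incidental information for a recognized marker, if registered."""
    return MARKER_INFO.get(marker_id, {"type": "temporary", "limit_radius_m": 1.0})

def should_remember_position(marker_id: str) -> bool:
    """Permanent markers keep restricting movement even after removal,
    so their installation positions are stored; temporary markers are not."""
    return lookup_marker(marker_id)["type"] == "permanent"

print(should_remember_position("M001"))  # True
print(should_remember_position("M002"))  # False
```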
- the behavior of the robot 2 can also be restricted (referred to as a "behavior restriction"). For example, when the robot 2 should not enter the dressing room while the bathroom is in use, a prohibited entry time zone is associated with the marker. When the robot 2 should not enter the kitchen while the kitchen is in use, a condition that prohibits entry when a person is present (when a person is detected) is associated with the marker.
- An instruction for restricting the behavior of the robot may be associated as incidental information. That is, the supplementary information may include information for specifying the type of marker or information for defining the behavior of the robot in the limited range.
- the information that regulates behavior is information for restricting the robot's behavior; when movement within the restriction range is prohibited, information specifying a prohibited time zone may be included in addition to the prohibition itself.
- when movement within the restriction range is conditionally permitted, the incidental information may include, in addition to the permission, information specifying the conditions (referred to as "behavioral conditions").
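- Such incidental information could be evaluated along the following lines; the field names and the example conditions (a prohibited time zone, a person-detected condition) are illustrative assumptions, not part of the claims:

```python
from datetime import time

def entry_allowed(incidental_info: dict, now: time, person_detected: bool) -> bool:
    """Evaluate behavior-restricting incidental information for a limit range."""
    # Prohibited time zone, e.g. while the bathroom is in use.
    zone = incidental_info.get("prohibited_time_zone")
    if zone and zone[0] <= now <= zone[1]:
        return False
    # Conditional prohibition, e.g. no entry while a person is in the kitchen.
    if incidental_info.get("prohibit_when_person") and person_detected:
        return False
    return True

dressing_room = {"prohibited_time_zone": (time(19, 0), time(22, 0))}
kitchen = {"prohibit_when_person": True}

print(entry_allowed(dressing_room, time(20, 30), person_detected=False))  # False
print(entry_allowed(kitchen, time(12, 0), person_detected=True))          # False
print(entry_allowed(kitchen, time(12, 0), person_detected=False))         # True
```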
- the movement control unit 23 may restrict movement based on the stored installation position even when the marker recognition unit 22 no longer recognizes the marker. For example, when the marker is a permanent marker that sets a permanent restriction, the movement control unit 23 continues to restrict movement based on the marker even when the marker has been removed and can no longer be recognized in the captured image. Note that markers set in the limit range setting unit 231 may be edited, for example deleted, repositioned, or given a different command, according to an instruction from the user terminal 3.
- the user terminal 3 may have an application program (not shown) (hereinafter referred to as “application”) that can edit the marker.
- the application may display the marker so as to be selectable on the display screen of the user terminal 3 described above, and edit the marker selected by the user.
- the application may change the marker to a permanent marker.
- the user can cancel the restriction range by removing an installed temporary marker, and by changing a temporary marker to a permanent marker with the app, the restriction range can be maintained even after the installed marker is removed.
- the application may have a registration function for registering a spatial element photographed by the camera of the user terminal 3 described above as a marker. Further, the application may have a function of setting or changing the content of the restriction range adjustment or the action restriction described above.
- the app may be connected to a marker information storage unit included in the robot 2 and have a function of referring to and updating behavior restrictions and behavior conditions for each marker.
- the restriction on the movement of the robot 2 set by the marker can coexist with the setting of the restriction range by the state information described later.
- for example, an entry prohibition area for the corridor can be set by installing a marker, while entry into the dressing room can be prohibited based on the state information.
- the contents of restriction in the area may be set based on the state information.
- the state information acquisition unit 24 acquires state information indicating the state of the movement destination in movement.
- the state information is information for restricting the movement of the robot 2 according to the state of the movement destination detected by the robot 2.
- the destination state may include, for example, the presence or absence of a person, the presence or absence of a pet, the temperature and humidity of a room, the locking state of a door, or the lighting state of the lights.
- the state information is information for limiting the moving speed in the area (range) when, for example, a person is detected in the moving range.
- the state information may, for example, prohibit entry into the area on a predetermined day of the week or at a predetermined time, prohibit movement through a door when the door is locked, or prohibit photographing in an area where the lights are on.
- State information can be provided in conjunction with spatial data.
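- As a sketch of how state information might modulate movement, for example limiting speed when a person is detected at the movement destination (the speed values and field names are assumed for illustration):

```python
def plan_speed(state_info: dict, normal_speed_mps: float = 0.5) -> float:
    """Return an allowed speed for the destination area based on state information.

    Returns 0.0 when entry is prohibited (e.g. the door is locked)."""
    if state_info.get("door_locked"):
        return 0.0                         # movement through the door is prohibited
    if state_info.get("person_present"):
        return min(normal_speed_mps, 0.2)  # slow down when a person is detected
    return normal_speed_mps

print(plan_speed({"person_present": True}))  # 0.2
print(plan_speed({"door_locked": True}))     # 0.0
print(plan_speed({}))                        # 0.5
```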
- the spatial data providing unit 112 provides the spatial data generated by the spatial data generating unit 13 to the robot 2.
- Spatial data is data obtained by converting spatial elements recognized by the robot in the space where the robot 2 exists.
- the robot 2 can move within a range determined by the spatial data. That is, the spatial data functions as a map for determining the movable range in the robot 2.
- the robot 2 is provided with spatial data from the spatial data providing unit 112.
- the spatial data can include position data of spatial elements that the robot 2 cannot pass, such as walls, furniture, electrical appliances, and steps.
- the robot 2 can determine whether or not the robot 2 can move based on the provided spatial data. Further, the robot 2 may be able to recognize whether or not an ungenerated range is included in the spatial data. Whether or not an ungenerated range is included can be determined, for example, based on whether or not a space having no spatial element is included in part of the spatial data.
- the instruction unit 113 instructs the robot 2 to shoot based on the spatial data generated by the spatial data generation unit 13. Since the spatial data generation unit 13 creates spatial data from the captured images acquired by the captured image acquisition unit 111, the spatial data may, for example, contain ungenerated portions corresponding to parts of the room that have not yet been captured. In addition, if a captured image is unclear, the created spatial data may contain noise and thus inaccurate portions. When the spatial data contains an ungenerated portion, the instruction unit 113 may issue an imaging instruction for that portion, and when it contains an inaccurate portion, the instruction unit 113 may issue an imaging instruction for that portion. The instruction unit 113 may also issue imaging instructions on its own initiative based on the spatial data.
- the instruction unit 113 may instruct photographing based on an explicit instruction from a user who has confirmed visualization data (described later) generated based on the spatial data.
- the user can recognize the space and generate the spatial data by designating the area included in the visualization data and instructing the robot 2 to perform photographing.
- the point cloud data generation unit 12 generates three-dimensional point cloud data of spatial elements based on the captured image acquired by the captured image acquisition unit 111.
- the point cloud data generation unit 12 generates point cloud data by converting a spatial element included in the captured image into a three-dimensional set of points in a predetermined space.
- the spatial elements are room walls, steps, doors, furniture placed in the room, home appliances, luggage, houseplants, and the like. Since the point cloud data generation unit 12 generates point cloud data based on the captured image of the spatial element, the point cloud data represents the shape of the surface of the captured spatial element.
- the photographed image is generated by the photographing unit 21 of the robot 2 photographing at a predetermined photographing angle at a predetermined photographing position.
- the spatial data generation unit 13 generates spatial data that determines the movable range of the robot 2 based on the point cloud data of the spatial elements generated by the point cloud data generation unit 12. Since the spatial data is generated based on the point cloud data in the space, the spatial element included in the spatial data also has three-dimensional coordinate information.
- the coordinate information may include point position, length (including height), area, or volume information.
- the robot 2 can determine the movable range based on the position information of the spatial elements included in the generated spatial data. For example, when the robot 2 has a moving mechanism 29 that moves horizontally on the floor surface, the robot 2 excludes from the movable range any location where a step from the floor surface, which is a spatial element in the spatial data, is a predetermined height or more (for example, 1 cm or more).
- similarly, in the spatial data, the robot 2 determines as the movable range those ranges in which the gap between spatial elements such as a wall and furniture is a predetermined width or more (for example, 40 cm or more), taking into account the clearance relative to its own width.
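- The step-height and clearance rules above can be expressed as simple predicates over the spatial data; the 1 cm and 40 cm thresholds come from the examples in the text, while the function itself is only an illustrative sketch:

```python
MAX_STEP_M = 0.01        # steps of 1 cm or more are treated as impassable
MIN_CLEARANCE_M = 0.40   # gaps narrower than 40 cm are treated as impassable

def cell_is_movable(step_height_m: float, gap_width_m: float) -> bool:
    """Decide whether a location in the spatial data belongs to the movable range."""
    return step_height_m < MAX_STEP_M and gap_width_m >= MIN_CLEARANCE_M

print(cell_is_movable(step_height_m=0.005, gap_width_m=0.60))  # True
print(cell_is_movable(step_height_m=0.020, gap_width_m=0.60))  # False: step too high
print(cell_is_movable(step_height_m=0.000, gap_width_m=0.30))  # False: gap too narrow
```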
- the spatial data generation unit 13 may set attribute information for a predetermined area in the space.
- the attribute information is information that defines the movement condition of the robot 2 for a predetermined area.
- the movement condition is, for example, a condition that defines a clearance from a space element that the robot 2 can move.
- attribute information in which the clearance for a predetermined area is 5 cm or more can be set.
- information for restricting the movement of the robot may be set.
- the movement restriction is, for example, movement speed restriction or entry prohibition.
- attribute information that reduces the moving speed of the robot 2 may be set in an area where the clearance is small or an area where people exist.
- the movement condition set in the attribute information may be determined by the floor material of the area.
- for example, the attribute information may be set to change the operation of the moving mechanism 29 (such as travel speed or travel means) depending on whether the floor is a cushion floor, flooring, tatami mats, or carpet.
- the above conditions may be set at the charging spot where the robot 2 can move and charge, the step where the movement of the robot 2 is unstable and the movement is restricted, or the end of the carpet.
- the area in which the attribute information is set may be understood by the user, for example, by changing the display method in the visualization data described later.
- the spatial data generation unit 13 generates spatial data expressing the contours of spatial elements by, for example, performing a Hough transform on the point cloud data generated by the point cloud data generation unit 12 and extracting common lines, curves, or other figures in the point cloud data.
- the Hough transform is a coordinate transformation method that, treating the point cloud data as feature points, extracts the figures that pass through the most feature points. Since the point cloud data expresses the shape of a spatial element such as furniture placed in a room only as a cloud of points, it may be difficult for the user to determine what spatial element the point cloud data represents (for example, to recognize tables, chairs, walls, and so on).
- the spatial data generation unit 13 can express the outline of furniture or the like by performing the Hough transform on the point cloud data, the user can easily determine the spatial elements.
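- As an illustration of extracting contour lines from point cloud data with a Hough transform, the sketch below rasterizes 2D points into an occupancy image and runs OpenCV's probabilistic Hough transform; the use of OpenCV and all parameter values are assumptions, not something prescribed by this document:

```python
import numpy as np
import cv2  # OpenCV, used here only as an illustration

def extract_wall_lines(points_xy: np.ndarray, resolution_m: float = 0.05):
    """Rasterize 2D point cloud data and extract straight contours via Hough transform."""
    # Shift points to non-negative grid coordinates.
    mins = points_xy.min(axis=0)
    grid = ((points_xy - mins) / resolution_m).astype(int)
    h, w = grid[:, 1].max() + 1, grid[:, 0].max() + 1
    image = np.zeros((h, w), dtype=np.uint8)
    image[grid[:, 1], grid[:, 0]] = 255
    # Probabilistic Hough transform: returns line segments (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(image, rho=1, theta=np.pi / 180, threshold=20,
                            minLineLength=10, maxLineGap=3)
    return [] if lines is None else [l[0] for l in lines]

# A noisy wall along y = 0, from x = 0 to 2 m.
pts = np.column_stack([np.linspace(0, 2, 200), np.random.normal(0, 0.01, 200)])
print(len(extract_wall_lines(pts)))  # typically one or a few segments
```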
- the spatial data generation unit 13 may also generate spatial data by converting the point cloud data generated by the point cloud data generation unit 12 into the basic shape of a spatial element (for example, a table, a chair, or a wall) recognized by image recognition.
- for example, once image recognition determines that a spatial element is a table, the shape of the table can be predicted accurately from only part of the point cloud data of that spatial element (for example, the point cloud data obtained when the table is viewed from the front).
- the spatial data generation unit 13 can generate spatial data that accurately grasps the spatial elements by combining point cloud data and image recognition.
- the spatial data generation unit 13 generates spatial data based on point cloud data included in a predetermined range from the position where the robot 2 has moved.
- the predetermined range from the positions to which the robot 2 has moved includes the positions the robot 2 has actually visited, and may be, for example, a range within a distance such as 30 cm of those positions. Since the point cloud data is generated from the captured images taken by the imaging unit 21 of the robot 2, a captured image may include spatial elements at positions far from the robot 2. When the imaging unit 21 is far from a spatial element, there may be uncaptured portions or uncaptured obstacles in areas where the robot 2 has not actually moved.
- as a result, a spatial element extracted from such distant feature points may be distorted.
- the spatial data generation unit 13 may generate spatial data that does not include a spatial element with low accuracy or a distorted spatial element by ignoring feature points that are far apart.
- the spatial data generation unit 13 deletes point cloud data outside a predetermined range from the position where the robot 2 has moved to generate spatial data, thereby preventing the occurrence of an enclave where no data actually exists.
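- The distance filter described above might look like the following, keeping only points within an assumed 30 cm of positions the robot has actually visited:

```python
import numpy as np

def filter_point_cloud(points_xy: np.ndarray,
                       visited_xy: np.ndarray,
                       max_dist_m: float = 0.30) -> np.ndarray:
    """Discard point cloud data farther than max_dist_m from any visited position."""
    # Pairwise distances between every point and every visited robot position.
    d = np.linalg.norm(points_xy[:, None, :] - visited_xy[None, :, :], axis=2)
    return points_xy[d.min(axis=1) <= max_dist_m]

points = np.array([[0.1, 0.0], [0.2, 0.1], [3.0, 3.0]])
visited = np.array([[0.0, 0.0], [0.5, 0.0]])
print(filter_point_cloud(points, visited))  # the distant (3.0, 3.0) point is dropped
```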
- the spatial data generation unit 13 sets a limit range for the generated spatial data. By setting a limit range for spatial data, the limit range can be visualized as part of the visualization data.
- the spatial data generation unit 13 sets state information for the spatial data. By setting the state information for the spatial data, the state information can be made part of the visualization data.
- the visualization data generation unit 14 generates visualization data that is visualized based on the spatial data generated by the spatial data generation unit 13 so that a person can intuitively determine the spatial elements included in the space.
- a robot has various sensors such as a camera and a microphone, and recognizes surrounding conditions by comprehensively judging information obtained from these sensors.
- in order for the robot to move, it must recognize the various objects existing in the space and determine a movement route within the spatial data; however, if an object cannot be recognized correctly, the movement route may not be appropriate. Due to such misrecognition, for example, even in a space that a person considers amply large, the robot may recognize an obstacle and conclude that only a narrow range is passable.
- to reduce this recognition gap between the person and the robot, the autonomous behavior robot in the present embodiment visualizes its spatial data, which represents its own recognition state, provides it to the person, and can then perform recognition processing again on the point indicated by the person.
- spatial data is data containing the spatial elements recognized by the autonomous behavior robot 1, whereas visualization data is data for the user to visually recognize those spatial elements. Spatial data may include misrecognized spatial elements; by visualizing the spatial data, a person can easily confirm the recognition state (the presence or absence of misrecognition) of the spatial elements in the autonomous behavior robot 1.
- Visualized data is data that can be displayed on the display device.
- the visualization data is a so-called floor plan, and a spatial element recognized as a table, chair, sofa, or the like is included in an area surrounded by a spatial element recognized as a wall.
- the visualization data generation unit 14 generates the shape of furniture or the like formed in the graphic extracted by the Hough transform as visualization data expressed by RGB data, for example.
- the spatial data generation unit 13 generates visualization data in which the plane drawing method is changed based on the direction of the plane in three dimensions of the spatial element.
- the three-dimensional direction of a plane of a spatial element is, for example, the normal direction of the plane formed by the figure extracted from the point cloud data by applying the Hough transform to the point cloud data generated by the point cloud data generation unit 12.
- the visualization data generation unit 14 generates visualization data in which the plane drawing method is changed according to the normal direction.
- the drawing method is, for example, a hue attributed to a plane, a color attribute such as brightness or saturation, a pattern imparted to a plane, a texture, or the like.
- when the normal of the plane is vertical (the plane is horizontal), the visualization data generation unit 14 renders the plane with high brightness and draws it in a bright color.
- conversely, when the normal of the plane is horizontal (the plane is vertical), the visualization data generation unit 14 renders the plane with low brightness and draws it in a dark color.
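- A sketch of the normal-direction-to-brightness rule above: horizontal planes (vertical normals) are drawn bright, vertical planes (horizontal normals) dark. The concrete brightness values are assumed for illustration:

```python
import numpy as np

def plane_brightness(normal_xyz) -> float:
    """Map a plane's normal direction to a drawing brightness in [0, 1].

    A vertical normal (horizontal plane such as a floor or table top) gives
    high brightness; a horizontal normal (vertical plane such as a wall)
    gives low brightness."""
    n = np.asarray(normal_xyz, dtype=float)
    n = n / np.linalg.norm(n)
    verticality = abs(n[2])          # 1.0 for vertical normals, 0.0 for horizontal
    return 0.3 + 0.7 * verticality   # assumed mapping: dark 0.3 .. bright 1.0

print(plane_brightness((0, 0, 1)))   # 1.0  (floor / table top: bright)
print(plane_brightness((1, 0, 0)))   # 0.3  (wall: dark)
```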
- the visualization data may include coordinate information (referred to as “visualization coordinate information”) in the visualization data associated with the coordinate information of each spatial element included in the spatial data. Since the visualization coordinate information is associated with the coordinate information, the point in the visualization coordinate information corresponds to the point in the actual space, and the surface in the visualization coordinate information corresponds to the surface in the actual space. Therefore, when the user specifies the position of a certain point in the visualization data, the position of the point in the actual room corresponding to the point can be specified.
- a conversion function for converting the coordinate system may be prepared so that the coordinate system in the visualization data and the coordinate system in the spatial data can be mutually converted.
- the coordinate system in the visualization data and the coordinate system in the actual space may be mutually convertible.
- the visualization data generation unit 14 generates visualization data as stereoscopic (3D (Dimensions)) data.
- the visualization data generation unit 14 may generate the visualization data as planar (2D) data.
- the visualization data generation unit 14 may generate the visualization data in 3D when the spatial data generation unit 13 generates sufficient data to generate the visualization data in 3D.
- the visualization data generation unit 14 may generate the visualization data in 3D based on the 3D viewpoint position (viewpoint height, viewpoint elevation angle, etc.) designated by the user. By making it possible to specify the viewpoint position, the user can easily check the shape of furniture or the like.
- the visualization data generation unit 14 may generate visualization data in which the wall or ceiling of the room is colored only for the back wall and the front wall or ceiling is transparent (not colored). By making the near wall transparent, the user can easily confirm the shape of the furniture or the like arranged at the end (inside the room) of the near wall.
- the visualization data generation unit 14 generates visualization data to which a color attribute corresponding to the captured image acquired by the captured image acquisition unit 111 is added. For example, when the captured image includes woodgrain furniture and the color of the wood (for example, brown) is detected, the visualization data generation unit 14 gives a color approximate to the detected color to the extracted furniture figure. Generate visualization data. By assigning a color attribute according to the photographed image, the user can easily check the type of furniture or the like.
- the visualization data generation unit 14 generates visualization data in which the drawing method between the fixed object that is fixed and the moving object that moves is changed.
- the fixed object is, for example, a wall of a room, a step, furniture that is fixed, and the like.
- the moving object is, for example, a chair, a trash can, furniture with casters, or the like.
- the moving object may include a temporary object temporarily placed on the floor, such as luggage or a bag.
- the drawing method is, for example, a hue attributed to a plane, a color attribute such as brightness or saturation, a pattern imparted to a plane, a texture, or the like.
- whether an item is classified as fixed, moving, or temporary can be identified from how long it has existed at a given location.
- the spatial data generation unit 13 identifies the classification of the fixed object, the moving object, or the temporary object based on the time-dependent change of the point cloud data generated by the point cloud data generation unit 12, and obtains the spatial data. Generate.
- the spatial data generation unit 13 determines that a spatial element is a fixed object when the difference between the spatial data generated at a first time and the spatial data generated at a second time shows that the spatial element has not changed. Further, the spatial data generation unit 13 may determine that a spatial element is a moving object when the difference in the spatial data shows that its position has changed.
- the spatial data generation unit 13 may determine that a spatial element is a temporary object when the difference in the spatial data shows that it has disappeared or newly appeared.
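- The classification by change over time could be sketched as a comparison of the spatial elements recognized at a first and a second time; the element identifiers and the simple rule set below are hypothetical:

```python
def classify_elements(elements_t1: dict, elements_t2: dict) -> dict:
    """Classify spatial elements as fixed, moving, or temporary objects.

    elements_t1 / elements_t2 map an element id to its observed position
    at the first and second time, respectively."""
    classes = {}
    for eid, pos1 in elements_t1.items():
        if eid not in elements_t2:
            classes[eid] = "temporary"   # disappeared between observations
        elif elements_t2[eid] == pos1:
            classes[eid] = "fixed"       # same position: e.g. wall, fixed furniture
        else:
            classes[eid] = "moving"      # position changed: e.g. chair, trash can
    for eid in elements_t2.keys() - elements_t1.keys():
        classes[eid] = "temporary"       # newly appeared: e.g. luggage on the floor
    return classes

t1 = {"wall": (0, 0), "chair": (1, 1), "bag": (2, 2)}
t2 = {"wall": (0, 0), "chair": (1, 2)}
print(classify_elements(t1, t2))
# {'wall': 'fixed', 'chair': 'moving', 'bag': 'temporary'}
```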
- the visualization data generation unit 14 changes the drawing method based on the classification identified by the spatial data generation unit 13.
- the drawing method change includes, for example, color coding, addition of hatching, addition of a predetermined mark, and the like.
- for example, a fixed object may be displayed in black, a moving object in blue, and a temporary object in yellow.
- the spatial data generation unit 13 generates spatial data by identifying a classification of a fixed object, a moving object, or a temporary object.
- the visualization data generation unit 14 may generate visualization data in which the drawing method is changed based on the classification identified by the spatial data generation unit 13. Further, the spatial data generation unit 13 may generate visualization data obtained by changing the drawing method of the spatial element recognized by the image recognition.
- the visualization data generation unit 14 can generate visualization data in a plurality of divided areas. For example, the visualization data generation unit 14 generates visualization data for each of the spaces partitioned by walls such as a living room, a bedroom, a dining room, and a hallway as one room. By generating visualization data for each room, for example, generation of spatial data or visualization data can be performed separately for each room, and generation of spatial data or the like is facilitated. In addition, it is possible to create spatial data or the like only for an area where the robot 2 may move.
- the visualization data providing unit 161 provides visualization data that allows the user to select an area. For example, the visualization data providing unit 161 may enlarge the visualization data of the area selected by the user or provide detailed visualization data of the area selected by the user.
- the imaging target recognition unit 15 recognizes spatial elements based on the captured images acquired by the captured image acquisition unit 111. Spatial element recognition can be performed using an image recognition engine that determines what a spatial element is based on, for example, image recognition results accumulated through machine learning. A spatial element can be recognized, for example, from its shape, color, or pattern, or from characters or figures attached to it. For example, the imaging target recognition unit 15 may recognize a spatial element by using an image recognition service provided by a cloud server (not shown).
- the visualization data generation unit 14 generates visualization data in which the drawing method is changed in accordance with the spatial element recognized by the imaging target recognition unit 15.
- the visualization data generation unit 14 when the image-recognized space element is a sofa, the visualization data generation unit 14 generates visualization data in which a texture having a cloth texture is added to the space element.
- the visualization data generation unit 14 may generate visualization data with a wallpaper color attribute (for example, white).
- the second communication control unit 16 controls communication with the user terminal 3 owned by the user.
- the user terminal 3 is, for example, a smartphone, a tablet PC, a notebook PC, a desktop PC, or the like.
- the communication method with the user terminal 3 is arbitrary, and for example, wireless LAN, Bluetooth (registered trademark), short-range wireless communication such as infrared communication, or wired communication can be used.
- Each function of the visualization data providing unit 161 and the designation acquiring unit 162 included in the second communication control unit 16 communicates with the user terminal 3 using a communication function controlled by the second communication control unit 16.
- the visualization data providing unit 161 provides the visualization data generated by the visualization data generation unit 14 to the user terminal 3.
- the visualization data providing unit 161 is, for example, a Web server, and provides visualization data as a Web page to the browser of the user terminal 3.
- the visualization data providing unit 161 may provide visualization data to a plurality of user terminals 3. By visually recognizing the visualization data displayed on the user terminal 3, the user can confirm the range in which the robot 2 can move as a 2D or 3D display. In the visualization data, the shape of furniture or the like is drawn by a predetermined drawing method. By operating the user terminal 3, the user can switch between 2D display and 3D display, zoom in or out of the visualization data, or move the viewpoint in 3D display, for example.
- the user can visually check the visualization data displayed on the user terminal 3 and can confirm the generation state of the spatial data and the attribute information of the area.
- the user can instruct the creation of the spatial data by designating an area where the spatial data is not generated from the visualization data.
- An area can be specified to instruct the regeneration of spatial data.
- since the visualization coordinate information in the visualization data is associated with the coordinate information in the spatial data, the area in the visualization data that the user designates for regeneration can be uniquely identified as an area in the spatial data.
- the regenerated spatial data is provided from the visualization data providing unit 161 after the visualization data is regenerated in the visualization data generation unit 14.
- in some cases, the generation state of the spatial data may not change; for example, a spatial element may still be misrecognized in the regenerated visualization data.
- in such a case, the user may instruct generation of spatial data while changing the operation parameters of the robot 2.
- the operation parameters are, for example, shooting conditions (exposure amount, shutter speed, etc.) in the shooting unit 21 in the robot 2, sensitivity of a sensor (not shown), clearance conditions when allowing the robot 2 to move, and the like.
- the operation parameter may be included in the spatial data as area attribute information.
- the visualization data generation unit 14 generates visualization data including a display of a button for instructing creation of spatial data (including “re-creation”), for example.
- the user terminal 3 can transmit an instruction to create spatial data to the autonomous behavior robot 1 by the user operating the displayed button.
- the designation acquisition unit 162 acquires the spatial data creation instruction transmitted from the user terminal 3.
- the designation obtaining unit 162 obtains an instruction to create spatial data of an area designated by the user based on the visualization data provided by the visualization data providing unit 161.
- the designation acquisition unit 162 may acquire an instruction to set (including change) the attribute information of the area.
- for example, the designation acquisition unit 162 acquires the position of the area and the direction from which the robot should approach the area, that is, the direction to be photographed. A creation instruction can be acquired, for example, through operation of a Web page provided by the visualization data providing unit 161. This allows the user to grasp how the robot 2 recognizes the space and to instruct the robot 2 to redo the recognition process according to its recognition state.
- the instruction unit 113 instructs the robot 2 to shoot in the area where the creation of spatial data is instructed.
- the instruction unit 113 may instruct photographing of a marker installed in the area.
- the shooting in the area instructed to create the spatial data may include, for example, shooting conditions such as the coordinate position of the robot 2 (shooting unit 21), the shooting direction of the shooting unit 21, and the resolution.
- when the spatial data whose creation was instructed relates to an ungenerated region, the spatial data generation unit 13 adds the newly created spatial data to the existing spatial data; when the instructed spatial data relates to re-creation, the spatial data generation unit 13 generates spatial data by updating the existing spatial data. Further, when a marker is included in the captured image, spatial data including the recognized marker may be generated.
- FIG. 1 illustrates the case where the autonomous behavior robot 1 includes the data providing device 10 and the robot 2; however, the functions of the data providing device 10 may instead be included in the functions of the robot 2.
- the robot 2 may include all the functions of the data providing apparatus 10.
- the data providing apparatus 10 may temporarily substitute a function when the robot 2 has insufficient processing capability.
- "acquisition" may be performed actively by the acquiring entity or passively by the acquiring entity.
- For example, the designation acquisition unit 162 may acquire the designation by receiving a spatial data creation instruction transmitted from the user terminal 3 by the user, or may acquire the instruction to create spatial data by reading it from a storage area (not shown) in which the user has stored it.
- The functional units of the data providing apparatus 10, namely the first communication control unit 11, the point cloud data generation unit 12, the spatial data generation unit 13, the visualization data generation unit 14, the imaging target recognition unit 15, the second communication control unit 16, the captured image acquisition unit 111, the spatial data provision unit 112, the instruction unit 113, the visualization data provision unit 161, and the designation acquisition unit 162, are examples of functions of the autonomous behavior robot 1 in the present embodiment.
- The functions of the autonomous behavior robot 1 are not limited to these.
- the autonomous behavior robot 1 does not need to have all the functional units that the data providing apparatus 10 has, and may have some functional units.
- The autonomous behavior type robot 1 may have functional units other than the above.
- the functional units of the marker recognition unit 22, the movement control unit 23, the restriction range setting unit 231, and the state information acquisition unit 24 included in the robot 2 are examples of functions of the autonomous behavior robot 1 according to the present embodiment.
- The functions of the autonomous behavior robot 1 are not limited to these.
- the autonomous behavior robot 1 does not have to have all the functional units that the robot 2 has, and may have some functional units.
- the above-described functional units included in the autonomous behavior robot 1 have been described as being realized by software as described above. However, at least one of the functions of the autonomous behavior robot 1 may be realized by hardware.
- any one of the above functions that the autonomous behavior robot 1 has may be implemented by dividing one function into a plurality of functions. Further, any two or more functions of the autonomous behavior robot 1 may be integrated into one function. That is, FIG. 1 represents the functions of the autonomous behavior robot 1 by function blocks, and does not indicate that each function is configured by a separate program file, for example.
- the autonomous behavior robot 1 may be a device realized by a single housing or a system realized by a plurality of devices connected via a network or the like.
- the autonomous behavior robot 1 may be realized by a virtual device such as a cloud service provided by a cloud computing system, part or all of its functions. That is, the autonomous behavior robot 1 may realize at least one or more of the above functions in another device.
- the autonomous behavior robot 1 may be a general-purpose computer such as a tablet PC, or may be a dedicated device with limited functions.
- the autonomous behavior type robot 1 may realize part or all of the functions in the robot 2 or the user terminal 3.
- FIG. 2 is a block diagram illustrating an example of a hardware configuration of the autonomous behavior robot 1 according to the embodiment.
- The autonomous behavior type robot 1 includes a CPU (Central Processing Unit) 101, a RAM (Random Access Memory) 102, a ROM (Read Only Memory) 103, a touch panel 104, a communication I/F (Interface) 105, a sensor 106, and a clock 107.
- the autonomous behavior type robot 1 is a device that executes the autonomous behavior type robot control program described in FIG.
- the CPU 101 controls the autonomous behavior robot 1 by executing the autonomous behavior robot control program stored in the RAM 102 or the ROM 103.
- The autonomous behavior type robot control program is acquired from, for example, a recording medium that records the program or from a program distribution server via a network, installed in the ROM 103, read by the CPU 101, and executed.
- the touch panel 104 has an operation input function and a display function (operation display function).
- The touch panel 104 enables the user of the autonomous behavior robot 1 to perform operation input using a fingertip or a touch pen.
- the autonomous behavior robot 1 in this embodiment will be described using a touch panel 104 having an operation display function.
- However, the autonomous behavior robot 1 may separately have a display device having a display function and an operation input device having an operation input function.
- the display screen of the touch panel 104 can be implemented as a display screen of the display device, and the operation of the touch panel 104 can be implemented as an operation of the operation input device.
- the touch panel 104 may be realized in various forms such as a head mount type, a glasses type, and a wristwatch type display.
- The communication I/F 105 is an interface for communication.
- The communication I/F 105 executes communication such as wireless LAN, wired LAN, or short-range wireless communication such as infrared. Although only the communication I/F 105 is illustrated in FIG. 2 as the communication I/F, the autonomous behavior robot 1 may have a communication I/F for each of a plurality of communication methods.
- the communication I / F 105 may communicate with a control unit that controls the photographing unit 21 (not shown) or a control unit that controls the moving mechanism 29.
- The sensor 106 is hardware such as the camera of the photographing unit 21, a TOF camera or a thermo camera, a microphone, a thermometer, an illuminometer, or a proximity sensor. Data acquired by these hardware devices is stored in the RAM 102 and processed by the CPU 101.
- the clock 107 is an internal clock for acquiring time information.
- the time information acquired by the clock 107 is used, for example, for confirmation of a time zone in which entry is prohibited.
- FIG. 3 is a flowchart illustrating an example of the operation of the robot control program in the embodiment.
- the execution subject of the operation is the autonomous behavior type robot 1, but each operation is executed in each function of the autonomous behavior type robot 1 described above.
- The autonomous behavior robot 1 determines whether a captured image has been acquired (step S11). Whether or not a captured image has been acquired can be determined by whether or not the captured image acquisition unit 111 has acquired a captured image from the robot 2. The determination is made in units of captured images to be processed. For example, since a moving image is continuously transmitted from the robot 2 when the captured image is a moving image, the determination as to whether or not the captured image has been acquired can be made based on whether the number of frames or the data amount of the acquired moving image has reached a predetermined value.
- The captured image may be acquired with the robot 2 as the main entity transmitting the captured image, or with the captured image acquisition unit 111 as the main entity fetching the captured image from the robot 2. If it is determined that a captured image has not been acquired (step S11: NO), the autonomous behavior robot 1 repeats the process of step S11 and waits for a captured image to be acquired.
- When it is determined that a captured image has been acquired (step S11: YES), the autonomous behavior robot 1 generates point cloud data (step S12).
- The generation of the point cloud data can be executed by the point cloud data generation unit 12, for example, by detecting points with a large change in luminance in the captured image as feature points and giving three-dimensional coordinates to the detected feature points.
- A feature point may be detected, for example, by performing differentiation processing on the captured image to detect portions with a large gradation change. The coordinates may be given to a feature point by detecting the same feature point in captured images photographed from a plurality of directions.
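- As a concrete illustration of the feature-point approach described above, the following sketch (an assumption for illustration, not taken from the embodiment; the function names, the use of OpenCV/NumPy, and the depth-map lifting are hypothetical) detects high-gradient pixels and gives them 3D coordinates.

```python
# Hypothetical sketch of luminance-gradient feature detection as one possible
# realization of the point cloud data generation unit 12.
import cv2
import numpy as np

def detect_feature_points(gray: np.ndarray, grad_thresh: float = 80.0) -> np.ndarray:
    """Return (N, 2) pixel coordinates whose luminance gradient is large."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)   # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)   # vertical gradient
    magnitude = cv2.magnitude(gx, gy)                 # gradation change per pixel
    ys, xs = np.where(magnitude > grad_thresh)        # keep strong-edge pixels
    return np.stack([xs, ys], axis=1)

def lift_to_3d(points_2d: np.ndarray, depth: np.ndarray,
               fx: float, fy: float, cx: float, cy: float) -> np.ndarray:
    """Assign 3D coordinates using a depth map (e.g. from a TOF sensor)."""
    xs, ys = points_2d[:, 0], points_2d[:, 1]
    z = depth[ys, xs]
    x = (xs - cx) * z / fx
    y = (ys - cy) * z / fy
    return np.stack([x, y, z], axis=1)    # point cloud in camera coordinates
```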
- Whether or not a captured image is acquired in step S11 can be determined based on whether or not captured images captured from a plurality of directions have been acquired.
- Next, the autonomous behavior type robot 1 generates spatial data (step S13).
- the generation of the spatial data can be executed by the spatial data generation unit 13 by, for example, Hough transforming the point cloud data. Details of step S13 will be described with reference to FIG.
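- The Hough-transform step could look roughly like the following sketch, assuming the point cloud is first projected onto a 2D occupancy grid; the grid cell size, thresholds, and use of OpenCV are illustrative assumptions rather than details of the spatial data generation unit 13.

```python
# Hypothetical sketch: project the point cloud onto the floor plane and extract
# wall-like line segments with a probabilistic Hough transform.
import cv2
import numpy as np

def extract_wall_segments(points_3d: np.ndarray, cell: float = 0.05):
    """Return detected line segments (x1, y1, x2, y2) in grid-cell coordinates."""
    xy = points_3d[:, :2]
    origin = xy.min(axis=0)
    idx = ((xy - origin) / cell).astype(int)          # grid indices per point
    grid = np.zeros(idx.max(axis=0) + 1, dtype=np.uint8)
    grid[idx[:, 0], idx[:, 1]] = 255                  # mark occupied cells
    segments = cv2.HoughLinesP(grid, rho=1, theta=np.pi / 180,
                               threshold=30, minLineLength=10, maxLineGap=3)
    return [] if segments is None else segments.reshape(-1, 4)
```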
- After executing the process of step S13, the autonomous behavior robot 1 provides the generated spatial data to the robot 2 (step S14).
- The spatial data may be provided to the robot 2 sequentially as the spatial data is generated as shown in FIG. 3, or may be provided asynchronously with the processing shown in steps S11 to S18.
- the robot 2 provided with the spatial data can grasp the movable range based on the spatial data.
- the autonomous behavior robot 1 determines whether or not to recognize a spatial element (step S15).
- The determination of whether or not to recognize a spatial element can be executed, for example, based on a setting in the imaging target recognition unit 15 as to whether or not spatial elements are to be recognized. Even if it is determined that the spatial element is to be recognized, if the recognition fails, it may be determined that the spatial element is not recognized.
- If it is determined that the spatial element is recognized (step S15: YES), the autonomous behavior robot 1 generates first visualization data (step S16).
- the generation of the first visualization data can be executed in the visualization data generation unit 14.
- The first visualization data is visualization data generated after the imaging object recognition unit 15 recognizes a spatial element. For example, when the imaging target recognition unit 15 determines that the spatial element is a table, the visualization data generation unit 14 can generate visualization data assuming that the top surface of the table is flat even if the top surface has not been captured and there is no point cloud data for it. Further, when it is determined that the spatial element is a wall, the visualization data generation unit 14 can generate visualization data by assuming that a portion that has not been shot is also a plane.
- If it is determined that the spatial element is not recognized (step S15: NO), the autonomous behavior robot 1 generates second visualization data (step S17).
- the generation of the second visualization data can be executed in the visualization data generation unit 14.
- the second visualization data is visualization data that is generated based on the point cloud data and the spatial data generated from the captured image without the imaging target recognition unit 15 recognizing the spatial element.
- the autonomous behavior robot 1 can reduce the processing load by not performing the spatial element recognition process.
- After executing the process of step S16 or the process of step S17, the autonomous behavior type robot 1 provides visualization data (step S18).
- The provision of the visualization data is executed by the visualization data providing unit 161 providing the visualization data generated by the visualization data generation unit 14 to the user terminal 3.
- the autonomous behavior type robot 1 may generate and provide visualization data in response to a request from the user terminal 3, for example.
- the autonomous behavior robot 1 ends the operation shown in the flowchart.
- FIG. 4 is a flowchart illustrating another example of the operation of the robot control program in the embodiment.
- the autonomous behavior type robot 1 generates spatial data (step S121).
- the generation of the spatial data can be executed by the spatial data generation unit 13 by, for example, Hough transforming the point cloud data.
- the autonomous behavior robot 1 determines whether or not the marker has been recognized (step S122). Whether or not the marker has been recognized can be determined by whether or not the marker recognizing unit 22 has recognized the marker image in the captured image captured by the imaging unit 21.
- the robot 2 can notify the data providing apparatus 10 of the marker recognition result.
- If it is determined that the marker has been recognized (step S122: YES), the autonomous behavior robot 1 sets a restriction range in which movement is restricted in the spatial data generated in step S121 (step S123).
- the autonomous behavior robot 1 determines whether or not the state information is acquired (step S124). Whether or not the state information is acquired can be determined by whether or not the state information is acquired in the state information acquisition unit 24. When it is determined that the state information has been acquired (step S124: YES), the autonomous behavior robot 1 sets the acquired state information corresponding to the spatial data (step S125). The set state information is provided from the visualization data providing unit 161 in correspondence with the visualization data.
- When it is determined that the state information has not been acquired (step S124: NO), or after the process of step S123 or the process of step S125 has been executed, the autonomous behavior robot 1 ends the operation shown in the flowchart.
- FIG. 5 is a diagram illustrating a method of setting an entry prohibition line in the embodiment.
- The user installs the marker 2 on the wall 1, on the side near the passage between the wall 1 and the wall 3.
- the marker 2 is recognized as a single marker by the marker recognition unit 22.
- the limit range setting unit 231 checks whether or not there is a passage near the marker 2. When there is a passage, the restriction range setting unit 231 sets a straight line on the passage as the entry prohibition line 2 from the installation position of the marker 2 and the position of the passage. Since the restriction range setting unit 231 can confirm whether or not there is a passage in the vicinity of the marker 2, the user can prohibit the robot from entering the passage even with a single marker.
- the user attaches the marker 3 to the wall 3.
- the marker 3 is recognized as a single marker by the marker recognition unit 22.
- The limit range setting unit 231 checks whether or not there is a passage near the marker 3. When there is no passage, the limit range setting unit 231 sets a predetermined range from the installation position of the marker 3 (for example, a semicircle centered on the installation position of the marker 3) as the first entry prohibition area.
- the user installs the marker 4 in the center of the room.
- the marker 4 is, for example, a three-dimensional marker.
- the marker 4 is recognized as a single marker by the marker recognition unit 22.
- the restriction range setting unit 231 sets a predetermined range around the marker 4 (for example, a circle centered on the installation position of the marker 4) as the second entry prohibition area.
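- A minimal sketch of the single-marker logic of FIG. 5 follows; the data types, the passage representation, and the default radius are assumptions made for illustration, not details taken from the limit range setting unit 231.

```python
# Hypothetical sketch: a single marker near a passage yields an entry prohibition
# line across the passage; a marker with no nearby passage yields a circular area.
from dataclasses import dataclass
from typing import Optional, Tuple, Union

Point = Tuple[float, float]

@dataclass
class NoEntryLine:
    start: Point
    end: Point

@dataclass
class NoEntryArea:
    center: Point
    radius: float

def restriction_from_marker(marker_pos: Point,
                            passage: Optional[Tuple[Point, Point]],
                            default_radius: float = 0.5
                            ) -> Union[NoEntryLine, NoEntryArea]:
    """Single marker near a passage -> line across the opening; otherwise a circle."""
    if passage is not None:
        left, right = passage           # the two wall ends that form the opening
        return NoEntryLine(start=left, end=right)
    return NoEntryArea(center=marker_pos, radius=default_radius)
```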
- FIGS. 6 and 7 are diagrams showing examples of the display of the user terminal 3 in the embodiment.
- FIGS. 6 and 7 are display examples in which a Web page provided as visualization data from the visualization data providing unit 161 is displayed on the touch panel of a smartphone exemplified as the user terminal 3.
- 2D display data generated by the visualization data generation unit 14 is displayed on the user terminal 3 based on the captured image of the living room captured by the robot 2.
- the visualization data generation unit 14 rasterizes a line segment (straight line or curve) extracted by the Hough transform of the point cloud data, and draws a boundary line of a spatial element such as a wall or furniture in 2D.
- the visualization data generation unit 14 displays the entry prohibition line 36 based on the pair of marker images recognized by the marker recognition unit 22 or the state information acquired by the state information acquisition unit 24.
- the entry prohibition line 36 can be set by designating a start point and an end point. The start point and the end point can be set by setting a pair of markers or setting from the user terminal 3.
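- One way the movement control unit 23 could enforce such a line is a segment-intersection test between a planned movement step and the entry prohibition line; the following sketch is illustrative only and is not the method defined in the embodiment.

```python
# Hypothetical sketch: does the planned step from p to q cross the entry
# prohibition line given by (start, end)?  Collinear touching is not treated
# as a crossing in this simplified test.
def _ccw(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def crosses_line(p, q, start, end) -> bool:
    """True if segment p-q properly intersects the prohibition segment."""
    d1, d2 = _ccw(start, end, p), _ccw(start, end, q)
    d3, d4 = _ccw(p, q, start), _ccw(p, q, end)
    return (d1 * d2 < 0) and (d3 * d4 < 0)
```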
- An entry prohibition mark is displayed at the center of the entry prohibition line 36.
- a delete button 37 is displayed.
- When the delete button 37 is pressed, the entry prohibition line 36 that has been set can be deleted.
- the house-shaped icon h shown in the figure represents the home position where the robot 2 returns for charging.
- the area on the right side of the entry prohibition line 36 is an area for which no spatial data has been created.
- The user can set, confirm, or delete the entry prohibition line 36 from the visualization data displayed on the user terminal 3. That is, the autonomous behavior type robot 1 can generate a visualized map that defines the range in which the robot 2 can move from the image captured by the imaging unit 21, and the entry prohibited area into which the robot 2 cannot enter can be set from the user terminal 3.
- the user terminal 3 may be able to set conditions for restricting the movement of the robot 2.
- The user terminal 3 may allow the user to set the time zone during which the robot 2 is prohibited from entering, the details of movement restrictions depending on the presence or absence of a person, the lighting state when the movement is restricted, and the like.
- For example, the user terminal 3 may be able to set conditions such that entry into the kitchen is prohibited during the morning and evening hours when a person prepares a meal, or that entry into the study is prohibited when the light is off.
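- The condition check might be sketched as follows, under the assumption that the conditions reduce to a prohibited time window and a required lighting state; the parameter names and example values are hypothetical.

```python
# Hypothetical sketch of evaluating user-set restriction conditions.
from datetime import time

def entry_allowed(now: time, lights_on: bool,
                  prohibited_from: time, prohibited_until: time,
                  require_lights_on: bool) -> bool:
    # A window that wraps past midnight is handled by the second branch.
    in_window = (prohibited_from <= now <= prohibited_until
                 if prohibited_from <= prohibited_until
                 else now >= prohibited_from or now <= prohibited_until)
    if in_window:
        return False                      # inside the prohibited time zone
    if require_lights_on and not lights_on:
        return False                      # e.g. study with the light off
    return True

# e.g. kitchen prohibited 17:00-19:00 while a meal is prepared:
# entry_allowed(time(18, 0), True, time(17, 0), time(19, 0), False) -> False
```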
- 2D visualization data generated by the visualization data generation unit 14 is displayed on the user terminal 3 based on the captured image of the living room captured by the robot 2 in the same manner as in FIG. 6.
- the Western room visualization data is displayed on the right side of the entry prohibition line 36, and it can be confirmed by the entry prohibition line 36 that the robot 2 is prohibited from entering the western room 38.
- The user can set the Western room 38 as an area where entry is prohibited by installing a pair of entry prohibition markers at the entrance of the Western room 38 or by setting the entry prohibition line 36 from the user terminal 3. Since spatial data has already been generated for the Western room 38, the user can allow the robot 2 to move to the Western room 38 by deleting the entry prohibition line 36. Note that the user can enlarge or reduce the display by pinching in or out on the touch panel of the user terminal 3.
- the visualization data providing unit 161 in FIG. 1 may provide the user terminal 3 with the visualization data and the original image.
- An image obtained by photographing a part corresponding to the floor plan displayed on the user terminal 3 may be displayed at the user's request. That is, images used for specifying each spatial element are accumulated, and when the user designates a spatial element, an image associated with that spatial element is provided. Accordingly, when the user cannot determine the recognition state from the visualization data, the user can determine the recognition state using the image.
- The user can designate the restricted range by sliding a fingertip on the touch panel screen of the user terminal 3 so as to draw a circle around the range. In conjunction with this operation, the fingertip trajectory is drawn on the screen and visualized so as to be superimposed on the visualization data.
- the marker may not be recognized depending on the shooting angle of the marker as described above.
- In the case of a road sign, the marker is installed substantially perpendicular to the traveling direction so that the driver of a traveling vehicle can easily see it.
- a marker may be affixed to the wall in the path along which the robot 2 moves, and the marker may be overlooked depending on the shooting direction of the camera.
- For example, since the marker 2 shown in FIG. 5 is affixed to the wall 1, if the robot 2 moves toward the marker 3 along the wall 3, it may enter the passage before checking the marker 2.
- When there is a space (for example, a passage provided in a wall), a marker may be installed near the entrance of the space.
- The robot 2 can prevent overlooking a marker by moving to a position where a marker that may be installed on the entrance wall of the passage is easy to visually recognize, for example, a position in front of the passage, and performing an active confirmation operation of photographing the wall.
- the environmental conditions are, for example, the illuminance, temperature, humidity or noise level of the space in which the robot 2 moves, and changes in the environmental conditions may include changes in spatial elements such as wall colors.
- the limited range setting unit 231 may set the limited range based on the surrounding feature points where the marker is installed, instead of the spatial data of the installation position where the marker is installed.
- the peripheral feature points are, for example, spatial elements such as cables and steps arranged on the floor.
- the content of movement restriction may be set differently for each robot.
- the entry prohibition areas restricted by the robot may be set differently.
- the robot may learn the contents of restrictions once set by the marker. For example, the robot may learn that the entry prohibition area has been set by the temporary marker, and then lower the entry frequency even after the marker is removed.
- For example, the marker recognizing unit 22 stores the positions of temporary markers and permanent markers in association with information specifying the marker type. By storing the position of a temporary marker in this way, the robot can execute behaviors such as hesitating to enter the area where the temporary marker was installed or reducing the frequency of entering that area.
- the marker is used to cause the autonomous behavior robot 1 to recognize the place where entry is prohibited.
- the marker may be used to cause the autonomous behavior robot 1 to recognize an arbitrary location. That is, a marker may be installed at an arbitrary place in a residence or facility, and the autonomous behavior robot 1 may be made to recognize the position of the place where the marker is installed.
- Housing includes any area such as the entrance, children's room and bedroom.
- Facilities include any areas such as reception counters, rest areas and emergency exits.
- a marker that can identify an area type (for example, an entrance type or a reception counter type described later) associated with these areas is used.
- the marker may be any shape as long as it has a feature that allows the area type to be identified by image recognition, such as a shape, pattern, color, character or figure attached to the marker, or a combination thereof.
- For example, a graphic code obtained by converting an area type code into a graphic by a general conversion method such as a barcode method or a two-dimensional barcode method may be used as the marker.
- the marker recognizing unit 22 can read the area type code from the graphic code by the conversion method.
- Alternatively, when the shape of a graphic included in the captured image is a predetermined shape, the marker recognizing unit 22 may specify the type of the graphic and specify the area type corresponding to the specified graphic type. It is assumed that the data providing apparatus 10 or the robot 2 stores data that associates graphic types with area types. That is, the marker in the second embodiment can specify an arbitrary area type.
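- A minimal sketch of mapping a decoded area type code or a recognized figure shape to an area type is shown below; the codes, shapes, and area type names are hypothetical placeholders, not values defined in the embodiment.

```python
# Hypothetical lookup tables associating graphic information with area types.
from typing import Optional

AREA_TYPE_BY_CODE = {
    "01": "entrance",
    "02": "child_room",
    "03": "reception_counter",
    "04": "emergency_exit",
}

AREA_TYPE_BY_SHAPE = {
    "triangle": "entrance",
    "star": "reception_counter",
}

def area_type_from_marker(decoded_code: Optional[str] = None,
                          shape: Optional[str] = None) -> str:
    """Prefer the decoded area type code; fall back to the recognized figure shape."""
    if decoded_code is not None:
        return AREA_TYPE_BY_CODE.get(decoded_code, "unknown")
    if shape is not None:
        return AREA_TYPE_BY_SHAPE.get(shape, "unknown")
    return "unknown"
```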
- the autonomous behavior robot 1 can recognize that the marker installation location corresponds to the area identified by the area type. For example, if an entrance type marker is installed at the entrance, the autonomous behavior type robot 1 can recognize that the entrance type marker is installed at the entrance.
- the autonomous behavior robot 1 stores marker information that associates a predetermined event with an area type.
- the robot 2 moves to a place where a marker of an area type corresponding to the predetermined event is installed (referred to as marker installation place).
- a predetermined event that triggers the robot 2 to move to the marker installation location is referred to as a first event.
- the first event may be detected by the robot 2 or may be detected by the data providing apparatus 10.
- the robot 2 executes a predetermined action.
- An event that triggers the robot 2 to execute a predetermined action is referred to as a second event.
- the second event may be detected by the robot 2 or detected by the data providing apparatus 10.
- the action to be executed corresponds to at least one of the first event, the area type, and the second event.
- the autonomous behavior robot 1 stores event information that associates at least one of the first event, the area type, and the second event with an action.
- the event information may be stored.
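- The event information could be held, for example, as records like the following sketch (field names and values are illustrative, not the actual contents of FIG. 10), with lookups for the destination area type and for the action corresponding to an event combination.

```python
# Hypothetical in-memory form of the event information described above.
EVENT_INFO = [
    {"first_event": "user_return_home", "area_type": "entrance",
     "second_event": "user_recognized", "action": "welcome"},
    {"first_event": "infant_crying", "area_type": "child_room",
     "second_event": "adult_absent", "action": "soothe"},
]

def area_type_for(first_event: str) -> str:
    """Where should the robot go when the first event fires?"""
    for rec in EVENT_INFO:
        if rec["first_event"] == first_event:
            return rec["area_type"]
    raise KeyError(first_event)

def action_for(first_event: str, second_event: str) -> str:
    """Which action corresponds to the first/second event combination?"""
    for rec in EVENT_INFO:
        if rec["first_event"] == first_event and rec["second_event"] == second_event:
            return rec["action"]
    raise KeyError((first_event, second_event))
```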
- FIG. 8 is a block diagram illustrating an example of a module configuration of the robot 2 according to the second embodiment.
- FIG. 8 also shows functional units related to Examples 1 to 6 described later.
- the robot 2 includes functional units such as a marker recognition unit 22, a position measurement unit 25, a movement control unit 23, a communication control unit 26, a first event detection unit 210, a second event detection unit 220, and an action execution unit 230.
- Each functional unit of the robot 2 according to the second embodiment will be described as a functional module realized by a program that controls the robot 2.
- the marker recognizing unit 22 recognizes the marker included in the captured image and identifies the area type indicated by the marker.
- the position measuring unit 25 measures the current position and direction of the robot 2.
- the position measurement unit 25 may measure the current position and direction based on the captured image, or may measure the current position based on radio waves received from a wireless communication device installed at a predetermined position.
- the method for measuring the current position may be a conventional technique.
- the position measurement unit 25 may use SLAM (Simultaneous Localization and Mapping) technology that simultaneously estimates the self-location and creates an environment map.
- the movement control unit 23 controls the movement of the robot 2.
- the movement control unit 23 sets a route to the destination, drives the moving mechanism 29 to follow the route, and moves itself to the destination.
- the communication control unit 26 communicates with the data providing apparatus 10.
- the first event detection unit 210 detects the first event.
- the first event detection unit 210 may detect the first event based on the result of recognition processing such as voice recognition or image recognition. That is, the first event detection unit 210 may detect the first event when it is determined that the sound input from the microphone included in the robot 2 includes a characteristic that is estimated to correspond to the predetermined sound.
- the first event detection unit 210 records, for example, a predetermined sound as a sample in advance, analyzes the sound, and extracts characteristic data such as a frequency distribution, an inflection and a period in which the volume increases.
- Then, the first event detection unit 210 performs the same analysis on the sound input by the microphone, and when the feature data such as the frequency distribution, the inflection, and the period in which the volume increases matches or approximates that of the sample, it may estimate that the sound input with the microphone corresponds to the predetermined sound.
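- A rough sketch of such frequency-distribution matching is shown below, assuming an FFT magnitude histogram compared by cosine similarity; an actual implementation might instead use MFCCs or a trained model.

```python
# Hypothetical sketch of comparing live microphone audio with a recorded sample.
import numpy as np

def spectrum_feature(samples: np.ndarray, bins: int = 32) -> np.ndarray:
    """Coarse frequency distribution of an audio buffer, normalized to sum 1."""
    mag = np.abs(np.fft.rfft(samples))
    edges = np.linspace(0, mag.size, bins, endpoint=False).astype(int)
    hist = np.add.reduceat(mag, edges)
    return hist / (hist.sum() + 1e-9)

def matches_sample(mic_buffer: np.ndarray, sample_feature: np.ndarray,
                   threshold: float = 0.9) -> bool:
    """Cosine similarity between the live audio feature and the sample feature."""
    f = spectrum_feature(mic_buffer, bins=sample_feature.size)
    denom = np.linalg.norm(f) * np.linalg.norm(sample_feature) + 1e-9
    return float(f @ sample_feature / denom) >= threshold
```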
- the first event detection unit 210 may detect the first event when it is estimated that an arbitrary person, a predetermined person, or a predetermined object is reflected in the image captured by the imaging unit 21.
- The first event detection unit 210, for example, captures an arbitrary person, a predetermined person, or a predetermined object as a sample in advance with the imaging unit 21, analyzes the captured image, and extracts feature data such as the size, shape, and arrangement of parts of the subject.
- Then, the first event detection unit 210 analyzes the image captured by the imaging unit 21, and when it determines that a subject whose feature data such as size, shape, and part arrangement matches or approximates that of the sample is included, it may estimate that the person or object in question appears in the captured image.
- the first event detection unit 210 may detect the first event based on measurement results of various sensors such as a temperature sensor, a contact sensor, or an acceleration sensor.
- The first event detection unit 210 may detect the first event when the sensor measurement value falls below a predetermined lower limit, when the measurement value falls within a predetermined range, when the measurement value falls outside the predetermined range, or when the measurement value exceeds a predetermined upper limit value.
- the first event detector 210 may detect the first event when, for example, a temperature sensor measures a temperature corresponding to a human body temperature.
- the first event detection unit 210 may detect the first event when, for example, a contact corresponding to a human touch is measured by a contact sensor.
- the first event detection unit 210 may detect the first event when an acceleration sensor measures a change in acceleration corresponding to a large shake such as a traffic accident impact or an earthquake.
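- The threshold conditions listed above might be checked as in the following sketch; the limit values are placeholders, not values defined in the embodiment.

```python
# Hypothetical sketch of sensor-threshold-based event detection.
from typing import Optional, Tuple

def sensor_triggers_event(value: float,
                          lower: Optional[float] = None,
                          upper: Optional[float] = None,
                          inside_range: Optional[Tuple[float, float]] = None,
                          outside_range: Optional[Tuple[float, float]] = None) -> bool:
    if lower is not None and value < lower:
        return True                       # below the lower limit
    if upper is not None and value > upper:
        return True                       # above the upper limit
    if inside_range is not None and inside_range[0] <= value <= inside_range[1]:
        return True                       # inside a predetermined range
    if outside_range is not None and not (outside_range[0] <= value <= outside_range[1]):
        return True                       # outside a predetermined range
    return False

# e.g. a temperature near human body temperature:
# sensor_triggers_event(36.4, inside_range=(35.0, 38.0)) -> True
```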
- the first event detection unit 210 may detect the first event based on the communication state in the communication process.
- The first event detection unit 210 may detect the first event when a characteristic value indicating the communication state, such as the radio wave intensity in wireless communication, the communication time with a predetermined communication partner, or the transmission amount per predetermined time, falls below a lower limit value, falls within a predetermined range, falls outside the predetermined range, or exceeds a predetermined upper limit value.
- the first event detection unit 210 may detect the first event based on data received from the data providing device 10, the user terminal 3, or another external device.
- the first event detection unit 210 may detect the first event when, for example, a predetermined notification is received from the data providing device 10 or another external device.
- the first event detection unit 210 may detect the first event when a predetermined request from the user is received from the user terminal 3.
- the voice recognition unit 211, the radio wave state detection unit 213, the tour event detection unit 215, and the wake-up event detection unit 217 are examples of the first event detection unit 210.
- The voice recognition unit 211 will be described in Example 2 (application example of babysitting).
- the radio wave state detection unit 213 will be described in Example 4 (application example of call support).
- the tour event detection unit 215 will be described in Example 5 (application example of security).
- the wake-up event detection unit 217 will be described in Example 6 (an application example of an alarm clock).
- the second event detection unit 220 detects the second event. Similarly to the case of the first event detection unit 210, the second event detection unit 220 may detect the second event based on the result of recognition processing such as voice recognition and image recognition. Similar to the case of the first event detection unit 210, the second event detection unit 220 may detect the second event based on measurement results of various sensors such as a temperature sensor, a contact sensor, or an acceleration sensor. Similar to the case of the first event detection unit 210, the second event detection unit 220 may detect the second event based on the communication state in the communication process. Similarly to the case of the first event detection unit 210, the second event detection unit 220 may detect a second event based on data received from the data providing device 10, the user terminal 3, or another external device. Good.
- User recognition unit 221, call request reception unit 223, person recognition unit 225, and posture recognition unit 227 are examples of second event detection unit 220.
- the user recognition unit 221 will be described in the first embodiment (application example of welcome).
- the call request receiving unit 223 will be described in Example 4 (application example of call support).
- the person recognition unit 225 will be described in Example 5 (an application example of security).
- The posture recognition unit 227 will be described in Example 6 (an application example of an alarm clock).
- the action execution unit 230 executes an action triggered by the second event.
- the action execution unit 230 may execute an action involving the movement of the robot 2 itself.
- the action execution unit 230 may execute an action associated with input processing such as image, sound or communication in the robot 2.
- the action execution unit 230 may execute an action accompanied by output processing such as image, sound, or communication in the robot 2.
- The posture control unit 231, the voice output unit 232, the remote control unit 233, the message output unit 235, and the telephone communication unit 237 are examples of the action execution unit 230.
- the movement control unit 23 may function as the action execution unit 230.
- the movement control unit 23 controls the movement mechanism 29 to perform an action related to the movement of the robot 2.
- The posture control unit 231 performs actions related to the posture of the robot 2. If the robot 2 has a shape imitating a person or a virtual character and can move its neck and arms with actuators, the posture control unit 231 may drive the actuators to cause the robot 2 to take various poses or perform various gestures. If the robot 2 has a shape imitating a quadruped animal and each leg can be moved by an actuator provided at its joint, the posture control unit 231 may likewise drive the actuators to cause the robot 2 to take poses or perform various gestures.
- the remote control unit 233 will be described in Example 2 (application example for babysitting) and Example 6 (application example for alarm).
- the message output unit 235 will be described in Example 3 (application example of customer service).
- the telephone communication unit 237 will be described in Example 4 (application example of call support).
- FIG. 9 is a block diagram illustrating an example of a module configuration of the data providing apparatus 10 according to the second embodiment.
- FIG. 9 also shows functional units related to Examples 1 to 6 described later.
- The data providing apparatus 10 includes functional units such as a first communication control unit 11, a second communication control unit 16, a marker registration unit 110, a first event detection unit 120, a second event detection unit 130, and an action selection unit 140.
- Each functional unit of the data providing apparatus 10 according to the second embodiment will be described as a functional module realized by a program that controls the data providing apparatus 10.
- the first communication control unit 11 controls wireless communication with the robot 2.
- the second communication control unit 16 controls wireless communication or wired communication with the user terminal 3 and other external devices.
- the marker registration unit 110 registers marker information including the marker position and orientation in a marker information storage unit 153 described later.
- the first event detection unit 120 detects the first event.
- the first event detection unit 120 may detect the first event based on the result of recognition processing such as voice recognition and image recognition, as in the case of the first event detection unit 210 of the robot 2.
- the first event detection unit 120 detects the first event based on measurement results of various sensors such as a temperature sensor, a contact sensor, or an acceleration sensor of the robot 2. May be.
- the first event detection unit 120 may detect the first event based on the communication state in the communication process, as in the case of the first event detection unit 210 of the robot 2.
- the first event detection unit 120 may detect the first event based on data received from the user terminal 3 or another external device, as in the case of the first event detection unit 210 of the robot 2.
- the first event detection unit 120 may detect the first event based on the data received from the robot 2.
- the return home event detection unit 121 and the visitor event detection unit 123 are examples of the first event detection unit 120.
- the return home event detection unit 121 will be described in Example 1 (application example of welcome).
- the customer event detection unit 123 will be described in Example 3 (application example of customer service).
- the second event detection unit 130 detects the second event. Similarly to the case of the first event detection unit 210 of the robot 2, the second event detection unit 130 may detect the second event based on the result of recognition processing such as voice recognition and image recognition. Similar to the case of the first event detection unit 210 of the robot 2, the second event detection unit 130 detects the second event based on the measurement results of various sensors such as the temperature sensor, contact sensor, or acceleration sensor of the robot 2. May be. The second event detection unit 130 may detect the second event based on the communication state in the communication process, as in the case of the first event detection unit 210 of the robot 2. Similarly to the case of the first event detection unit 210 of the robot 2, the second event detection unit 130 may detect the second event based on data received from the user terminal 3 or another external device. The second event detection unit 130 may detect the second event based on the data received from the robot 2.
- The vacant room event detection unit 131 is an example of the second event detection unit 130.
- the vacant room event detection unit 131 will be described in Example 3 (application example of customer service).
- the action selection unit 140 selects an action corresponding to at least one of the first event, the area type, and the second event. In the following, an example in which an action corresponding to a combination of the first event and the second event is mainly selected will be described.
- The data providing apparatus 10 further includes an event information storage unit 151 and a marker information storage unit 153.
- the event information storage unit 151 stores event information including an area type corresponding to the first event, a second event, and an action.
- the event information storage unit 151 will be described later with reference to FIG.
- the marker information storage unit 153 stores marker information that associates an area type with a marker position and orientation.
- the marker information storage unit 153 will be described later with reference to FIG.
- FIG. 10 is a block diagram illustrating an example of a data configuration of the event information storage unit 151 according to the second embodiment.
- Each record in FIG. 10 defines that when the first event is detected, the robot 2 moves to the place where the marker of the area type corresponding to the first event is installed. Further, each record in FIG. 10 defines that the robot 2 executes an action corresponding to, for example, a combination of the first event and the second event when the second event is detected. Details of each record will be described in the first to sixth embodiments.
- the event information may be set as a default, or may be set by a user using an application of the user terminal 3.
- FIG. 11 is a block diagram illustrating an example of a data configuration of the marker information storage unit 153 according to the second embodiment.
- the position and orientation of the marker of the area type are set in association with the area type.
- The user attaches, in advance, an area type marker for identifying the area near a position at an arbitrary position of a house or facility.
- the data providing apparatus 10 registers the marker information.
- FIG. 12A is a flowchart showing a processing procedure in the marker registration phase of the second embodiment.
- an unknown marker may be detected when the robot 2 is moving autonomously (S21).
- the marker recognition unit 22 of the robot 2 recognizes the marker included in the image captured by the imaging unit 21 and identifies the area type indicated by the marker.
- the marker recognition unit 22 stores a marker that has been detected in the past, and can determine an undetected marker by comparing the stored marker with the detected marker.
- When the marker recognizing unit 22 determines that an undetected marker has been recognized, it identifies the relative positional relationship and direction between the robot 2 and the marker by image recognition. The marker recognizing unit 22 obtains the distance between the robot 2 and the marker based on the size of the marker included in the captured image, and determines the orientation of the marker with respect to the robot 2 based on how the marker included in the captured image is distorted. Then, the marker recognizing unit 22 obtains the marker position and orientation based on the current position and direction of the robot 2 measured by the position measuring unit 25. The communication control unit 26 transmits marker information including the area type and the position and orientation of the marker of that area type to the data providing apparatus 10.
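- The conversion from the robot pose plus the image-derived distance and bearing to a map-frame marker pose can be sketched with simple planar geometry as follows; the 2D simplification and parameter names are assumptions made for illustration.

```python
# Hypothetical sketch: combine the robot pose measured by the position
# measurement unit 25 with the image-derived distance and bearing to obtain
# the marker position and orientation in the map frame.
import math

def marker_world_pose(robot_x: float, robot_y: float, robot_heading_rad: float,
                      distance_m: float, bearing_rad: float,
                      marker_yaw_rel_rad: float):
    """bearing and yaw are relative to the robot; returns (x, y, yaw) in the map frame."""
    world_angle = robot_heading_rad + bearing_rad
    mx = robot_x + distance_m * math.cos(world_angle)
    my = robot_y + distance_m * math.sin(world_angle)
    myaw = robot_heading_rad + marker_yaw_rel_rad
    return mx, my, myaw
```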
- The marker registration unit 110 registers the position and orientation of the marker of the area type in the marker information storage unit 153 in association with the area type (step S22).
- the robot 2 moves to the marker installation location triggered by the first event, and the robot 2 performs a predetermined action triggered by the second event.
- FIG. 12B is a flowchart showing a processing procedure in the action phase of the second embodiment.
- When the first event is detected, the action selection unit 140 of the data providing device 10 specifies the marker type corresponding to the first event and instructs the robot 2 to move to the position of the marker of that type.
- The movement control unit 23 of the robot 2 sets a path to the marker position according to the movement instruction and controls the movement mechanism 29.
- The robot 2 moves to the vicinity of the marker by the operation of the moving mechanism 29 (S24).
- When the second event is detected, the action selection unit 140 of the data providing device 10 selects an action corresponding to, for example, the combination of the first event and the second event, and instructs the robot 2 to execute the selected action. Then, the robot 2 executes the instructed action (S26). Examples 1 to 6 relating to the second embodiment will be described below.
- In Example 1, an application example of welcoming by the autonomous behavior robot 1 will be described.
- the robot 2 goes to the entrance and greets the user. If an area type marker that identifies the entrance as an area is installed at the entrance, the autonomous behavior robot 1 can recognize that the installation location of the marker is the entrance.
- the area type that identifies the entrance is called the entrance type.
- the timing when the user returns home can be determined by the fact that the position measured by the GPS device (Global Positioning System) of the user terminal 3 held by the user approaches the home. For example, when the distance between the position of the user terminal 3 and the entrance (or the data providing device 10) is shorter than the reference length, it is determined that it is time for the user to go home.
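- The distance test could be sketched as follows, assuming a haversine great-circle distance and a hypothetical reference length of 100 m; neither the formula choice nor the value comes from the embodiment.

```python
# Hypothetical sketch of detecting the user-return-home timing from GPS positions.
import math

def is_returning_home(user_lat: float, user_lon: float,
                      home_lat: float, home_lon: float,
                      reference_m: float = 100.0) -> bool:
    """True when the user terminal's GPS position is closer to home than reference_m."""
    r = 6_371_000.0                                   # Earth radius in meters
    p1, p2 = math.radians(user_lat), math.radians(home_lat)
    dp = math.radians(home_lat - user_lat)
    dl = math.radians(home_lon - user_lon)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    distance = 2 * r * math.asin(math.sqrt(a))
    return distance < reference_m
```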
- An event for determining the timing when the user returns home is called a user return event. That is, the user return home event corresponds to the first event in the application example of the welcome.
- When the robot 2 that has arrived at the entrance recognizes the user in the image taken by the photographing unit 21, it performs an action that reacts to the user's return home.
- An event in which the robot 2 recognizes a user is called a user recognition event.
- An action that reacts to the user's return home is called a welcome action.
- the user recognition event is an opportunity to perform a welcome action.
- the user recognition event corresponds to the second event in the application example of the greeting.
- the voice output unit 232 outputs a voice from a speaker included in the robot 2.
- The voice to be output may be natural language such as “welcome back” or non-language sounds such as cheering.
- the movement control unit 23 may control the moving mechanism 29 as a welcome action to perform an action of moving the robot 2 back and forth in small increments or rotating it. If the robot 2 has a shape imitating a person or a virtual character and can move the arm with an actuator, the posture control unit 231 may drive the actuator to raise and lower the arm. If the robot 2 can move the neck with an actuator, the posture control unit 231 may swing the head by driving the actuator.
- Alternatively, the posture control unit 231 may cause the robot 2 to take a pose of standing up on only its hind legs as a welcome action. Thereby, the robot 2 can express that it is pleased about the user's return home. The user becomes familiar with the robot 2 that waits for him or her and deepens attachment to it.
- the home return event detection unit 121 of the data providing apparatus 10 shown in FIG. 9 detects a user home return event when it is determined that the user has approached home based on the location information of the user terminal 3 as described above.
- the return home event detection unit 121 may detect a return home event when receiving a notification from the user terminal 3 that indicates that the user terminal 3 has communicated with a beacon transmitter installed at the entrance.
- the return home event detection unit 121 may detect a return home event when the user is recognized based on an image captured by an interphone with a camera or an input voice. Further, the return home event detection unit 121 may detect a return home event when a home return notice mail is received from the user terminal 3.
- the user recognition unit 221 of the robot 2 shown in FIG. 8 detects a user by, for example, recognizing a face part included in a captured image or recognizing an input voice.
- For example, the user recognizing unit 221 takes the user's face as a sample in advance with the photographing unit 21, analyzes the photographed image, and extracts feature data such as the size, shape, and arrangement of parts (such as the eyes, nose, and mouth).
- Then, the user recognition unit 221 analyzes the image captured by the imaging unit 21, and when it determines that a subject whose feature data such as size, shape, and part arrangement matches or approximates that of the sample is included, it may estimate that the user's face appears in the captured image.
- The user recognition unit 221 also records, for example, the user's voice as a sample in advance, analyzes the voice, and extracts feature data such as the frequency distribution and intonation. Then, the user recognition unit 221 performs the same analysis on the sound input by the microphone, and when the feature data such as the frequency distribution and intonation matches or approximates that of the user voice sample, it may estimate that the sound input with the microphone corresponds to the voice of the user.
- For the application example of the greeting, the first record in the data structure of the event information storage unit 151 shown in FIG. 10 defines that, when a user return home event is detected as the first event, the robot 2 moves to the place where the marker whose area type is the entrance type is installed (that is, the entrance). The first record further defines that the robot 2 performs a welcome action when a user recognition event is detected as the second event.
- a marker whose area type is the entrance type is called an entrance marker.
- the position and orientation of the entrance marker are set in association with the entrance type of the area type with respect to the application example of the greeting.
- FIG. 13A is a flowchart illustrating a processing procedure in the marker registration phase of Example 1.
- When the marker recognition unit 22 detects an entrance marker, the communication control unit 26 transmits entrance marker information including the entrance type as the area type and the position and orientation of the entrance marker to the data providing apparatus 10.
- Thereby, the marker registration unit 110 registers the position and orientation of the entrance marker in the marker information storage unit 153 in association with the entrance type of the area type (step S32).
- FIG. 13B is a flowchart illustrating a processing procedure in the action phase of Example 1.
- When the home return event detection unit 121 of the data providing apparatus 10 detects the user home return event (step S33), it refers to the event information storage unit 151 and specifies the entrance type as the area type corresponding to the user home return event.
- the return home event detection unit 121 further refers to the marker information storage unit 153 to identify the position and orientation of the entrance marker corresponding to the entrance type of the area type.
- the first communication control unit 11 transmits a movement instruction to the entrance including the position and orientation of the entrance marker to the robot 2.
- the return home event detection unit 121 notifies the action selection unit 140 of the user return home event.
- The movement control unit 23 of the robot 2 controls the moving mechanism 29, and the robot 2 moves to the entrance (step S34).
- the robot 2 stays at least for the first predetermined time before the position of the entrance marker.
- The first predetermined time corresponds to an upper limit value of the assumed interval from when a user return home event is detected until a user recognition event is detected. If the user recognition event is not detected when the first predetermined time has elapsed, the processing in the action phase of Example 1 may be interrupted.
- The robot 2 then recognizes the user. Specifically, the user recognition unit 221 detects a user recognition event by recognizing the user's face included in an image captured by the image capturing unit 21 or by recognizing sound input with a microphone (step S35). When the user recognition unit 221 of the robot 2 detects a user recognition event, the communication control unit 26 transmits the user recognition event to the data providing apparatus 10.
- The action selection unit 140 refers to the event information storage unit 151 and identifies the user return home event as the first event corresponding to the user recognition event of the second event.
- the action selection unit 140 refers to the event information storage unit 151 and selects a welcome action corresponding to the combination of the user return event of the first event and the user recognition event of the second event.
- the first communication control unit 11 transmits an instruction of the selected greeting action to the robot 2.
- the action execution unit 230 of the robot 2 executes the welcome action (step S36).
- In Example 2, an application example of babysitting by the autonomous behavior robot 1 will be described.
- When an infant starts crying in a children's room, the robot 2 goes to the children's room and takes care of the infant. If an area type marker for identifying a children's room as an area is installed in the children's room, the autonomous behavior robot 1 can recognize that the marker is installed in the children's room.
- An area type for identifying a child room is called a child room type.
- The robot 2 moves to the child room and behaves so as to appear worried about the infant.
- the infant's cry is detected by analyzing the sound input to the microphone of the robot 2. That is, the detection of the infant cry corresponds to the first event in the application example of babysitting. This event is called an infant crying event.
- The robot 2 then performs a behavior of soothing the infant. The behavior of soothing the infant is called a soothing action.
- the absence of an adult is detected by recognizing an image photographed by the photographing unit 21.
- the detection of the absence of an adult corresponds to the second event in the application example of babysitting. This event is called an adult absence event.
- As the soothing action, for example, a sound effective for stopping the infant's crying (for example, a soothing voice, a laughing voice, or the sound of a paper bag rustling) is output from the speaker provided in the robot 2.
- The soothing action may be a behavior of moving components of the robot 2, such as tilting its head or raising and lowering its arm, or an action such as displaying on a display a predetermined image effective for stopping the infant's crying.
- The soothing action may be remote control of a device different from the robot 2, such as turning on a nearby television or causing an audio device to output music. If the robot 2 calms the infant by the soothing action, the parent user is relieved.
- the voice recognition unit 211 of the robot 2 shown in FIG. 8 detects an infant cry event when the sound input to the microphone is determined to be an infant cry. For example, the voice recognition unit 211 records an infant cry as a sample in advance, analyzes the infant cry, and extracts feature data such as a frequency distribution and a period in which the volume increases. Then, the voice recognition unit 211 performs the same analysis on the sound input to the microphone, and when the feature data such as the frequency distribution and the period when the volume increases matches or approximates the case of the infant cry sample, You may guess that the sound input into the microphone corresponds to the infant cry.
- When the person recognition unit 225 of the robot 2 shown in FIG. 8 recognizes that no adult is shown in the image captured by the imaging unit 21, it detects an adult absence event.
- For example, the person recognizing unit 225 captures in advance, as samples, a plurality of images of adults having different genders and body shapes, and analyzes the captured images to extract feature data such as size and shape. Then, when the person recognition unit 225 analyzes the image captured by the capturing unit 21 and determines that a subject whose feature data such as size and shape matches or approximates that of a sample is included, it may estimate that an adult appears in the captured image.
- The remote control unit 233 of the robot 2 shown in FIG. 8 performs remote control by transmitting a remote control wireless signal to control a device different from the robot 2, such as turning on a nearby television or causing an audio device to output music.
- For the application example of babysitting, the second record in the data structure of the event information storage unit 151 shown in FIG. 10 defines that, when an infant crying event is detected as the first event, the robot 2 moves to the place where the marker whose area type is the child room type is installed (that is, the child room). The second record further defines that the robot 2 performs a soothing action when an adult absence event is detected as the second event.
- a marker whose area type is a child room type is called a child room marker.
- the position and orientation of the child room marker are set in association with the child room type of the area type with respect to the application example of babysitting.
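- as a rough illustration only, the following Python sketch models one record of the event information storage unit 151 and one entry of the marker information storage unit 153 for the babysitting example; the field names, values, and helper function are assumptions for illustration and are not defined by this disclosure.

```python
# Illustrative sketch (assumed schema): an event information record ties a first
# event to an area type, a second event, and an action; marker information ties
# an area type to a marker's position and orientation.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class EventRecord:
    first_event: str    # e.g. "infant_crying"
    area_type: str      # e.g. "child_room" (where the robot should move)
    second_event: str   # e.g. "adult_absence"
    action: str         # e.g. "soothing_action"

@dataclass
class MarkerInfo:
    area_type: str
    position: Tuple[float, float]   # (x, y) in the robot's map frame
    orientation: float              # heading in radians

event_info: List[EventRecord] = [
    EventRecord("infant_crying", "child_room", "adult_absence", "soothing_action"),
]
marker_info: Dict[str, MarkerInfo] = {
    "child_room": MarkerInfo("child_room", (3.0, 1.5), 1.57),
}

def destination_for(first_event: str) -> Tuple[float, float]:
    """Look up the area type for the first event, then the marker position for it."""
    record = next(r for r in event_info if r.first_event == first_event)
    return marker_info[record.area_type].position

print(destination_for("infant_crying"))  # -> (3.0, 1.5)
```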
- FIG. 14A is a flowchart illustrating a processing procedure in the marker registration phase of Example 2. For example, when the marker recognizing unit 22 detects a child room marker while the robot 2 is moving autonomously (step S41), the communication control unit 26 transmits child room marker information including the child room type of the area type and the position and orientation of the child room marker to the data providing apparatus 10.
- the marker registration unit 110 registers the position and orientation of the child room marker in the marker information storage unit 153 in association with the child room type of the area type (step S42).
- FIG. 14B is a flowchart illustrating a processing procedure in the action phase of Example 2.
- when the voice recognition unit 211 of the robot 2 detects an infant crying event (step S43), the communication control unit 26 notifies the data providing apparatus 10 of the infant crying event.
- the action selection unit 140 refers to the event information storage unit 151 and identifies the child room type of the area type associated with the infant crying event.
- the action selection unit 140 further refers to the marker information storage unit 153 to specify the position and orientation of the child room marker corresponding to the child room type of the area type.
- the first communication control unit 11 transmits to the robot 2 an instruction to move to the child room including the position and orientation of the child room marker.
- the movement control unit 23 controls the moving mechanism 29, and the robot 2 moves to the child room (step S44).
- the robot 2 stays for at least a second predetermined time in front of the position of the child room marker.
- the second predetermined time corresponds to an upper limit value of the assumed time required for the robot 2 to recognize the absence of an adult after entering the child room. If an adult absence event is not detected when the second predetermined time has elapsed, the processing in the action phase of Example 2 may be interrupted.
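- purely as an illustration of this wait-with-timeout behavior, the following Python sketch polls a second-event detector until either the event fires or the predetermined time elapses; the timeout value, polling interval, and callback are assumptions, not values specified in this disclosure.

```python
# Sketch of "stay near the marker for at most a predetermined time, waiting for
# the second event"; returns False so the action phase can be interrupted.
import time
from typing import Callable

def wait_for_second_event(detect: Callable[[], bool],
                          timeout_s: float = 60.0,
                          poll_s: float = 0.5) -> bool:
    """Return True if the second event is detected before the timeout."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if detect():
            return True
        time.sleep(poll_s)
    return False

if __name__ == "__main__":
    # Dummy detector that fires after about one second, standing in for the
    # adult-absence recognition performed by the person recognition unit 225.
    start = time.monotonic()
    fired = wait_for_second_event(lambda: time.monotonic() - start > 1.0, timeout_s=3.0)
    print("adult absence event detected:", fired)
```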
- when the person recognition unit 225 detects an adult absence event (step S45), the communication control unit 26 transmits the adult absence event to the data providing apparatus 10.
- the action selection unit 140 refers to the event information storage unit 151 and identifies the infant crying event of the first event corresponding to the adult absence event of the second event.
- the action selection unit 140 refers to the event information storage unit 151 and selects the soothing action corresponding to the combination of the infant crying event of the first event and the adult absence event of the second event.
- the first communication control unit 11 transmits an instruction of the selected soothing action to the robot 2.
- when the communication control unit 26 of the robot 2 receives the instruction of the soothing action, the action execution unit 230 of the robot 2 executes the soothing action (step S46).
- Example 3 An application example of customer service by the autonomous behavior type robot 1 will be described.
- as the customer service, for example, a store that provides food service in guest rooms is assumed.
- the robot 2 goes to the reception counter and guides the customer to a guest room. If an area type marker for identifying the reception counter as an area is installed at the reception counter, the autonomous behavior robot 1 can recognize that the marker is installed at the reception counter.
- An area type for identifying a reception counter is called a reception counter type.
- the robot 2 moves to the reception counter and responds to the customer.
- a visitor is detected when a customer entering the store appears in a captured image. That is, the detection of a visitor corresponds to the first event in the application example of customer service. This event is called a visitor event.
- the robot 2 behaves so as to guide the customer to a vacant room.
- the behavior of guiding the customer to a vacant room is called the guidance action.
- Vacant room information is obtained from a guest room management system (not shown).
- the detection of a vacant room corresponds to the second event in the application example of customer service. This event is called a vacant room event.
- the robot 2 leads the customer to an empty room.
- the message output unit 235 may output by voice a guidance message such as "Please enter room No. ...", or the guidance message may be displayed on a display device included in the robot 2.
- in this way, the customer can enjoy an experience not found in service provided only by human staff.
- the visitor event detection unit 123 of the data providing apparatus 10 shown in FIG. 9 detects a visitor event when, for example, a customer entering the store is recognized from an image taken by a camera provided at the entrance of the store.
- the vacant room event detection unit 131 of the data providing apparatus 10 shown in FIG. 9 inquires of the guest room status to a guest room management system (not shown) and detects a vacant room event if there is a vacant guest room.
- the movement control unit 23 of the robot 2 shown in FIG. 8 drives the movement mechanism 29 so that the robot 2 moves slowly to the guest room.
- the robot 2 may control the speed of movement so as to keep the distance from the customer constant while measuring the distance to the customer photographed by the photographing unit 21 of the robot 2.
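- as one possible, simplified realization of this speed control, the following Python sketch uses a proportional rule to keep the measured distance to the customer near a target value; the target distance, gain, and speed limit are illustrative assumptions.

```python
# Sketch of a proportional controller: slow down when the guided customer falls
# behind, speed up (up to a limit) when the customer closes in.
def guidance_speed(measured_distance_m: float,
                   target_distance_m: float = 1.2,
                   gain: float = 0.8,
                   max_speed_mps: float = 0.6) -> float:
    """Return the robot's forward speed given the measured distance to the customer."""
    # Positive error -> customer is far behind -> reduce the forward speed.
    error = measured_distance_m - target_distance_m
    speed = max_speed_mps - gain * error
    return max(0.0, min(max_speed_mps, speed))

for d in (0.8, 1.2, 2.0):
    print(f"distance {d:.1f} m -> speed {guidance_speed(d):.2f} m/s")
```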
- the message output unit 235 of the robot 2 shown in FIG. 8 outputs a guidance message such as "Please enter room No. ..." by voice, or displays the guidance message on a display device of the robot 2.
- the third record in the data structure of the event information storage unit 151 shown in FIG. 10 defines that, when a visitor event is detected as the first event for the customer service application example, the robot 2 moves to the place where a marker whose area type is the reception counter type is installed (that is, the reception counter). The third record further defines that the robot 2 performs the guidance action when a vacant room event is detected as the second event.
- a marker whose area type is a reception counter type is called a reception counter marker.
- the position and orientation of the reception counter marker are set in association with the reception counter type of the area type for the application example of customer service.
- FIG. 15A is a flowchart illustrating a processing procedure in the marker registration phase of Example 3. For example, when the marker recognition unit 22 detects the reception counter marker while the robot 2 is moving autonomously (step S51), the communication control unit 26 transmits reception counter marker information including the reception counter type of the area type and the position and orientation of the reception counter marker to the data providing apparatus 10.
- the marker registration unit 110 registers the position and orientation of the reception counter marker in the marker information storage unit 153 in association with the reception counter type of the area type (step S52).
- FIG. 15B is a flowchart illustrating a processing procedure in the action phase of Example 3.
- when the visitor event detection unit 123 of the data providing apparatus 10 detects a visitor event (step S53), it refers to the event information storage unit 151 and specifies the reception counter type of the area type associated with the visitor event.
- the visitor event detection unit 123 further refers to the marker information storage unit 153 to identify the position and orientation of the reception counter marker corresponding to the area type reception counter type.
- the first communication control unit 11 transmits to the robot 2 an instruction to move to the reception counter including the position and orientation of the reception counter marker. At this time, the visitor event detection unit 123 notifies the action selection unit 140 of the visitor event.
- the movement control unit 23 controls the moving mechanism 29, and the robot 2 moves to the reception counter (step S54).
- the robot 2 stays for at least a third predetermined time in front of the position of the reception counter marker.
- the third predetermined time corresponds to an upper limit value of the switching time until a store clerk responds to the customer instead of the robot 2. If a vacant room event is not detected when the third predetermined time has elapsed, the processing in the action phase of Example 3 may be interrupted.
- when the vacant room event detection unit 131 detects a vacant room event (step S55), the action selection unit 140 refers to the event information storage unit 151 and identifies the visitor event of the first event corresponding to the vacant room event of the second event.
- the action selection unit 140 refers to the event information storage unit 151 and selects the guidance action corresponding to the combination of the visitor event of the first event and the vacant room event of the second event.
- the first communication control unit 11 transmits an instruction of the selected guidance action to the robot 2.
- the action execution unit 230 of the robot 2 executes the guidance action (step S56).
- Example 4 An application example of call support by the autonomous behavior robot 1 will be described.
- the robot 2 has a telephone function.
- depending on where the robot 2 is located, there are places where the radio waves used for the telephone function reach easily and places where they reach with difficulty.
- the robot 2 moves from a place where the radio wave condition is poor to a place where the radio wave condition is good (hereinafter referred to as a place where the radio wave is good), and starts a call in response to a call request.
- if an area type marker for identifying a place with good radio waves as an area is installed at a place with good radio waves, the autonomous behavior type robot 1 can recognize that the marker is installed at a place with good radio waves.
- An area type for identifying a place with good radio wave is called a radio wave good type.
- when the radio wave condition deteriorates, the robot 2 moves to a place where the radio waves are good so that wireless communication is not hindered. That is, the detection of radio wave deterioration corresponds to the first event in the application example of call support. This event is called a radio wave deterioration event.
- Robot 2 starts a call when it receives a call request from a user.
- the processing operation for starting a call is called a call start action.
- the reception of a call request is detected by a recognition process such as recognizing a voice such as “Please make a call” or recognizing a gesture or pose for making a call from a photographed image.
- the reception of the call request corresponds to the second event in the application example of the call support. This event is called a call request event.
- the radio wave state detection unit 213 of the robot 2 shown in FIG. 8 monitors the radio wave state of wireless communication used in the telephone function, and detects a radio wave deterioration event when the intensity of the radio wave is below an acceptable standard.
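- a minimal sketch of such monitoring is shown below; it flags a radio wave deterioration event after several consecutive signal-strength readings fall below a threshold. The RSSI threshold and the debounce count are illustrative assumptions and not values taken from this disclosure.

```python
# Sketch: fire a "radio wave deterioration event" only after the signal strength
# has stayed below an acceptable level for several consecutive samples.
class RadioWaveMonitor:
    def __init__(self, rssi_threshold_dbm: float = -85.0, required_samples: int = 3):
        self.rssi_threshold_dbm = rssi_threshold_dbm
        self.required_samples = required_samples
        self._bad_count = 0

    def update(self, rssi_dbm: float) -> bool:
        """Feed one RSSI measurement; return True when the event fires."""
        if rssi_dbm < self.rssi_threshold_dbm:
            self._bad_count += 1
        else:
            self._bad_count = 0  # signal recovered, reset the debounce counter
        return self._bad_count == self.required_samples

monitor = RadioWaveMonitor()
for rssi in (-70, -88, -90, -91):
    if monitor.update(rssi):
        print("radio wave deterioration event at", rssi, "dBm")
```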
- the call request reception unit 223 of the robot 2 shown in FIG. 8 accepts a call request from the user by recognizing an utterance such as "Please make a call" or by recognizing a gesture or pose for making a call from a photographed image.
- the call request receiving unit 223 may specify a telephone number or a call partner by voice recognition.
- the telephone communication unit 237 of the robot 2 illustrated in FIG. 8 controls telephone communication and performs call processing.
- the fourth record in the data structure of the event information storage unit 151 shown in FIG. 10 defines that, when a radio wave deterioration event is detected as the first event for the application example of call support, the robot 2 moves to the place where a marker whose area type is the radio wave good type is installed (that is, a place with good radio waves).
- the fourth record further defines that the robot 2 starts a call when a call request event is detected as the second event.
- a marker whose area type is a good radio wave type is called a good radio wave marker.
- the position and orientation of the radio wave good marker are set in association with the radio wave good type of the area type for the application example of call support.
- FIG. 16A is a flowchart illustrating a processing procedure in the marker registration phase of Example 4. For example, when the marker recognizing unit 22 detects a radio wave good marker while the robot 2 is moving autonomously (step S61), the communication control unit 26 transmits radio wave good marker information including the radio wave good type of the area type and the position and orientation of the radio wave good marker to the data providing apparatus 10.
- the marker registration unit 110 registers the position and orientation of the radio wave good marker in the marker information storage unit 153 in association with the radio wave good type of the area type (step S62).
- FIG. 16B is a flowchart illustrating a processing procedure in the action phase of Example 4.
- when the radio wave state detection unit 213 of the robot 2 detects a radio wave deterioration event (step S63), the communication control unit 26 notifies the data providing apparatus 10 of the radio wave deterioration event.
- the action selection unit 140 refers to the event information storage unit 151 and identifies the radio wave good type of the area type associated with the radio wave deterioration event.
- the action selection unit 140 further refers to the marker information storage unit 153 to identify the position and orientation of the radio wave good marker corresponding to the area type radio wave good type.
- the first communication control unit 11 transmits to the robot 2 an instruction to move to a place with good radio wave including the position and orientation of the radio wave good marker.
- the movement control unit 23 controls the movement mechanism 29, and the robot 2 moves to a place with good radio waves (step S64).
- the robot 2 stays for at least a fourth predetermined time in front of the position of the radio wave good marker.
- the fourth predetermined time corresponds to an upper limit of the length of the period during which the user is expected to make a call request. If a call request event is not detected when the fourth predetermined time has elapsed, the processing in the action phase of Example 4 may be interrupted.
- when the call request receiving unit 223 detects a call request event (step S65), the communication control unit 26 transmits the call request event to the data providing device 10.
- the action selection unit 140 refers to the event information storage unit 151 and selects the call start action corresponding to the call request event of the second event.
- the first communication control unit 11 transmits an instruction of the selected call start action to the robot 2. Since the selection is based on the second event alone, the call start action is transmitted regardless of whether or not a radio wave deterioration event has been previously notified.
- the telephone communication unit 237 of the robot 2 executes the call start action (step S66).
- when the telephone communication unit 237 makes a call and enters the call state, it transmits a signal obtained by converting the voice input to the microphone included in the robot 2, and outputs from a speaker the voice of the other party converted from the received signal.
- Example 5 An application example of security by the autonomous behavior robot 1 will be described.
- for example, in a residence or facility where a safe is placed, the robot 2 patrols so as to guard against tampering with the safe. If an area type marker for identifying the safe storage area as an area is installed in the safe storage area, the autonomous behavior robot 1 can recognize that the marker is installed in the safe storage area.
- the area type for identifying the safe storage area is called a safe type.
- when instructed to patrol, the robot 2 behaves so as to move to the vicinity of the safe and check the situation there.
- the reception of the patrol instruction may be performed by recognition processing such as recognizing a voice such as "Go and check the safe" or recognizing a predetermined pose or gesture from a photographed image.
- alternatively, the data providing apparatus 10 may receive a patrol instruction from the application of the user terminal 3 and transfer it to the robot 2, so that the robot 2 receives the patrol instruction.
- alternatively, the data providing apparatus 10 may automatically send a patrol instruction to the robot 2, and the robot 2 may receive the patrol instruction.
- the reception of a patrol instruction by the robot 2 corresponds to the first event in the security application example. This event is called a patrol event.
- when a person is recognized near the safe, the robot 2 executes a warning action. That is, in the security application example, the recognition of a person near the safe corresponds to the second event. This event is called a human recognition event. It is assumed that a person near the safe may be a suspicious person.
- the communication control unit 26 transmits the video (moving image or still image) captured by the imaging unit 21 to the data providing apparatus 10. Regarding this processing, the communication control unit 26 is an example of the action execution unit 230.
- the data providing apparatus 10 may record the received video as evidence. Further, the data providing apparatus 10 may transmit a warning message to the application of the user terminal 3, or may transfer the video received from the robot 2 to the application of the user terminal 3.
- the voice output unit 232 of the robot 2 may emit a warning sound.
- the patrol event detection unit 215 of the robot 2 shown in FIG. 8 detects a patrol event when a patrol instruction is received through the above recognition processing or when a patrol instruction is received from the data providing apparatus 10.
- the person recognition unit 225 of the robot 2 illustrated in FIG. 8 recognizes the appearance of a person included in the captured image. The presence of a person may be recognized by voice recognition of the person's speaking voice. The person recognizing unit 225 may determine that the voice is a human voice when a frequency corresponding to a standard human voice is extracted from the sound input to the microphone.
- the fifth record in the data structure of the event information storage unit 151 shown in FIG. 10 defines that, when a patrol event is detected as the first event in the security application example, the robot 2 moves to the place where a marker whose area type is the safe type is installed (that is, the safe storage area). The fifth record further defines that the robot 2 performs the warning action when a human recognition event is detected as the second event.
- a marker whose area type is a safe type is called a safe marker.
- the position and orientation of the safe marker are set in association with the safe type of the area type regarding the security application example.
- FIG. 17A is a flowchart illustrating a processing procedure in the marker registration phase of Example 5.
- for example, when the marker recognizing unit 22 detects a safe marker while the robot 2 is moving autonomously (step S71), the communication control unit 26 transmits safe marker information including the safe type of the area type and the position and orientation of the safe marker to the data providing apparatus 10.
- the marker registration unit 110 registers the position and orientation of the safe marker in the marker information storage unit 153 in association with the safe type of the area type (step S72).
- FIG. 17B is a flowchart illustrating a processing procedure in the action phase of Example 5.
- when the patrol event detection unit 215 of the robot 2 detects a patrol event (step S73), the communication control unit 26 notifies the data providing apparatus 10 of the patrol event. The action selection unit 140 refers to the event information storage unit 151 and specifies the safe type of the area type associated with the patrol event.
- the action selection unit 140 further refers to the marker information storage unit 153 to specify the position and orientation of the safe marker corresponding to the safe type of area type.
- the first communication control unit 11 transmits to the robot 2 an instruction to move to the safe storage area including the position and orientation of the safe marker.
- the movement control unit 23 controls the moving mechanism 29, and the robot 2 moves to the vicinity of the safe (step S74).
- the robot 2 stays for at least a fifth predetermined time in front of the position of the safe marker.
- the fifth predetermined time corresponds to an upper limit value of the assumed time required for the robot 2 to move to the vicinity of the safe and recognize a person. If a human recognition event is not detected when the fifth predetermined time has elapsed, the processing in the action phase of Example 5 may be interrupted.
- when the person recognition unit 225 detects a human recognition event (step S75), the communication control unit 26 transmits the human recognition event to the data providing apparatus 10.
- the action selection unit 140 refers to the event information storage unit 151 and identifies the patrol event of the first event corresponding to the human recognition event of the second event.
- the action selection unit 140 refers to the event information storage unit 151 and selects the warning action corresponding to the combination of the patrol event of the first event and the human recognition event of the second event.
- the first communication control unit 11 transmits an instruction of the selected warning action to the robot 2.
- the action execution unit 230 of the robot 2 executes the warning action (step S76).
- Example 6 An application example of an alarm clock function by the autonomous behavior type robot 1 will be described. For example, the robot 2 wakes up a user sleeping in the bedroom in the morning. If an area type marker for identifying the bedroom as an area is installed in the bedroom, the autonomous behavior robot 1 can recognize that the marker is installed in the bedroom. An area type for identifying a bedroom is called a bedroom type.
- the robot 2 moves to the bedroom and performs the alarm action at the timing for waking up the user.
- the timing to wake up the user is determined by, for example, a scheduled wake-up time.
- alternatively, the timing to wake up the user may be determined by recognition processing such as recognizing a voice such as "Please wake up Dad (the user)" uttered by the user's family, or recognizing a predetermined pose or gesture from the captured image. That is, the determination of the timing to wake up the user corresponds to the first event in the application example of the alarm clock. This event is called a wake-up event.
- when the robot 2 recognizes that the user is lying on the bed in the bedroom (a recumbent position), the robot 2 performs the alarm action. When the user is not lying down, such as when there is no user on the bed or when the user has already gotten up, the robot 2 does not execute the alarm action. That is, in the application example of the alarm clock, detecting the user lying on the bed corresponds to the second event. This event is called a user position event.
- the audio output unit 232 outputs a sound such as an alarm sound or a calling voice from a speaker.
- the remote control unit 233 may cause the external device to output sound, such as activating a television or outputting music to the audio device.
- the remote control unit 233 may turn on the lighting device.
- the movement control unit 23 may control the moving mechanism 29 so that the robot 2 moves violently around the bed.
- the wake-up event detection unit 217 of the robot 2 shown in FIG. 8 detects the wake-up event at the scheduled wake-up time as described above or by the above-described recognition processing.
- the posture recognition unit 227 of the robot 2 shown in FIG. 8 recognizes the posture of a person sleeping on the bed in the bedroom (the person is regarded as the user).
- the posture recognition unit 227 detects a user position event if the person on the bed is in the supine position.
- for example, the posture recognition unit 227 photographs a supine user in advance as a sample with the photographing unit 21, analyzes the photographed image, and extracts in advance feature data such as the size, the shape, and the arrangement of body parts (such as the head, hands, and feet).
- then, the posture recognition unit 227 analyzes the image photographed by the photographing unit 21, and when it determines that the image contains a subject whose feature data, such as size, shape, and part arrangement, is the same as or similar to that of the sample, it may determine that the recumbent user appears in the captured image.
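- as a greatly simplified stand-in for this kind of posture check, the following Python sketch treats a detected person as a bounding box and regards a wide, low box inside the bed region as a recumbent posture; the box format, bed region, and aspect-ratio threshold are illustrative assumptions.

```python
# Sketch: reduce the posture check to bounding-box geometry in image coordinates.
from dataclasses import dataclass

@dataclass
class BoundingBox:
    x: float
    y: float
    w: float
    h: float  # pixels in the captured image

def is_recumbent(box: BoundingBox, bed_region: BoundingBox,
                 min_aspect: float = 1.8) -> bool:
    """True if the person box lies inside the bed region and is much wider than tall."""
    inside = (box.x >= bed_region.x and box.y >= bed_region.y and
              box.x + box.w <= bed_region.x + bed_region.w and
              box.y + box.h <= bed_region.y + bed_region.h)
    return inside and (box.w / max(box.h, 1.0)) >= min_aspect

bed = BoundingBox(100, 200, 400, 200)
person = BoundingBox(120, 260, 300, 90)
print("user position event:", is_recumbent(person, bed))
```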
- the sixth record in the data structure of the event information storage unit 151 illustrated in FIG. 10 defines that, when a wake-up event is detected as the first event in the application example of the alarm clock, the robot 2 moves to the place where a marker whose area type is the bedroom type is installed (that is, the bedroom). The sixth record further defines that the robot 2 performs the alarm action when a user position event is detected as the second event.
- a marker whose area type is the bedroom type is called a bedroom marker.
- the position and orientation of the bedroom marker are set in association with the bedroom type of the area type for the application example of the alarm.
- FIG. 18A is a flowchart illustrating a processing procedure in the marker registration phase of Example 6. For example, when the marker recognizing unit 22 detects a bedroom marker while the robot 2 is moving autonomously (step S81), the communication control unit 26 transmits bedroom marker information including the bedroom type of the area type and the position and orientation of the bedroom marker to the data providing apparatus 10.
- the marker registration unit 110 registers the position and orientation of the bedroom marker in the marker information storage unit 153 in association with the bedroom type of the area type (step S82).
- FIG. 18B is a flowchart illustrating a processing procedure in the action phase of Example 6.
- when the wake-up event detection unit 217 of the robot 2 detects a wake-up event (step S83), the communication control unit 26 notifies the data providing apparatus 10 of the wake-up event. The action selection unit 140 refers to the event information storage unit 151 and identifies the bedroom type of the area type associated with the wake-up event.
- the action selection unit 140 further refers to the marker information storage unit 153 to specify the position and orientation of the bedroom marker corresponding to the bedroom type of the area type.
- the first communication control unit 11 transmits an instruction to move to the bedroom including the position and orientation of the bedroom marker to the robot 2.
- the movement control unit 23 controls the moving mechanism 29, and the robot 2 moves to the bedroom (step S84).
- the robot 2 stays for at least a sixth predetermined time in front of the position of the bedroom marker.
- the sixth predetermined time corresponds to an upper limit value of the assumed time required for the robot 2 to enter the bedroom and detect a user position event. If a user position event is not detected when the sixth predetermined time has elapsed, the processing in the action phase of Example 6 may be interrupted.
- when the posture recognition unit 227 detects a user position event (step S85), the communication control unit 26 transmits the user position event to the data providing apparatus 10.
- the action selection unit 140 refers to the event information storage unit 151 and identifies the wake-up event of the first event corresponding to the user position event of the second event.
- the action selection unit 140 refers to the event information storage unit 151 and selects the alarm action corresponding to the combination of the wake-up event of the first event and the user position event of the second event.
- the first communication control unit 11 transmits an instruction of the selected alarm action to the robot 2.
- the action execution unit 230 of the robot 2 executes the alarm action (step S86).
- the processing described as being performed by the first event detection unit 210 of the robot 2 may be performed by the first event detection unit 120 of the data providing apparatus 10.
- the processing described as being performed by the second event detection unit 220 of the robot 2 may be performed by the second event detection unit 130 of the data providing apparatus 10.
- the processing described as being performed by the first event detection unit 120 of the data providing apparatus 10 may be performed by the first event detection unit 210 of the robot 2.
- the processing described as being performed by the second event detection unit 130 of the data providing apparatus 10 may be performed by the second event detection unit 220 of the robot 2.
- the robot 2 is moved to a predetermined location at a predetermined timing, and a predetermined action is executed when a predetermined condition is satisfied. It is convenient that such a series of operations can be easily realized by installing the marker at a predetermined place.
- the robot 2 may execute an action instructed by the marker.
- a marker in which an action identifier is made into a graphic is used.
- a graphic code obtained by converting an action identifier into a graphic by a general conversion method, such as a barcode method or a two-dimensional barcode method, may be used as the marker. In that case, the marker recognizing unit 22 can read the action identifier from the graphic code by the corresponding conversion method. Alternatively, a uniquely designed figure may be used as the marker.
- in that case, the marker recognizing unit 22 may specify the type of the figure when the shape of a figure included in the captured image is a predetermined shape, and may specify the identifier of the action corresponding to the specified figure type. It is assumed that the marker recognizing unit 22 stores data that associates figure types with action identifiers. That is, the marker in the third embodiment can specify an identifier of an arbitrary action.
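- a minimal sketch of such a lookup from a recognized figure type to an action identifier is shown below; the table contents and function names are illustrative assumptions.

```python
# Illustrative sketch (assumed table and names): map a recognized figure type to
# an action identifier held by the marker recognizing unit 22.
from typing import Optional

ACTION_BY_FIGURE = {
    "red_triangle": "move_to_living_room",
    "blue_circle": "search_marker_B",
    "green_square": "start_charging",
}

def action_for_figure(figure_type: str) -> Optional[str]:
    """Return the action identifier for a recognized figure type, if registered."""
    return ACTION_BY_FIGURE.get(figure_type)

recognized = "red_triangle"
print(f"figure '{recognized}' -> action '{action_for_figure(recognized)}'")
```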
- for example, if the action instructed by a marker installed in front of the child room is movement to the living room, the robot 2 does not enter the child room when it recognizes the marker in front of the child room, and immediately moves to the living room.
- the action instructed by a marker may be a search for another predetermined marker. If the action instructed by marker A is a search for marker B, the robot 2 searches for marker B and starts moving when marker A is recognized. Furthermore, if the action instructed by marker B is a search for marker C, the robot 2 searches for marker C and starts moving when marker B is recognized. If a series of markers, each instructing a search for the next marker, is arranged at points on a route, the robot 2 can be made to follow the route through the series of markers in order, as in the sketch below. In this way, a style of play in which the robot 2 competes with a child in an indoor game imitating orienteering becomes possible. In addition, a plurality of robots 2 can be run to hold an indoor race.
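- the following is a minimal Python sketch of that chained search, assuming hypothetical marker names and a stand-in search function.

```python
# Sketch of the chained "search for the next marker" behavior: each marker's
# action names the next marker to look for; the chain ends at the goal marker.
NEXT_MARKER = {"A": "B", "B": "C", "C": None}   # C is the goal marker

def find_marker(name: str) -> str:
    """Stand-in for driving around until the named marker is recognized."""
    print(f"searching for marker {name} ... found")
    return name

def run_course(start: str = "A") -> None:
    current = find_marker(start)
    while NEXT_MARKER.get(current):            # follow the chain until the goal
        current = find_marker(NEXT_MARKER[current])
    print(f"course finished at marker {current}")

run_course()  # A -> B -> C
```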
- in this case, the user may instruct the autonomous behavior type robot 1 from the application of the user terminal 3 so that the robot 2 first searches for marker A.
- the robot 2 may recognize a user's voice such as “Start” or “Find” and start searching for a marker triggered by an event in which the user's voice is detected.
- the robot 2 may recognize the pose or gesture that the user instructs to start from the captured image, and may start searching for a marker triggered by an event in which the pose or gesture is detected.
- alternatively, the user may set a start time for the autonomous behavior type robot 1 from the application of the user terminal 3, and the robot 2 may start searching for the starting marker when the start time is reached.
- the robot 2 may be made to recognize an article that should not be approached by using a marker for instructing prohibition of approach.
- for example, a marker in which an approach prohibition code is made into a graphic is installed on an article that should not be approached, such as a decorative article or a precision instrument.
- a graphic code obtained by converting an approach prohibition code into a graphic by a general conversion method (such as a barcode method or a two-dimensional barcode method) may be used as the marker.
- the marker recognizing unit 22 stores the type of figure corresponding to approach prohibition, and determines whether the type of figure specified from the captured image corresponds to approach prohibition.
- the marker recognition unit 22 of the robot 2 measures the distance to the recognized marker.
- the movement control unit 23 controls the movement mechanism 29 so as to move the robot 2 in a direction away from the marker when it is determined that the distance from the marker is shorter than the first reference distance.
- the first reference distance is a distance necessary for the movement control of the robot 2 to change its direction so that the robot 2 does not reach the position of the marker. In this way, the risk of the robot 2 colliding with a fragile article can be reduced.
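- as an illustration of this rule, the following Python sketch commands a velocity pointing away from the marker whenever the measured distance falls below the first reference distance; the reference distance, speed, and coordinate convention are assumptions for illustration.

```python
# Sketch: when closer to an approach-prohibition marker than the reference
# distance, command a velocity directed away from the marker.
import math

def avoidance_velocity(robot_xy, marker_xy, reference_distance_m=0.8, speed_mps=0.3):
    """Return (vx, vy) moving the robot away from the marker, or (0, 0) if far enough."""
    dx, dy = robot_xy[0] - marker_xy[0], robot_xy[1] - marker_xy[1]
    distance = math.hypot(dx, dy)
    if distance >= reference_distance_m or distance == 0.0:
        return (0.0, 0.0)               # no avoidance needed
    return (speed_mps * dx / distance, speed_mps * dy / distance)

print(avoidance_velocity((1.0, 1.0), (1.2, 1.0)))   # close -> move in -x direction
print(avoidance_velocity((3.0, 1.0), (1.2, 1.0)))   # far -> (0.0, 0.0)
```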
- a marker for instructing approach prohibition may also be installed on an article that is likely to move, such as a balance ball or a vacuum cleaner body, so that the robot 2 can recognize the article that is likely to move.
- the movement control unit 23 controls the movement mechanism 29 so as to move the robot 2 in a direction away from the marker when it is determined that the distance from the marker is shorter than the second reference distance.
- the second reference distance is a distance necessary for controlling the robot 2 to increase the distance to the article before the article moves and reaches the position of the robot 2. In this way, the risk of the article moving and colliding with the robot 2 can be reduced.
- the type of the article may be identified by the marker, and the approach of the robot 2 may be restricted according to the type of the article.
- a marker indicating the type of article may be used, or article management data that associates the marker ID with the type of article may be held in the data providing apparatus 10.
- the type of article is also detected when the marker recognition unit 22 recognizes the marker.
- the marker recognizing unit 22 stores article management data that associates the marker ID with the type of article.
- the marker recognizing unit 22 detects the marker ID from the marker and identifies the type of the article corresponding to the marker ID with reference to the article management data.
- the article management data may be set by the user from the application of the user terminal 3.
- the marker recognizing unit 22 may store access control data in which whether the robot 2 is permitted to approach is set for each type of article, and may determine whether approach is permitted for the type of article specified by the marker ID.
- in the access control data, for example, approach is permitted for an article, such as a table or a chair, that is not easily broken and is unlikely to move.
- approach prohibition is set for an article that is likely to move. Since the vacuum cleaner body always moves forward and never moves backward, approach from the rear may be permitted for the vacuum cleaner body. In other words, it is only necessary to prohibit approach to the front of the vacuum cleaner body. If the front and rear of the vacuum cleaner body can be recognized from the orientation of the marker, only approach to the front of the vacuum cleaner body may be prohibited.
- in this way, the robot 2 can determine the range in which approach is prohibited based on the marker. If a rule is provided that prohibits approach in the direction indicated by the arrow, there is no operational problem as long as attention is paid to the orientation of the marker when the marker is attached.
- an approach prohibition range based on the arrow may be set in the access control data, or a similar approach prohibition range may be set from the application of the user terminal 3.
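- the following Python sketch illustrates, with assumed article types, radii, and sector angles, how access control data could encode both all-around prohibition and prohibition limited to one side (for example, the front of the vacuum cleaner body).

```python
# Sketch of access control data keyed by article type, including a direction-
# limited prohibition range (only a front sector of the article is off-limits).
ACCESS_CONTROL = {
    "table":   {"approach": "allowed"},
    "vase":    {"approach": "prohibited", "radius_m": 1.0},          # all around
    "cleaner": {"approach": "prohibited", "radius_m": 1.0,
                "sector_deg": 120.0},   # only the front sector is prohibited
}

def approach_prohibited(article_type, distance_m, bearing_from_marker_deg=0.0):
    """True if the robot's current position falls inside the prohibited range."""
    rule = ACCESS_CONTROL.get(article_type, {"approach": "allowed"})
    if rule["approach"] == "allowed" or distance_m > rule.get("radius_m", 0.0):
        return False
    half = rule.get("sector_deg", 360.0) / 2.0
    # bearing 0 deg = straight out of the marker's facing direction (its "front")
    return abs((bearing_from_marker_deg + 180.0) % 360.0 - 180.0) <= half

print(approach_prohibited("cleaner", 0.5, bearing_from_marker_deg=10.0))   # front -> True
print(approach_prohibited("cleaner", 0.5, bearing_from_marker_deg=170.0))  # rear -> False
```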
- the marker recognition unit 22 may store marker definition information that associates a marker ID with an area type, detects the marker ID from the marker, and specifies the area type corresponding to the marker ID.
- the association between the marker ID and the area type may be set by the user from the application of the user terminal 3.
- the first event is not limited to the example described above.
- the first event is optional.
- the first event detection unit 210 of the robot 2 and the first event detection unit 120 of the data providing apparatus 10 may detect the first event when a person other than the user is recognized.
- the first event detection unit 210 of the robot 2 and the first event detection unit 120 of the data providing apparatus 10 may detect the first event when an unknown person who has never been recognized is recognized for the first time.
- the first event detection unit 210 of the robot 2 and the first event detection unit 120 of the data providing apparatus 10 may detect the first event when a known person who has been recognized in the past is recognized again.
- the first event detection unit 210 of the robot 2 and the first event detection unit 120 of the data providing device 10 may detect the first event when the same person is repeatedly recognized within a predetermined time.
- the first event detection unit 210 of the robot 2 and the first event detection unit 120 of the data providing apparatus 10 may detect the first event when recognizing a person who has not been recognized within a predetermined time. Further, the first event detection unit 210 of the robot 2 and the first event detection unit 120 of the data providing apparatus 10 may detect the first event when determining that the positional relationship and orientation between the recognized person and the marker in the captured image satisfy a predetermined condition.
- the second event is not limited to the example described above.
- the second event is optional.
- the second event detection unit 220 of the robot 2 and the second event detection unit 130 of the data providing apparatus 10 may detect the second event when a person other than the user is recognized.
- the second event detection unit 220 of the robot 2 and the second event detection unit 130 of the data providing apparatus 10 may detect the second event when an unknown person who has never been recognized is recognized for the first time.
- the second event detection unit 220 of the robot 2 and the second event detection unit 130 of the data providing device 10 may detect the second event when recognizing a known person who has been recognized in the past.
- the second event detection unit 220 of the robot 2 and the second event detection unit 130 of the data providing apparatus 10 may detect the second event when the same person is repeatedly recognized within a predetermined time.
- the second event detection unit 220 of the robot 2 and the second event detection unit 130 of the data providing apparatus 10 may detect the second event when recognizing a person who has not been recognized within a predetermined time. Furthermore, the second event detection unit 220 of the robot 2 and the second event detection unit 130 of the data providing apparatus 10 may detect the second event when determining that the positional relationship and orientation between the recognized person and the marker in the captured image satisfy a predetermined condition.
- the first event and the second event may be a combination of events in a plurality of stages.
- for example, the return home event detection unit 121 of the data providing apparatus 10 may detect a first stage event when the position of the user terminal 3 approaches the home, and may detect a second stage event when the user terminal 3 communicates with a beacon transmitter installed at the entrance.
- the return home event detection unit 121 may determine that the first event has been detected when both the first stage event and the second stage event are detected.
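- a minimal sketch of such a two-stage detection is shown below; the home coordinates, distance threshold, and beacon ID are illustrative assumptions.

```python
# Sketch of a two-stage first event: stage 1 when the user terminal's position
# gets close to home, stage 2 when the terminal hears the entrance beacon; the
# return home event fires only once both stages have been observed.
import math

class ReturnHomeEventDetector:
    def __init__(self, home_xy=(0.0, 0.0), near_home_m=200.0,
                 entrance_beacon="beacon-entrance"):
        self.home_xy = home_xy
        self.near_home_m = near_home_m
        self.entrance_beacon = entrance_beacon
        self.stage1 = False
        self.stage2 = False

    def on_terminal_position(self, xy):
        if math.dist(xy, self.home_xy) <= self.near_home_m:
            self.stage1 = True

    def on_beacon_contact(self, beacon_id):
        if beacon_id == self.entrance_beacon:
            self.stage2 = True

    def first_event_detected(self):
        return self.stage1 and self.stage2

d = ReturnHomeEventDetector()
d.on_terminal_position((150.0, 40.0))
d.on_beacon_contact("beacon-entrance")
print("return home event:", d.first_event_detected())
```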
- the action selection unit 140 may select the action corresponding to the second event.
- the action selection unit 140 may select an action corresponding to the area type.
- the action selection unit 140 may select an action corresponding to the first event.
- the action selection unit 140 may select an action corresponding to the combination of the area type and the second event.
- the action selection unit 140 may select an action corresponding to the combination of the first event and the area type.
- the action selection unit 140 may select an action corresponding to a combination of the first event, the area type, and the second event.
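- purely as an illustration of these selection variations, the following Python sketch tries the most specific key combination first and falls back to coarser ones; the table entries and key order are assumptions, not a definition of the action selection unit 140.

```python
# Sketch: select an action from (first event, area type, second event) keys,
# falling back from the most specific combination to coarser ones.
ACTION_TABLE = {
    ("return_home", "entrance", "door_open"): "welcome_action",
    ("infant_crying", "child_room", "adult_absence"): "soothing_action",
    (None, None, "call_request"): "call_start_action",   # second event alone
}

def select_action(first_event=None, area_type=None, second_event=None):
    """Try the most specific key first, then fall back to coarser combinations."""
    for key in ((first_event, area_type, second_event),
                (first_event, area_type, None),
                (None, area_type, second_event),
                (None, None, second_event)):
        if key in ACTION_TABLE:
            return ACTION_TABLE[key]
    return None

print(select_action("infant_crying", "child_room", "adult_absence"))  # soothing_action
print(select_action(second_event="call_request"))                     # call_start_action
```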
- the action execution unit 230 may perform an action for a specific person.
- the action execution unit 230 may guide a vacant room to a specific customer.
- the content of the action executed by the action execution unit 230 may be set by the user from the application of the user terminal 3.
- the first event detection unit 210 of the robot 2 may detect the first event when the communication control unit 26 communicates with another robot 2.
- the second event detection unit 220 of the robot 2 may detect the second event when the communication control unit 26 communicates with another robot 2.
- the first event detection unit 210 of the robot 2 and the first event detection unit 120 of the data providing device 10 may detect a first event when a predetermined instruction or data is received from the user terminal 3 or another external device. Good.
- the second event detection unit 220 of the robot 2 and the second event detection unit 130 of the data providing device 10 may detect a second event when a predetermined instruction or data is received from the user terminal 3 or another external device. Good.
- for example, the visitor event detection unit 123 of the data providing apparatus 10 may detect the first event when it is notified from a reception tablet terminal that the reception tablet terminal has accepted guest room conditions such as the number of customers and smoking or non-smoking.
- the detection conditions of the first event and the second event may be different for each of the plurality of robots 2. For example, only the specific robot 2 may detect the first event or the second event later than the timing at which the first event or the second event is normally detected. If the detection timing of the first event or the second event is delayed, the passive character of the robot 2 can be produced. Conversely, only the specific robot 2 may detect the first event or the second event earlier than the timing at which the first event or the second event is normally detected. If the detection timing of the first event or the second event is advanced, a positive character of the robot 2 can be produced. The contents of the first event and the second event may be set by the user from the application on the user terminal 3.
- event information may be set for each robot 2 in the event information storage unit 151. That is, event information applicable only to a specific robot 2 may be provided. For example, in a home where two robots 2 are operated, event information may be set so that only one robot performs a welcome action and only the other robot performs a babysitting action.
- marker information recognized by one robot 2 may be notified to the other robot 2. By doing so, the area type, marker position and orientation can be quickly shared.
- the robot 2 may regard feature points and feature shapes detected by the SLAM technology as markers.
- a device that emits light may be used as a marker.
- a marker may be installed at a charging station for supplying power to the storage battery included in the robot 2 so that the robot 2 detects the positional relationship and orientation with the charging station based on the marker. When the robot 2 approaches the charging station for automatic connection, the positional relationship and orientation of the charging station detected by the marker may be used.
- the marker recognizing unit 22 may measure the position of the same marker a plurality of times and obtain an average value of those positions. If the marker recognizing unit 22 uses the average value of the marker position, the influence of errors in measuring the marker position can be reduced. Similarly, the marker recognizing unit 22 may measure the orientation of the same marker a plurality of times and obtain an average value of the orientation, as in the sketch below. If the marker recognizing unit 22 uses the average value of the marker orientation, the influence of errors in measuring the marker orientation can be reduced.
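- the following Python sketch averages repeated position measurements arithmetically and uses a circular mean for the orientation so that angles near the wrap-around point are handled correctly; the sample values are illustrative assumptions.

```python
# Sketch: average repeated (x, y, heading) measurements of the same marker.
import math

def average_marker(measurements):
    """measurements: list of (x, y, heading_rad) for the same marker."""
    xs = [m[0] for m in measurements]
    ys = [m[1] for m in measurements]
    # Circular mean of headings: average the unit vectors, then take atan2.
    s = sum(math.sin(m[2]) for m in measurements)
    c = sum(math.cos(m[2]) for m in measurements)
    return (sum(xs) / len(xs), sum(ys) / len(ys), math.atan2(s, c))

samples = [(2.01, 0.98, 3.10), (1.98, 1.03, -3.12), (2.00, 1.00, 3.14)]
print(average_marker(samples))   # position ~(2.0, 1.0), heading near pi
```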
- the application of the user terminal 3 may display the marker position and orientation on the output device of the user terminal 3. In addition, the user may be enabled to set or correct, from the application of the user terminal 3 via the input device of the user terminal 3, the content of the marker (such as entry prohibition, area type, action identifier, or approach prohibition) and the position and orientation of the marker.
- the robot 2 may include a beacon receiver, receive a beacon signal transmitted from a beacon transmitter installed at a predetermined position at the beacon receiver, and specify the ID of the beacon transmitter.
- the robot 2 may further include a beacon analysis unit, and the beacon analysis unit may identify the position of the beacon transmitter by analyzing the radio wave intensity of the beacon signal. In that case, the autonomous behavior type robot 1 may regard the ID of the beacon transmitter as a marker ID and the position of the beacon transmitter as a marker position, and the above-described embodiments may be applied.
- a program for realizing the functions of the apparatuses described in the present embodiment may be recorded on a computer-readable recording medium, and the program recorded on the recording medium may be read into a computer system and executed to perform the various processes described above.
- the “computer system” may include an OS and hardware such as peripheral devices.
- the “computer system” includes a homepage providing environment (or display environment) if a WWW system is used.
- the “computer-readable recording medium” refers to a writable nonvolatile memory such as a flexible disk, a magneto-optical disk, a ROM, or a flash memory, a portable medium such as a CD-ROM, or a storage device such as a hard disk built into a computer system.
- furthermore, the “computer-readable recording medium” includes a medium that holds the program for a certain period of time, such as a volatile memory (for example, a DRAM (Dynamic Random Access Memory)) in a computer system serving as a server or a client in the case where the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
- the program may be transmitted from a computer system storing the program in a storage device or the like to another computer system via a transmission medium or by a transmission wave in the transmission medium.
- the “transmission medium” for transmitting the program refers to a medium having a function of transmitting information, such as a network (communication network) such as the Internet or a communication line (communication line) such as a telephone line.
- the program may be one for realizing a part of the functions described above. Furthermore, the program may be one that realizes the above-described functions in combination with a program already recorded in the computer system.
Landscapes
- Engineering & Computer Science (AREA)
- Aviation & Aerospace Engineering (AREA)
- Radar, Positioning & Navigation (AREA)
- Remote Sensing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
- Manipulator (AREA)
Abstract
The invention relates to a robot comprising a movement mechanism, an imaging unit that captures images of a surrounding space, a marker recognition unit that recognizes a prescribed marker included in an image captured by the imaging unit, and a movement control unit that controls movement by the movement mechanism on the basis of the recognized marker.
Priority Applications (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2020525643A JPWO2019240208A1 (ja) | 2018-06-13 | 2019-06-13 | ロボットおよびその制御方法、ならびにプログラム |
| JP2023202283A JP2024020582A (ja) | 2018-06-13 | 2023-11-29 | ロボットおよびその制御方法、ならびにプログラム |
| JP2025141799A JP2025172863A (ja) | 2018-06-13 | 2025-08-27 | ロボットおよびその制御方法、ならびにプログラム |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2018-113083 | 2018-06-13 | ||
| JP2018113083 | 2018-06-13 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019240208A1 true WO2019240208A1 (fr) | 2019-12-19 |
Family
ID=68842204
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/JP2019/023433 Ceased WO2019240208A1 (fr) | 2018-06-13 | 2019-06-13 | Robot, procédé de commande de robot et programme |
Country Status (2)
| Country | Link |
|---|---|
| JP (3) | JPWO2019240208A1 (fr) |
| WO (1) | WO2019240208A1 (fr) |
Cited By (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2022034686A1 (fr) * | 2020-08-14 | 2022-02-17 | 日本電気株式会社 | Dispositif de réglage de plage de fonctionnement, procédé de réglage de plage de fonctionnement et support d'enregistrement |
| CN114299392A (zh) * | 2021-12-28 | 2022-04-08 | 深圳市杉川机器人有限公司 | 移动机器人及其门槛识别方法、装置及存储介质 |
| US11493930B2 (en) * | 2018-09-28 | 2022-11-08 | Intrinsic Innovation Llc | Determining changes in marker setups for robot localization |
| CN115705064A (zh) * | 2021-08-03 | 2023-02-17 | 北京小米移动软件有限公司 | 足式机器人的跟随控制方法、装置及机器人 |
| WO2023119566A1 (fr) * | 2021-12-23 | 2023-06-29 | 本田技研工業株式会社 | Système de transport |
| CN116755464A (zh) * | 2023-05-17 | 2023-09-15 | 贵州师范学院 | 一种基于物联网的移动机器人的控制方法 |
| JPWO2023204025A1 (fr) * | 2022-04-20 | 2023-10-26 | ||
| WO2024004453A1 (fr) * | 2022-06-28 | 2024-01-04 | ソニーグループ株式会社 | Procédé de génération d'informations de commande de corps en mouvement, dispositif de génération d'informations de commande de corps en mouvement, corps en mouvement, et système de commande de corps en mouvement |
| WO2024014529A1 (fr) * | 2022-07-15 | 2024-01-18 | Thk株式会社 | Robot mobile autonome et système de commande de robot mobile autonome |
| RU2825022C1 (ru) * | 2024-04-05 | 2024-08-19 | Общество С Ограниченной Ответственностью "Беспилотный Погрузчик" | Программно-аппаратный комплекс для управления автономным мобильным роботом-погрузчиком |
| EP4407399A4 (fr) * | 2021-09-22 | 2024-11-06 | Fuji Corporation | Corps mobile et son procédé de commande |
| JP2025057665A (ja) * | 2023-09-28 | 2025-04-09 | キヤノン株式会社 | 情報処理方法、情報処理システムおよびプログラム |
Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH0854927A (ja) * | 1994-08-10 | 1996-02-27 | Kawasaki Heavy Ind Ltd | ランドマークの決定方法および装置 |
| JP2013508874A (ja) * | 2009-10-30 | 2013-03-07 | ユージン ロボット シーオー., エルティーディー. | 移動ロボットの位置認識のための地図生成および更新方法 |
| WO2016103562A1 (fr) * | 2014-12-25 | 2016-06-30 | 村田機械株式会社 | Système de véhicule en déplacement et procédé de changement d'état de déplacement |
| JP2017041200A (ja) * | 2015-08-21 | 2017-02-23 | シャープ株式会社 | 自律移動装置、自律移動システム及び環境地図評価方法 |
| WO2017169826A1 (fr) * | 2016-03-28 | 2017-10-05 | Groove X株式会社 | Robot à comportement autonome qui exécute un comportement de bienvenue |
| JP2018068885A (ja) * | 2016-11-02 | 2018-05-10 | 東芝ライフスタイル株式会社 | 自律型電気掃除装置 |
Family Cites Families (20)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH1011135A (ja) * | 1996-06-27 | 1998-01-16 | Ohbayashi Corp | 標識認識水平搬送システム |
| JPH11267074A (ja) * | 1998-03-25 | 1999-10-05 | Sharp Corp | 掃除ロボット |
| JP4473849B2 (ja) * | 2003-06-02 | 2010-06-02 | パナソニック株式会社 | 物品取扱いシステムおよび物品取扱いサーバ |
| JP4555035B2 (ja) * | 2004-03-30 | 2010-09-29 | 日本電気株式会社 | 掃除機制御装置、掃除機、ロボット、および掃除機制御方法 |
| JP2008217741A (ja) * | 2007-03-08 | 2008-09-18 | Kenwood Corp | 移動機器の移動設定方法および移動設定システム |
| US9014848B2 (en) * | 2010-05-20 | 2015-04-21 | Irobot Corporation | Mobile robot system |
| JP5488930B2 (ja) * | 2011-03-08 | 2014-05-14 | 独立行政法人科学技術振興機構 | 家事計画作成支援装置および家事計画作成支援方法 |
| JP5862344B2 (ja) * | 2012-02-10 | 2016-02-16 | 富士通株式会社 | 画像処理装置、事前情報更新方法及びプログラム |
| DE102012211071B3 (de) * | 2012-06-27 | 2013-11-21 | RobArt GmbH | Interaktion zwischen einem mobilen Roboter und einer Alarmanlage |
| KR102094347B1 (ko) * | 2013-07-29 | 2020-03-30 | 삼성전자주식회사 | 자동 청소 시스템, 청소 로봇 및 그 제어 방법 |
| US10209080B2 (en) * | 2013-12-19 | 2019-02-19 | Aktiebolaget Electrolux | Robotic cleaning device |
| JP2015121928A (ja) * | 2013-12-24 | 2015-07-02 | トヨタ自動車株式会社 | 自律移動ロボットの制御方法 |
| DE102014110265A1 (de) * | 2014-07-22 | 2016-01-28 | Vorwerk & Co. Interholding Gmbh | Verfahren zur Reinigung oder Bearbeitung eines Raumes mittels eines selbsttätig verfahrbaren Gerätes |
| US9868211B2 (en) * | 2015-04-09 | 2018-01-16 | Irobot Corporation | Restricting movement of a mobile robot |
| JP6572618B2 (ja) * | 2015-05-08 | 2019-09-11 | 富士通株式会社 | 情報処理装置、情報処理プログラム、情報処理方法、端末装置、設定方法、設定プログラム |
| EP3101889A3 (fr) * | 2015-06-02 | 2017-03-08 | LG Electronics Inc. | Terminal mobile et son procédé de contrôle |
| KR102427836B1 (ko) * | 2015-06-26 | 2022-08-02 | 삼성전자주식회사 | 로봇 청소기, 정보 제공 시스템 및 정보 제공 방법 |
| KR102526083B1 (ko) * | 2016-08-30 | 2023-04-27 | 엘지전자 주식회사 | 이동 단말기 및 그의 동작 방법 |
| JP6411585B2 (ja) * | 2017-06-14 | 2018-10-24 | みこらった株式会社 | 電気掃除装置及び電気掃除装置用のプログラム |
| JP7124280B2 (ja) * | 2017-09-13 | 2022-08-24 | 富士フイルムビジネスイノベーション株式会社 | 情報処理装置及びプログラム |
-
2019
- 2019-06-13 WO PCT/JP2019/023433 patent/WO2019240208A1/fr not_active Ceased
- 2019-06-13 JP JP2020525643A patent/JPWO2019240208A1/ja active Pending
-
2023
- 2023-11-29 JP JP2023202283A patent/JP2024020582A/ja active Pending
-
2025
- 2025-08-27 JP JP2025141799A patent/JP2025172863A/ja active Pending
Patent Citations (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| JPH0854927A (ja) * | 1994-08-10 | 1996-02-27 | Kawasaki Heavy Ind Ltd | ランドマークの決定方法および装置 |
| JP2013508874A (ja) * | 2009-10-30 | 2013-03-07 | ユージン ロボット シーオー., エルティーディー. | 移動ロボットの位置認識のための地図生成および更新方法 |
| WO2016103562A1 (fr) * | 2014-12-25 | 2016-06-30 | 村田機械株式会社 | Système de véhicule en déplacement et procédé de changement d'état de déplacement |
| JP2017041200A (ja) * | 2015-08-21 | 2017-02-23 | シャープ株式会社 | 自律移動装置、自律移動システム及び環境地図評価方法 |
| WO2017169826A1 (fr) * | 2016-03-28 | 2017-10-05 | Groove X株式会社 | Robot à comportement autonome qui exécute un comportement de bienvenue |
| JP2018068885A (ja) * | 2016-11-02 | 2018-05-10 | 東芝ライフスタイル株式会社 | 自律型電気掃除装置 |
Cited By (17)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11493930B2 (en) * | 2018-09-28 | 2022-11-08 | Intrinsic Innovation Llc | Determining changes in marker setups for robot localization |
| JPWO2022034686A1 (fr) * | 2020-08-14 | 2022-02-17 | ||
| WO2022034686A1 (fr) * | 2020-08-14 | 2022-02-17 | 日本電気株式会社 | Dispositif de réglage de plage de fonctionnement, procédé de réglage de plage de fonctionnement et support d'enregistrement |
| CN115705064B (zh) * | 2021-08-03 | 2024-05-24 | 北京小米移动软件有限公司 | 足式机器人的跟随控制方法、装置及机器人 |
| CN115705064A (zh) * | 2021-08-03 | 2023-02-17 | 北京小米移动软件有限公司 | 足式机器人的跟随控制方法、装置及机器人 |
| EP4407399A4 (fr) * | 2021-09-22 | 2024-11-06 | Fuji Corporation | Corps mobile et son procédé de commande |
| WO2023119566A1 (fr) * | 2021-12-23 | 2023-06-29 | 本田技研工業株式会社 | Système de transport |
| CN114299392A (zh) * | 2021-12-28 | 2022-04-08 | 深圳市杉川机器人有限公司 | 移动机器人及其门槛识别方法、装置及存储介质 |
| JP7796371B2 (ja) | 2022-04-20 | 2026-01-09 | パナソニックIpマネジメント株式会社 | 移動管理システム、移動管理方法及びプログラム |
| JPWO2023204025A1 (fr) * | 2022-04-20 | 2023-10-26 | ||
| WO2023204025A1 (fr) * | 2022-04-20 | 2023-10-26 | パナソニックIpマネジメント株式会社 | Système de gestion de mouvement, procédé de gestion de mouvement et programme |
| WO2024004453A1 (fr) * | 2022-06-28 | 2024-01-04 | ソニーグループ株式会社 | Procédé de génération d'informations de commande de corps en mouvement, dispositif de génération d'informations de commande de corps en mouvement, corps en mouvement, et système de commande de corps en mouvement |
| WO2024014529A1 (fr) * | 2022-07-15 | 2024-01-18 | Thk株式会社 | Robot mobile autonome et système de commande de robot mobile autonome |
| CN116755464B (zh) * | 2023-05-17 | 2024-04-16 | 贵州师范学院 | 一种基于物联网的移动机器人的控制方法 |
| CN116755464A (zh) * | 2023-05-17 | 2023-09-15 | 贵州师范学院 | 一种基于物联网的移动机器人的控制方法 |
| JP2025057665A (ja) * | 2023-09-28 | 2025-04-09 | キヤノン株式会社 | 情報処理方法、情報処理システムおよびプログラム |
| RU2825022C1 (ru) * | 2024-04-05 | 2024-08-19 | Общество С Ограниченной Ответственностью "Беспилотный Погрузчик" | Программно-аппаратный комплекс для управления автономным мобильным роботом-погрузчиком |
Also Published As
| Publication number | Publication date |
|---|---|
| JP2025172863A (ja) | 2025-11-26 |
| JPWO2019240208A1 (ja) | 2021-06-24 |
| JP2024020582A (ja) | 2024-02-14 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2019240208A1 (fr) | Robot, procédé de commande de robot et programme | |
| CN109998421B (zh) | 移动清洁机器人组合及持久性制图 | |
| US11341355B2 (en) | Robot and method of controlling the same | |
| US11257292B2 (en) | Object holographic augmentation | |
| JP7377837B2 (ja) | ゲームプレイを介して環境の詳細データセットを生成する方法およびシステム | |
| CN112714684B (zh) | 清洁机器人及其执行任务的方法 | |
| CN106687850B (zh) | 扫描激光平面性检测 | |
| US11475390B2 (en) | Logistics system, package delivery method, and program | |
| JP2022017301A (ja) | サーマルイメージングシステムにおける位置判断の装置および方法 | |
| CN110178101A (zh) | 虚拟传感器配置 | |
| JP2020101560A (ja) | レーダ対応センサフュージョン | |
| JP2024094366A (ja) | ロボットならびにその制御方法および制御プログラム | |
| US11657085B1 (en) | Optical devices and apparatuses for capturing, structuring, and using interlinked multi-directional still pictures and/or multi-directional motion pictures | |
| JP6517255B2 (ja) | キャラクタ画像生成装置、キャラクタ画像生成方法、プログラム、記録媒体及びキャラクタ画像生成システム | |
| US20200169705A1 (en) | Vehicle system | |
| CN109324693A (zh) | Ar搜索装置、基于ar搜索装置的物品搜索系统及方法 | |
| TW201724022A (zh) | 對象辨識系統,對象辨識方法及電腦記憶媒體 | |
| JP2005056213A (ja) | 情報提供システム、情報提供サーバ、情報提供方法 | |
| US11233937B1 (en) | Autonomously motile device with image capture | |
| US11412133B1 (en) | Autonomously motile device with computer vision | |
| US11460994B2 (en) | Information processing apparatus and information processing method | |
| WO2019188697A1 (fr) | Robot à action autonome, dispositif d'alimentation en données et programme d'alimentation en données | |
| JP2019139793A (ja) | キャラクタ画像生成装置、キャラクタ画像生成方法、プログラム及び記録媒体 | |
| KR20250051852A (ko) | 로봇 청소기 및 그의 객체 이동 방법 | |
| WO2023204025A1 (fr) | Système de gestion de mouvement, procédé de gestion de mouvement et programme |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19819866 Country of ref document: EP Kind code of ref document: A1 |
|
| ENP | Entry into the national phase |
Ref document number: 2020525643 Country of ref document: JP Kind code of ref document: A |
|
| NENP | Non-entry into the national phase |
Ref country code: DE |
|
| 122 | Ep: pct application non-entry in european phase |
Ref document number: 19819866 Country of ref document: EP Kind code of ref document: A1 |