EP3105696A1 - Entryway based authentication system - Google Patents
Entryway based authentication system
- Publication number
- EP3105696A1 (application EP15746074.2A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- robot
- depth sensor
- delivery target
- item
- controller
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0212—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
- G05D1/0214—Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/08—Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
-
- G—PHYSICS
- G05—CONTROLLING; REGULATING
- G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
- G05D1/00—Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
- G05D1/02—Control of position or course in two dimensions
- G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles
- G05D1/0231—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
- G05D1/0246—Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S901/00—Robots
- Y10S901/01—Mobile robot
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10—TECHNICAL SUBJECTS COVERED BY FORMER USPC
- Y10S—TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y10S901/00—Robots
- Y10S901/46—Sensing device
- Y10S901/47—Optical
Definitions
- the present disclosure relates generally to authentication of persons and/or objects at predetermined locations for a mobile delivery robot, including authentication of persons at particular locales, such as doorways and/or entryways.
- Facilitating delivery or receipt of packages using mobile autonomous robots can improve service to residents and staff in hospitality spaces including hotels, nursing homes, hospitals, residential apartment buildings, or office buildings.
- a hotel guest can order room service, and an autonomous robotic delivery service could deliver the guest's order directly to their door. After authentication of the delivery recipient, the package can be made available to the hotel guest.
- Authentication typically requires that a guest take deliberate actions to identify themselves.
- a recipient can sign for their delivery, use a provided RFID dongle or key card, or enter a passcode to confirm identity and permit package delivery.
- a recipient could swipe their room key through a card reader in order to authenticate.
- a prospective hotel room resident can provide biometric data such as face image, fingerprint, or voice samples during check-in. All of these mechanisms require the recipient to take a deliberate action in order to authenticate themselves, may be considered by many to be an unwarranted invasion of privacy, or may require unconventional user actions that are difficult to learn or perform.
- FIGS. 1A to 1D are diagrams showing the authentication and delivery of an item to a person according to various embodiments.
- FIGS. 2A to 2F are various views of robots according to embodiments.
- FIG. 3 is a cartoon illustrating a depth sensor mounted on a robot maneuvering in an environment.
- FIG. 4 is a diagram showing one example of a robot having multiple depth sensors mounted on a rotatable, generally cylindrical robot, according to an embodiment.
- FIGS. 5A and 5B are diagrams showing fields of view for sensors of a robot according to embodiments.
- FIG. 6 is a perspective view of a robot according to one particular embodiment.
- FIG. 7 is a flow diagram of a robot operation according to an embodiment.
- FIG. 8 is a flow diagram of a robot operation according to another embodiment.
- FIG. 9 is a flow diagram of a robot operation according to a further embodiment.
- FIGS. 10A to 10C are a series of views showing robot operations according to an embodiment.
- FIG. 11 is a flow diagram of a robot operation according to a particular embodiment.
- Embodiments can include systems, methods and devices for the authentication of designated persons at particular locations.
- systems can include robots with low-cost sensor systems that interact with entryways or geographic locales.
- sensor systems can be configured to assist in authenticating people, or in recognizing areas such as an open doorway or entryway threshold, prior to delivering items.
- FIG. 1A shows a series of panels illustrating methods and systems according to particular embodiments.
- the system can execute a method for authenticating and delivering an item to a person.
- a method 101 can include a person, located in a predetermined area, ordering one or more items. As seen in a first panel 103, in particular embodiments, this can include a hotel guest ordering an item such as food, drinks, snacks, toiletries, magazines, books, or other miscellaneous items. Ordering can be accomplished using any suitable method, including but not limited to using a mobile applet for a tablet or phone, a web page interaction, a television cable interface, or by simply calling a concierge or other hotel employee.
- a method 101 can further include an order being received and the particular item(s) placed in a securable storage container of a robot.
- Such actions can be executed with a purely automated system, or in part with the assistance of people.
- an order can be received by a person (e.g., concierge) at a location within a hotel, the desired item(s) 107 can be found (e.g., in a hotel store or stockroom), and then placed into a top accessible, securable container 116 of a mobile robot 100.
- the container can be secured (e.g., closed and locked).
- a closing/locking can be an automated operation, one executed by a person, or combinations thereof.
- a method 101 can further include a robot, which securely contains delivery items, being directed to a delivery location.
- such an action can include a robot 100 receiving destination information by any suitable means and being directed to a room corresponding to the order (the designated room).
- this can also be an automated step, with the designated room information being sent to the robot 100 via a wired or wireless connection.
- a method 101 can also include a robot autonomously navigating from the location at which it received its delivery item(s) to a designated delivery location. Such an action can include avoiding stationary and moving objects along the way. In some embodiments, this can include navigating a building, a campus of multiple buildings, or an even larger geographical area. In the embodiment shown, as seen in a third panel 109, a robot 100 can autonomously navigate through hallways of an environment 122 (e.g., hotel), avoiding other guests or objects potentially blocking a route to the designated room.
- a method 101 can further include a robot arriving at the delivery location and stopping at a particular zone associated with the location.
- a notification can be generated to signal the arrival of the item(s).
- the securable storage container can enable the item(s) to be removed. This can include, but is not limited to, the storage container opening or becoming unlocked.
- a robot 100 can arrive at a destination zone 132, which can be the front of the ordering guest's door.
- a notification can be given to the guest to indicate the arrival of the item(s).
- this can include a person (e.g., concierge) contacting the guest to inform them of the robot's presence at the door (e.g., phone, electronic message, etc.) or an automated message can be sent to the guest (e.g., mobile applet or other electronic interface can signal the guest of the robot's arrival).
- the robot 100 can provide a suitable notification, including visual or audio signals such as those generated by on-board speakers of the robot, a tablet with speakers mounted to the robot, or the like.
- the robot 100 can be maneuvered to physically knock on the door to provide a notification.
- the robot 100 can complete authentication and allow pickup of the item(s) 107 by the guest (e.g., present item(s), unlock and open a lid, etc.).
- Authentication can include, but is not limited to, identifying that the door has opened, alone or in conjunction with additional electronic or biometric identification techniques.
- FIGS. 1B, 1C and 1D are a series of top-down looking cartoon views showing movement of a robot through an area containing obstacles to a delivery zone, as well as the opening of a container to complete delivery according to an embodiment.
- a robot containing item(s) for delivery can evade stationary and moving obstacles to reach a delivery zone.
- a robot 100 can navigate through a hallway 122 (i.e., an environment) with various obstacles, to a delivery zone 134 positioned adjacent to a door 113.
- Such an operation can include the robot 100 maneuvering through a hallway 150, navigating to evade obstacles such as a cart 115 and a person 126, in order to position itself in a destination zone 134 in front of a doorway 113.
- FIG. 1B shows a destination zone suitable for an inwardly opening door 113, as indicated by door open angle 117.
- features at a delivery location can be used to identify and/or confirm a delivery zone.
- a robot 100 can maneuver to a delivery zone 134 adjacent to a door 113.
- a robot 100 having a secured container 116 can autonomously navigate in a direction indicated by arrow 132 to a delivery zone 134 outlined by dotted lines, with the delivery zone 134 being in front and slightly to the side of the door 113.
- a robot can arrive at a delivery zone and enable delivered item(s) to be retrieved.
- a robot can orient itself in a predetermined fashion to provide easy access to the item(s).
- a robot 170 can complete delivery upon door 113 opening. In some embodiments this can include unlocking container 116.
- a robot 100 can navigate to the delivery zone 134 and rotate in place (indicated by arrow 133) to present the container 116 in a position easily retrievable by the guest.
- the robot 100 can complete authentication (using the opening of the pre-selected door as a portion of the authentication process) and allow pickup of the item(s) resting in the container 116 by the guest. In some embodiments, this can include unlocking and opening a lid of the container 116.
- any suitable location can serve as a delivery zone.
- an entryway or threshold area, a defined delivery zone, designated restaurant tables, guest occupied reception or meeting room chairs, poolside lounges, or even a biometrically identified guest can serve as, or be used to derive, a delivery zone.
- a robot can be used to deliver cleaning supplies or materials to carts of cleaning staff, while in other embodiments robots can deliver items to other robots for later pickup.
- a robot can use image sensors, depth sensors, position sensors, or the like. The sensors can be used to identify room numbers and determine if opening of a door has occurred. Active or passive identification systems can be used, including but not limited to RFID tags, Bluetooth beacons, QR coded markings, ultrasonic emitters, or other suitable area or target identification mechanism or marking system. While such sensors can be mounted on the robot itself, in some embodiments, all or a portion of the sensors can be separate from the robot, but can transmit sensor data to the robot, or have such data be retrieved by the robot.
- a robot can use a precomputed (if door sizes are standardized) or locally determined three dimensional (3D) door opening model. For example, once a robot 100 is localized in front of a door, it can detect the state of the door (open or closed) by using depth or other suitable 3D sensors to measure door dimensions and position. Typically, a door is positioned on a plane that is perpendicular to the floor, and rotates on hinges. As the door opens and closes, it sweeps out an arc along the floor. The 3D sensor data is fit to this model of the door and its surrounding walls.
- By comparing the orientation of the sensed door plane to the detected walls and the map, the robot estimates the angle of the door, determines whether it is in an open or closed state, and can determine whether or not the door opening will contact the robot.
- the robot can use the model to position itself in a position that allows for ease of delivery, while preventing contact with the opening door. In certain embodiments the robot can position itself in a non-blocking position to allow entry or exit of guests through the doorway even during delivery.
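- As one illustration of fitting sensor data to such a door model, the sketch below fits a plane to depth points already segmented as belonging to the door, compares its normal to the wall's, and thresholds the resulting angle. The plane-fit method, the 15° open threshold, and all names are assumptions for illustration, not details taken from this disclosure.

```python
import numpy as np

def fit_plane_normal(points):
    """Fit a plane to an Nx3 array of depth points via SVD; return a unit normal."""
    centered = points - points.mean(axis=0)
    # The right singular vector for the smallest singular value spans the plane normal.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

def door_angle_deg(door_points, wall_normal):
    """Angle between the sensed door plane and the adjacent wall plane, in degrees.

    wall_normal is assumed to be unit length (e.g., from a prior wall fit)."""
    cos_a = abs(float(np.dot(fit_plane_normal(door_points), wall_normal)))
    return float(np.degrees(np.arccos(np.clip(cos_a, 0.0, 1.0))))

def door_is_open(door_points, wall_normal, threshold_deg=15.0):
    """Treat any swing past the (assumed) threshold as an open door."""
    return door_angle_deg(door_points, wall_normal) > threshold_deg
```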
- doors are normally locked and often open inward.
- a person present in the room, or a person who can open the door, has been authenticated to a certain extent by the hotel.
- this level of authentication can be sufficient for many applications.
- the robot may be programmed to unlock a bin or cargo carrier so that a person can remove its load once the robot detects that the door is open.
- Individual hotels or institutions can augment this authentication technique with others if needed, such as asking the person receiving the delivery to sign on a tablet, by use of a tablet mediated video interface with a hotel employee, detection of a guest key card with RFID or magnetic strip, personal identification number (PIN) generated for a guest at check-in, or other suitable means.
- More advanced biometric techniques including but not limited to fingerprint, voice analysis, or facial image identification can also be used.
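- The layered scheme described above can be summarized in a minimal sketch, with door opening as the primary factor and a PIN standing in for any optional augmentation; the function and its signature are hypothetical.

```python
def authenticate(door_opened, pin_entered=None, expected_pin=None):
    """Door opening is the primary factor; a PIN is one optional augmentation.

    A sketch only: the PIN layer stands in for any of the augmentations
    named above (signature, key card, video interface, biometrics).
    """
    if not door_opened:
        return False
    if expected_pin is None:  # the door-open event alone suffices
        return True
    return pin_entered == expected_pin
```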
- removal of a delivered item can be presumed, and the lid automatically closed and relocked.
- active measures can be utilized to confirm the item(s) have been removed, including but not limited to weight or pressure sensors, RFID tags, imaging, or ultrasonic sensors by which removal of the item can be detected.
- one or more types of delivery may not require a locked container, with the robot simply being loaded with an item and autonomously maneuvering to a delivery zone, authenticating (i.e., detecting when the door opens), and the item(s) can be presented for delivery.
- an item(s) can be placed in a lockable container inside or to one side (e.g., rear) of a robot, where the robot can maneuver to a delivery zone (e.g., door), authenticate (e.g., interact with a guest), and rotate to present a side or rear container holding item(s) for delivery.
- FIG. 2A is a side view and FIG. 2B is a top view of a robot 200 that can be included in embodiments.
- a robot 200 can include one or more depth sensors 202, a body 204 and a movement system 206.
- a depth sensor 202 can be mounted on and/or within a robot body 204.
- a depth sensor 202 can have a relatively low field of view (FOV).
- a relatively low FOV can be less than 180° in one direction in some embodiments, less than 90° in one direction in particular embodiments, and less than 65° in one direction in very particular embodiments.
- a depth sensor 202 can be angled downward for a FOV 208 that encompasses the region immediately in front of the robot 200.
- a FOV 208 can include a portion of the robot 200 itself. In embodiments with multiple depth sensors 202, such depth sensors can have different FOV angles or the same FOV angles.
- a depth sensor 202 can have a relatively low FOV and be angled forward (with respect to robot forward movement) for a FOV 208' that encompasses the region forward of the robot 200.
- a robot 200 can include one depth sensor 202 having one of the FOVs shown (208 or 208').
- a robot 200 includes a depth sensor 202 capable of moving between multiple FOV orientations (e.g., between 208 and 208').
- a robot 200 can include multiple depth sensors 202, one of which provides a different FOV (e.g., one provides 208 and another provides FOV 208').
- a depth sensor 202 can operate by combining image capture with a beam emitter.
- a depth sensor 202 can include an image sensor and an emitter that emits some spectra of light (e.g., any of infrared, visible or ultraviolet). The image sensor can detect objects by the emitted light reflected off the objects.
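- A minimal sketch of how such a sensor's output is typically consumed follows: back-projecting the depth image into camera-frame 3D points through a pinhole model. The intrinsics are assumptions for illustration, not values from any particular sensor.

```python
import numpy as np

def depth_to_points(depth_m, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-frame 3D points.

    fx, fy, cx, cy are pinhole intrinsics; treating the depth camera as a
    pinhole model is an assumption for illustration, not a sensor spec.
    """
    h, w = depth_m.shape
    v, u = np.indices((h, w))          # pixel row (v) and column (u) grids
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]          # drop pixels with no depth return
```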
- One or more depth sensors 202 can be fixedly mounted to a body 204. In such embodiments, to scan a region greater than a field of view, a robot 200 is capable of rotational movement to enable the fixed sensors to scan the environment. In other embodiments, one or more depth sensors 202 can be movably mounted to a body 204. In particular embodiments, such movable mountings can provide only limited movement for a depth sensor 202. In very particular embodiments, limited movement of depth sensor mountings can add no more than 45° to the depth sensor's existing FOV.
- a robot body 204 can have a generally cylindrical or elongated shape.
- a robot body 204 can have a height 212 that is greater than its width 210.
- a height 212 can be no less than 1.5 times the width 210.
- a robot body 204 can have a vertical size conducive to interaction with people.
- a robot height 212 can be between 0.8 to 2 meters, in particular embodiments, between 1.2 and 1.5 meters.
- a robot 200 can have a width sufficient to store deliverable items, while at the same time being small enough to enable ease of movement in an environment.
- a robot diameter or maximum width can be less than a meter, in some embodiments between 30 and 60 cm, and in particular embodiments, between 40 and 50 cm.
- a generally cylindrical/elongated body 204 can have a low profile surface when the robot 200 is in motion. That is, as a robot 200 moves, there can be no structures significantly projecting outward in a lateral direction. In some embodiments, a low profile body surface will have no structures extending away from the main body surface by more than 1/3 a width of the body, and in particular embodiments, not more than 1/4 a width of the body. Such a generally cylindrical/elongated body can provide for more efficient movement in an environment, as a space occupied by a robot 200 can be essentially uniform in all lateral directions.
- a robot 200 can maintain a low profile shape whether moving or stationary. However, in other embodiments, when a robot 200 is not moving, structures may extend outward from a body 204. As but one example, a robot 200 can include doors that swing away from a body 204 to enable access to a storage container and/or other locations interior to the body (e.g., maintenance access). Other embodiments can have other deployable structures when the robot is not in motion.
- depth sensor(s) 202 can be mounted in a top portion of a body 204.
- a top portion can be the upper 1/3 of the robot height 212.
- depth sensor(s) 202 can be mounted in a top 20% of the robot height 212.
- a movement system 206 can include any suitable movement system that enables a robot 200 to move in its operating environment, including but not limited to wheeled systems, tracked systems, roller systems, or combinations thereof.
- a movement system 206 can enable a robot 200 to have both linear and rotational movement.
- a movement system 206 can include at least two wheels positioned apart from one another, each capable of independent rotation in either direction.
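- The kinematics of such a two-wheel arrangement are standard; a minimal sketch follows, with the 0.4 m wheel spacing chosen only to be consistent with the body widths given above.

```python
def wheel_speeds(v, omega, wheel_base_m=0.4):
    """Map a body command (v m/s forward, omega rad/s turn) to wheel speeds.

    Standard differential-drive kinematics; the 0.4 m wheel spacing is an
    illustrative value consistent with the 40-50 cm body widths above.
    """
    left = v - omega * wheel_base_m / 2.0
    right = v + omega * wheel_base_m / 2.0
    return left, right  # equal and opposite speeds (v = 0) rotate the robot in place
```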
- a robot 200 can further include a user interface (I/F) 214.
- a user I/F 214 can enable a robot 200 to be directed or programmed to perform various tasks and/or to interact with people.
- a user I/F 214 can be a touch screen I/F to maintain a low profile.
- a robot 200 can be a delivery robot and a user I/F 214 can be used to authenticate delivery to an indicated destination and/or person.
- a robot 200 can also include a container 216.
- a container 216 can be formed within a body 204, to maintain a low profile shape.
- a container 216 can be securable, having some structure to limit access to stored contents.
- a robot 200 can include a door/lid 218 for securing the container 216.
- a door/lid 218 may or may not be lockable.
- a robot can have generally cylindrical or elongated body.
- such a shape can be one that maintains a generally closed curved shape in lateral cross section.
- Such a shape may vary according to vertical position, however.
- a generally cylindrical body does not require a circular or ellipsoid cross section.
- a generally cylindrical body 204 has rounded features, but is not circular or ellipsoid.
- a generally cylindrical body when viewed in lateral cross section, can occupy a majority of the area of an inscribing circle or ellipse.
- a body 204 occupies a majority of an inscribing circle 220.
- FIGS. 2C to 2F show various other cross sectional shapes that can be included in a generally cylindrical body.
- Each of FIGS. 2C to 2F shows a body 204-C to 204-F that occupies the majority of the area of an inscribing circle 220.
- the various body cross section shapes of FIGS. 2B to 2F are provided by way of example only, and should not be construed as limiting.
- a robot 300 can move in an environment 322 such as a hallway or room, even in the presence of potentially blocking objects or people moving through the area.
- a robot 300 can be any of those described herein, or equivalents.
- a robot 300 can be autonomously movable in an environment 322 that can include multiple fixed objects 324-0 and 324-1, as well as one or more movable objects 326, such as a person moving toward a door 328 in a direction indicated by arrow 330.
- robot 300 can rotate through 360°, permitting environment scanning with one or more sensors 302 fixedly mounted or having a limited movement.
- a sensor 302 can include at least one image based depth sensor.
- a robot 300 can move in a direction indicated by arrow 332 to a target destination zone 334 in front of the door 328.
- Upon reaching the target destination zone 334, deliveries held in a securable container 316, which, in the particular embodiment shown, can be built into a top of the robot 300, can be removed by a room occupant (not shown).
- Sensor(s) 302 can be fixed or movably mounted near or at a top of the robot 300.
- a key area to sense during obstacle avoidance can be the area directly in a movement path (e.g., 332) of the robot 300, particularly the area directly in front of the robot 300.
- sensors 302 can include one or more sensors that are directed generally downward or outward, with a field of view typically maintained to include an area into which the robot 300 is moving.
- sensor(s) 302 can include a depth camera that is mounted such that it points directly downward, with about half of its field of view filled with a body of robot 300 while the remaining half can be used for obstacle detection.
- a depth sensor within sensors 302 can be mounted out and down at an angle of up to FOV/2 from vertical to provide greater viewable area for obstacle detection.
- depth sensors can include components similar to, or derived from, video gaming technology, enabling three dimensional sensing, and can be more cost effective than the wide FOV laser-based sensors employed in conventional systems. Very particular examples of possible sensors of this type can include, but are not limited to, the Kinect manufactured by Microsoft Corporation, Carmine by PrimeSense (now owned by Apple), or DepthSense 325 by SoftKinetic. Such depth sensors typically direct infrared light to bounce off objects and be captured by an image sensor in order to determine how far those objects are from the sensor, while further incorporating a video camera (such as an RGB video camera) to allow the depth image to be combined with the video image.
- depth sensors included in a robot can have a much narrower field of view (typically less than 90°), a much shorter effective range of depth detection (around 1-3 meters), and often have a "dead zone" with limited or absent depth ranging within a half meter or so of the depth sensor.
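- A sketch of handling those limits follows, masking out dead-zone and out-of-range readings; the 0.5 m and 3 m cutoffs follow the typical figures just quoted and vary by sensor model.

```python
import numpy as np

def mask_usable_depth(depth_m, min_m=0.5, max_m=3.0):
    """Zero out readings inside the dead zone or beyond effective range.

    The 0.5 m and 3 m bounds follow the typical figures quoted above;
    real cutoffs vary by sensor model.
    """
    out = depth_m.copy()
    out[(out < min_m) | (out > max_m)] = 0.0  # 0.0 marks 'no usable depth'
    return out
```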
- a depth sensor can be movable, with hinged, rail, hydraulic piston, or other suitable actuating mechanisms used to rotate, elevate, depress, oscillate, or laterally scan the depth sensor.
- multiple depth sensors can be used and generally directed so that forward, backward, upward and downward regions are monitored.
- conventional RGB CMOS or CCD sensors can be used, alone or in combination with narrowband, wideband, polarization or other spectral filters.
- Embodiments can also include infrared, ultraviolet, or other imaging focal plane array devices to allow for hyperspectral image processing. This can allow, for example, monitoring and tracking of guides, markers, or pathways that are not visible, or not easily visible to people.
- ambient light such as sunlight, incandescent, halogen, LED, fluorescent or other commonly available artificial source may illuminate the environment in which a robot (e.g., 100, 200, 300) moves, and depth sensors of the robot can use such light to detect objects/obstacles.
- a robot can have one or more attached (movable or fixed) light sources to augment or serve as a light source for object/obstacle detection.
- Such light sources can augment ambient light intensity and/or provide wavelengths not available in the ambient light source and/or substitute for ambient light in dark environments.
- the light sources may be mounted along with, or separately from, the depth sensors, and can include monochromatic or near monochromatic light sources such as lasers, light emitting diodes (LEDs), or organic light emitting diodes (OLEDs).
- broadband light sources may be provided by multiple LEDs of varying wavelength (including infrared or ultraviolet LEDs), halogen lamps or other suitable conventional light source.
- Various light shields, lenses, mirrors, reflective surfaces, or other optics can provide wide light beams for area illumination or tightly focused beams for improved local illumination intensity.
- Interaction with a robot can be provided by local input or network interface.
- local input can be through a touchpad, by voice or gesture control, or by dedicated remote controllers.
- Local display of status, functionality, and error messages or the like may be afforded by a touchpad display.
- the display can be a conventional LCD display, a bistable display (such as electronic paper or similar), an OLED display, or other suitable display.
- Local user input can include a robot mounted pad, hard or soft keyboard, touch sensitive element (which may be integrated as part of the optional display), or similar, to provide for user input, voice control, or camera mediated user gestural control.
- a wired or wireless connection subsystem can be used to connect to another user interaction device such as a laptop, tablet, or smart phone (not shown).
- data and control signals can be received, generated, or transported between varieties of external data sources, including wireless networks, personal area networks, cellular networks, the Internet, or cloud mediated data sources.
- a robot (e.g., 100, 200, 300) can also include a source of local data (e.g., a hard drive, solid state drive, flash memory, or any other suitable memory, including dynamic memory such as SRAM or DRAM).
- multiple communication systems can be provided.
- in certain embodiments, a robot (e.g., 100, 200, 300) can be provided with a direct Wi-Fi connection (802.11b/g/n), as well as a separate 4G cell connection provided as a back-up communication channel (e.g., such as that included on an interface tablet computer).
- tablet or robot mounted Bluetooth or other local communication systems can be used to identify pre-positioned radio beacons, or to form a part of a user interface via a user smartphone or tablet.
- a robot when a robot (e.g., 100, 200, 300) autonomously moves to conduct a task, it can rely on localization for tracking its current position.
- a typical example of localization technologies is a simultaneous localization and mapping (SLAM) technique.
- Bluetooth beacons, radio beacons, light emitting devices, and/or visible patterns can be placed at particular sites or objects to assist robot navigation.
- a robot can carry a wide range of amenities and supplies in various optional lockers, shipping containers, or shelving units, including food and beverages. Some of these supplies (especially beverages) may spill or be damaged if the robot does not move smoothly and gently. Such a problem can be especially acute when the robot starts and stops, particularly during emergency stops (e.g., when someone jumps into its path).
- the robot (e.g., 100, 200, 300) can be controlled to gently accelerate and decelerate, minimizing the forces felt by the payload.
- a robot (e.g., 100, 200, 300) can have a motor control system of sufficient fidelity to smoothly decelerate multiple motors (wheels) simultaneously.
- a robot (e.g., 100, 200, 300) can include a high-frequency (e.g., 1000 Hz) motor control loop system.
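- One way such a loop can enforce gentle starts and stops is an acceleration-limited velocity ramp evaluated every tick; the 0.5 m/s² limit in the sketch below is illustrative and not a value from this disclosure. Applying the same ramp to each wheel's setpoint in the same tick keeps multiple motors decelerating together.

```python
def ramp_velocity(current, target, max_accel=0.5, dt=0.001):
    """One tick of a 1000 Hz loop: step velocity toward target, bounded by max_accel.

    max_accel (m/s^2) is an illustrative limit chosen to keep payloads such
    as drinks from sloshing; it is not a value given in this disclosure.
    """
    max_step = max_accel * dt
    delta = target - current
    if abs(delta) <= max_step:
        return target
    return current + max_step if delta > 0 else current - max_step
```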
- a robot (e.g., 100, 200, 300) can detect a movable object 326, and plan a new path to avoid the object, or slow down or halt until the object 326 is no longer in the desired movement path 332 of the robot.
- Stationary objects 324-0/1 can be avoided according to a local movement plan. Stationary objects 324-0/1 can already be known, and verified by location detection, or newly detected by the robot.
- FIG. 4 shows a robot 400 according to one particular embodiment, as well as selected components of the robot 400 in an exploded view.
- robot 400 can be one very particular implementation of robot 100 shown in FIGS. 1A to 3.
- a robot 400 can have a generally cylindrical shape about a vertical midline 434.
- this shape simplifies movement calculations and simplifies rotation in place, since position and potential interactions of objects with extending arms or the like do not have to be determined.
- a touch tablet computing device (tablet) 414 can be included for user input and/or messaging, and can be mounted at the top of the robot at an angle convenient for viewing and user input. In addition to a visible display, tablet 414 can be used for speech input/output, and/or for processing and controlling the robot 400.
- a speaker 436 separate from the tablet 414 can also be included for providing audible instructions or notices.
- a storage container 416 can be included within a body 404 of the robot, positioned behind the tablet 414. In some embodiments, storage container 416 is securable. In particular embodiments, storage container 416 is lockable, and can be controlled to unlock for delivery to a recipient only when a destination has been reached and authorization to unlock is received.
- robot 400 can support multiple fixed depth sensors, including a forward looking depth sensor 402-0 and a downward looking depth sensor 402-1 mounted adjacent to each other.
- the depth sensors (402-0/1) can be fixedly mounted in a manner that does not require a turret or movable actuators.
- each depth sensor 402-0/1 can include a beam emitting device and an image sensor that detects the beam as it reflects off of objects.
- depth sensors 402-0/1 can include an IR emitter and IR image sensor, such as an IR video camera.
- a robot 400 can include a video camera 402-2 to provide additional imaging capabilities.
- a video camera 402-2 can be an RGB CMOS type video camera.
- a robot 400 can include one or more other sensors.
- a robot 400 can further include a base mounted sonar array 438 and a wide angle sonar 440 mounted near a top of the robot 400.
- a robot 400 can be controlled by one or more processors executing stored instructions that can be responsive to sensor inputs and/or transmitted inputs.
- an x86 or similar central processing unit 442 can be used in conjunction with one or more microcontrollers 444 and motor controllers 446 for local control of movement of the robot 400.
- differential drive motors 448 powered by batteries 450 can provide movement by driving wheels (not shown) that support the robot 400.
- batteries 450 can be lithium ion or some other battery type, rechargeable battery systems being preferred.
- a drive mechanism includes separate drive motors 448 each attached to its own wheel, in a differential drive configuration. In some embodiments such a drive mechanism can allow for a robot velocity of 1.5 meters/second, and the ability to move up and down ramps, as well as on level ground.
- a robot 400 can include two drive wheels between 4-8 inches in diameter, preferably about six inches in diameter.
- a robot 400 can be sized to have a height of between 0.8 to 2 meters, preferably between 1.2 to 1.5 meters, and a diameter of between 30- 60 centimeters, preferably between 40-50 centimeters. Such physical dimensions can enable robot 400 to easily move through hallways and doorways.
- FIGS. 5A and 5B are diagrams showing depth sensing field of view configurations according to embodiments. Such field of view configurations can be included in any of the robots described herein.
- FIG. 5A is a perspective view showing a robot 500 and two corresponding fields of view 508 and 508'.
- FIG. 5B is a side view showing a top portion of the robot 500.
- a robot 500 can have two depth sensors 502-0/1 fixedly mounted near a top of the robot 500.
- Depth sensors 502-0/1 can be any suitable depth sensor, but in particular embodiments, can include an emitting device and image sensor as described herein, or an equivalent.
- one or both depth sensors 502-0/1 can include an RGB CMOS type video camera.
- depth sensors 502-0/1 can be positioned adjacent to each other.
- depth sensors 502-0/1 can be within a few centimeters of one another.
- both depth sensors 502-0/1 can produce color images with corresponding depth information. That is, both depth sensors 502-0/1 can include video cameras.
- Depth sensors 502-0/1 can be mounted at the top of the robot 500 facing the forward traveling direction of the robot (i.e., the front). In one particular embodiment, depth sensors 502-0/1 can be mounted 80cm to 85cm above the floor. One depth sensor 502-1 can be pointed directly ahead, while the other depth sensor 502-0 can be angled downward to image the floor directly ahead of the robot. Such an angle is shown as 536 in FIG. 5B. In a very particular embodiment, such an angle can be between 50° and 80° in the vertical direction, preferably between 60° and 70°, and in one very particular implementation, about 65°. A vertical field of view angle for each depth sensor 502-0/1 is shown as 538 and 538' in FIG. 5A.
- such field of view angles can be less than 180°, in other embodiments such angles can be less than 90°, and in further embodiments less than 60°.
- a field of view for each depth sensor 502-0/1 can be 57° in the horizontal direction and 45° in the vertical direction.
- a near edge of the field of view 508 for a downward pointed depth sensor 502-0 can intersect a lower portion of the robot body 504 (i.e., the base).
- the trapezoid shape 508 on the floor in front of the robot shows the area visible to the downward facing depth sensor 502-0.
- the vertical trapezoid 508' in front of the robot 500 shows the area visible to the forward facing depth sensor 502-1.
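- Simple trigonometry relates these mounting figures to the floor trapezoid; the sketch below uses the example values above (roughly 0.83 m mount height, 65° tilt, 45° vertical FOV) and is a geometric estimate, not a calibrated model.

```python
import math

def floor_band(mount_h_m=0.83, tilt_from_vertical_deg=65.0, vfov_deg=45.0):
    """Near and far floor distances covered by the downward depth sensor.

    Uses the example values above; plain trigonometry, not a calibrated model.
    """
    near = mount_h_m * math.tan(math.radians(tilt_from_vertical_deg - vfov_deg / 2))
    far_angle = tilt_from_vertical_deg + vfov_deg / 2
    far = mount_h_m * math.tan(math.radians(far_angle)) if far_angle < 90 else float("inf")
    return near, far

# floor_band() ~= (0.76, 19.0): the geometric far edge is distant, so in
# practice the sensor's ~1-3 m depth range bounds the far side of the trapezoid.
print(floor_band())
```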
- the robot 500 of FIGS. 5A and 5B can also include a movement system 506, user I/F 514, or storage container 516 according to any of the embodiments described herein, or equivalents.
- a robot 600 can have a body 604 with a generally cylindrical or elongated shape.
- One or more depth sensors 602 can be mounted on an upper portion of the body 604 to enable downward and/or forward facing depth sensors.
- Depth sensors 602 can have any of the configurations and/or components described for embodiments herein, or equivalents.
- a robot 600 can include a user I/F 614 also mounted on an upper portion of body 604.
- user I/F 614 can be a tablet computing device with a touchscreen.
- a robot 600 can include a storage container 616, which in the embodiment shown, can extend into a top surface of the robot body 604.
- a storage container 616 can be securable, including a door/lid 618 which can be closed.
- door/lid 618 is lockable and can be unlocked and/or opened upon authentication in a delivery operation.
- a robot 600 can include additional body mounted items 640, which can include, but are not limited to, lighting structures to provide notification lighting, lighting for use by sensors 602, or one or more additional sensors.
- FIG. 7 is a flow diagram of a method 750 according to an embodiment.
- a method 750 can include image sensor based depth sensors scanning for the presence of local objects/obstacles 752.
- Such actions can include one or more image based depth sensors mounted at or toward a top of a robot body that detect a beam emitted from the depth sensor, such as a light of a non-visible or visible spectrum.
- such actions can include an infrared video camera operating in conjunction with an infrared emitter to determine the presence and distance of a local object/obstacle 752.
- a method can also include additional sensors scanning for local objects and/or determining a local position of a robot (754). Based on data generated by blocks 752 and 754, a local occupancy grid can be derived 756. As but one example, upon detecting a new object/obstacle, a local occupancy grid can be updated to include the presence of the object/obstacle, as well as whether such an object/obstacle is in motion or stationary.
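- A minimal occupancy-grid sketch along these lines is shown below; the grid extent and resolution are illustrative assumptions, not parameters from this disclosure.

```python
import numpy as np

class LocalOccupancyGrid:
    """Minimal sketch of the local occupancy grid described above.

    The 5 m x 5 m extent and 5 cm resolution are illustrative assumptions,
    not parameters from this disclosure.
    """

    def __init__(self, size_m=5.0, res_m=0.05):
        self.res = res_m
        n = int(size_m / res_m)
        self.cells = np.zeros((n, n), dtype=np.uint8)  # 0 = free, 1 = occupied
        self.center = n // 2  # the robot sits at the grid center

    def mark(self, x_m, y_m, occupied=True):
        """Record a robot-frame (x, y) detection; moving vs. stationary
        objects could be tracked in separate layers."""
        i = self.center + int(round(y_m / self.res))
        j = self.center + int(round(x_m / self.res))
        if 0 <= i < self.cells.shape[0] and 0 <= j < self.cells.shape[1]:
            self.cells[i, j] = 1 if occupied else 0
```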
- a method 750 can also include a robot using map information 758 in conjunction with a local occupancy grid to create local navigation controls 758. In some embodiments, map information can also be used to create a global plan of navigation 760, prior to creating the local plan.
- FIG. 8 shows another method 850 according to an embodiment.
- a method 850 can include a robot navigating according to a local plan 860.
- a robot can use a sensor having an image sensor and emitted signal to detect local object(s)/obstacle(s) 862. If no local object/obstacle is detected (N from 862), a method 850 can return to navigating according to the local plan (860). If a local object/obstacle is detected (Y from 862), a method 850 can revise a local occupancy grid stored in the robot 864 that accounts for the detected object(s)/obstacle(s), and revise its local plan 866 accordingly. A method 850 can then return to navigating according to the local plan (860).
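- The loop of blocks 860-866 can be summarized as follows; robot, grid, and planner are hypothetical interfaces used only to make the control flow concrete, not a real API.

```python
def run_local_navigation(robot, grid, planner):
    """Sketch of the FIG. 8 sense/replan loop (all interfaces hypothetical)."""
    while not robot.at_goal():
        robot.follow(planner.local_plan)            # 860: navigate per local plan
        detections = robot.depth_sensors.detect()   # 862: emitter + image sensor
        if detections:                              # Y branch of 862
            grid.update(detections)                 # 864: revise occupancy grid
            planner.replan(grid)                    # 866: revise local plan
```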
- FIG. 9 shows another method 950 according to an embodiment.
- a method 950 can include actions like those of FIG. 8. Such like actions are referred to by the same reference character but with the leading digit being a "9" instead of an "8".
- Method 950 differs from that of FIG. 8 in that action 962 can include a robot using forward and downward facing low field of view (FOV) depth sensors to detect local object(s)/obstacle(s) 962.
- a low FOV depth sensor can have a field of view less than 180° in some embodiments, less than 90° in other embodiments, and less than 60° in a particular embodiment.
- FIGS. 10A to 10C are a sequence of diagrams showing robot navigation according to a particular embodiment.
- FIGS. 10A to 10C show an environment 1022 (a hallway) that can include target destination zones (1034-A to 1034-D) (areas in front of doors). Environment limits or known objects are shown as 1070-A (walls) and 1070-B (known furniture). These features can be derived from map data, global occupancy grid, or local occupancy grid.
- a robot 1000 can follow a local plan 1074 derived from a local occupancy grid, map data, and global plan.
- Environment 1022 also includes a new object 1072. New object 1072 is not included or known from the map data, global plan or local plan.
- In FIGS. 10A to 10C, it is assumed that the robot 1000 is navigating to target destination 1034-C according to local plan 1074. Further, a field of view for a sensor (or group of sensors) is shown as 1008. A robot 1000 and its corresponding sensor(s) can include any of those described herein, or equivalents.
- a robot 1000 can move according to a local plan 1074, which takes into account the presence of known objects, which includes avoiding obstacle 1070-B.
- Robot 1000 can update its local occupancy grid to include new object 1072.
- robot 1000 can generate a new local plan 1074' that enables it to arrive at the target destination 1034-C, while at the same time avoiding the new object 1072.
- FIG. 11 is a flow chart illustrating process steps for autonomous robot navigation according to an embodiment.
- a robot can navigate through an environment with variously positioned movable and non- movable items, as well as a target destination.
- a sensor suite of a robot can include one or more sonars 1180, one or more bump sensors 1182, and any of odometers, inertial measurement units (IMU), or other sensors configured to work with beacons, including beacons that emit light or radio signals (represented by 1184).
- a sensor suite can also include one or more camera systems 1152 capable of providing video or multiple still images.
- Sensor data from the camera images (1152) and/or odometers and position beacons (1184) can be used to localize a position of the robot 1186.
- map data 1158 can also be used in localization (1186).
- Localization 1186 can be used to arrive at a robot pose 1188, which can include the robot's position and orientation in a local environment.
- sensor data from the camera images (1152), sonar(s) 1180 and bump sensor(s) 1182 can provide local knowledge of object position in the environment around the robot (local occupancy grid) 1156.
- Data from the sensor suite (e.g., 1152) in combination with map data 1158 can be used to arrive at a global occupancy grid 1190.
- a user interface 1114 can be used to enter/receive data that indicates a destination for the robot. Such data can be used to arrive at a goal pose 1192, which can include the position of a target destination. In some embodiments, such user interface data can be used in conjunction with map data 1158 to arrive at a goal pose 1192.
- a given robot pose 1188, goal pose 1192 and global occupancy grid 1190 can be used by a global planner 1194 to generate a global plan (distance map) 1160.
- map data can be used with a global occupancy grid that integrates known positions of objects in the mapped area, and in conjunction with robot pose input and the goal pose, a robot global plan 1160 for navigation can be generated.
- the global plan 1160 can be reduced to a distance map.
- a local planner 1196 can combine a global plan 1160 with integrated data of local object positions from the local occupancy grid 1156 and the robot pose 1188. With such data, a local planner 1196 can adjust the local plan to avoid or go around blockages or obstructions in the robot path. Local sensing and re-orientation of the robot (twists) 1158 can verify local paths, and provide control input to the wheels (1198) for partial or full rotation of the robot, or backward or forward movement that avoids objects or people while traveling to a desired destination.
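- One common way to realize a global plan "reduced to a distance map" is a breadth-first flood fill from the goal cell over free cells of the occupancy grid; the sketch below illustrates the idea and is not necessarily the planner used here.

```python
from collections import deque
import numpy as np

def distance_map(occupancy, goal_cell):
    """BFS distance-to-goal over free cells, one realization of a distance map.

    occupancy: 2D array (0 = free, 1 = occupied); goal_cell: (row, col).
    A sketch of the idea, not the planner actually used in this disclosure.
    """
    dist = np.full(occupancy.shape, np.inf)
    dist[goal_cell] = 0.0
    queue = deque([goal_cell])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < occupancy.shape[0] and 0 <= nc < occupancy.shape[1]
                    and occupancy[nr, nc] == 0 and np.isinf(dist[nr, nc])):
                dist[nr, nc] = dist[r, c] + 1.0
                queue.append((nr, nc))
    return dist  # a local planner can descend this field from the robot's cell
```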
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Business, Economics & Management (AREA)
- General Physics & Mathematics (AREA)
- Automation & Control Theory (AREA)
- Economics (AREA)
- Remote Sensing (AREA)
- Radar, Positioning & Navigation (AREA)
- Aviation & Aerospace Engineering (AREA)
- Human Resources & Organizations (AREA)
- Marketing (AREA)
- Strategic Management (AREA)
- General Business, Economics & Management (AREA)
- Quality & Reliability (AREA)
- Theoretical Computer Science (AREA)
- Operations Research (AREA)
- Tourism & Hospitality (AREA)
- Entrepreneurship & Innovation (AREA)
- Development Economics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Electromagnetism (AREA)
- Manipulator (AREA)
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201461938135P | 2014-02-10 | 2014-02-10 | |
US201461944524P | 2014-02-25 | 2014-02-25 | |
PCT/US2015/015264 WO2015120473A1 (en) | 2014-02-10 | 2015-02-10 | Entryway based authentication system |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3105696A1 true EP3105696A1 (en) | 2016-12-21 |
EP3105696A4 EP3105696A4 (en) | 2017-10-11 |
Family
ID=57282207
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP15746074.2A Withdrawn EP3105696A4 (en) | 2014-02-10 | 2015-02-10 | Entryway based authentication system |
Country Status (1)
Country | Link |
---|---|
EP (1) | EP3105696A4 (en) |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
SE0004465D0 (en) * | 2000-12-04 | 2000-12-04 | Abb Ab | Robot system |
US6690997B2 (en) * | 2001-09-13 | 2004-02-10 | M.A. Rivalto, Inc. | System for automated package-pick up and delivery |
US9026301B2 (en) * | 2005-10-14 | 2015-05-05 | Aethon, Inc. | Robotic ordering and delivery system software and methods |
- 2015-02-10 EP EP15746074.2A patent/EP3105696A4/en not_active Withdrawn
Also Published As
Publication number | Publication date |
---|---|
EP3105696A4 (en) | 2017-10-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9436926B2 (en) | Entryway based authentication system | |
WO2015120473A1 (en) | Entryway based authentication system | |
US9535421B1 (en) | Mobile delivery robot with interior cargo space | |
US10782686B2 (en) | Systems and methods for operating robots including the handling of delivery operations that cannot be completed | |
US11761160B2 (en) | Apparatus and method of monitoring product placement within a shopping facility | |
US11782452B2 (en) | Surveillance prevention by mobile robot | |
US20220075386A1 (en) | Elevator interactions by mobile robot | |
US11433544B2 (en) | Latency control in human operated mobile robot | |
US10857679B1 (en) | Apparatus and method for auxiliary mobile robot functionality | |
US10252419B2 (en) | System and method for robotic delivery between moving targets | |
US20200061839A1 (en) | Inventory management by mobile robot | |
US20210339399A1 (en) | Mobile robot for elevator interactions | |
US20150205297A1 (en) | Infrastructure for robots in human-centric environments | |
US20230374746A1 (en) | Apparatus and method of monitoring product placement within a shopping facility | |
US20210323581A1 (en) | Mobile artificial intelligence robot and method of controlling the same | |
GB2542470A (en) | Shopping facility assistance systems, devices, and methods to dispatch and recover motorized transport units that effect remote deliveries | |
US20180218310A1 (en) | Ad-hoc parcel delivery drop zone and hotspot | |
EP3105696A1 (en) | Entryway based authentication system | |
WO2017132695A1 (en) | Systems and methods for operating robots including the handling of delivery operations that cannot be completed | |
GB2542265A (en) | Shopping facility track system and method of routing motorized transport units | |
US12366043B2 (en) | Overriding control of motorized transport unit systems, devices and methods | |
US20250231565A1 (en) | Autonomous healthcare robot for secure cargo transport, televisits, and image classification |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PUAI | Public reference made under article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012
| STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE
| 17P | Request for examination filed | Effective date: 20160910
| AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR
| AX | Request for extension of the European patent | Extension state: BA ME
| DAX | Request for extension of the European patent (deleted) |
| A4 | Supplementary search report drawn up and despatched | Effective date: 20170911
| RIC1 | Information provided on IPC code assigned before grant | Ipc: G05D 1/02 20060101ALI20170905BHEP; Ipc: G06Q 10/08 20120101AFI20170905BHEP; Ipc: G06Q 50/28 20120101ALI20170905BHEP; Ipc: G06Q 50/22 20120101ALI20170905BHEP
| STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN
| 18D | Application deemed to be withdrawn | Effective date: 20180410