
WO2021064367A1 - A robot fleet and control system therefor - Google Patents

A robot fleet and control system therefor

Info

Publication number
WO2021064367A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
robot
robots
control system
interactions
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/GB2020/052361
Other languages
French (fr)
Inventor
Andrei DANESCU
Adrian NEGOITA
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dexory Ltd
Original Assignee
Botsandus Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Botsandus Ltd filed Critical Botsandus Ltd
Priority to GB2207501.4A (GB2604520B)
Publication of WO2021064367A1

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1679 Programme controls characterised by the tasks executed
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B25J11/0005 Manipulators having means for high-level communication with users, e.g. speech generator, face recognition means
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0248 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means in combination with a laser
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0287 Control of position or course in two dimensions specially adapted to land vehicles involving a plurality of land vehicles, e.g. fleet or convoy travelling
    • G05D1/0291 Fleet control
    • G05D1/0297 Fleet control by controlling means in a control room
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06Q10/047 Optimisation of routes or paths, e.g. travelling salesman problem
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/06 Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0201 Market modelling; Market analysis; Collecting market data
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0241 Advertisements
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/02 Marketing; Price estimation or determination; Fundraising
    • G06Q30/0281 Customer communication at a business location, e.g. providing product or service information, consulting
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/10 Services
    • G06Q50/22 Social work or social welfare, e.g. community support activities or counselling services
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q90/00 Systems or methods specially adapted for administrative, commercial, financial, managerial or supervisory purposes, not involving significant data processing
    • G06Q90/20 Destination assistance within a business structure or complex
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/30 Nc systems
    • G05B2219/39 Robotics, robotics to robotics hand
    • G05B2219/39146 Swarm, multiagent, distributed multitask fusion, cooperation multi robots
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/30 Nc systems
    • G05B2219/39 Robotics, robotics to robotics hand
    • G05B2219/39153 Human supervisory control of swarm
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00 Program-control systems
    • G05B2219/30 Nc systems
    • G05B2219/40 Robotics, robotics mapping to robotics vision
    • G05B2219/40411 Robot assists human in non-industrial environment like home or office

Definitions

  • The home points and origin of the map are visible in the 2D geometric map layer (described below) and are used to synchronise the layers with one another.
  • Data used: RGB, RGB-D, proximity sensors, stereo camera, ultrasonic, time-of-flight laser, etc. (not an exhaustive list).
  • This map can be built by just one unit or by a fleet of units mapping the same space.
  • The Centralised Management System collates all the maps and generates a master map for the entire space.
  • Benefits: allows robots to orientate, localise and navigate the space.
  • The environmental map data is used to determine locations or areas where interference and noise received by the sensors can give inaccurate readings. For example, areas prone to powerful sunlight can cause significant difficulties for infrared sensors, and areas prone to high magnetic fields can interfere with the magnetometers in the inertial measurement unit. This is a particular problem because measurements from the inertial measurement unit are used in accurately locating the position of the robot relative to a starting point. As a result, if a robot is entering an area which contains strong magnetic fields, or which is sometimes prone to them, the robot can place less emphasis on the data gathered from the inertial measurement unit and rely more heavily on other data to determine its location.
  • Referring to figure 3 (which is split into two parts, figures 3A and 3B), this figure represents an interaction between a robot 12 and a customer in a single retailer's store. It should be noted that the same interaction could occur in a multiple-retailer environment, with the robot directing the customer to a particular retailer who can provide the product requested.
  • Initially the robot is not interacting with any customers and is moving around looking for potential customers to interact with.
  • A customer approaches the robot and at step 84 the robot analyses the person using facial recognition software to determine whether it, or any other robot, has previously interacted with this customer. This data is part of the social interactions data which forms the social interactions map 66.
  • The robot poses a question, saying "Hello. Do you want me to present more details about product X?"
  • When the customer answers "Yes, please do", the robot drives slowly towards where product X is located and at the same time presents technical specifications on its screen to the customer.
  • The customer interrupts the robot and asks "Why is this better than product Y?"
  • The robot presents an on-screen comparison between products X and Y, with an answer to the question provided vocally, setting out the main advantages and disadvantages.
  • The customer decides to buy product Y and asks the robot "Can you please send product Y to the register and help me with some accessories?"
  • The decision to choose product Y instead of product X may be useful to the store manager, and this interaction can be highlighted to draw it to their attention (see step 102).
  • The robot sends a notification for somebody to take product Y to the register, responds to the customer indicating that this has been done, and asks what accessories they would like.
  • The robot undertakes a similar process to that previously described to assist the customer in determining which accessories are required. Once it is clear that the interaction with the customer is coming to an end, the robot asks "Would you mind answering a few questions about your experience?". If, at step 108, the customer agrees, then at step 110 the robot asks a series of questions to obtain further information and to determine how happy the customer was with the customer service provided.
  • The facial recognition software is used to determine, from facial expressions, whether the answers correlate with the facially expressed emotions and body language of the customer.
  • This customer feedback information, which is linked to a number of locations and products, can be provided to a manager via the robot management system, either in an aggregated form over a period of time such as a day, week or month, or instantly (step 112).
  • Managers or staff can be alerted, at step 114, and can meet the customer and attempt to resolve any problems.
  • At step 116 the robot can start with a standard introduction such as "Hello there! How can I help you today?", accompanied by an on-screen display of a few options.
  • This may prompt a verbal response such as, at step 118, a customer saying "I need help to find category X. Can you please help me get there?"
  • This data is stored against the location, indicating that a customer at that location was interested in category X. Later review of the accumulated data can provide insights enabling managers to improve the retail environment, including its layout and product placement, on the basis of these interactions.
  • The robot can reply indicating that it will take the customer to the required area, and the robot is able to calculate the fastest route to that location, using not only the layout of the retail environment but also information from other map layers, including footfall data, which can indicate a very busy area, or data indicating a blockage, for example where spilt liquid has required an area to be closed (a route-costing sketch in this spirit is given after this list).
  • At step 124, if no further assistance is required because, at step 140, the customer has indicated that they have what they need, this can be logged at step 142 as a customer having successfully located a particular product.
  • Other information, including the facial recognition information, is also stored (step 144) and can be retrieved in the event that the customer returns later on the same day or on another day.
  • Referring to figure 4 (which is split into three parts, figures 4A, 4B and 4C), this figure represents another interaction between a robot 12 and a customer in a retail environment.
  • A fleet of robots is deployed in the retail space.
  • A robot is taking a customer to an area referred to as area 1 and at step 152 notices that, in an area referred to as area 5, there is an unusually large number of people waiting to be served.
  • This information is used to update the map and, because that data indicates an exceptional circumstance, a manager is alerted, thus allowing them to take responsive action.
  • Other robots in the fleet that are not otherwise engaged with customers can gather further information at step 158.
  • Once the first robot, which noticed the problem in area 5, has completed its journey with its customer (step 158), it determines whether further action is required at step 162 (it should be noted that step 162 is shown in figures 4A, 4B and 4C).
  • If other action is required, for example a customer having noticed an area of danger such as spilled goods (step 164), the data is transferred from the robot to the robot management system 24, which updates the map highlighting this area of danger.
  • A manager is notified at step 168 and the updated map is used to alter courses of travel to navigate around the area (step 170).
  • The robot that identified the area navigates to it, at step 172, and takes pictures allowing it to alert the managers to the type of danger. It is determined at step 174 whether the problem has been cleared and, if not, the robot remains in that location, calling for extra assistance if needed and warning customers and others of the danger.
  • Once the problem is cleared, the robot sends data updating the maps and interrogates the robot management system 24 for further tasks, or returns to its docking station to recharge (step 178). If at step 162 no further action is required, the robot checks the map at step 182 to see if area 5 is still busy. If that area is still busy the robot can navigate to area 5 to assist, informing other robots that it is on the way. It should be noted that managers can intervene in the robot management system 24 at any time and redirect robots as they see fit. If area 5 is no longer busy the robot undertakes, at step 184, a process similar to that set out at step 178, assessing for further tasks or returning to a docking station to recharge.
  • Referring to figure 5 (which is split into three parts, figures 5A, 5B and 5C), this figure represents an interaction between a robot 12 and a patient in a hospital environment.
  • A plurality of robots is deployed in a hospital at step 200. It is normal for the robots to be deployed from, and based around, areas where high levels of interaction with patients are required, for example around the reception area or around nurse stations. Docking stations and recharging points can also conveniently be located there.
  • At step 202, when a patient comes to reception, there is either a robot at the reception desk that is not otherwise engaged in other interactions, or there is not (step 204). If a robot is available then, at step 206, it approaches the patient and asks "How can I help you?". The patient replies, providing information to the robot about the appointment that the patient is due to attend (step 208). This data is recorded, at step 210, to analyse how early patients arrive for their appointments. The location at which the information is provided is also used to determine where patients are not arriving at the correct locations, allowing improvements to signage to be considered. At step 212, whilst receiving this information, the robot uses its cameras to gather images and processes these using facial analytics to determine the mood of the patient.
  • The facial and head motion analytics are also used to estimate the patient's pulse (a simplified sketch of this idea is given after this list). It is determined at step 214 whether the output of the facial analytics indicates a patient inside or outside expected parameters or, in other words, whether a problem is detected. If no problem is detected the robot queries the hospital record system to determine whether the doctor in question is still with a patient. If so, the patient is asked to wait until the doctor is free. The patient replies at step 220 and later, once the doctor is free, the robot asks the patient to follow it, guiding them to the doctor's room at step 222 and handing them over to the doctor's care.
  • If a problem is identified at step 214, for example if the patient's pulse rate is above or below expected acceptable levels or if the patient appears particularly agitated, then at step 224 the robot will notify nurses, suggesting that an intervention may be required before the patient sees the doctor. Because the robot is now engaged in an extended interaction with a patient, a request is made at step 226 for the robot management system 24 to dispatch another robot to assist patients in the reception waiting area.
  • Returning to step 204, if a robot is not located at reception then the robots are alternatively engaged in checking patients who have been admitted to the hospital, at step 228.
  • All of the data gathered by the robots can be collated centrally by the robot management system 24 and maps produced (step 230). These maps would typically include those described above but could in addition include patient data, obtained either from observations of the patient, from data gathered from diagnostic devices which have sampled the patient, or from access to patient records.
  • A further addition to this data is a map indicating insights which have been derived from patient data, or from interactions with patients, which suggest an improving picture for a patient or otherwise.
  • The robot management system determines prioritised work for the robots.
  • The robot management system 24 has identified area 3 as a priority, and the robot moves to that area and starts interacting with each patient. This involves running through a standard series of diagnostic and/or consultation questions, as well as undertaking the facial recognition and pulse determination mentioned previously at step 212. It is again determined whether the characteristics measured are outside expected parameters, indicating a problem, at step 238. If no problems are detected with any patients then, at step 240, the robot reports back to the robot management system and requests the next prioritised job. However, if problems are identified then, at step 242, the robot advises a member of staff, such as an on-call nurse, sending a report together with the location of the patient and their medical records.
  • A video call is started between the nurse and the patient whilst a locally available member of staff is called to the room containing the patient.
  • The robot continues to analyse the patient, reading parameters such as their pulse, which can be displayed to the on-call nurse until, at step 248, the patient is handed over to the care of a physically present member of medical staff.
  • The robot management system 24 is described above as being a cloud-based processing system. It will be appreciated that any suitable alternative location for this management system would be acceptable including, but not limited to, closed local networks, locating the management system on one of the robots, or distributing the processing across some or all of the robots' CPUs.
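The route choice described above, where the fastest path also accounts for footfall and blocked areas, can be illustrated with a short sketch. This is not the patent's algorithm; it is a generic Dijkstra-style grid search in which the grid, the footfall values and the busy-area penalty are all invented for illustration.

```python
# Hypothetical sketch: shortest path over the store's 2D grid map where
# busy (high-footfall) cells cost extra and blocked cells (e.g. a spill)
# are impassable. Grid, costs and penalty are illustrative assumptions.
import heapq

def plan_route(grid, footfall, start, goal, busy_penalty=4.0):
    """grid[y][x]: 0 = free, 1 = blocked; footfall[y][x]: 0.0..1.0.
    Returns the lowest-cost path as a list of (x, y) cells, or None."""
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        cost, (x, y), path = heapq.heappop(frontier)
        if (x, y) == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and grid[ny][nx] == 0:
                new_cost = cost + 1.0 + busy_penalty * footfall[ny][nx]
                if new_cost < best.get((nx, ny), float("inf")):
                    best[(nx, ny)] = new_cost
                    heapq.heappush(frontier, (new_cost, (nx, ny), path + [(nx, ny)]))
    return None
```

With high footfall values painted over a crowded area such as area 5, the returned path would steer the robot and customer around the crowd rather than through it.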
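The pulse estimation mentioned for the hospital example can be sketched as well. The patent does not disclose its method; the following shows one well-known approach (remote photoplethysmography in its simplest form): treat the mean brightness of the face region per video frame as a signal and take its dominant frequency within the plausible heart-rate band. The synthetic signal and the acceptable range are invented placeholders.

```python
# Heavily simplified pulse-from-video sketch (not the patent's method):
# the dominant frequency of the mean face-region brightness, restricted
# to 0.7-4 Hz (42-240 bpm), approximates the pulse. Real systems add
# face tracking, detrending and motion rejection.
import numpy as np

def estimate_pulse_bpm(brightness, fps):
    """brightness: 1-D array of mean face-region intensity per frame."""
    sig = brightness - brightness.mean()
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    power = np.abs(np.fft.rfft(sig)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]

fps = 30
t = np.arange(0, 20, 1.0 / fps)                      # 20 s of "video"
frames = 0.05 * np.sin(2 * np.pi * 1.2 * t) + np.random.normal(0, 0.02, t.size)
bpm = estimate_pulse_bpm(frames, fps)                # ~72 bpm for this signal
if not 50 <= bpm <= 110:                             # hypothetical acceptable range
    print("outside expected parameters: flag for nurse intervention (step 224)")
```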

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Physics & Mathematics (AREA)
  • Development Economics (AREA)
  • Economics (AREA)
  • General Physics & Mathematics (AREA)
  • Finance (AREA)
  • Accounting & Taxation (AREA)
  • General Business, Economics & Management (AREA)
  • Theoretical Computer Science (AREA)
  • Marketing (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Human Resources & Organizations (AREA)
  • Game Theory and Decision Science (AREA)
  • Tourism & Hospitality (AREA)
  • Health & Medical Sciences (AREA)
  • Mechanical Engineering (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • General Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Remote Sensing (AREA)
  • Operations Research (AREA)
  • Quality & Reliability (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Robotics (AREA)
  • Human Computer Interaction (AREA)
  • Child & Adolescent Psychology (AREA)
  • Primary Health Care (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Educational Administration (AREA)
  • Data Mining & Analysis (AREA)
  • Optics & Photonics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)
  • Manipulator (AREA)

Abstract

A robot fleet control system is disclosed. A group of robots is provided for interacting with users requiring assistance. Each robot has sensors for gathering data relating to these interactions and to the robot's location. These data are communicated to a data processor which creates output data therefrom. This output data can be instructions to the robots to take actions, or maps graphically representing the data so gathered.

Description

A Robot Fleet and Control System Therefor
The present invention relates to a robot fleet and control system therefor, and relates particularly, but not exclusively, to a system used by a single retailer, or by multiple retailers, to assist customers in a retail environment.
The use of robots to improve customer experiences in a variety of environments is becoming more commonplace. For example, retailers are using robots to assist in directing customers in their stores. Such a robot can travel around the store allowing customers to ask questions, receive answers and be directed to items they have asked for or that might be of interest to them. Such robots are also used to encourage customers to make purchases by making the customers aware of offers.
This use of robots is helpful to customers, enabling them to locate the goods they are after quickly. However, the potential of the robots to gather data is underutilised, and the experience of customers can be frustrating. For example, to most customers all the robots appear the same, and it is frustrating if a robot has no knowledge of a conversation which a customer has previously had with another robot.
Preferred embodiments of the present invention seek to overcome or alleviate the above described disadvantages of the prior art.
According to an aspect of the present invention there is provided a robot fleet and control system therefor, comprising:- a plurality of robots for interacting with users and having a plurality of sensors for gathering data relating to said interactions and to locations, and at least one communication device for sending data; and at least one data processor for processing data from said plurality of robots together with said location data and creating output data therefrom.
By processing data relating to the interactions of robots with users, together with the locations of those interactions, the advantage is provided that interactions with subsequent users can be improved. For example, in a retail environment, if interactions between robots and customers indicate negative responses from customers, then action can be taken to address this. For example, it may indicate that there is a problem at that location, and more robots can be dispatched or members of staff called to that area to address the problem. Alternatively, it may indicate a problem with product positioning making it difficult for customers to access a product, and this can be addressed. Another example is that the number of interactions can be used to indicate under-utilised or overcrowded areas in a building such as a retail store, and actions can be taken to alleviate problems resulting from the unbalanced utilisation of the space.
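To make the data flow concrete, the following is a minimal sketch, written for this summary rather than taken from the patent, of how interaction records tagged with a location might be aggregated so that negative-response areas stand out; the record fields, grid size and threshold are all assumptions.

```python
# Illustrative sketch only: one possible shape for the interaction records
# the robots might send, and a per-location aggregation that flags areas
# with negative responses. All names and thresholds are hypothetical.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class InteractionRecord:
    robot_id: str
    location: tuple            # (x, y) position where the interaction took place
    sentiment: float           # -1.0 (negative) .. +1.0 (positive), from facial analytics
    resolved: bool             # did the user get what they needed?

def flag_problem_areas(records, cell_size=5.0, sentiment_floor=-0.2):
    """Bucket interactions into grid cells; flag cells whose average
    sentiment is below the floor so staff or robots can be dispatched."""
    cells = defaultdict(list)
    for r in records:
        cell = (int(r.location[0] // cell_size), int(r.location[1] // cell_size))
        cells[cell].append(r.sentiment)
    return [c for c, vals in cells.items() if sum(vals) / len(vals) < sentiment_floor]
```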
In a preferred embodiment the processor also processes environmental data to create said output data.
By utilising environmental data and combining this with the location and interaction data, the system of the present invention provides the advantage that the impact of the environment on users can be assessed and action taken accordingly. The use of the term environmental within this application is intended to be interpreted very broadly, beyond the most obvious considerations of temperature, humidity, weather conditions and lighting conditions. It is intended to include product placement data that can create a 3D planogram, and product space mapping which locates products and product quantities within a retail store. Another example is the measuring of the movement of people, both in terms of their movement or flow through an area and the numbers passing into and out of a specific area.
In another preferred embodiment the environmental data is gathered by said robots.
In a further preferred embodiment the environmental data comprises product location data.
Allowing the robots to gather the environmental information is not only efficient but allows insights that are not available by other data gathering means. For example, planograms can be kept up-to-date using data that is not easily gathered in other ways. Stock information can be kept up-to-date, and on-display information can be gathered more accurately than is possible using a combination of in-stock and sales information. Another example of using environmental information is in mapping disturbances and noise which impact the robots' sensors' ability to take accurate readings. These areas can be mapped and either avoided by the robots, or less emphasis can be placed on the readings from the affected sensors, with the measurements from other sensors used to supplement or replace them to ensure continued accuracy. Alternatively, the measurements from certain sensors can be ignored in certain areas in order to prevent problems resulting from inaccurate measurements. This further demonstrates the importance of using multiple sensors to navigate.
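A minimal sketch of this per-area de-emphasis of sensors follows; the zone table, sensor names and weights are invented for illustration and are not taken from the patent.

```python
# Hypothetical sketch of per-area sensor weighting: zones known to
# disturb a sensor reduce the weight its readings get in a fused
# estimate. Zone shapes, sensor names and weights are invented.
NOISE_ZONES = [
    # (x_min, x_max, y_min, y_max, affected_sensor, weight)
    (0, 10, 0, 5, "infrared", 0.1),   # strong sunlight near the windows
    (20, 25, 8, 12, "imu", 0.0),      # high magnetic field: ignore IMU here
]

def sensor_weight(sensor, x, y, default=1.0):
    """Return the confidence weight for `sensor` at position (x, y)."""
    for x0, x1, y0, y1, affected, w in NOISE_ZONES:
        if affected == sensor and x0 <= x <= x1 and y0 <= y <= y1:
            return w
    return default

def fuse(readings, x, y):
    """Weighted average of redundant readings {sensor_name: value},
    so disturbed sensors are supplemented or replaced by the others."""
    pairs = [(v, sensor_weight(s, x, y)) for s, v in readings.items()]
    total = sum(w for _, w in pairs)
    return sum(v * w for v, w in pairs) / total if total else None
```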
The output data may comprise instructions to at least one of said plurality of robots.
Transmitting output data to at least one of the plurality of robots allows the robot control system to react to the circumstances in real time. For example, if one or more robots is overwhelmed with enquiries in a particular area, other robots may be sent to assist at that location.
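As an illustration of such an instruction-producing step, the sketch below assigns idle robots to overloaded areas; the load model and the per-robot capacity figure are assumptions rather than anything the patent specifies.

```python
# Hypothetical sketch of the real-time reaction described above: if an
# area's open enquiries exceed what its robots can handle, idle robots
# are instructed to assist there. Capacity and data shapes are invented.
def dispatch_assist(area_load, robots, per_robot_capacity=3):
    """area_load: {area: open_enquiries}; robots: {robot_id: (area, busy)}.
    Returns a list of (robot_id, target_area) instructions."""
    instructions = []
    idle = [rid for rid, (_, busy) in robots.items() if not busy]
    for area, load in sorted(area_load.items(), key=lambda kv: -kv[1]):
        staffed = sum(1 for a, _ in robots.values() if a == area)
        while idle and load > staffed * per_robot_capacity:
            instructions.append((idle.pop(0), area))   # send the next idle robot
            staffed += 1
    return instructions

# e.g. dispatch_assist({"area5": 9}, {"r1": ("area5", True), "r2": ("area1", False)})
# -> [("r2", "area5")]
```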
The output data may also or alternatively comprise a map. Maps are an extremely powerful way of displaying information, and the data gathered by the robots can be overlaid with environmental information to provide insights which are not available via human interactions.
Preferred embodiments of the present invention will now be described, by way of example only, and not in any limitative sense, with reference to the accompanying drawings in which:
Figure 1 is a schematic representation of a robot fleet and control system therefor of the present invention;
Figure 2 is a schematic representation of a single robot and a control system forming part of the fleet of figure 1;
Figure 3 (separated into two parts labelled figures 3A and 3B) is a flowchart showing an example operation of the method used in the present invention;
Figure 4 (separated into three parts labelled figures 4A, 4B and 4C) is a flowchart showing an alternative example operation of the method used in the present invention;
Figure 5 (separated into three parts labelled figures 5A, 5B and 5C) is a flowchart showing a further example operation of the method used in the present invention; and
Figure 6 is a schematic representation of layered maps used as an example of an output of the present invention.
Referring initially to figures 1 and 2, a fleet of robots and control system therefor, generally indicated with reference numeral 10, includes a plurality of robots which in this example includes five robots given the reference numerals 12, 14, 16, 18 and 20. The robots 12 to 20 exchange data, via data exchange routes labelled 22, with at least one data processor in the form of a robot management system 24.
An example of a robot 12 is shown in figure 2, and this robot is to be used in interactions with users, who could for example be customers in a retail environment. The robot 12 has a plurality of sensors which are used to enable the interactions with users as well as to gather other data. For example, the robot 12 is provided with a plurality of microphones 26 which can be used to pick up environmental noise as well as detecting vocal interactions with users. The robot 12 also has a pair of RGB cameras, indicated at 30, as well as one at the rear (which cannot be seen in figure 2). In addition, there is a stereo camera 28 in the form of a pair of RGB cameras spaced slightly apart. A further 3D (or range) camera is provided (at 29) which, as well as taking RGB images, uses infra-red range-finding data to determine the distance to objects detected by the camera to around 10-15 m. The images from these multiple cameras can be combined to create a 360° image around the robot 12. Alternatively, a 360° camera can be located on the top of the robot. These cameras are used to obtain visual images of the environment around the robot 12 and also as part of the interactions with users. For example, the data from the cameras can be processed to detect faces of customers and can be used to recognise faces as well as to determine emotional states of the users. In addition, the cameras can be used to identify environmental information, including information about goods available for sale, and in the creation of planograms.
Further sensors provided on the robot 12 include short-distance proximity sensors 32 of the type commonly used on motor vehicles to determine the distance to any nearby objects, including those which might be in the path of the robot. The robot 12 is provided with further depth perception in the form of a LIDAR sensor 34 which can operate in 2D and 3D modes. In 2D mode the detector acts in a similar fashion to the proximity sensors, detecting objects at, or slightly above, floor level. However, in 3D mode the detector takes data to determine the full shape of an object to at least the full height of the robot. The data from the LIDAR and proximity sensors is used to allow the robot 12 to navigate the environment, such as a retail store, determining its distance relative to stationary and moving objects to ensure that it does not run into anything. Data from the sensors is processed in one or more processors in the form of CPUs 36. The output from the CPUs includes information displayed on a screen 38 or sounds, typically replicated voice sounds, through a speaker 40. The screen 38 is a touch screen to allow manual input of information where necessary. The CPUs 36 also control motors which in turn drive wheels 40 which enable the robot to move. The sending of the data, via communication pathway 26, is also controlled and facilitated by the CPUs 36.
Also shown in figure 2 are connections to further computer devices. Indicated at 42 is a standard computer device such as a laptop computer, a tablet computer device or a mobile telephone computing device. These computer devices 42 are used to review data gathered by the fleet of robots 12 once it has been processed and this can be displayed in the form of maps. This data, and the maps, come from the robot management system 24 to which the computer device 42 is connected via a standard data communication link 44. In addition, the computer device 42 can be connected directly to the robot device 12 via an alternative data link 46. This is typically used for diagnostic and maintenance reviews of the robot 12 but can be used for direct communication of the data gathered by the sensors of the robot.
An important aspect of the operations of the robots 12 is the determination of their location. One example of how the location of a robot is determined is based on data about the position of the robot relative to known fixed objects. Taking the example of robots operating in a retail environment, the floorspace of that retail environment, whether it is a single retailer's store or multiple retailers in a shopping mall, will contain multiple permanently fixed objects such as walls, doors, pillars and the like. Typically, there will also be other objects which are generally fixed but which in principle can be moved. Examples of these include display rails and display cabinets which stay in one place for long periods of time but which are occasionally moved during refitting, or rejigging, of a store. As the robot 12 moves around the retail environment it is able to use its multiple sensors to determine its location relative to these fixed objects. Since the robot starts from the known location of a docking or recharge station, it can use data from its drive system indicating distance and direction travelled to give an approximate location. This can be enhanced by the use of the proximity sensors 32, which determine the distance from the sensor to any nearby object. However, this can lead to inaccurate readings, since the proximity sensors 32 are unable to distinguish fixed objects from movable objects (such as a person standing still). Although over time the proximity sensors can build up a sufficiently accurate map to determine locations more accurately, it is advantageous to also use the lidar sensor 34 in order to gain a more accurate picture of the environment immediately adjacent the robot 12, by using the lidar data to identify the shape of objects. This can be further enhanced by using the stereo camera 28, 3D camera 29 and RGB cameras 30 to identify objects. For example, the data from these cameras is used, as explained later, to identify people, and in particular people's faces using facial recognition and analytics software, and this information can be used to eliminate the people from the location determination as they are not fixed objects. In addition to these sensors, an internally located sensor in the form of an inertial measurement unit (IMU) is included to measure movement of the robot 12. This further assists in determining the location of the robot as it travels. As is common with inertial measurement units, this includes accelerometers, gyroscopes and magnetometers, the outputs of which determine the orientation, speed of travel, distance travelled and direction of travel.
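The following sketch shows the general flavour of such a combination: dead-reckoning from odometry and the IMU heading, blended with occasional fixes against fixed objects. The blending constant and the interfaces are illustrative assumptions, not the patent's method.

```python
# Simplified localisation sketch: advance the pose from wheel odometry
# along the IMU heading, then blend in occasional lidar-derived fixes
# matched against known fixed objects. Parameters are invented.
import math

class PoseEstimator:
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y          # start at the known docking station

    def predict(self, distance, heading_rad):
        """Dead-reckoning step: odometry distance along the IMU heading."""
        self.x += distance * math.cos(heading_rad)
        self.y += distance * math.sin(heading_rad)

    def correct(self, fix_x, fix_y, trust=0.3):
        """Blend in a position fix derived from lidar shape-matching
        against fixed objects; people are excluded and give no fix."""
        self.x += trust * (fix_x - self.x)
        self.y += trust * (fix_y - self.y)

pose = PoseEstimator()
pose.predict(distance=1.0, heading_rad=0.0)   # 1 m straight ahead
pose.correct(1.1, 0.05)                       # small lidar correction
```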
All of the data gathered by sensors of the robots 12 is linked to the location at which that data is gathered. This data is then processed in the robot management system 24 and an output is produced. An example of an output from the robot management system 24 is a signal which is sent to one or more of the robots 12 via the data communication link 22 giving them instructions to take an action. An alternative output is a visual representation of the retail environment, typically in the form of a map, which can be viewed on a computer device 42 and is sent to that device via the data communication link 44.
A schematic representation of the data in map form is shown in figure 6. A centralised map 50 contains data from a plurality of overlaid maps, generally indicated at 52, each layer containing different information as set out in the following table.
54 - User information and control layer (user-in-the-loop)
The map is provided in layers so that the user gets access to specific elements rather than the whole complex map at once. Turning layers on and off allows for gradual processing of the information on the user's part, so that they can request and control the layers that matter in that context (a data-structure sketch of such a layered map is given after the table).
This allows for reshaping and creating a user-friendly map and information display.
- data used: all.
56 - 360° map
Used to represent real spaces in the virtual world. Allows full immersion (VR, AR data sets) so any space can be checked and visited virtually, with real-time placement of objects.
- data used: 360° camera (spherical system), lidar, 3D lidar.
- 3D planograms
This is a detailed record of the products on display.
The extra layer of information allows for extra depth and complete 3D immersion, similar to Google Maps' Street View.
This means the product placement can be seen across all dimensions (including height and depth on shelf) and key things can be assessed, such as the distance between displays and whether things have been placed in exactly the right location.
- data used: store layout plan, 3D lidar, stereo camera and 3D camera.
- 3D map (complete geometric information in X, Y, Z dimensions)
This layer is used to enhance the base 2D layer with spatial information, allowing a 3D representation of the world. Same base as the 2D map, but done entirely in three-dimensional space.
- data used: RGB, lidar, stereo camera, 360° camera.
- Forecast and prediction layer
Predictions of the future based on previously learned information and external sources (similar to Google Maps' "arrive by" feature showing predicted traffic).
The model uses historic information and previously learned data. It is trained with specific data per location (more specific predictions) or general area data.
- Generated insights (extracted from other data)
This layer is generated by the AI/ML (machine learning) software that analyses all the other layers and generates insights for each part of the map. It is presented in a useful way to the user that accesses it, giving instant access to the whole fleet's data and actions.
Benefits: - Easier access to the insights
- Have all the information from the other layers aggregate in a single layer that is really useful for the user. - Social interactions
66 - Social interactions
One of the main features of the robot is to interact with people in the space in which it is deployed. A high number of interactions are generated daily and, for better understanding and monitoring, these are embedded in a separate layer of the map. The robot interacts with people via voice and touchscreen and displays all the information on the screen. All the information generated by each interaction is stored behind the Face ID of the person interacting with the robot. This information is anonymised and placed on the layer (a sketch of such an anonymised store follows this entry).
- Data used: cameras, audio I/O, visual data, touchscreen.
Benefits:
- Detailed information about the customers who interacted with the robot, as well as the information they needed
- Facial analytics runs continuously so as to link all of this to the information presented on screen. This generates powerful insights.
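A sketch of how interaction records might be stored behind an anonymised identifier derived from the Face ID (the hashing scheme is an assumption; the application only states that the information is anonymised):

```python
import hashlib
from collections import defaultdict

class InteractionLayer:
    """Stores interaction records keyed by an anonymised face identifier."""

    def __init__(self, salt: bytes):
        self._salt = salt
        self._records = defaultdict(list)

    def _anon_id(self, face_embedding: bytes) -> str:
        # One-way salted hash: the layer never holds the raw face data.
        return hashlib.sha256(self._salt + face_embedding).hexdigest()

    def log(self, face_embedding: bytes, location, summary: str) -> None:
        self._records[self._anon_id(face_embedding)].append(
            {"location": location, "summary": summary})

    def history(self, face_embedding: bytes) -> list:
        # Lets any robot in the fleet recall earlier interactions
        # (as used at step 84 of figure 3 below).
        return self._records[self._anon_id(face_embedding)]
```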
People flow and footfall monitor
While navigating the space the robot monitors all the people it detects and tracks their flow (direction of arrival and of destination), adding all the information to a new layer. This is used to understand the way people navigate the store and where the hot and cold spots of the space are. The layer also contains information about demographics, extracted using face analytics algorithms, coupled with any interaction or exchange of information between robot and user. It is created using software algorithms based on RGB cameras and enhanced by using 3D cameras.
Benefits:
- Understand how people are moving in the space
- Know how many people visited the space
- Map the hot and the cold spots so actions can be taken to level them (a sketch of such a heat map follows this entry)
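One way (assumed, not specified in the application) to derive hot and cold spots is to bin person detections into a grid and rank the cells:

```python
import numpy as np

def hot_and_cold_spots(detections, grid_shape=(50, 50), cell_size=1.0, k=5):
    """Bin (x, y) person detections into a grid; return the k busiest
    ("hot") and k quietest ("cold") cells."""
    heat = np.zeros(grid_shape)
    for x, y in detections:                       # coordinates in metres
        i, j = int(x // cell_size), int(y // cell_size)
        if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
            heat[i, j] += 1
    order = np.argsort(heat.flatten())
    to_cell = lambda idx: divmod(int(idx), grid_shape[1])
    hot = [to_cell(i) for i in order[-k:][::-1]]
    cold = [to_cell(i) for i in order[:k]]
    return hot, cold
```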
Product-space mapping
This is a detailed 3D map of the space within which the robots move. One of the main purposes of this map is to be customer facing, making the interior of the store available to customers and potential customers via screens on computer devices or other devices. Overlaid onto this is additional information relating to the products on display on the shelves, allowing a customer to browse before entering the physical environment of the store.
External info layer
This layer offers the ability to bring in external data, overlaying it on the existing map structure. This expands the system's functionality beyond the fleet's sensing capabilities. It is built by integrating directly with external sources/APIs, or by manually loading the external information and synchronising it with the existing layers. Once the new data is integrated into the system, the robot fleet and the user can use it to take decisions or generate insights.
Benefits:
- Extends the functionality by using external data sources
- Allows the user to see the new data in a map view, leading to more in-depth and accurate predictions
Environment ambient layer
This layer contains information about the operating environment, including air quality, temperature, humidity, noise levels (full spectrum), light conditions and WiFi signal strength.
Benefits:
- Based on the temperature and humidity information, the robots adjust their navigation path to avoid possible faults in internal components
- Based on the WiFi signal strength, the navigation path can be adjusted to avoid dead zones (a sketch of such cost adjustment follows this entry)
- Real-time access to the environment data allows users to optimise the environment
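A minimal sketch of the cost adjustment hinted at above; the thresholds and penalty values are invented for illustration, not taken from the application:

```python
def traversal_cost(base_cost=1.0, wifi_dbm=None, temp_c=None):
    """Inflate a navigation cell's cost where ambient conditions are poor,
    so the planner steers around WiFi dead zones and hot areas."""
    cost = base_cost
    if wifi_dbm is not None and wifi_dbm < -80:   # assumed weak-signal threshold
        cost += 5.0                               # discourage, don't forbid
    if temp_c is not None and temp_c > 40:        # assumed component safety limit
        cost += 10.0
    return cost
```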
2D geometric map
This layer is the baseline on which all the other layers build. It is a map generated (or pre-programmed) from the operational environment's geometry and includes all the static obstacles, furniture and other architectural elements detected by the robot while navigating the space. The home points and origin of the map are visible in this layer and are used to synchronise the layers with one another. Building this layer uses sensors such as lidar (2D, 3D), RGB and RGB-D cameras, proximity sensors, stereo cameras, ultrasonic sensors and time-of-flight lasers (not an exhaustive list). The map can be built by just one unit or by a fleet of units mapping the same space; the centralised management system collates all the maps and generates a master map for the entire space (a sketch of such a merge follows this entry).
Benefits:
- Allows robots to orientate, localise and navigate the space
- Provides localisation for the other layers - gives the x, y coordinates
- Gives a real-time map of the space, accounting for temporary or late changes
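A sketch, under assumed conventions (occupancy values in [0, 1], NaN for unseen cells, grids already aligned to the same origin), of how the centralised management system might collate per-robot grids into a master map:

```python
import numpy as np

def merge_occupancy_grids(grids):
    """Average aligned per-robot occupancy grids wherever at least one
    robot has observed the cell; unseen cells stay NaN."""
    stack = np.stack(grids)            # shape: (n_robots, H, W)
    seen = ~np.isnan(stack)
    counts = seen.sum(axis=0)
    summed = np.where(seen, stack, 0.0).sum(axis=0)
    master = np.full(grids[0].shape, np.nan)
    observed = counts > 0
    master[observed] = summed[observed] / counts[observed]
    return master
```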
As part of the environmental map, data is used to determine locations or areas where interference and noise received by the sensors can give inaccurate readings. For example, areas prone to powerful sunlight can cause significant difficulties for infrared sensors, and areas prone to high magnetic fields can interfere with the magnetometers in the inertial measurement unit. This is a particular problem as measurements from the inertial measurement unit are used in accurately locating the position of the robot relative to a starting point. As a result, if a robot is entering an area which contains strong magnetic fields, or is sometimes prone to strong magnetic fields, the robot can place less emphasis on the data gathered from the inertial measurement unit and rely more heavily on other data to determine its location.
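A minimal sketch of this down-weighting, together with the noise-threshold test described next (the weights, threshold factor and zone flags are assumptions):

```python
def imu_weight(zone_flags, base_weight=0.5):
    """Weight given to IMU data in the pose fusion for the current map
    cell; flagged magnetic-interference zones reduce it sharply."""
    if "magnetic_interference" in zone_flags:
        return base_weight * 0.2   # rely mostly on lidar/odometry here
    return base_weight

def flag_noisy_zone(noise_samples, fleet_average, factor=2.0):
    # Flag a cell when its measured sensor noise clearly exceeds the
    # average noise level seen elsewhere (threshold factor assumed).
    return sum(noise_samples) / len(noise_samples) > factor * fleet_average
```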
The initial gathering of the data indicating an area where interference might be occurring is performed by measuring noise signals in the data gathered by the sensors and determining whether they exceed the average noise levels seen elsewhere.

Referring now to figure 3 (which is split into two parts, figures 3A and 3B), this figure represents an interaction between a robot 12 and a customer in a single retailer's store. It should be noted that this same interaction could take place in a multiple-retailer environment, with the robot directing the customer to a particular retailer who can provide the product requested.
Initially, at step 80, the robot is not interacting with any customers and is moving around looking for potential customers to interact with. At step 82 a customer approaches the robot and at step 84 the robot analyses the person using facial recognition software to determine whether it, or any other robot, has previously interacted with this customer. This data is part of the social interactions data which forms the social interactions map 66. At step 86 it is determined whether interactions have taken place with this customer previously and, in this example, when the answer is "Yes" the robot then knows, at step 88, that the person wants to buy a product X and that they have asked for technical specifications when interacting with another robot at the entrance to the store, because this information was shared via the robot management system 24 with all of the robots. As a result, at step 90, the robot poses a question saying "Hello. Do you want me to present more details about Product X?" When, at step 92, the customer answers "yes, please do", the robot then drives slowly towards where product X is located and at the same time presents technical specifications on its screen to the customer.
At step 96 the customer interrupts the robot and asks "why is this better than product Y?". At step 98 the robot presents an on-screen comparison between products X and Y, with an answer to the question provided vocally covering the main advantages and disadvantages. After this review the customer decides to buy product Y and asks the robot "can you please send product Y to the register and help me with some accessories?" The decision to choose product Y instead of product X may be useful to the store manager, and this interaction can be highlighted, drawing it to their attention (see step 102). In response to the customer's request, at step 104, the robot sends a notification for somebody to take product Y to the register, responds to the customer indicating that this is the case and asks what accessories they would like. At step 106 the robot undertakes a similar process to that previously described to assist the customer in determining which accessories are required. Once it is clear that the interaction with the customer is coming to an end, the robot asks "would you mind answering a few questions about your experience?". If, at step 108, the customer agrees, then at step 110 the robot asks a series of questions to obtain further information and determine how happy the customer was with the customer service provided. In addition, the facial recognition software is used to determine, from facial expressions, whether the answers correlate with the facially expressed emotions and body language of the customer. This customer feedback information, which is linked to a number of locations and products, can be provided to a manager via the robot management system, either in an aggregated form over a period of time such as a day, week or month, or instantly (step 112). In the event that a customer is unhappy, and in particular if this is verified by the facial emotion recognition software, then managers or staff can be alerted, at step 114, and can meet the customer and attempt to resolve the problems.
Returning to step 86, if the customer is not recognised then at step 116 the robot can start with a standard introduction such as "Hello there! How can I help you today?", accompanied by an on-screen display of a few options. In the event of a verbal response such as, at step 118, a customer saying "I need help to find category X. Can you please help me get there?", this data is stored against the location, indicating that a customer at that location was interested in category X. Later review of the accumulated data can yield insights enabling managers to improve the retail environment, including its layout and product placement, on the basis of these interactions.
At step 122 the robot can reply indicating that it will take the customer to the required area. The robot is able to calculate the fastest route to that location, not only using the layout of the retail environment but also adding information from other map layers, including footfall data which can indicate a very busy area, or data indicating a blockage in an area, for example where spilt liquid has required an area to be closed. Once the robot and customer arrive at the location for category X, at step 124, the customer is asked if they require any further information. If the answer to this is "yes" and the customer requests, at step 126, a "product 10" because it is not visible on the shelf, the robot can check for stock and, at step 128, ask how many of that product the customer requires. At the same time the robot can alert managers that a product is missing or cannot easily be found by the customer, thereby suggesting restocking or rearrangement of that area.
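The route calculation could be any shortest-path search whose edge costs fold in the extra layers; a sketch (assuming a walkable grid and a goal that is reachable) using Dijkstra's algorithm:

```python
import heapq

def fastest_route(start, goal, neighbours, layer_cost):
    """Dijkstra over the store's walkable grid. layer_cost(cell) already
    folds in footfall congestion and blocked-area penalties."""
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue
        for nxt in neighbours(cell):
            nd = d + layer_cost(nxt)
            if nd < dist.get(nxt, float("inf")):
                dist[nxt], prev[nxt] = nd, cell
                heapq.heappush(queue, (nd, nxt))
    path, cell = [goal], goal
    while cell != start:          # walk back from goal to recover the path
        cell = prev[cell]
        path.append(cell)
    return path[::-1]
```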
The customer replies that they require two items, at step 132, and, at step 134, the robot requests that the customer remain at that location while stock is fetched for them. The robot can then ask if the customer requires anything else and, when answered "no, thank you, that was helpful", at step 136 the map 66 can be updated with a positive customer response for later review.
Returning to step 124, if no further assistance was required because, at step 140, the customer has indicated that they have what they need, this can be logged at step 142 as a customer having successfully located a particular product. Other information, including the facial recognition information, is also stored at step 144 and can be retrieved in the event that the customer returns later on the same day or on another day.
Turning to figure 4 (which is split into three parts, figures 4A, 4B and 4C), this figure represents another interaction between a robot 12 and a customer in a retail environment. At step 150, a fleet of robots is deployed in the retail space. As a result of an interaction of the type described in relation to figure 3, a robot is taking a customer to an area referred to as area 1 and, at step 152, notices that in an area referred to as area 5 there are a lot of people, or more specifically an unusually large number of people, waiting to be served. At step 154, this information is updated on the map and, because that data indicates an exceptional circumstance, a manager is notified, thus allowing them to take responsive action. The robots in the fleet that are not otherwise engaged with customers can gather further information at step 158. When the first robot, which noticed the problem in area 5, has completed its journey with its customer (step 158), it determines whether further action is required at step 162 (it should be noted that step 162 is shown in figures 4A, 4B and 4C).
If other action is required, for example a customer has noticed an area of danger (step 164), such as spilled goods, the data is transferred from the robot to the robot management system 24, which updates the map highlighting this area of danger. A manager is notified at step 168 and the updated map is used to alter the robots' courses of travel to navigate around the area (step 170). The robot that identified the area navigates to it, at step 172, and takes pictures, allowing it to alert the managers to the type of danger. It is determined at step 174 whether the problem has been cleared and, if not, the robot remains in that location, calling for extra assistance if needed and warning customers and others of the danger.
When the problem has been cleared, at step 178, the robot sends data updating the maps and interrogates the robot management system 24 for further tasks, or returns to its docking station to recharge. If at step 162 no further action is required then the robot checks the map, at step 182, to see if area 5 is still busy. If that area is still busy, at step 182, the robot can navigate to area 5 to assist, informing the other robots that it is on the way. It should be noted that managers can intervene in the robot management system 24 at any time and redirect robots as they see fit. If area 5 is no longer busy, the robot undertakes, at step 184, a process similar to that set out at step 178, and assesses whether there are further tasks or returns to a docking station to recharge.
Turning to figure 5 (which is split into three parts, figures 5A, 5B and 5C), this figure represents an interaction between a robot 12 and a patient in a hospital environment. In a similar way to the retail environment, a plurality of robots are deployed in a hospital at step 200. It is normal for the robots to be deployed from, and based around, areas where high levels of interaction with patients are required, for example around the reception area or around nurses' stations. Docking stations and recharging points can also conveniently be located there.
At step 202, when a patient comes to reception, there is either a robot at the reception desk that is not otherwise engaged in other interactions, or there is not (step 204). If a robot is available, it approaches the patient at step 206 and asks "How can I help you?" The patient replies, providing information to the robot about the appointment that the patient is due to attend (step 208). This data is recorded, at step 210, to analyse how early patients arrive for appointments. The location at which the information is provided is also used to determine where patients are not arriving at the correct locations, allowing improvements to signage to be considered. At step 212, whilst receiving this information, the robot uses its cameras to gather images and processes these using facial analytics to determine the mood of the patient.
At the same time, facial and head-motion analytics are used to estimate the patient's pulse. It is determined at step 214 whether the output of the facial analytics indicates a patient inside or outside expected parameters; in other words, is there a problem detected? If no problem is detected, the robot queries the hospital record system to determine whether the doctor in question is still with a patient. If so, the patient is asked to wait until the doctor is free. The patient replies at step 220 and later, once the doctor is free, the robot asks the patient to follow it, guiding them to the doctor's room, at step 222, and handing them over to the doctor's care.
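Purely as an illustrative sketch of the pulse estimate (the signal-processing details are assumptions; the application does not describe the method), the dominant frequency of a head-motion signal within the typical heart-rate band can be converted to beats per minute:

```python
import numpy as np

def estimate_pulse_bpm(motion_signal, fps=30.0, lo_hz=0.8, hi_hz=3.0):
    """Estimate pulse as the dominant frequency of a per-frame head-motion
    signal within roughly 48-180 bpm (0.8-3 Hz); parameters illustrative."""
    signal = np.asarray(motion_signal, dtype=float)
    signal -= signal.mean()
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs >= lo_hz) & (freqs <= hi_hz)
    if not band.any():
        return None                # too few samples to resolve the band
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0          # Hz -> beats per minute
```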
If a problem is identified at step 214, for example if the patient's pulse rate is above or below expected acceptable levels or if the patient appears particularly agitated, then, at step 224, the robot will notify nurses suggesting that an intervention may be required before the patient sees the doctor. Because the robot is now engaged in an extended interaction with a patient, a request is made, at step 226, that the robot management system 24 dispatch another robot to assist patients in the reception waiting area.
Now returning to the question at step 204 of whether a robot is in reception: if a robot is not located there, then the robots are alternatively engaged in checking patients who have been admitted to the hospital, at step 228. All of the data gathered by the robots can be collated centrally by the robot management system 24 and maps produced (step 230). These maps would typically include those described above but could in addition include patient data, obtained either from observations of the patient, from data gathered from diagnostic devices which have sampled the patient, or from access to patient records. A further addition to this data is a map indicating insights derived from patient data, or from interactions with patients, which suggest an improving picture for a patient or otherwise. The robot management system, at step 232, determines prioritised work for the robots. This can either be specific tasks or the identification of an area of the hospital where work needs to be prioritised. In this example the robot management system 24 has identified area 3 as a priority and the robot moves to that area and starts interacting with each patient. This involves running through a standard series of diagnostic and/or consultation questions as well as undertaking the facial recognition and pulse determination mentioned previously at step 212. It is again determined whether the characteristics measured are outside expected parameters and so indicate a problem, at step 238. If no problems are detected with any patients, at step 240 the robot reports back to the robot management system and requests the next prioritised job. However, if problems are identified, at step 242 the robot advises a member of staff, such as an on-call nurse, sending a report together with the location of the patient and their medical records. At step 244 a video call is started between the nurse and the patient whilst a locally available member of staff is called to the room containing the patient. At step 246, while the nurse is talking to the patient, the robot continues to analyse the patient, reading parameters such as their pulse, which can be displayed to the on-call nurse, until at step 248 the patient is handed over to the care of a physically present member of medical staff.
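The prioritised work determination could be as simple as a central priority queue from which idle robots request their next job; a sketch under that assumption (names and priority values are illustrative):

```python
import heapq
import itertools

class TaskDispatcher:
    """Central queue of jobs, handed out highest priority first
    (lower number = more urgent)."""

    def __init__(self):
        self._queue = []
        self._tie = itertools.count()   # stable order for equal priorities

    def add_task(self, priority: int, task: str, area: str) -> None:
        heapq.heappush(self._queue, (priority, next(self._tie), task, area))

    def next_task(self):
        if not self._queue:
            return None
        priority, _, task, area = heapq.heappop(self._queue)
        return task, area

dispatcher = TaskDispatcher()
dispatcher.add_task(1, "check patients", "area 3")   # flagged as a priority
dispatcher.add_task(3, "routine mapping", "area 7")
print(dispatcher.next_task())    # -> ('check patients', 'area 3')
```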
It will be appreciated by persons skilled in the art that the above embodiments have been described by way of example only and not in any limitative sense, and that various alterations and modifications are possible without departure from the scope of the protection which is defined by the appended claims. For example, the robot management system 24 is described above as being a cloud-based processing system. However, any suitable alternative location for this management system would be acceptable, including, but not limited to, closed local networks, locating the management system on one of the robots, or distributing the processing across some or all of the robots' CPUs.

Claims
1. A robot fleet and control system therefor, comprising:
a plurality of robots for interacting with users and having a plurality of sensors for gathering data relating to said interactions and to locations, and at least one communication device for sending data; and
at least one data processor for processing data from said plurality of robots together with said location data and creating output data therefrom.
2. A robot fleet and control system according to claim 1, wherein said processor also processes environmental data to create said output data.
3. A robot fleet and control system according to claim 2, wherein said environmental data is gathered by said robots.
4. A robot fleet and control system according to claim 2 or 3, wherein said environmental data comprises product location data.
5. A robot fleet and control system according to any preceding claim, wherein said output data comprises instructions to at least one of said plurality of robots.
6. A robot fleet and control system according to any preceding claim, wherein said output data comprises a map.
7. A method for operating a control system for a robot fleet, the robots being for interacting with users and having a plurality of sensors for gathering data relating to said interactions and to locations and having at least one communication device for sending data to a processor for processing said data, the method comprising the steps of:
receiving data from a plurality of sensors gathering data on a plurality of robots relating to interactions between said robots and users thereof and to robot locations; and
processing said data together with said location data and creating output data therefrom.
8. A method according to claim 7, further comprising processing environmental data to create said output data.
9. A method according to claim 8, further comprising said robots gathering said environmental data.
10. A method according to claim 7 or 8, wherein said environmental data comprises product location data.
11. A method according to any of claims 7 to 9, further comprising using said output data to instruct at least one of said plurality of robots.
12. A method according to any of claims 7 to 10, further comprising outputting said data as a map.
13. A control system for a robot fleet, the robots being for interacting with users and having a plurality of sensors for gathering data relating to said interactions and to locations and having at least one communication device for sending data to a processor for processing said data, the control system comprising:
first computer code for receiving data from a plurality of sensors gathering data on a plurality of robots relating to interactions between said robots and users thereof and to robot locations; and
second computer code for processing said data together with said location data and creating output data therefrom.
14. A control system according to claim 13, further comprising third computer code for processing environmental data to create said output data.
15. A control system according to claim 14, further comprising fourth computer code for said robots gathering said environmental data.
16. A control system according to claim 14 or 15, wherein said environmental data comprises product location data.
17. A control system according to any of claims 14 to 16, further comprising fifth computer code for using said output data to instruct at least one of said plurality of robots.
18. A control system according to any of claims 14 to 17, further comprising sixth computer code for outputting said data as a map.