
WO2015069320A2 - System and method for mobile identification and tracking in location systems - Google Patents

System and method for mobile identification and tracking in location systems

Info

Publication number
WO2015069320A2
WO2015069320A2 (PCT/US2014/038806; US 2014038806 W)
Authority
WO
WIPO (PCT)
Prior art keywords
information
location
determining
signature
group
Prior art date
Application number
PCT/US2014/038806
Other languages
French (fr)
Other versions
WO2015069320A3 (en)
Inventor
Thomas B. Gravely
Martin C. Alles
Andrew E. BECK
Original Assignee
Andrew Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Andrew Llc filed Critical Andrew Llc
Publication of WO2015069320A2
Publication of WO2015069320A3

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W4/02 Services making use of location information
    • H04W4/029 Location-based management or tracking services
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S5/00 Position-fixing by co-ordinating two or more direction or position line determinations; Position-fixing by co-ordinating two or more distance determinations
    • G01S5/01 Determining conditions which influence positioning, e.g. radio environment, state of motion or energy consumption
    • G01S5/012 Identifying whether indoors or outdoors
    • G01S5/02 Position-fixing by co-ordinating two or more direction or position line determinations using radio waves
    • G01S5/0257 Hybrid positioning
    • G01S5/0263 Hybrid positioning by combining or switching between positions derived from two or more separate positioning systems
    • G01S5/0264 Hybrid positioning in which at least one of the systems is a non-radio wave positioning system
    • G01S2205/00 Position-fixing by co-ordinating two or more direction or position line determinations
    • G01S2205/01 Position-fixing specially adapted for specific applications
    • G01S2205/02 Indoor

Definitions

  • Wireless E911 calls must be located within an average accuracy of approximately 100 meters and with latency not exceeding 30 seconds.
  • Network operators have installed wireless location systems that employ a variety of positioning technologies to establish a caller's location.
  • a serving wireless network detects the E911 call, identifies the mobile, and directly or indirectly launches a location request to the wireless location system of the serving carrier. Once a target mobile's location coordinates are determined, the location system typically sends that information through the serving network for delivery to or retrieval by the PSAP to which the network delivered the emergency call.
  • Carriers must report to federal and local agencies whether they comply with the caller location accuracy criteria. In the U.S., wireless carriers currently are permitted to perform all location accuracy compliance testing outdoors.
  • FCC Federal Communications Commission
  • Certain key performance attributes/issues of currently available in-place "macro-area" E911 Phase II location systems include: (a) a design that produces mobile location accuracy of approximately 50 m to 100 m; (b) non-autonomous operation, i.e., individual target mobiles are located only as directed by wireless networks based on 911 call initiation; (c) time delay: location of the target mobile occurs within 30 seconds; and (d) significant performance degradation if the target mobile is located within a structure.
  • OTT over-the-top
  • Examples from non-carrier enterprises include, but are not limited to, applications hosted on smartphones, supported by location capabilities resident within the mobile device itself and by individual non-carrier entities such as mobile device makers, application content providers, and mapping services; these applications are not dependent whatsoever on any carrier's location facilities.
  • each of these examples lacks the basic location accuracy and response time demanded by emerging indoor-specific LBAs.
  • Figure 1 is a flow chart for determining a signature of an object according to an embodiment of the present subject matter.
  • the wireless device is tracked using the determined signature.
  • information is provided to the object.
  • Figure 2 is a flow chart for determining a signature of an object according to another embodiment of the present subject matter.
  • the wireless device is tracked using the determined signature.
  • information is provided to the object.
  • Figure 3 is a block diagram for an exemplary system for determining a signature of an object according to an embodiment of the present subject matter.
  • Many LBAs are envisioned. Large retail establishments, for instance, would like to help customers navigate within stores to the department or to specific products that they seek, independent of store staff. Many would like to track customer routes within stores, where such information may be used later to facilitate product placement or advertising campaigns.
  • Convention facility managers and sports complex managers envision similar services, including guiding patrons to convention booths, dining facilities, points of interest, and even to the level of individual vending machines. Retailers would like to identify and push promotional messages (on an opt-in or opt-out basis) to customers and account holders nearing or entering their establishments.
  • Some commercial property brokers and lessors envision being able to better allocate space according to tracked usage. Financial and credit organizations may wish to verify card holder physical presence at points of transaction.
  • Examples of potential users and applications include, but are not limited to:
  • the customer's mobile device pushes its ID to the store's identification and/or navigation system or is detected and ID'd by the store's system.
  • the store's system reports the ID and presence to a server, which is typically, but not necessarily, located on site.
  • the server/system pushes promotions, advertising, and/or discount information to the customer. Real-time or post-visit store trail mapping and display time analysis is also facilitated for current or later use by the store.
  • the Navigation and Targeted Promotion functionality may be utilized in combination or separately.
  • Mall/Shopping Center Targeted Promotion: A customer nears a store in the mall/shopping center.
  • the customer's mobile device pushes its ID to the mall's identification and/or navigation system or is detected and ID'd by the mall's system.
  • the mall's system reports the ID and presence to a server, which is typically, but not necessarily, located on site.
  • the server/system pushes promotions, advertising, and/or discount information and store location(s) to the customer.
  • the promotion, advertising, and/or discount information may only be for stores in the customer's vicinity.
  • Real-time or post-visit trail mapping and display time analysis is also facilitated for current or later use by the mall.
  • individual shopper mall navigation is supported on a by-store and/or by-product category and/or by-brand basis.
  • Sports or Entertainment Facility Attendee Navigation: This is similar to Convention Venue Navigation described above. Event attendees receive an optional mobile device app in advance of the event or upon entry to the venue.
  • the mobile device app lists all booths, displays, seminars, dining, vending, and other facilities, and provides on-demand personal navigation to these locations.
  • the customer's mobile device pushes its ID to the store's identification and/or navigation system or is detected and ID'd by the store's system.
  • the store's system reports the ID and presence to a server which is typically, but not necessarily, located on site.
  • the server/system accesses customer account information for store personnel assignment for service and/or follow-up on past, recent, or pending purchase or account activity.
  • An identification and/or navigation system locates, IDs, tracks, and reports building tenants' movements, including where and when groups of tenants gather, etc.
  • the system provides office planners, real estate managers and other authorized users with information for space planning to support functional units, meeting room placement and sizing, etc.
  • the system additionally supports person locator and personal navigation applications.
  • Emergency Location (Individual): A person uses their mobile device to call 911 or another designated number/code for emergency response.
  • An indoor location system locates the caller or locates and ID's the caller. The indoor system reports/pushes this information to a macro location system, is queried by the macro system for pertinent information, or reports the information to a database from which emergency caller information is retrieved.
  • Transaction ID Verification / Near-Field Location: A system locates and identifies mobile devices of persons in close vicinity of and using credit and debit cards for transactions, e.g., at point-of-sale ("POS") devices, ATMs, etc.
  • Identification and/or location technologies for use indoors include, but are not limited to:
  • RF Ranging (which may be based on Wi-Fi access points or dedicated infrastructure);
  • Proximity Detection (which may be based on distributed antenna systems (“DAS”) antennas, Wi-Fi access points, or dedicated infrastructure);
  • DAS distributed antenna systems
  • RF Pattern Matching (which may be based on Wi-Fi access points, small cells, or dedicated infrastructure);
  • RFID radio frequency identification technology
  • Pseudolites (which may be based on ground transmitters/signal sources that emulate the signal structure of GPS satellites);
  • Sound Detection (which may be infrasonic, ultrasonic, or audible scrambled signals);
  • GPS Global Positioning System
  • GNSS Global Navigation Satellite Systems
  • GPS The U.S. GNSS system
  • GPS is commonly deployed in a variety of devices, such as handheld GPS receivers that provide latitude and longitude as well as maps for navigation.
  • GNSS other than GPS are becoming a reality, including Galileo, GLONASS, and Beidou.
  • GPS satellites in Earth orbit transmit data signals that can be received by GPS receivers located in devices such as wireless phones.
  • the position of the device is calculated using a time-based approach from the data received from multiple GPS satellites.
  • the position calculation can be made in the device itself or by a remote server to which the device transmits the received/measured GPS data.
  • GPS signals are relatively weak and signal attenuation frequently renders GPS positioning unusable if the GPS device is within a building or even under dense tree foliage.
  • Assisted-GPS, in which a central server sends certain GPS almanac and ephemeris data to a GPS device via a wireless data path, can be employed as a means to reduce time-to-first-fix ("TTFF") and to help improve overall location performance.
  • TTFF time-to-first-fix
  • IR location systems determine the position of objects based on sensed presence. Each object to be tracked requires a proprietary emitter that periodically transmits an IR beacon containing a unique code. Specialized IR receivers placed throughout a facility detect the beacons and determine the position of the object based on the known location of the detecting IR receiver. Because IR signals do not penetrate opaque materials such as walls and ceilings, an IR tracking system may require multiple receivers in each room to assure full coverage of the area. Since the IR path is essentially line of sight, problems can occur if the tracked asset itself blocks the view from the IR tag to the reader.
  • Location by means of proximity detection simply provides an indication that a mobile device has been detected within range of a sensor or receiver. If the target mobile is detected, the location system typically reports as the mobile's location the location of the detecting antenna or zone. With the exception of the special case of near-field detection, proximity detection alone may not provide indoor-navigation-level accuracy. However, a combination of proximity and other methods may achieve objective performance. Proximity detection can provide a basis for geo-fencing applications.
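The proximity-plus-geo-fencing idea above can be sketched as follows. The zone names, antenna IDs, and detection feed are illustrative assumptions, not part of any real system: the mobile's reported location is simply the zone of the detecting antenna, and a geo-fence event fires whenever a device's detected zone changes.

```python
# Proximity-based location sketch: a device's position is reported as the
# zone of whichever antenna/sensor detected it; a geo-fence event fires
# when the detected zone changes. All names below are hypothetical.

ANTENNA_ZONES = {
    "ap-entrance": "entrance",
    "ap-electronics": "electronics",
    "ap-checkout": "checkout",
}

def track_zones(detections):
    """detections: iterable of (device_id, antenna_id) events.
    Returns a list of (device_id, old_zone, new_zone) fence crossings."""
    current = {}   # device_id -> last known zone
    events = []
    for device_id, antenna_id in detections:
        zone = ANTENNA_ZONES.get(antenna_id)
        if zone is None:
            continue  # unknown antenna; ignore the event
        prev = current.get(device_id)
        if zone != prev:
            events.append((device_id, prev, zone))
            current[device_id] = zone
    return events

feed = [("mob-1", "ap-entrance"), ("mob-1", "ap-entrance"),
        ("mob-1", "ap-electronics"), ("mob-2", "ap-checkout")]
print(track_zones(feed))
```

Repeated detections at the same antenna produce no event, which is the usual debouncing behavior wanted for geo-fence triggers.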
  • Because GPS/GNSS signals are so attenuated or otherwise compromised as to be unusable indoors, a system of local, ground-based GPS-like signal sources could be employed to overcome that handicap and be useful to GPS-equipped devices located within structures.
  • Being able to deploy one's own (indoor) pseudolite positioning system, independent of GPS, could, theoretically, leverage GPS capabilities embedded in smartphones and other mobile devices to provide reliable high-accuracy indoor location.
  • Pseudolites ("pseudo-satellites") are small transceivers that are not GPS satellites, but that perform functions common to those satellites.
  • Pseudolites have recently gained more attention in the context of indoor location.
  • Use of GPS frequencies is, for reasons that are readily understandable, highly protected and restricted by the U.S. Government.
  • An RFID location system includes RFID scanners installed throughout a facility that interrogate either active (radio transceivers) or passive tags that attach to objects.
  • Battery-powered active tags allow up to a twenty-foot range between the scanner and the tags.
  • Passive tags have no batteries, but typically must be relatively close to the scanner (within inches or a few feet). Because of the limited range of passive tags, active RFID tags are the type usually found in positioning systems.
  • a centralized server stores the unique tag codes that the scanners collect and the server is able to identify and display the location of each tag according to which scanner detects a tag.
  • RFID systems determine position based only on the presence of the object in a particular area, so the accuracy of an active RFID system is dependent on the number and positioning of the scanners. Scanner repositioning may be necessitated due to changing floor layout or walls.
  • RFID systems that operate in the same frequency band as wireless LANs can pose RF interference issues with the LAN.
  • RFPM RF pattern matching
  • the mobile device to be located reports the RF characteristics of source signals (for instance, from multiple cell sites) that it observes at its current location, and the location system attempts to match those characteristics against a pre-established database of RF characteristics previously measured and recorded at each of many specific geographic points.
  • the location system determines as the target mobile's location the geographic point that yields the closest match of the mobile's reported RF characteristics and the pre-observed RF characteristics.
  • RFPM positioning accuracy is best in environments where the RF sources are densely deployed and poor where signal sources are sparse.
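A minimal RFPM sketch under stated assumptions: the fingerprint database maps surveyed grid points to per-source RSSI values (the grid points, source names, and dBm figures below are invented for illustration), and the mobile's reported characteristics are matched to the closest surveyed point by Euclidean distance over the common signal sources.

```python
# RF pattern matching sketch: match a mobile's reported signal
# characteristics against a pre-surveyed fingerprint database. Real
# fingerprints may also include timing, phase, and other features;
# here each fingerprint is just RSSI (dBm) per signal source.
import math

# Pre-established database: surveyed point -> {source_id: RSSI in dBm}
FINGERPRINT_DB = {
    (0, 0): {"cell-A": -60, "cell-B": -75, "cell-C": -90},
    (0, 5): {"cell-A": -65, "cell-B": -70, "cell-C": -85},
    (5, 0): {"cell-A": -75, "cell-B": -80, "cell-C": -70},
}

def locate(reported):
    """Return the surveyed point whose fingerprint is closest (Euclidean
    distance over the signal sources both sides observed)."""
    def dist(fingerprint):
        common = set(fingerprint) & set(reported)
        return math.sqrt(sum((fingerprint[s] - reported[s]) ** 2
                             for s in common))
    return min(FINGERPRINT_DB, key=lambda p: dist(FINGERPRINT_DB[p]))

print(locate({"cell-A": -64, "cell-B": -71, "cell-C": -86}))  # → (0, 5)
```

This also makes the accuracy remark above concrete: with few or weakly varying sources, many grid points yield nearly equal distances, so the closest match is poorly determined.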
  • Devices to be located include a radio module that on a scheduled basis transmits an RF signal containing a unique identification code.
  • Sensors (modified Wi-Fi access points or separate sensors) installed in the area of interest receive the coded information and locate the device by means of proximity or other methods.
  • Some proprietary sensor networks calculate a device's location using measurements of signal strength or signal travel time from the tracked device to multiple sensors. Signal attenuation and multipath can negatively impact performance.
  • Wi-Fi-based sensor network tracking solutions can allow the access points to carry typical data traffic associated with Wi-Fi users, a cost advantage over dedicated (non-Wi-Fi) location systems. Since the device transmit schedule impacts battery life, the transmit schedule must be carefully managed to balance battery life against tracking and accuracy requirements.
  • the measured travel time or arrival time of mobile RF signals can be used in various ways to calculate a mobile's location and/or distance from a known reference point such as a cell tower, as is known in the art.
  • Timing-based location methods are familiar as uplink time difference of arrival (“U-TDOA”), observed time difference of arrival (“O-TDOA”), round trip time (“RTT”), timing advance (“TA”), multiple range estimation location (“MREL”), etc., and for years have been used to support "macro-area” emergency caller location and other applications. More recently, timing methods have been explored for indoor location and these development efforts continue.
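To make the TDOA family of methods concrete, here is a deliberately simple sketch, not any production algorithm: four receivers at assumed known positions measure the difference in a signal's arrival time relative to a reference receiver, and a coarse grid search finds the point whose predicted time differences best fit the measurements (real systems use closed-form or iterative least-squares solvers instead).

```python
# Illustrative U-TDOA-style solver. Receiver coordinates and the test
# position are made-up values; the grid search stands in for the
# least-squares solvers used in practice.
import math

C = 299_792_458.0  # speed of light, m/s
RECEIVERS = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0), (100.0, 100.0)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def predicted_tdoas(p):
    """Arrival-time differences at each receiver relative to RECEIVERS[0]."""
    d0 = dist(p, RECEIVERS[0])
    return [(dist(p, r) - d0) / C for r in RECEIVERS[1:]]

def locate(measured, step=1.0):
    """Coarse grid search over a 100 m x 100 m area for the point whose
    predicted TDOAs have the smallest squared error vs. the measurements."""
    best, best_err = None, float("inf")
    y = 0.0
    while y <= 100.0:
        x = 0.0
        while x <= 100.0:
            err = sum((m - p) ** 2
                      for m, p in zip(measured, predicted_tdoas((x, y))))
            if err < best_err:
                best, best_err = (x, y), err
            x += step
        y += step
    return best

print(locate(predicted_tdoas((30.0, 70.0))))  # recovers (30.0, 70.0)
```

Indoors, multipath perturbs the measured arrival times, which is one reason these development efforts for indoor timing-based location continue.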
  • The large installed base of Wi-Fi devices provides an attractive technology basis for location solutions.
  • a Wi-Fi-based solution can leverage the large base and economies of scale of installed networks and end user devices.
  • a Wi-Fi-based location system might support any type of location- aware application that involves PDAs, laptops, bar code scanners, voice-over-IP phones and other 802.11-enabled devices.
  • Proximity-oriented Wi-Fi location has been utilized for several years to support macro-area LBAs; in this usage a Wi-Fi access point ("AP") detected by a mobile device is referenced against a large (remote) database of geographic locations/addresses of APs.
  • AP Wi-Fi access point
  • Wi-Fi location has been used for location on a large-area, "macro-positioning" basis that generally yields accuracy to the level of a building or address.
  • many established vendors and startups have in the past year taken up the challenge of highly accurate indoor location using Wi-Fi access points as reference positions.
  • Wi-Fi-based RFPM for indoor positioning
  • vendors are exploring the use of other location technologies, such as timing-based position calculation using Wi-Fi access point references.
  • If the video feed shows a large number of users, it may not be easy to differentiate the movement of one particular user versus another, no matter how finely the video feed is examined in time.
  • the video feed may show only what we will hereinafter refer to as "blobs", i.e., objects that cannot be clearly characterized as person A rather than person B, but simply as a vague shape conforming to the outline of a human being when viewed from the vantage point of a video camera that is typically placed at ceiling level.
  • a video based location tracking system must have a user identification component as well as a location determining and tracking component. Since we are particularly interested in mobile device users, the identification component can occur in some interaction with the mobile, whereas the location determining and tracking component is independent of whether the user has or does not have an active mobile device in his possession. Additionally, the mobile device itself can be involved in the location determination and tracking, quite irrespective of whether there is a purely video location tracking system in operation.
  • any of the concepts developed here and applicable to locating and tracking human users indoors may equally well be applied to humans outdoors as well as other objects in different settings. Examples of interest could include tracking poachers in a game preserve observed from drones, cars on roads, boats on a lake, train cars in a switchyard, aircraft at an airport, inventory items, etc. While certain embodiments described below are discussed with respect to a store or other particular setting, those of skill in the art will readily understand that the described techniques and principles are not limited to use in a store and are applicable to other indoor and/or outdoor settings consistent with the present disclosure.
  • the video network can generally locate a user to within a few meters, and since identity is maintained by continuously comparing the image with the reference (where "continuously" may mean "often", "periodically" with a small amount of time between comparisons, or within other appropriate time/space constraints so as to be able to distinguish user M from other objects), this information can be very specific, even to the degree of, for example, telling the user to turn around and look for an item on a particular shelf behind her.
  • the LE makes use of its video network to obtain characterizing pictures, for example facial pictures which are re-matched with a reference image(s) to establish identity.
  • a very well thought out, planned, and positioned network of video cameras is needed to achieve this end.
  • Other characteristics such as size, gait, etc. are also potentially applicable to the re-identification problem.
  • temporary characteristics that generally remain fixed at least for the duration of this store visit, such as the color of the shirt or jacket worn by the user can also be used.
  • the LE can maintain separate location tracks for both objects diverging from blob AB until such time as when identity can be re-established using better positioned cameras.
  • This maintenance of two separate tracks can easily be generalized to allow for multiple divergences and multiple unresolved candidate track maintenance over the short haul.
  • the LE system can maintain a large number of viable tracks for each user until these tracks can be resolved to produce the correct track.
  • the initial identification is achieved using video.
  • a case of interest is where the user M and the location determining entity LE share an application that compares stored images of the user to the images available on the video feed. If a good match is established, LE has acquired the identity of M.
  • T the non-video technology mix used to re-acquire identity
  • T could be any one of the technology types used in location as detailed earlier, or a combination of one or more of them.
  • T implicitly has some form of identity of the user.
  • This concept can even be activated and applied to users that have a currently-known identity. If such a user is the only occupant (only blob) in some sub-region S in which T places him, then the identity of this user as derived from T must match whatever identity that blob has been currently assigned. If that is not true, it must mean that the user identity has not been derived correctly. It can be noted that for each possible discrete set of measurement values in T, it is possible to define such a sub-region. Thus, the location space can be thought of as having a potentially very large number of possibly overlapping sub-regions, each of which maps to a set of observations in T. An algorithm running in the background can examine when any one of these sub-regions shows a single blob in the video imagery. At every such time, user identity can be confirmed.
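The background algorithm just described can be sketched as follows. Rectangular sub-regions, the blob coordinates, and the assignment table are all illustrative assumptions; the point is the decision rule: act only when T's sub-region contains exactly one blob, then either confirm the identity or flag a mismatch.

```python
# Background confirmation sketch: when technology T places a known user
# in a sub-region that the video shows containing exactly one blob, that
# blob's identity can be confirmed or a mismatch flagged. Regions are
# axis-aligned rectangles here purely for simplicity.

def blobs_in_region(blobs, region):
    """blobs: {blob_id: (x, y)}; region: ((xmin, ymin), (xmax, ymax))."""
    (xmin, ymin), (xmax, ymax) = region
    return [b for b, (x, y) in blobs.items()
            if xmin <= x <= xmax and ymin <= y <= ymax]

def confirm_identity(user_id, region, blobs, assignments):
    """assignments: {blob_id: user_id or None}.
    Returns ('confirmed' | 'mismatch' | 'ambiguous', blob_id or None)."""
    inside = blobs_in_region(blobs, region)
    if len(inside) != 1:
        return ("ambiguous", None)   # zero or several blobs: no decision
    blob = inside[0]
    if assignments.get(blob) in (None, user_id):
        return ("confirmed", blob)
    return ("mismatch", blob)        # identity was derived incorrectly

blobs = {"blob-1": (2.0, 3.0), "blob-2": (20.0, 3.0)}
assignments = {"blob-1": None, "blob-2": "user-7"}
# T places user-5 in sub-region S = ((0,0),(5,5)); only blob-1 is there.
print(confirm_identity("user-5", ((0, 0), (5, 5)), blobs, assignments))
```

Run continuously over the many overlapping sub-regions, each single-blob event like this incrementally confirms or repairs the identity assignment.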
  • each blob is a candidate for one of several identities. This concept is elaborated on further below.
  • T can help re-acquire identity
  • T is used to communicate to the user to raise his hand.
  • raising his hand allows the video component to fully identify the user.
  • a suggestion conveyed via T to the user such as "turn around so we can help you better,” can lead the user to perform some action which the video imagery observes in S so as to identify the user.
  • these induced user behaviors can be made local to movements or rotations of the mobile device itself. Such movements could lead to information being passed back to the LE, such as information from a MEMS sensor in the device.
  • the application that the user has signed up for to provide location specific information may have specific user interaction built in.
  • One example of this might be where the phone displays something to the user which is taken as a signal to wave the phone. This action can then be picked up by the video network to identify the user. If MEMS (or other) data is being reported back to the LE, further verification of whether this user did in fact wave his phone will also be available.
  • a general rule that can be distilled from the above example is to observe the variation in the Wi-Fi signals associated with a particular known identity, and then to use, for example, the general principles of signal attenuation with distance to associate the proper blob, thereby associating the proper human with the Wi-Fi identity.
  • the Wi-Fi signals themselves have very poor location resolution on their own but due to the independent movement of the users, the changes in the signals can allow us to estimate what identity should go with what blob, or human-like object.
  • Variations in the signal power caused not simply by distance but by angle, as might occur where a transmitting antenna has a certain beam shape (where this shape is known by the LE) can also be matched to the movement of blobs.
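The association rule distilled above can be sketched numerically. The RSSI series and per-blob distance tracks below are synthetic assumptions; the idea is that a device's RSSI at a known access point rises as its (unknown) carrier approaches and falls as they move away, so the blob whose distance-to-AP series is most strongly anti-correlated with the RSSI series is the best identity match.

```python
# Sketch of Wi-Fi-variation-to-blob association: correlate a device's
# RSSI series against each video blob's distance-to-AP series and pick
# the most strongly anti-correlated blob (closer => stronger signal).
# A real system would use smoothed, time-aligned measurements.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def match_blob(rssi_series, blob_distance_series):
    """Return the blob whose distance-to-AP track is most negatively
    correlated with the device's RSSI track."""
    return min(blob_distance_series,
               key=lambda b: pearson(rssi_series, blob_distance_series[b]))

rssi = [-80, -70, -55, -68, -82]                 # approaches AP, then leaves
blob_dists = {
    "blob-A": [25.0, 15.0, 3.0, 12.0, 27.0],     # walks past the AP
    "blob-B": [5.0, 6.0, 5.5, 6.2, 5.8],         # loiters near the AP
}
print(match_blob(rssi, blob_dists))  # → blob-A
```

A known antenna beam shape, as noted above, would simply replace the monotone distance-attenuation assumption with an angle-dependent predicted-signal model before correlating.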
  • The store may insert a few judiciously-placed Wi-Fi access points into the overall identification/location/tracking system. For example, if one had such a Wi-Fi access point in a passageway of a store where the signal increases dramatically and falls away similarly with distance, every user passing through is easily matched, provided the video network provides even some low-grained coverage of this passage. Alternatively, if the store is crowded and many users pass through the passageway at about the same time, we would be able to limit the identity ambiguity to the number of such users; that is, we would have groups of users who match collectively to a group of identities and then sort out the individual identities at a later time using the principles discussed herein.
  • Non-Wi-Fi signals, such as cellular signals, can substitute for Wi-Fi signals and be applied toward identity discovery in a similar manner.
  • Such a technique only needs to have some form of low resolution mapping of the signals or other measurable feature in the region of interest as well as some variation with location that can be differentiated. Assume, for example, that there is a window in the store where a passing user is able to pick up some distant cellular transmitter. If the user's mobile device reports this observation of the distant transmitter to a location application, and this event of the user passing the window is observable by a video network, we once again have a means of assigning identity.
  • the principle incorporates the variation of the entire observable set (or parts thereof) of measurements using technique T, due to the movement of a user within a venue, and correlating that variation with observed blobs in a video system.
  • a blob in a video can be associated to an identity.
  • one immediate advantage to the LE is that if any one of the group is identified, then by observing the common or nearby location of other video observed humans (or blobs), the LE can assign that group a set of identities. So, if there were a total of "N" identities that had at some stage not yet been assigned, and this group was of size "M", where M is at most N, then even without explicit assignment of individual identity to every member of the group the LE can at least temporarily assign the M identities to the group while concerning itself with the problem of which objects to assign the remaining (N-M) identities. In other words, it simplifies the identity assignment problem.
  • the unit with connectivity to the device of interest may also see many other mobile devices that are in proximity to the unit. Thus, in R1 it may see a set S1 of mobile IDs. Similarly, in R2 it may see a set S2 of mobile IDs, and in R3 it may see a set S3 of mobile IDs. Since the image of interest was in each of these regions, the mobile device ID for that image must be in each of these sets. Therefore, the image of interest must be in the intersection of these sets.
  • the identity of the image, insofar as it can be linked to the identity of the mobile device associated with the image, can be sequentially narrowed down by taking the intersection of set S1 with set S2, and then the intersection of that result with S3, and so on.
  • the identity, at least in terms of the mobile device associated with the image, has been discovered.
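The sequential narrowing just described amounts to a running set intersection. A minimal sketch, with invented mobile IDs standing in for the sets S1, S2, S3 seen in successive regions:

```python
# Sequential identity narrowing: the device belonging to the tracked
# image must appear in the ID set observed in every region the image
# visits, so intersecting those sets shrinks the candidate pool.

def narrow_identity(id_sets):
    """id_sets: iterable of sets of mobile IDs seen in successive regions
    visited by the tracked image. Returns the surviving candidates."""
    candidates = None
    for seen in id_sets:
        candidates = set(seen) if candidates is None else candidates & seen
        if len(candidates) <= 1:
            break  # resolved (or contradictory); no need to continue
    return candidates

S1 = {"mob-1", "mob-2", "mob-3"}   # IDs seen while the image was in R1
S2 = {"mob-2", "mob-3", "mob-9"}   # IDs seen while the image was in R2
S3 = {"mob-3", "mob-4"}            # IDs seen while the image was in R3
print(narrow_identity([S1, S2, S3]))  # → {'mob-3'}
```

An empty result would indicate an inconsistency (e.g., a tracking error or a device that stopped transmitting), which is itself useful diagnostic information.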
  • the object may be one or more individual persons, a group of people, a vehicle, a unit in an inventory, etc.
  • the object may be located outside or may be located in an enclosure, such as, but not limited to, a store, a shopping mall, a sports arena, a convention hall, etc.
  • the signature of the object includes a first and a second set of information.
  • the reference identification may be a set of measurable data such as, but not limited to, a picture, a video still or video clip, a phone number of a wireless device associated with the object, a MAC address for a wireless device associated with the object, an infrared map, a mobile device application, etc.
  • the reference identification for the object should be sufficient to distinguish the object from another object that is within a particular space, sub-region, etc., that is being monitored.
  • a first set of information for the object is determined using a first monitoring system.
  • the first set of information includes location information for the object.
  • a second set of information for the object is determined using a second monitoring system.
  • the second set of information includes identification information for the object, such as discussed above.
  • the second set of information is compared to the reference identification determined at block 110. If the second set of information and the reference identification match, within some predetermined tolerance level, then an identification of the object has been made.
  • the signature of the object is determined based at least in part on the first set of information and the second set of information.
  • the object is tracked based at least in part on the determined signature of the object.
  • the object is provided with a third set of information such as, but not limited to, location information, navigation information, sales promotion information, advertising information, a mobile device application, etc.
  • the first monitoring system may be, but is not limited to, a video-based system, a sound-based system, an optics-based system, etc.
  • the second monitoring system may be, but is not limited to, a local area network, a Wi-Fi network, an RF ranging system, a proximity detection system, an RF pattern matching system, an RFID system, a pseudolite system, a GPS rebroadcast system, a GNSS rebroadcast system, a sound detection system, a light modulation system, a magnetic anomaly detection system, etc.
  • the first monitoring system and the second monitoring system may be any of the preceding systems or networks.
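A minimal sketch of the flow above (determine location, compare identification against the reference ID within a tolerance, combine into a signature). The Signature class and the matches predicate are illustrative assumptions; the disclosure does not prescribe a data structure or comparison function:

```python
from dataclasses import dataclass


@dataclass
class Signature:
    location: tuple        # first set of information (e.g., (x, y, z))
    identification: str    # second set of information (e.g., a MAC address)


def determine_signature(location_info, id_info, reference_id, matches):
    """If the identification information matches the stored reference
    ID (per the supplied tolerance predicate), combine it with the
    location information into a signature; otherwise report failure."""
    if matches(id_info, reference_id):
        return Signature(location=location_info, identification=id_info)
    return None  # no identification of the object has been made


# Illustrative use: exact equality on a MAC address stands in for the
# "predetermined tolerance level" comparison.
sig = determine_signature((3.0, 7.5, 0.0), "aa:bb:cc:dd:ee:ff",
                          "aa:bb:cc:dd:ee:ff", matches=lambda a, b: a == b)
```

The resulting signature can then drive the tracking and information-push steps described above.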
  • the object/user is associated with a wireless device that has a mobile application/software that requires the user (or someone else) to take one or more sense maps/reference IDs of the user (e.g., pictures, infrared maps, etc.) from different angles and face views and, optionally, extended time maps of the user (e.g., video, etc.) either stationary or performing one or more specific tasks (walking, jumping, packing boxes, etc.).
  • the sense map is stored on the user's wireless device/mobile application.
  • a sensing/monitoring network takes a similar sense map of the user in a possibly different place at a later time.
  • the user's wireless device/mobile application communicates with the sensing/monitoring network and compares one or more sense maps/reference IDs with the sense map taken by the sensing/monitoring network and if the comparison is within some predetermined tolerance, an identification of the user has been determined.
  • once the user enters a particular monitored space, such as a store, the user's location may be determined by the same or a different sensing/monitoring network such as, for example, a video-based system.
  • the signature of the user can be ascertained by combining the determined identification of the user with the user's location.
  • the user may be tracked throughout the store (or, perhaps, beyond depending on the reach of the video-based system) by the sensing/monitoring network (or a separate tracking system) and information may be pushed to the user such as the third set of information described above.
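The sense-map comparison "within some predetermined tolerance" might be sketched, for sense maps represented as flat feature/pixel vectors, as a mean-absolute-difference test. The vector representation and the threshold value are assumptions for illustration, not specified by the disclosure:

```python
def sense_maps_match(map_a, map_b, tolerance):
    """Compare two sense maps, represented here as equal-length flat
    feature/pixel vectors; identification succeeds when the mean
    absolute difference is within the predetermined tolerance."""
    if len(map_a) != len(map_b):
        return False
    mad = sum(abs(a - b) for a, b in zip(map_a, map_b)) / len(map_a)
    return mad <= tolerance


# A stored reference map vs. a map taken later by the monitoring network:
stored = [0.9, 0.1, 0.4, 0.7]
observed = [0.8, 0.2, 0.4, 0.6]
print(sense_maps_match(stored, observed, tolerance=0.15))  # prints True
```

A production system would use a domain-appropriate comparison (e.g., face-recognition features for pictures, gait features for video), but the tolerance-gated structure is the same.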
  • the signature of the object includes a first set of information and a second set of information.
  • the object may be one or more individual persons, a group of people, a vehicle, a unit in an inventory, etc.
  • the object may be located outside or may be located in an enclosure, such as, but not limited to, a store, a shopping mall, a sports arena, a convention hall, etc.
  • the signature of the object includes a first and a second set of information.
  • a reference identification is determined for the object.
  • the reference identification may be a set of measurable data such as, but not limited to, a picture, a video still or video clip, a phone number of a wireless device associated with the object, a MAC address for a wireless device associated with the object, an infrared map, a mobile device application, etc.
  • the reference identification for the object should be sufficient to distinguish the object from another object that is within a particular space, sub-region, etc., that is being monitored.
  • the first set of information for the object is determined using a first monitoring system.
  • the first set of information includes location information for the object.
  • a second set of information for the object is determined using a second monitoring system where the second set of information includes a first and a second portion.
  • the second set of information includes identification information for the object, such as discussed above. The first portion of the second set of information for the object is determined from a second monitoring system and the second portion of the second set of information is determined from a third monitoring system.
  • the first and second portions of the second set of information are each individually compared with the reference identification determined at block 210.
  • either the first or the second portion of the second set of information is selected based on the comparison.
  • the selection is based on which of the first and second portion more closely matches the reference identification.
  • the "closeness" matching may be based on which set of data of the first and second portions has fewer deviations from the reference identification set of data.
  • the matching may be based on which set of data of the first and second portions is within some predetermined tolerance level.
  • the signature of the object is determined based at least in part on the first set of information and the selected portion of the second set of information.
  • the object is tracked based at least in part on the determined signature of the object.
  • the object is provided with a third set of information such as, but not limited to, location information, navigation information, sales promotion information, advertising information, a mobile device application, etc.
  • the first monitoring system may be, but is not limited to, a video-based system, a sound-based system, an optics-based system, etc.
  • the second monitoring system may be, but is not limited to, a local area network, a Wi-Fi network, an RF ranging system, a proximity detection system, an RF pattern matching system, an RFID system, a pseudolite system, a GPS rebroadcast system, a GNSS rebroadcast system, a sound detection system, a light modulation system, a magnetic anomaly detection system, etc.
  • the first monitoring system and the second monitoring system may be any of the preceding systems or networks.
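The portion-selection step above (choosing whichever portion of the second set of information more closely matches the reference identification) can be sketched as follows. The field-mismatch count used as the deviation measure is one possible choice consistent with the "fewer deviations" criterion, not a mandated metric, and the sample data are hypothetical:

```python
def select_portion(portion1, portion2, reference, deviation):
    """Pick whichever portion of the second set of information more
    closely matches the reference identification, i.e. has the
    smaller deviation from the reference data set."""
    if deviation(portion1, reference) <= deviation(portion2, reference):
        return portion1
    return portion2


def field_mismatches(candidate, reference):
    # One possible deviation measure: count of fields that differ
    # from the reference identification's data set.
    return sum(c != r for c, r in zip(candidate, reference))


reference = ("cam-still", "tall", "red jacket")
portion1 = ("cam-still", "tall", "red coat")    # one mismatch
portion2 = ("ir-map", "short", "red jacket")    # two mismatches
assert select_portion(portion1, portion2, reference, field_mismatches) == portion1
```

A predetermined tolerance level, as described above, could additionally gate the winner before it is accepted.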
  • a block diagram 300 is depicted for a system 303 for determining a signature of an object 301 according to an embodiment of the present subject matter.
  • the signature of the object includes a first set of information and a second set of information.
  • the object may be one or more individual persons, a group of people, a vehicle, a unit in an inventory, etc.
  • the object is located in a location space 302 which may be located outside or in an enclosure, such as, but not limited to, a store, a shopping mall, a sports arena, a convention hall, etc.
  • the signature of the object includes a first and a second set of information.
  • the system 303 includes a first monitoring system 321 having at least a first sensor 322 and a second monitoring system 331 having at least a second sensor 332.
  • First monitoring system 321 determines the first set of information for the object 301.
  • Second monitoring system 331 determines the second set of information for the object 301.
  • the first monitoring system may be, but is not limited to, a video-based system, a sound-based system, an optics-based system, etc.
  • the first sensor 322 may be a video camera that may be part of a video camera network which covers location space 302 which may be part of a larger space in which object 301 is to be monitored and/or tracked.
  • the second monitoring system may be, but is not limited to, a local area network, a Wi-Fi network, an RF ranging system, a proximity detection system, an RF pattern matching system, an RFID system, a pseudolite system, a GPS rebroadcast system, a GNSS rebroadcast system, a sound detection system, a light modulation system, a magnetic anomaly detection system, etc.
  • the second sensor 332 may be a Wi-Fi access point that may be part of a Wi-Fi network which covers location space 302 which may be part of a larger space in which object 301 is to be monitored and/or tracked.
  • the first monitoring system and the second monitoring system may be any of the preceding systems or networks.
  • system 303 includes circuitry and/or software and/or a memory device 311 for storing a reference identification for the object 301.
  • System 303 also includes a processor 380.
  • Processor 380 includes a comparator for comparing the second set of information with the reference identification in device 311, circuitry/device and/or software for determining the signature of the object 301 based at least in part on the first set of information and the second set of information, circuitry/device and/or software for tracking the object 301 based at least in part on the determined signature of the object, and circuitry/device and/or software for providing a third set of information to the object 301.
  • the third set of information may be, but is not limited to, one or more of location information, navigation information, sales promotion information, advertising information, and a mobile device application.
  • MLDC Mobile Location by Dynamic Clustering
  • the MLDC system is used to determine a location of a mobile device associated with an object using network measurement reports or similar information (which may include calibration data), clustering the information and comparing it with information received from the object's mobile device, and determining a location of the object/mobile device based on the comparison.
  • an MLDC system places a mobile device associated with an object in some region ⁇ x,y,z ⁇ of location space.
  • a video-based monitoring system, such as a video camera network ("VCN"), shows only one object in the {x,y,z} region where that object has coordinates (xv,yv,zv).
  • the MLDC system uses the (xv,yv,zv) coordinates as the location estimate of the object.
  • a system, such as system 303 in Figure 3, uses a first type of observation system to locate and track an object, and a second type of observation system to refine the location estimate from the first type of system, where the second type of observation system on its own is incapable of tracking the user.
  • system 303 uses Wi-Fi information to locate a user and then refines that location estimate using a VCN, where the VCN on its own cannot track the user.
  • the MLDC system uses Wi-Fi data to place an object's mobile device in some region ⁇ x,y,z ⁇ of location space.
  • Data from a VCN shows only one object in the ⁇ x,y,z ⁇ region where that object has coordinates (xv,yv,zv).
  • the MLDC system uses the (xv,yv,zv) coordinates as its location estimate of the object.
  • the VCN on its own can only tell how many objects (e.g., blobs, human like images) exist in one or more regions of the location space. The VCN cannot provide the identity of the object.
  • a VCN cannot provide continuous coverage over the locatable region (e.g., if the region includes an area which is out of range of the video cameras in the VCN), a situation may arise where the VCN cannot track an object.
  • the MLDC may be able to track the object where the VCN cannot, and where both the MLDC and the VCN are capable of tracking the object, the MLDC may use information from the VCN to improve its location estimate of the object.
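The refinement rule described above — adopt the VCN's coordinates when the VCN sees exactly one object inside the region the MLDC placed the device in, otherwise keep the coarser MLDC estimate — might look like the following. Modeling the {x,y,z} region as an axis-aligned box and using its centroid as the fallback estimate are assumptions for illustration:

```python
def in_region(point, region):
    """region is ((xmin, xmax), (ymin, ymax), (zmin, zmax))."""
    return all(lo <= c <= hi for c, (lo, hi) in zip(point, region))


def centroid(region):
    return tuple((lo + hi) / 2 for lo, hi in region)


def refine_location(mldc_region, vcn_objects):
    """Adopt the VCN coordinates when exactly one observed object lies
    inside the region the MLDC placed the device in; otherwise fall
    back to the coarser MLDC estimate (region centroid here)."""
    inside = [p for p in vcn_objects if in_region(p, mldc_region)]
    if len(inside) == 1:
        return inside[0]  # (xv, yv, zv) from the VCN
    return centroid(mldc_region)


region = ((0.0, 10.0), (0.0, 10.0), (0.0, 3.0))
print(refine_location(region, [(5.0, 5.0, 1.0), (20.0, 2.0, 1.0)]))
# prints (5.0, 5.0, 1.0): only one VCN object was inside the region
```

When two or more blobs occupy the region, the VCN fix is ambiguous (it cannot provide identity), so the MLDC estimate stands.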
  • algorithm A could be a generic "blob-following" algorithm.
  • if the object enters a region R that is not covered by the VCN, algorithm A fails and cannot recover. Any entry into a non-VCN-covered region is a potential failure point.
  • if the tracking of the object were being undertaken in conjunction with an MLDC system or any other location/tracking scheme using, e.g., Wi-Fi or other RF signals, the latter technique could track the object into and out of the region R. After the object has cleared region R, the latter technique could then pass tracking control back to the VCN.
  • Such a scheme would have the potential to utilize the VCN while not exhibiting complete failure points in the location space.
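A minimal sketch of this handoff, assuming each time step yields an optional VCN fix (None inside region R) and an always-available RF fix from the MLDC or similar scheme:

```python
def track(steps):
    """Per time step, prefer the VCN fix while the object is in VCN
    coverage; inside a non-covered region R the VCN fix is None, so
    the RF (e.g., MLDC/Wi-Fi) fix carries the track, and control
    passes back to the VCN automatically once the object clears R."""
    return [vcn_fix if vcn_fix is not None else rf_fix
            for vcn_fix, rf_fix in steps]


path = track([((1, 1), (1.2, 0.9)),   # VCN coverage
              (None,   (2.0, 2.1)),   # inside region R: RF only
              ((3, 3), (3.1, 2.8))])  # back in VCN coverage
print(path)  # prints [(1, 1), (2.0, 2.1), (3, 3)]
```

The result is a continuous track with no complete failure points in the location space, at the cost of coarser accuracy while inside R.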
  • a region in a 2-D or 3-D space may be calibrated where the space has one or more distinguishing features ⁇ F ⁇ with known co-ordinates and where a mobile device being calibrated has no knowledge of ground truth (i.e., where exactly it is in space).
  • the desired calibration data will include an observation set (i.e., measurements at a mobile device of whatever type) at a set of locations. These observations are assumed to be a function of the (x,y,z) coordinates of the 2-D or 3-D space and may sometimes be dependent on the first or higher derivatives with respect to time of the (x,y,z) coordinates, representing velocity, acceleration, etc.
  • a mobile device in the space records observation data and stores the observation data along with a time stamp. Any arbitrary form of motion of the mobile is permissible in order to obtain the observation data.
  • a VCN may observe and record the motion of the mobile device.
  • the VCN uses a clock with some known relationship to the clock used by the mobile device. On completion of the movement of the mobile device through the space, corresponding instants of time from the VCN and the mobile observation record are matched. Given the video record, the set ⁇ F ⁇ can permit accurate calculation of the (x,y,z) coordinates of the mobile device. Successive frames also permit the calculation of higher derivatives representing velocity and acceleration. Now the mobile location and higher derivatives can be matched to the observations. Calibration is achieved.
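The timestamp-matching step of this calibration procedure can be sketched as nearest-instant matching between the mobile's observation record and the VCN track. Identical clocks are assumed here for simplicity; with a known clock relationship, the offset would be applied to the observation timestamps first:

```python
import bisect


def calibrate(mobile_obs, vcn_track):
    """Pair each time-stamped mobile observation with the VCN ground
    truth at the nearest instant, producing (position, observation)
    calibration records.  vcn_track must be sorted by time."""
    times = [t for t, _ in vcn_track]
    records = []
    for t_obs, obs in mobile_obs:
        i = bisect.bisect_left(times, t_obs)
        nearest = min((j for j in (i - 1, i) if 0 <= j < len(times)),
                      key=lambda j: abs(times[j] - t_obs))
        records.append((vcn_track[nearest][1], obs))
    return records


# VCN frames give (x, y, z) at t = 0, 1, 2; the mobile logged two
# observation vectors (e.g., RSS readings) at t = 0.9 and t = 2.0:
vcn = [(0.0, (0, 0, 0)), (1.0, (1, 0, 0)), (2.0, (2, 0, 0))]
obs = [(0.9, "rss-A"), (2.0, "rss-B")]
print(calibrate(obs, vcn))
# prints [((1, 0, 0), 'rss-A'), ((2, 0, 0), 'rss-B')]
```

Velocity and acceleration, computed from successive VCN frames, could be appended to each record in the same way.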
  • Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
  • Embodiments of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible program carrier for execution by, or to control the operation of, data processing apparatus.
  • the tangible program carrier can be a computer readable medium.
  • the computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, or a combination of one or more of them.
  • processor encompasses all apparatus, devices, and machines for processing data, including by way of example, a programmable processor, a computer, or multiple processors or computers.
  • the processor can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a computer program does not necessarily correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
  • FPGA field programmable gate array
  • ASIC application specific integrated circuit
  • processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer.
  • a processor will receive instructions and data from a read only memory or a random access memory or both.
  • the essential elements of a computer are a processor for performing instructions and one or more data memory devices for storing instructions and data.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks.
  • a computer need not have such devices.
  • a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant ("PDA"), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, to name just a few.
  • Computer readable media suitable for storing computer program instructions and data include all forms of data memory, including non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks.
  • the processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a cathode ray tube (CRT) or liquid crystal display (LCD) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer.
  • Other kinds of devices can be used to provide for interaction with a user as well; for example, input from the user can be received in any form, including acoustic, speech, or tactile input.
  • Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components.
  • the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network.
  • Examples of communication networks include a local area network (“LAN”) and a wide area network (“WAN”), e.g., the Internet.
  • the computing system can include clients and servers.
  • a client and server are generally remote from each other and typically interact through a communication network.
  • the relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Navigation (AREA)
  • Alarm Systems (AREA)

Abstract

The present disclosure describes systems and methods for determining a signature of an object such as a person, group of people, vehicle, inventory item, etc. In an embodiment, a reference ID is determined for an object, a location and identification are each determined for the object, and a signature of the object is determined from the object's determined location and identification.

Description

SYSTEM AND METHOD FOR MOBILE IDENTIFICATION AND TRACKING IN LOCATION SYSTEMS
RELATED AND CO-PENDING APPLICATIONS
[001] This application claims priority to co-pending U.S. provisional patent application entitled "Mobile Identification and Tracking in Location Systems", Serial Number 61/829,793 filed 31 May 2013, the entirety of which is hereby
incorporated herein by reference.
BACKGROUND
[002] Recently, there has been a marked increase in interest among carriers and non-carriers in high-accuracy "indoor" location of wireless devices. New, previously undefined, "commercial" value-added location-based applications ("LBA") focused on an increasing population of smartphone users underlie this market shift. Such value-added applications require high-accuracy indoor location support that is not available from wireless location systems currently deployed. The location performance
requirements of many of the new applications are well beyond those required to meet emergency services (E911) regulations, and it is apparent that products focused primarily on "macro" area E911 caller location cannot meet the performance requirements for many of the commercial value-added applications identified to date.
[003] Two distinct sets of applications, emergency services (e.g., E911 caller location) and "commercial" value-added location-based services, drive the need for high-performance indoor mobile location. The location support requirements of these two basic application sets are disparate and one should not assume that a location system oriented to meeting the needs of one of these application sets will necessarily meet those of the other.
[004] "Macro-Area" Emergency Services Caller Location (e.g., E911, etc.)
[005] To meet U.S. and Canadian E911 regulations, wireless carriers must provide reliable and accurate caller location information for every 911 call placed from mobiles to emergency service bureaus ("PSAPs" - public safety answering points).
Wireless E911 calls must be located within an average accuracy range of approximately 100 meters and with latency not exceeding 30 seconds. Network operators have installed wireless location systems that employ a variety of positioning technologies to establish a caller's location. Typically in today's operating environment, a serving wireless network detects the E911 call, identifies the mobile, and directly or indirectly launches a location request to the wireless location system of the serving carrier. Once a target mobile's location coordinates are determined, the location system typically sends that information through the serving network for delivery to or retrieval by the PSAP to which the network delivered the emergency call. Carriers must report to federal and local agencies whether they comply with the caller location accuracy criteria. In the U.S., wireless carriers currently are permitted to perform all location accuracy compliance testing outdoors. However, the Federal Communications Commission ("FCC") is aware that the location technologies most prevalently in use for E911 mobile location generally suffer accuracy and/or reliability performance degradation when the target mobile is situated within a structure. The FCC has initiated a proceeding on the subject of indoor location performance and is expected to specifically address the issue, perhaps with modification of the E911 regulations.
[006] In countries other than the U.S. and Canada, emergency caller location regulation is scant. Such little regulation as may be in effect generally requires cell-ID-level location only. High-performance location systems are therefore not implemented.
[007] Certain key performance attributes/issues of currently available in-place "macro-area" E911 Phase II location systems include: (a) a design that produces mobile location accuracy of approximately 50m to 100m; (b) non-autonomous operation, i.e., individual target mobiles are located only as directed by wireless networks based on 911 call initiation; (c) time delay - location of target mobile occurs within 30 seconds; and (d) significant performance degradation if target mobile is located within a structure.
[008] Emerging Value-added Location Based Applications ("LBAs")
[009] Emerging commercial mobile LBAs are immensely diverse including, to name but a few examples, person-finder, social/interest, "push"
advertising/promotion, and micro personal navigation. The majority of such applications in use today are "over-the-top" ("OTT") in nature, i.e., the applications are provided by non-carrier enterprises. Examples include, but are not limited to, applications hosted on smartphones, supported by location capabilities resident within the mobile device itself, supported by individual non-carrier entities such as mobile device makers, application content providers, mapping services, and are not dependent whatsoever on any carrier's location facilities. However, each of these examples lacks the basic location accuracy and response time demanded by emerging indoor-specific LBAs.
[010] Accordingly, there is a need for the capability to perform high-accuracy indoor location of wireless devices, in a timely manner, to meet the needs of emerging LBAs. Such high performance indoor location capabilities require location accuracy at least an order of magnitude better than is the norm for "macro-area" emergency caller location. Many nascent applications falling into this category are "indoor navigation" oriented or require location accuracy and speed similar to that required by indoor personal navigation.
BRIEF DESCRIPTION OF THE DRAWINGS
[011] Figure 1 is a flow chart for determining a signature of an object according to an embodiment of the present subject matter. In a further embodiment, the wireless device is tracked using the determined signature. In a further embodiment, information is provided to the object.
[012] Figure 2 is a flow chart for determining a signature of an object according to another embodiment of the present subject matter. In a further embodiment, the wireless device is tracked using the determined signature. In a further embodiment, information is provided to the object.
[013] Figure 3 is a block diagram for an exemplary system for determining a signature of an object according to an embodiment of the present subject matter.
DETAILED DESCRIPTION
[014] The following description of the present subject matter is provided as an enabling teaching of the present subject matter and its best, currently-known embodiment. Those skilled in the art will recognize that many changes can be made to the embodiments described herein while still obtaining the beneficial results of the present subject matter. It will also be apparent that for some embodiments, some of the desired benefits of the present subject matter can be obtained by selecting some of the features of the present subject matter without utilizing other features. Accordingly, those skilled in the art will recognize that many modifications and adaptations of the present subject matter are possible and may even be desirable in certain circumstances and are part of the present subject matter. Thus, the following description is provided as illustrative of the principles of the present subject matter and not in limitation thereof and may include modification thereto and permutations thereof. While the following exemplary discussion of embodiments of the present subject matter may be directed towards or reference specific methods and/or systems for determining a signature of a mobile device, and/or mobile device identification and tracking methods and/or systems, it is to be understood that the discussion is not intended to limit the scope of the present subject matter in any way and that the principles presented are equally applicable to other methods and/or systems for determining a signature, identification, or tracking of a mobile device.
[015] Those skilled in the art will further appreciate that many
modifications to the exemplary embodiments described herein are possible without departing from the spirit and scope of the present subject matter. Thus, the description is not intended and should not be construed to be limited to the examples given but should be granted the full breadth of protection afforded by the appended claims and equivalents thereto.
[016] With reference to the figures where like elements have been given like numerical designations to facilitate an understanding of the present subject matter, various embodiments for methods and/or systems for determining a signature of a mobile device, and/or mobile device identification and tracking methods and/or systems are described.
[017] Emerging commercial mobile LBAs for such uses as person-finding, social/interest purposes, "push" advertising/promotion services, and micro personal navigation systems require relatively quick, high-accuracy indoor location performance. Such high performance indoor location capabilities require location accuracy at least an order of magnitude better than is the norm for "macro-area" emergency caller location.
[018] Large retail establishments, for instance, would like to help customers navigate within stores to the department or to specific products that they seek, independent of store staff. Many would like to track customer routes within stores where such information may be used later to facilitate product placement or advertising campaigns. As another non-limiting example, convention facility managers and sports complex managers envision similar services, including guiding patrons to convention booths, dining facilities, points of interest, and even to the level of individual vending machines. Retailers would like to identify and push promotional messages (on an opt-in or opt-out basis) to customers and account holders nearing or entering their establishments. Some commercial property brokers and lessors envision being able to better allocate space according to tracked usage. Financial and credit organizations may wish to verify card holder physical presence at points of transaction. Although there is some variation among the individual LBAs with respect to their specific location support requirements, in general LBAs can be considered to require very rapid location within, in an embodiment, an accuracy tolerance of roughly 3 meters.
[019] Examples of potential users and applications include, but are not limited to:
[020] Large Retail Store Navigation - a customer enters the store. The customer's mobile device pushes its ID to the store's identification and/or navigation system or is detected and ID'd by the store's system. An App and/or the store's system provides one or more of a map, directions, and navigation information to departments or products of interest to the customer. Real-time or post-visit store trail mapping and display time analysis is also facilitated for current or later use by the store.
[021] Large Retail Store Targeted Promotion - A customer enters the store. The customer's mobile device pushes its ID to the store's identification and/or navigation system or is detected and ID'd by the store's system. The store's system reports the ID and presence to a server, which is typically, but not necessarily, located on site. The server/system pushes promotions, advertising, and/or discount information to the customer. Real-time or post-visit store trail mapping and display time analysis is also facilitated for current or later use by the store. Those of skill in the art will readily understand that the Navigation and Targeted Promotion functionality may be utilized in combination or separately.
[022] Mall/Shopping Center Targeted Promotion - A customer nears a store in the mall/shopping center. The customer's mobile device pushes its ID to the mall's identification and/or navigation system or is detected and ID'd by the mall's system. The mall's system reports the ID and presence to a server, which is typically, but not necessarily, located on site. The server/system pushes promotions, advertising, and/or discount information and store location(s) to the customer. In an embodiment, the promotion, advertising, and/or discount information may only be for stores in the customer's vicinity. Real-time or post-visit trail mapping and display time analysis is also facilitated for current or later use by the mall. Additionally, in an embodiment, individual shopper mall navigation is supported on a by-store and/or by-product category and/or by-brand basis.
[023] Convention Venue Attendee Navigation - A convention attendee receives a mobile device app upon registration which lists all booths, displays, seminars, dining, vending, and other facilities, and provides on-demand personal navigation to these locations.
[024] Convention Venue Attendee Finder - A convention attendee receives a mobile device app upon registration which facilitates locating and navigating to other attendees.
[025] Sports or Entertainment Facility Attendee Navigation - this is similar to the Convention Venue Attendee Navigation described above. Event attendees receive an optional mobile device app in advance of the event or upon entry to the venue. The mobile device app lists all booths, displays, seminars, dining, vending, and other facilities, and provides on-demand personal navigation to these locations.
[026] Small/Medium Retail Customer Service - A customer enters a store.
The customer's mobile device pushes its ID to the store's identification and/or navigation system or is detected and ID'd by the store's system. The store's system reports the ID and presence to a server which is typically, but not necessarily, located on site. The server/system accesses customer account information for store personnel assignment for service and/or follow-up on past, recent, or pending purchase or account activity.
[027] Space Utilization Analysis - An identification and/or navigation system locates, ID's, tracks, and reports building tenants' movements, where and when groups of tenants gather, etc. The system provides office planners, real estate managers, and other authorized users with information for space planning to support functional units, meeting room placement and sizing, etc. The system additionally supports person locator and personal navigation applications.
[028] Emergency Location - Individual - A person uses their mobile device to call 911 or other designated number/code for emergency response. An indoor location system locates the caller or locates and ID's the caller. The indoor system reports/pushes this information to a macro location system, is queried by the macro system for pertinent information, or reports the information to a database from which emergency caller information is retrieved.
[029] Transaction ID Verification Near-Field Location - A system locates and identifies mobile devices of persons in close vicinity of and using credit and debit cards for transactions, e.g., at point-of-sale ("POS") devices, ATMs, etc.
[030] In addition to the basic location/navigation capabilities common to all of the above scenarios, many of the use cases described above include the ability to identify the customer/visitor to thereby provide an opportunity for unique service and use enhancements to the particular identified customer/visitor.
[031] Identification and/or Location Technologies for use indoors include, but are not limited to:
[032] RF Ranging (which may be based on Wi-Fi access points or dedicated infrastructure);
[033] Proximity Detection (which may be based on distributed antenna systems ("DAS") antennas, Wi-Fi access points, or dedicated infrastructure);
[034] RF Pattern Matching (which may be based on Wi-Fi access points, small cells, or dedicated infrastructure);
[035] RFID (radio frequency identification technology);
[036] Pseudolites (which may be based on ground transmitters/signal sources that emulate the signal structure of GPS satellites);
[037] GPS rebroadcast;
[038] Sound Detection (which may be infrasonic, ultrasonic, or audible scrambled signals);
[039] Light modulation; and
[040] Magnetic anomaly detection.
[041] Certain ones of the above technologies are briefly described below:
[042] Global Positioning System ("GPS")/Global Navigation Satellite Systems ("GNSS")
[043] The U.S. GNSS system, named GPS, is commonly deployed in a variety of devices, such as handheld GPS receivers that provide latitude and longitude as well as maps for navigation. GNSS other than GPS are becoming a reality, including Galileo, GLONASS, and Beidou. GPS satellites in Earth orbit transmit data signals that can be received by GPS receivers located in devices such as wireless phones. The position of the device is calculated using a time-based approach from the data received from multiple GPS satellites. The position calculation can be made in the device itself or by a remote server to which the device transmits the received/measured GPS data.
Reception of data from at least four satellite signals is required to accurately calculate a position. GPS signals are relatively weak and signal attenuation frequently renders GPS positioning unusable if the GPS device is within a building or even under dense tree foliage. Assisted-GPS ("A-GPS"), in which a central server sends certain GPS almanac and ephemeris data to a GPS device via a wireless data path, can be employed as a means to reduce time-to-first-fix ("TTFF") and to help improve overall location performance.
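The time-based position calculation described above can be sketched in simplified two-dimensional form. The following is a minimal illustration, not taken from the disclosure: the beacon coordinates, function names, and use of Gauss-Newton iteration are all assumptions. Each pseudorange equals the true range plus a common clock-bias distance, and the receiver solves jointly for position and bias:

```python
import math

def solve_linear(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def locate(beacons, pseudoranges, iterations=20):
    """Gauss-Newton fit of receiver position (x, y) and clock-bias
    distance b to pseudoranges (true range + b, both in meters)."""
    x = y = b = 0.0
    for _ in range(iterations):
        J, resid = [], []
        for (sx, sy), pr in zip(beacons, pseudoranges):
            r = math.hypot(x - sx, y - sy)
            J.append([(x - sx) / r, (y - sy) / r, 1.0])  # d(model)/d(x, y, b)
            resid.append(pr - (r + b))                   # observed minus modeled
        # Normal equations: (J^T J) * delta = J^T * resid
        JtJ = [[sum(J[k][i] * J[k][j] for k in range(len(J))) for j in range(3)]
               for i in range(3)]
        Jtr = [sum(J[k][i] * resid[k] for k in range(len(J))) for i in range(3)]
        dx, dy, db = solve_linear(JtJ, Jtr)
        x, y, b = x + dx, y + dy, b + db
    return x, y, b
```

With four or more measurements the bias can be separated from the geometry, which mirrors the requirement stated above that at least four satellite signals are needed (three spatial dimensions plus receiver clock offset).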
[044] Infrared ("IR")
[045] A type of light modulation system, IR location systems determine the position of objects based on sensed presence. Each object to be tracked requires a proprietary emitter that periodically transmits an IR beacon containing a unique code. Specialized IR receivers placed throughout a facility detect the beacons and determine the position of the object based on the known location of the detecting IR receiver. Because IR signals do not penetrate opaque materials such as walls and ceilings, an IR tracking system may require multiple receivers in each room to assure full coverage of the area. Since the IR path is essentially line of sight, problems can occur if the tracked asset itself blocks the view from the IR tag to the reader.
[046] Proximity Detection
[047] Location by means of proximity detection simply provides an indication that a mobile device has been detected within range of a sensor or receiver. If the target mobile is detected, the location system typically reports as the mobile's location the location of the detecting antenna or zone. With the exception of the special case of near-field detection, proximity detection alone may not provide indoor-navigation-level accuracy. However, a combination of proximity and other methods may achieve the objective performance. Proximity detection can provide a basis for geo-fencing applications.
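A minimal sketch of proximity reporting and a circular geo-fence test follows; the data layout, function names, and RSSI-based tie-breaking are illustrative assumptions, not taken from the disclosure:

```python
def proximity_locate(detections, antenna_positions):
    """Report as the mobile's location the position of the antenna that
    detects it with the strongest signal (the proximity method)."""
    best = max(detections, key=lambda d: d["rssi"])
    return antenna_positions[best["antenna"]]

def in_geofence(position, center, radius):
    # Simple circular geo-fence test around a point of interest.
    dx, dy = position[0] - center[0], position[1] - center[1]
    return dx * dx + dy * dy <= radius * radius
```

The reported accuracy is bounded by antenna spacing, which is why the text above notes that proximity alone may fall short of navigation-level accuracy.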
[048] Pseudolites
[049] If GPS/GNSS signals are so attenuated or otherwise compromised as to be unusable indoors, a system of local, ground-based GPS-like signal sources could be employed to overcome that handicap and be useful to GPS-equipped devices located within structures. Being able to deploy one's own (indoor) pseudolite positioning system, independent of GPS, could, theoretically, leverage GPS capabilities embedded in smartphones and other mobile devices to provide reliable high-accuracy indoor location. Such is the idea behind pseudo-satellites (pseudolites), small transceivers that are not GPS satellites, but that perform functions common to those satellites. Pseudolites have recently gained more attention in the context of indoor location. However, use of GPS frequencies is, for reasons that are readily understandable, highly protected and restricted by the U.S. Government.
[050] RFID Location
[051] An RFID location system includes RFID scanners installed throughout a facility that interrogate either active (radio transceivers) or passive tags that attach to objects. Battery-powered active tags allow up to a twenty foot range between the scanner and the tags. Passive tags have no batteries, but typically must be relatively close to the scanner (within inches or a few feet). Because of the limited range of passive tags, active RFID tags are the type usually found in positioning systems. A centralized server stores the unique tag codes that the scanners collect and the server is able to identify and display the location of each tag according to which scanner detects a tag. RFID systems determine position based only on the presence of the object in a particular area, so the accuracy of an active RFID system is dependent on the number and positioning of the scanners. Scanner repositioning may be necessitated due to changing floor layout or walls. RFID systems that operate in the same frequency band as wireless LANs can pose RF interference issues with the LAN.
[052] RF Pattern Matching
[053] While actualization of RF pattern matching ("RFPM") can be complex, the underlying premise is not: the mobile device to be located reports the RF characteristics of source signals, for instance from multiple cell sites, that it observes at its current location, and the location system attempts to match those characteristics against a pre-established database of RF characteristics previously measured and recorded at each of many specific geographic points. The location system determines as the target mobile's location the geographic point that yields the closest match between the mobile's reported RF characteristics and the pre-observed RF characteristics. Experience has confirmed the theory that for RFPM to be effective the target mobile must measure RF emissions from multiple distinct signal sources or, conversely, the mobile's signal must be observed at multiple sites. For this reason, RFPM positioning accuracy is best in environments where the RF sources are densely deployed and poor where signal sources are sparse.
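The database-matching premise described above can be sketched as a nearest-neighbor search in signal space. The fingerprint layout, floor value for unheard sources, and Euclidean metric below are illustrative assumptions, not details of the disclosure:

```python
def rfpm_locate(observed, fingerprint_db):
    """Return the surveyed point whose recorded RSSI fingerprint is the
    closest match (squared Euclidean distance in signal space) to the
    mobile's reported measurements. Sources missing from either side
    are treated as a weak floor value."""
    FLOOR = -100.0  # dBm assumed for sources not heard at a point

    def distance(fp):
        keys = set(observed) | set(fp)
        return sum((observed.get(k, FLOOR) - fp.get(k, FLOOR)) ** 2 for k in keys)

    return min(fingerprint_db, key=lambda point: distance(fingerprint_db[point]))
```

Note that the match quality, and hence accuracy, improves with the number of distinct sources heard, consistent with the density observation above.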
[054] RF Sensor Networks
[055] Devices to be located include a radio module that on a scheduled basis transmits an RF signal containing a unique identification code. Sensors (modified Wi-Fi access points or separate sensors) installed in the area of interest receive the coded information and locate the device by means of proximity or other methods. Some proprietary sensor networks calculate a device's location using measurements of signal strength or signal travel time from the tracked device to multiple sensors. Signal attenuation and multipath can negatively impact performance. Wi-Fi-based sensor network tracking solutions can allow the access points to carry typical data traffic associated with Wi-Fi users, a cost advantage over dedicated (non- Wi-Fi) location systems. Since the device transmit schedule impacts battery life, the transmit schedule must be carefully managed to balance battery life against tracking and accuracy requirements.
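One simple way a sensor network might combine signal-strength measurements from multiple sensors, sketched here only as an illustration of the class of proprietary methods mentioned above, is a weighted centroid of sensor positions. The dBm-to-linear-power weighting is an assumed choice, not taken from the disclosure:

```python
def weighted_centroid(sensor_readings):
    """sensor_readings: list of ((x, y), rssi_dbm) pairs, one per sensor
    that heard the device. Convert dBm to linear power and use it to
    weight each sensor's position, pulling the estimate toward the
    sensors that hear the device most strongly."""
    weights = [((x, y), 10 ** (rssi / 10.0)) for (x, y), rssi in sensor_readings]
    total = sum(w for _, w in weights)
    x = sum(p[0] * w for p, w in weights) / total
    y = sum(p[1] * w for p, w in weights) / total
    return x, y
```

Signal attenuation and multipath distort the weights, which is the performance caveat noted above.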
[056] RF Timing-based Location
[057] The measured travel time or arrival time of mobile RF signals can be used in various ways to calculate a mobile's location and/or distance from a known reference point such as a cell tower, as is known in the art. Timing-based location methods are familiar as uplink time difference of arrival ("U-TDOA"), observed time difference of arrival ("O-TDOA"), round trip time ("RTT"), timing advance ("TA"), multiple range estimation location ("MREL"), etc., and for years have been used to support "macro-area" emergency caller location and other applications. More recently, timing methods have been explored for indoor location and these development efforts continue. It is apparent that highly accurate indoor location of a mobile device cannot be accomplished by "macro-area" systems, such as those deployed for E911 location in cellular networks, due to the inherent limits of the location methods themselves and because of additional error induced by the presence of repeaters and DAS frequently encountered in indoor environments. However, specialized indoor location systems based on RF timing technology (including RF ranging) can be designed so as to work with Wi-Fi and other "indoor" RF facilities.
[058] Wi-Fi-based Positioning
[059] The proliferation of wireless Wi-Fi LANs, smartphones, and other
Wi-Fi devices provides an attractive technology basis for location solutions. A Wi-Fi- based solution can leverage the large base and economies of scale of installed networks and end user devices. A Wi-Fi-based location system might support any type of location- aware application that involves PDAs, laptops, bar code scanners, voice-over-IP phones and other 802.11-enabled devices. Proximity-oriented Wi-Fi location has been utilized for several years to support macro-area LBAs; in this usage a Wi-Fi access point ("AP") detected by a mobile device is referenced against a large (remote) database of geographic locations/addresses of APs. To date, Wi-Fi location has been used for location on a large-area, "macro-positioning" basis that generally yields accuracy to the level of a building or address. However, many established vendors and startups have in the past year taken up the challenge of highly accurate indoor location using Wi-Fi access points as reference positions. In addition to Wi-Fi-based RFPM for indoor positioning, vendors are exploring the use of other location technologies, such as timing-based position calculation using Wi-Fi access point references.
[060] Video aided Location determination and tracking
[061] All of the venues (stores, stadiums, conference centers, etc.) that have been referred to above are very likely to use video monitoring of some or all of the premises. Often the motivation to do so is one of security. Most large stores that may have a financial benefit from being able to track customers have video cameras monitoring their premises. Recent events such as the Boston marathon bombings of April 2013, where the arrest of the perpetrators was at least partly due to the use of camera images, will lead to further penetration of video monitoring in both outdoor and indoor locations. Considering large cities for example, London has an estimated 8000 video cameras actively monitoring public areas with a resultant high density of cameras. One can reasonably expect the density of video cameras in US cities to follow suit in the months and years to come.
[062] It can be intuitively observed that video offers a highly promising method of tracking a mobile user. With one or more properly positioned cameras producing a sequence of images of a covered area, a particular moving object in the field of view can be tracked provided one has access to the video feed and also is in possession of the appropriate algorithms. In the optimal scenario, where the monitoring entity can always differentiate between the mobile user of interest and other users, it is readily apparent that even given a sequence of still pictures taken at a fine enough time resolution, the monitoring entity can determine the location and movement of the user of interest.
[063] The location of an image in a picture or video feed can be
determined from positional knowledge of other features in the picture. This is the general principle for deriving location in such cases. If there is a feature set {F} with known locations that is also seen in the picture, it is generally possible to estimate the location of a different object by comparing its position in the picture with those of {F}.
[064] Assume that one has access to a sequence of images and that in each image the mobile user/device of interest, M, can be identified. Also assume that these images were obtained at times t1, t2, t3, ..., tN. Now consider that one has access to another image taken at time tK where tK is inside the interval [t2, t3] and in which image the user M cannot be clearly identified. Given the images at t2 and t3, it is possible to determine a most likely path that M traversed in moving from his position in the image at time t2 to his position in the image at time t3. Now if there is some sort of human-like object on this determined path, it is highly probable that the location of this object is the actual location of M at tK. This tells us that it is not necessary to be able to identify M at each instant of time; rather, occasional identification combined with an algorithm that follows an object of even vaguely human shape is sufficient to track M. Similarly, if there is a set of some basic shapes, not necessarily human, the same principle holds. Examples include cars on a road network or robots stocking a warehouse.
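The most-likely-path reasoning above can be sketched, under the simplifying assumption of straight-line motion between the two identified positions, as linear interpolation followed by a nearest-blob test. Function names and the tolerance scheme are illustrative assumptions:

```python
def infer_position(t, t2, p2, t3, p3):
    """Linearly interpolate M's most likely position at time t in [t2, t3],
    given identified positions p2 at t2 and p3 at t3."""
    a = (t - t2) / (t3 - t2)
    return (p2[0] + a * (p3[0] - p2[0]), p2[1] + a * (p3[1] - p2[1]))

def attribute_blob(t, t2, p2, t3, p3, blobs, tolerance):
    """If exactly one blob lies within `tolerance` of the interpolated
    path position, attribute it to M; otherwise report no decision."""
    ex, ey = infer_position(t, t2, p2, t3, p3)
    near = [b for b in blobs
            if (b[0] - ex) ** 2 + (b[1] - ey) ** 2 <= tolerance ** 2]
    return near[0] if len(near) == 1 else None
```

Returning no decision when several blobs are near the expected position anticipates the merged-blob ambiguity discussed next.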
[065] Of course one can come up with many situations where this rather simplistic view of video tracking may fail. For example, if the video feed shows a large number of users, it may not be that easy to differentiate the movement of one particular user versus another, no matter how finely the video feed is examined in time. The video feed may show only what we will hereinafter refer to as "blobs", i.e., objects that cannot be clearly characterized as person A rather than person B, but simply as a vague shape conforming to the outline of a human being when viewed from the vantage point of a video camera that is typically placed at ceiling level.
[066] One can also visualize many situations where the region of interest in location space, namely the region over which one wishes to track a user, is not fully covered by the video camera network. A simple example of this is what we may refer to as "the washroom problem". Consider that a user M has been identified and is being tracked. User M now enters a washroom. Assume, as is generally the case due to privacy reasons, that there is no video coverage inside the washroom. Hence, there are potentially an unknown number of persons in that washroom. When someone exits the washroom, barring a re-identification of the person, which may be possible if there are well positioned video cameras at the exit from the washroom, it is not possible to determine when M leaves the washroom. This problem can be generalized to state that every sub-region where video coverage is non-existent or is very poor is a potential failure point for a purely video based tracking algorithm.
[067] Another situation that can emerge occurs when there is a high density of users in the location space. In such situations, a purely blob-following algorithm (i.e., where user identity is not being confirmed at all instances) may fail when two users come close to each other and then diverge in space. For example, if A and B converge into a single blob AB, and then move apart, the objects moving away from the blob AB cannot be categorized as to which one is A and which one is B. Of course, as the complexity of the blob-following algorithm is increased, it may be able to use additional characteristics of the individual blobs; shirt color could be one such characteristic.
[068] Thus we see that a simple video tracking algorithm could be greatly enhanced if it were possible to maintain user identity. In particular it would be very useful if user identity were to be maintained without a need for very high resolution, high camera density, expensive video networks.
[069] We can therefore argue that a video based location tracking system must have a user identification component as well as a location determining and tracking component. Since we are particularly interested in mobile device users, the identification component can involve some interaction with the mobile, whereas the location determining and tracking component is independent of whether the user has or does not have an active mobile device in his possession. Additionally, the mobile device itself can be involved in the location determination and tracking, quite irrespective of whether there is a purely video location tracking system in operation.
[070] Additionally we note that any of the concepts developed here and applicable to locating and tracking human users indoors, may equally well be applied to humans outdoors as well as other objects in different settings. Examples of interest could include tracking poachers in a game preserve observed from drones, cars on roads, boats on a lake, train cars in a switchyard, aircraft at an airport, inventory items, etc. While certain embodiments described below are discussed with respect to a store or other particular setting, those of skill in the art will readily understand that the described techniques and principles are not limited to use in a store and are applicable to other indoor and/or outdoor settings consistent with the present disclosure.
[071] Identification and Tracking using only Video
[072] One viable method of providing the identification aspect of the problem can be achieved if the entity that determines and tracks the user location, hereinafter denoted "LE", can use the video feed itself to determine identity. A particular case of interest would be where the user M and the location determining entity LE share an application that compares stored images of the user M to the images available on the video feed. If a good match is established, LE has now acquired the identity of M.
[073] Let us consider the example of a large store which has well positioned video cameras at entry points to the store in addition to a network of video cameras that cover all aisles of the store. If the user and the store share an application, then, preferably when the user enters the store, but possibly at any later time and at any other location while within the store, the LE can compare images (stored vs. from the video feed) and establish identity. From this point onwards the user can be tracked by the LE using the network of video cameras, for which we assume that there is some form of communication established between the store and M via an application. The store can then provide location specific information (e.g., directions, navigation, advertisements, sales information, discounts, coupons, etc., as discussed above) to the user if it so desires. Since the video network can generally locate a user to within a few meters, and since identity is maintained by continuously comparing the image with the reference (where "continuously" may mean "often", "periodically" with a small amount of time between comparisons, or within other appropriate time/space constraints so as to be able to distinguish user M from other objects), this information can be very specific, even to the degree of, for example, telling the user to turn around and look for an item on a particular shelf behind her.
[074] Whenever an ambiguity arises, where it is unclear that the user being tracked is M, the LE makes use of its video network to obtain characterizing pictures, for example facial pictures which are re-matched with a reference image(s) to establish identity. Clearly, a very well thought out, planned, and positioned network of video cameras is needed to achieve this end. Other characteristics such as size, gait, etc. are also potentially applicable to the re-identification problem. In addition, temporary characteristics that generally remain fixed at least for the duration of this store visit, such as the color of the shirt or jacket worn by the user can also be used.
[075] When a situation arises such as the merged blob problem considered earlier, the LE can maintain separate location tracks for both objects diverging from blob AB until such time as identity can be re-established using better positioned cameras. Thus, there could be generally short interruptions in user identity which do not necessarily cause complete failure. This maintenance of two separate tracks can easily be generalized to allow for multiple divergences and multiple unresolved candidate track maintenance over the short haul. Thus, the LE system can maintain a large number of viable tracks for each user until these tracks can be resolved to produce the correct track.
[076] Initial Identification using Video and tracking using a technology mix
[077] In this situation, the initial identification is achieved using video. As in the previous discussion, a case of interest is where the user M and the location determining entity LE share an application that compares stored images of the user to the images available on the video feed. If a good match is established, LE has acquired the identity of M.
[078] Consider again the example of a store which has well positioned video cameras at entry points. If the user and the store share an application, then, when the user enters the store, the LE can compare images (stored vs. from the video feed) and establish identity. From this point onwards the user can be tracked by the LE using the network of video cameras. Of course this establishment of identity using video can be achieved elsewhere in the store as well, though it may be preferable if this can be achieved at entry since then, at least ideally, there are no breaks in identity during the visit.
[079] When the identity is ambiguous or lost, identity must be re-acquired.
Once again, maintaining multiple candidate tracks is a viable option until reacquisition of identity allows for determination of the correct track. Let us generically consider that the non-video technology mix used to re-acquire identity is denoted by "T". Then, T could be any one of the technology types used in location as detailed earlier, or a combination of one or more of them. We note that any such type T implicitly has some form of identity of the user.
[080] One way to reacquire identity is using user uniqueness in a sub-region. To understand this, consider a sub-region "S" in the location space where one sees, using the video monitoring, a single blob. In addition, let us assume that this sub-region S is the exact region of ambiguity within which the user M can be placed using T. Then it immediately follows that the blob is in fact M and hence user identity is reacquired. Another way to understand this is to interpret S as the sub-region in which T places the user M at some given time. Then, if the video system sees a single human-like blob in S at the same time, that blob must be the user M.
[081] This concept can even be activated and applied to users that have a currently-known identity. If such a user is the only occupant (only blob) in some sub-region S in which T places him, then the identity of this user as derived from T must match whatever identity that blob has been currently assigned. If that is not true, it must mean that the user identity has not been derived correctly. It can be noted that for each possible discrete set of measurement values in T, it is possible to define such a sub-region. Thus, the location space can be thought of as having a potentially very large number of possibly overlapping sub-regions, each of which maps to a set of observations in T. An algorithm running in the background can examine when any one of these sub-regions shows a single blob in the video imagery. At every such time, user identity can be confirmed.
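The single-blob-in-a-sub-region test described above can be sketched as follows, assuming rectangular sub-regions for simplicity; the data layout and function name are illustrative assumptions:

```python
def reacquire_identity(blobs, subregion, identity):
    """blobs: list of (blob_id, (x, y)) video detections.
    subregion: ((xmin, ymin), (xmax, ymax)), the ambiguity region in
    which technology T places the identified user at this instant.
    If exactly one blob lies inside the sub-region, that blob must be
    the user, so tag it with the identity; otherwise decide nothing."""
    (xmin, ymin), (xmax, ymax) = subregion
    inside = [bid for bid, (x, y) in blobs
              if xmin <= x <= xmax and ymin <= y <= ymax]
    return {inside[0]: identity} if len(inside) == 1 else {}
```

A background process can run this check for every sub-region whenever a video frame arrives, confirming or reacquiring identities opportunistically as described above.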
[082] To understand the interaction of video imagery and technique T better, it is useful to reflect on the fact that video, when it is operational (ID established, image not too crowded, sufficient coverage of the location space, etc.), is potentially the highest resolution location solution. Technique T, on the other hand, is potentially low resolution with larger ambiguity, and yet T rarely, if ever, loses identity.
[083] In addition to cases such as described above where there is a single blob in a sub-region, it is possible to extend this analysis to more than one blob.
Consider that, at time t1, there are two users whose measurements fit within the same initial measurement set provided by T and the two users are located in the same sub-region. The video system can be used to track both of these users and, at time t2 when the users move out of the sub-region, if their T measurements diverge then irrespective of how many other users may be nearby at time t2, it will be possible to identify which T measurements go with which user. In other words, identity can be derived for each of the two users. The same principle can be extended to multiple blobs in the same sub-region: each blob is a candidate for one of several identities. This concept is elaborated on further below.
[084] Another way in which T can help re-acquire identity is to instigate user behavior that can be recognized in the video image. Hypothetically, imagine that T is used to communicate to the user to raise his hand. In a crowded sub-region S within which T is able to locate the user, raising his hand allows the video component to fully identify the user. So, it is conceivable that a suggestion conveyed via T to the user such as "turn around so we can help you better," can lead the user to perform some action which the video imagery observes in S so as to identify the user.
[085] Additionally, these induced user behaviors can be made local to movements or rotations of the mobile device itself. Such movements could lead to information being passed back to the LE, such as information from a MEMS
(microelectromechanical system), accelerometer, etc., associated with the mobile device, which can then match the image to the user by correlating the event information in time. It is also conceivable to make changes to the display on the mobile device which may force the user to change his orientation. For example, if the text is forced to rotate by 90 degrees the user is very likely to rotate the phone so she can read the message. This rotation can be picked up in the video imagery to identify the user of interest.
Alternately, the application that the user has signed up for to provide location specific information may have specific user interaction built in. One example of this might be where the phone displays something to the user which is taken as a signal to wave the phone. This action can then be picked up by the video network to identify the user. If MEMS (or other) data is being reported back to the LE, further verification of whether this user did in fact wave his phone will also be available.
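The time-correlation of a device-reported motion event with video-observed motion, as described above, can be sketched as a simple windowed match; the data layout, window scheme, and function name are illustrative assumptions:

```python
def correlate_event(device_event_time, blob_motion_times, window):
    """blob_motion_times: {blob_id: [times at which that blob showed the
    induced motion, e.g. a wave or rotation]}. Return the blob whose
    motion falls within `window` seconds of the device-reported event,
    but only if exactly one blob qualifies (otherwise the match is
    ambiguous and no identity is assigned)."""
    matches = [bid for bid, times in blob_motion_times.items()
               if any(abs(t - device_event_time) <= window for t in times)]
    return matches[0] if len(matches) == 1 else None
```

Combining this with accelerometer reporting gives the further verification mentioned above: the device confirms that it really did move at the matched time.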
[086] Initial Identification and tracking using a technology mix
[087] Consider a hypothetical situation where the entryways to a store each have a Wi-Fi access point that is local to that entryway and nowhere else. Then as a user passes through that entryway, his mobile device may register the presence of this Wi-Fi access point. An application on his mobile can then inform a video monitoring system of this particular user's entry into the store as well as which entryway was used. By proper association of the entry event with video imagery, the user image is then given an identity. From this point onward a mix of the video network and the Wi-Fi signals can be used to locate and track the user. The entry event, as observed via the Wi-Fi signals, also provides location information.
[088] Instead of this Wi-Fi signal being completely restricted to the entryway, it is also conceivable that the same approach may be taken if the signal is at its highest possible amplitude at the entry point. This concept is in essence quite similar to the "single member in a sub-region" concept discussed previously to resolve ambiguities. Essentially, a non-video technique which possesses user identity is matched to a blob that is then tagged with that identity. Note that in this type of situation, high resolution imagery is not essential. Further, we note that while an entryway is a preferred place for establishing identity, the same principle may be applied elsewhere in the store, e.g., entry to a particular department, aisle, section, or any location in the store.
[089] Having established initial identity, any of the previously-discussed location and tracking techniques can then be used.
[090] Discovering and Maintaining Mobile Identity in a Video Tracking System
[091] Here we elaborate further on a very important aspect of a video based location determination and tracking system. This is the question of how the mobile users being observed in the video feed or the video network can be identified, and how this identity can be maintained. In particular we are interested in the case where the video network is low cost, simple, and does not utilize complex image recognition.
[092] Consider a rudimentary video based tracking scheme. One could argue that the simplest algorithm applicable to such a case, without complex facial recognition or feature recognition algorithms, and pertinent to a very basic set-up of video cameras, is the "blob-following algorithm" described above. Assume, for example, that we have a picture in which there are several human-like objects. If we have yet another picture from the same fixed camera at a time, for example, one second later, the pictures can be compared to see which if any of these objects have shifted in position. We could then argue that every moving object is a human being; if one wished to be more specific, the video could be examined for whether there are legs, arms, a head, etc. associable with the object. Now as time passes, examination of the pictures in the video feed permits us to "follow" these blobs. Intuitively, as for example in a store, most people walk and cannot suddenly transport themselves from one place to another. Thus, these blobs have a relatively continuous movement that can be tracked with an appropriate algorithm.
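The blob-following scheme described above can be sketched as a nearest-centroid tracker: blobs move continuously, so each existing track is extended by the closest newly-observed centroid. This is an illustrative Python sketch only; the function names, the `max_jump` continuity bound, and the representation of blobs by their centroids are assumptions made here, not part of the disclosure.

```python
import math

def follow_blobs(tracks, new_centroids, max_jump=50.0):
    """Extend each track with the nearest new centroid, relying on the
    continuity assumption: people walk, they do not teleport."""
    unclaimed = list(new_centroids)
    for track in tracks:
        last = track[-1]
        best = min(unclaimed, key=lambda c: math.dist(last, c), default=None)
        if best is not None and math.dist(last, best) <= max_jump:
            track.append(best)          # continue this blob's trajectory
            unclaimed.remove(best)
    # any centroid left unclaimed starts a new track (a new blob entered)
    for c in unclaimed:
        tracks.append([c])
    return tracks
```

Under this sketch, each track accumulates the history of one blob, which later steps can attempt to bind to an identity.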
[093] We thus have a method to follow these blobs around but what we do not have is knowledge of who is represented by each blob. In other words, user identity is missing. Without user identity, none of the revenue generating or security providing applications described above can be fully functional. Secondly, for any of the
applications to function there has to be some means of communication with the user, such as via the user's mobile device.
[094] Let us now assume that at least some of these blobs represent people with active mobiles in their possession. Assuming that we are in a store, for example, it is reasonable to expect that the mobile devices have a Wi-Fi connection established with the store's network. Thus, it is possible to know some form of identity for the mobile devices that are in this area and these mobile signals could either be monitored at the Wi-Fi access points or the mobile could be recording and reporting back the observed Wi-Fi signals to an application. The identity of these mobiles could be the MAC address of these devices, for example. We also note that the mobile devices of interest need not only be phones but could be any other mobile device, such as a tablet computer.
[095] So, in the context of the video-observed blobs, what we have is an association problem. We have some number "B" of moving blobs and we have some number of relevant identities "I". The question now is how to assign the identity values in I to the correct blobs in B.

[096] In an embodiment, this association can be effected by monitoring the variation over time of the Wi-Fi signals. In a very simple case let us assume that B=3, with blobs B1, B2, and B3. Also assume that I=2, with identities I1 and I2. Also assume that there are two Wi-Fi access points of relevance, W1 and W2, and that the location of such points is completely known to the locating entity. So the question now is, which blob to assign to each of the identities in I. Suppose that when we observe the signals of the first identity we notice that the received signal strength of W1 continues to increase in the measurements for I1. At the same or nearly the same time in the video feed, let us argue that we see one of the blobs, B2, moving towards W1. If none of the other blobs is moving towards W1, it is quite clear that the identity I1 must pair with blob B2. So, now we've coupled one of the identities with one of the blobs and by following this blob around using the video tracking algorithm we can provide that user with all the location related information from which she could possibly benefit.
[097] A general rule that can be distilled from the above example is to observe the variation in the Wi-Fi signals associated with a particular known identity, and then to use, for example, the general principles of signal attenuation with distance to associate the proper blob, thereby associating the proper human with the Wi-Fi identity. What is very nice about this approach is that the Wi-Fi signals themselves have very poor location resolution on their own but due to the independent movement of the users, the changes in the signals can allow us to estimate what identity should go with what blob, or human-like object. Variations in the signal power caused not simply by distance but by angle, as might occur where a transmitting antenna has a certain beam shape (where this shape is known by the LE) can also be matched to the movement of blobs. In general, we want to associate variations in the observations of the Wi-Fi signals, or any other technique T, with movements as seen in the video feed.
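The general rule above — correlate the trend in an identity's Wi-Fi signal strength with blob motion relative to a known access point — can be sketched as follows. The helper names, the crude endpoint-based trend estimate, and the requirement of a unique candidate blob are illustrative assumptions, not part of the disclosure.

```python
import math

def trend(values):
    """Crude slope sign over a series: +1 rising, -1 falling, 0 flat."""
    delta = values[-1] - values[0]
    return (delta > 0) - (delta < 0)

def match_identity(rssi_series, blob_tracks, ap_xy):
    """Pair one Wi-Fi identity with the blob whose motion explains its
    signal trend: rising RSSI should match a blob approaching the AP."""
    want = -trend(rssi_series)          # rising signal -> shrinking distance
    candidates = []
    for name, track in blob_tracks.items():
        dists = [math.dist(p, ap_xy) for p in track]
        if trend(dists) == want:
            candidates.append(name)
    # the pairing is unambiguous only if exactly one blob moves the right way;
    # otherwise wait for further independent movement and try again
    return candidates[0] if len(candidates) == 1 else None
```

When more than one blob moves consistently with the signal trend, the sketch defers the decision, mirroring the text's point that the ambiguity resolves over time as users move independently.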
[098] So, having identified one of the blobs, the problem is now to assign the single remaining identity to one of the two remaining blobs. As time passes, and the users move independently we can repeat the general principle applied earlier to similarly find the correct blob to match with the remaining identity. Clearly, this method can be applied to a larger number of blobs and a larger number of identities. In fact, if one considers the sub-regions applicable to any Wi-Fi signal set as discussed above, for any such signal set, the number of identities we need to consider is how many blobs we see in that sub-region. So, if there is a sub-region S where it is not possible to differentiate between the Wi-Fi signatures, one can examine that sub-region in the video feed to ascertain how many blobs are seen there. This is then the number that has to be properly matched to the observed identities with a particular Wi-Fi signature. The problem is clearly resolvable over time unless the users are immobile.
[099] In an embodiment, the store may insert a few judiciously-placed
Wi-Fi access points into the overall identification/location/tracking system. For example, if one had such a Wi-Fi access point in a passageway of a store where the signal increases dramatically and falls away similarly with distance, every user passing through is easily matched provided the video network provides even coarse-grained coverage of this passage. Alternatively, if the store is crowded and many users pass through the passageway at about the same time, we would be able to limit the identity ambiguity to the number of such users; that is, we would have groups of users who match collectively to a group of identities and then sort out the individual identities at a later time using the principles discussed herein.
[100] We also note that some other mobile location-related measure other than Wi-Fi signals can substitute for Wi-Fi signals and be applied towards identity discovery in a similar manner. Such a technique only needs to have some form of low resolution mapping of the signals or other measurable feature in the region of interest as well as some variation with location that can be differentiated. Assume, for example, that there is a window in the store where a passing user is able to pick up some distant cellular transmitter. If the user's mobile device reports this observation of the distant transmitter to a location application, and this event of the user passing the window is observable by a video network, we once again have a means of assigning identity. In general, the principle incorporates the variation of the entire observable set (or parts thereof) of measurements using technique T, due to the movement of a user within a venue, and correlating that variation with observed blobs in a video system. Thus, a blob in a video can be associated to an identity.
[101] Those of skill in the art will readily understand that these principles are not restricted to Wi-Fi and/or video. For example, the role of the video system could be replaced by infra-red sensors.
[102] Identification and location in relation to groups of users
[103] In this section we consider the situation that can emerge if a group of users moves together as a unit. An example of this would be a family that is attending an event in a stadium. Two possible scenarios are worth mentioning. The first is where the group defines itself as a group in some manner before, as in this example, arriving at the stadium. One way this could happen is if the group shared an application which, when activated, informed the LE that this group of users would like to be treated as one unit that is close together in physical location. This could be envisaged in a situation where a father is at a concert with his young children and wants to be sure that they are all nearby, wants to be alerted when they are not, and if necessary wants to locate the missing children. The second scenario is where the group did not provide any such a priori knowledge but was observed by the LE to be moving together as a group; an observation that typically can only happen after observing the movements of many users over some extended time.
[104] In the first case, one immediate advantage to the LE is that if any one of the group is identified, then by observing the common or nearby location of other video-observed humans (or blobs), the LE can assign that group a set of identities. So, if there were a total of "N" identities that had at some stage not yet been assigned, and this group was of size "M", where M is at most N, then even without explicit assignment of individual identity to every member of the group the LE can at least temporarily assign the M identities to the group while concerning itself with the problem of which objects to assign the remaining (N-M) identities. In other words, it simplifies the identity assignment problem. This same principle can also be used in the second case, but only after determining in some algorithmic manner and over some extended time that there is indeed such a group in the video field of view.

[105] Having identified such a group, divergence within the group location can also be useful. Clearly, in the first case, if a child were to be separated from her parents, the LE would now be able to issue an alert, possibly to one of the parents or perhaps to a pre-designated mobile device. All the LE would need in order to recognize that one of the group is separated is that one of the identities in that group is presenting a very different set of observations, for instance a much different Wi-Fi signature. In fact, one need not even know the location of the group or the location of any member of the group; the simple fact that one observation type was providing very different data for any two identities in that group could be sufficient to trigger a flag. If the situation were an emergency, the location of that particular identity could be assigned the highest priority in the LE algorithms.
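The separation alert described here needs only relative disagreement between observations, not absolute location. A hypothetical sketch, assuming each member's observation is an RSSI-per-access-point map and a fixed divergence threshold (both illustrative assumptions):

```python
def signature_distance(sig_a, sig_b, missing=-100.0):
    """Mean absolute RSSI difference over the union of observed APs;
    an AP unseen by one member is treated as a very weak reading."""
    aps = set(sig_a) | set(sig_b)
    return sum(abs(sig_a.get(ap, missing) - sig_b.get(ap, missing))
               for ap in aps) / len(aps)

def separated_members(group_sigs, threshold=20.0):
    """Flag members whose Wi-Fi signature diverges from everyone else's.
    No location estimate is computed -- only relative disagreement."""
    flagged = []
    for member, sig in group_sigs.items():
        others = [s for m, s in group_sigs.items() if m != member]
        if others and min(signature_distance(sig, s) for s in others) > threshold:
            flagged.append(member)
    return flagged
```

A flagged member would then trigger the alert to the pre-designated device, and that identity's location could be prioritized in the LE algorithms.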
[106] Using one type of observation to calibrate another type of observation
[107] Consider the situation where one is observing the movement of an object using a video network. If there are several features in the space being observed whose locations are known, then the position of the object may be calculated in reference to these known features. Now at any instant during the movement, if observations of yet another type are also made, as for instance power measurements on a signal transmitted by the object (or the object's mobile device), then it is possible to associate the calculated position of the object with the second set of observations, in this case, the power measurements.

[108] Thus generally speaking, one could achieve a pseudo-ground truth for one type of observed measurement by using another type of observation to establish location.
[109] Identity Determination for an Intermittently-Observed Image
[110] Consider the situation where a video observed image cannot be resolved in terms of its identity. This may be a result of the quality of the image, or it may be that the image does not have a good match in a database of images that are available for image pattern matching. Also, consider that the image is one of a mobile device user, and that her device is active at those times where the image is observed in the video image field. Further consider that the image cannot be continuously followed using video imaging but sporadically shows up only in certain select images. But we do assume that the image is such that it can be locally recognized. That is, if examined in relation to other images being observed by the same video camera at the same time, the image can be separated out.
[111] Let us define some disjoint regions in location space R1, R2, R3, etc. where the image is observed. As an example, let us say this image is seen on a floor of a shopping mall, then later in a parking garage, and subsequently in a subway train, and so on. Assume that each time the image is seen, the mobile device is also observed by some unit of a system that has connectivity to the mobile. For example, the unit could be a Wi-Fi access point, a different one at each of R1, R2, R3. Or if the locations R1, R2, R3 are relatively closely-spaced, the same access point may be used. In such cases, the unit that has this connectivity may be able to extract some form of mobile identifier such as the MAC address of the device.
[112] Now in each of these regions R1, R2, R3, the unit with connectivity to the device of interest may also see many other mobile devices that are in proximity to the unit. Thus, in R1 it may see a set S1 of mobile IDs. Similarly in R2 it may see a set S2 of mobile IDs, and in R3 it may see a set S3 of mobile IDs. Since the image of interest was in each of these regions, the mobile device ID for that image must be in each of these sets. Therefore, the device ID for the image of interest must be in the intersection of these sets.
[113] Thus, the identity of the image, in so far as it can be linked to the identity of the mobile device associated with the image, can be sequentially narrowed down by taking the intersection of set S1 with set S2, and then the intersection of that result with S3 and so on. When one of the intersections has a membership of one, the identity, at least in terms of the mobile device associated with the image, has been discovered.
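The sequential narrowing by set intersection can be sketched directly; the early exit when the candidate set reaches a single member corresponds to the identity being discovered. The function name is illustrative.

```python
def narrow_identity(sightings):
    """Each sighting is the set of mobile IDs seen near the image's region.
    The image's device must appear in every sighting, so intersect them,
    stopping as soon as the ambiguity is fully resolved."""
    candidates = None
    for ids in sightings:
        candidates = set(ids) if candidates is None else candidates & set(ids)
        if len(candidates) == 1:        # membership of one: identity found
            break
    return candidates
```

Even when the final set has more than one member, each intersection can only shrink (never grow) the ambiguity, matching the observation in the following paragraph.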
[114] We can also note that taking such intersections, even if it does not result in a final unambiguous result, can reduce the size of the ambiguity of the user's identity. For example, if the ambiguity is initially the size of set S1, but after intersection with set S2 this ambiguity is smaller than the size of either set, the ambiguity of the user's identity has been reduced. As more intersections are taken, the ambiguity may continue to decrease.
[115] Referring now to Figure 1, flow chart 100 is presented for determining a signature of an object according to an embodiment of the present subject matter. As discussed above, the object may be one or more individual persons, a group of people, a vehicle, a unit in an inventory, etc. The object may be located outside or may be located in an enclosure, such as, but not limited to, a store, a shopping mall, a sports arena, a convention hall, etc. In an embodiment, the signature of the object includes a first and a second set of information.
[116] At block 110, a reference identification is determined for the object.
The reference identification may be a set of measurable data such as, but not limited to, a picture, a video still or video clip, a phone number of a wireless device associated with the object, a MAC address for a wireless device associated with the object, an infrared map, a mobile device application, etc. Typically, the reference identification for the object should be sufficient to distinguish the object from another object that is within a particular space, sub-region, etc., that is being monitored.
[117] At block 120, a first set of information for the object is determined using a first monitoring system. In an embodiment, the first set of information includes location information for the object. At block 130, a second set of information for the object is determined using a second monitoring system. In an embodiment, the second set of information includes identification information for the object, such as discussed above. At block 140, the second set of information is compared to the reference identification determined at block 110. If the second set of information and the reference identification match, within some predetermined tolerance level, then an identification of the object has been made.

[118] At block 150, the signature of the object is determined based at least in part on the first set of information and the second set of information. In a further embodiment, at block 160, the object is tracked based at least in part on the determined signature of the object. In a still further embodiment, at block 170, the object is provided with a third set of information such as, but not limited to, location information, navigation information, sales promotion information, advertising information, a mobile device application, etc.
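Blocks 110 through 150 of flow chart 100 can be sketched as follows. The comparator here (element-wise agreement within a tolerance) and the dictionary form of the signature are hypothetical stand-ins for whatever matching criterion and data structure an implementation uses.

```python
def matches(observed, reference, tolerance):
    """Hypothetical comparator (block 140): every observed feature must
    agree with the reference identification within the tolerance."""
    return all(abs(o - r) <= tolerance for o, r in zip(observed, reference))

def determine_signature(reference_id, location_info, identity_info, tolerance=5.0):
    """Blocks 110-150: match the second set of information (identity)
    against the reference identification, then fuse it with the first
    set of information (location) to form the object's signature."""
    if not matches(identity_info, reference_id, tolerance):
        return None                     # identification failed at block 140
    return {"location": location_info, "identity": identity_info}
```

A returned signature would then feed the tracking step (block 160) and the delivery of the third set of information (block 170).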
[119] In an embodiment, the first monitoring system may be, but is not limited to, a video-based system, a sound-based system, an optics-based system, etc. In another embodiment, the second monitoring system may be, but is not limited to, a local area network, a Wi-Fi network, an RF ranging system, a proximity detection system, a RF pattern matching system, an RFID system, a pseudolite system, a GPS rebroadcast system, a GNSS rebroadcast system, a sound detection system, a light modulation system, a magnetic anomaly detection system, etc. In a further embodiment, the first monitoring system and the second monitoring system may be any of the preceding systems or networks.
[120] As a non-limiting example, consider the case where the object/user is associated with a wireless device that has a mobile application/software that requires the user (or someone else) to take one or more sense maps/reference IDs of the user (e.g., pictures, infrared maps, etc.) from different angles and face views and, optionally, extended time maps of the user (e.g., video, etc.) either stationary or performing one or more specific tasks (walking, jumping, packing boxes, etc.). The sense map is stored on the user's wireless device/mobile application. A sensing/monitoring network takes a similar sense map of the user in a possibly different place at a later time. The user's wireless device/mobile application communicates with the sensing/monitoring network and compares one or more sense maps/reference IDs with the sense map taken by the sensing/monitoring network and if the comparison is within some predetermined tolerance, an identification of the user has been determined. When the user is located in a particular monitored space, such as a store, by the same or a different sensing/monitoring network (such as, for example, a video-based system), the signature of the user can be ascertained by combining the determined identification of the user with the user's location. Thus, the user may be tracked throughout the store (or, perhaps, beyond depending on the reach of the video-based system) by the sensing/monitoring network (or a separate tracking system) and information may be pushed to the user such as the third set of information described above.
[121] With attention now drawn to Figure 2, a flow chart 200 is presented for determining a signature of an object according to an embodiment of the present subject matter. The signature of the object includes a first set of information and a second set of information. As discussed above, the object may be one or more individual persons, a group of people, a vehicle, a unit in an inventory, etc. The object may be located outside or may be located in an enclosure, such as, but not limited to, a store, a shopping mall, a sports arena, a convention hall, etc. In an embodiment, the signature of the object includes a first and a second set of information.

[122] At block 210, a reference identification is determined for the object.
The reference identification may be a set of measureable data such as, but not limited to, a picture, a video still or video clip, a phone number of a wireless device associated with the object, a MAC address for a wireless device associated with the object, infrared map, a mobile device application, etc. Typically, the reference identification for the object should be sufficient to distinguish the object from another object that is within a particular space, sub-region, etc., that is being monitored.
[123] At block 220, the first set of information for the object is determined using a first monitoring system. In an embodiment, the first set of information includes location information for the object. At block 230, a second set of information for the object is determined using a second monitoring system where the second set of information includes a first and a second portion. In an embodiment, the second set of information includes identification information for the object, such as discussed above. The first portion of the second set of information for the object is determined from a second monitoring system and the second portion of the second set of information is determined from a third monitoring system.
[124] At block 240, the first and second portions of the second set of information are each individually compared with the reference identification determined at block 210. At block 245, either the first or the second portion of the second set of information is selected based on the comparison. In an embodiment, the selection is based on which of the first and second portion more closely matches the reference identification. As a non-limiting example, the "closeness" matching may be based on which set of data of the first and second portions has fewer deviations from the reference identification set of data. As another non-limiting example, the matching may be based on which set of data of the first and second portions is within some predetermined tolerance level. Once the selection is made, then an identification of the object has been determined.
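The selection at blocks 240 and 245 can be sketched using the "fewer deviations" criterion mentioned above. The summed-absolute-deviation measure is one illustrative choice of closeness; an implementation could equally use a tolerance test or any other distance measure.

```python
def select_portion(reference, first_portion, second_portion):
    """Block 245: keep whichever portion of the second set of information
    deviates least from the reference identification."""
    def deviation(portion):
        # total absolute disagreement across corresponding features
        return sum(abs(p - r) for p, r in zip(portion, reference))
    if deviation(first_portion) <= deviation(second_portion):
        return first_portion
    return second_portion
```

The selected portion is then fused with the first set of information at block 250 to form the signature.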
[125] At block 250, the signature of the object is determined based at least in part on the first set of information and the selected portion of the second set of information. In a further embodiment, at block 260, the object is tracked based at least in part on the determined signature of the object. In a still further embodiment, at block 270, the object is provided with a third set of information such as, but not limited to, location information, navigation information, sales promotion information, advertising information, a mobile device application, etc.
[126] As discussed above, in an embodiment, the first monitoring system may be, but is not limited to, a video-based system, a sound-based system, an optics- based system, etc. In another embodiment, the second monitoring system may be, but is not limited to, a local area network, a Wi-Fi network, an RF ranging system, a proximity detection system, a RF pattern matching system, an RFID system, a pseudolite system, a GPS rebroadcast system, a GNSS rebroadcast system, a sound detection system, a light modulation system, a magnetic anomaly detection system, etc. In a further embodiment, the first monitoring system and the second monitoring system may be any of the preceding systems or networks. [127] Considering now Figure 3, a block diagram 300 is depicted for a system 303 for determining a signature of an object 301 according to an embodiment of the present subject matter. The signature of the object includes a first set of information and a second set of information. As discussed above, the object may be one or more individual persons, a group of people, a vehicle, a unit in an inventory, etc. The object is located in a location space 302 which may be located outside or in an enclosure, such as, but not limited to, a store, a shopping mall, a sports arena, a convention hall, etc. In an embodiment, the signature of the object includes a first and a second set of information.
[128] The system 303 includes a first monitoring system 321 having at least a first sensor 322 and a second monitoring system 331 having at least a second sensor 332. First monitoring system 321 determines the first set of information for the object 301. Second monitoring system 331 determines the second set of information for the object 301. As discussed above, in an embodiment, the first monitoring system may be, but is not limited to, a video-based system, a sound-based system, an optics-based system, etc. For example, the first sensor 322 may be a video camera that may be part of a video camera network which covers location space 302 which may be part of a larger space in which object 301 is to be monitored and/or tracked. In another embodiment, the second monitoring system may be, but is not limited to, a local area network, a Wi-Fi network, an RF ranging system, a proximity detection system, a RF pattern matching system, an RFID system, a pseudolite system, a GPS rebroadcast system, a GNSS rebroadcast system, a sound detection system, a light modulation system, a magnetic anomaly detection system, etc. For example, the second sensor 332 may be a Wi-Fi access point that may be part of a Wi-Fi network which covers location space 302 which may be part of a larger space in which object 301 is to be monitored and/or tracked. In a further embodiment, the first monitoring system and the second monitoring system may be any of the preceding systems or networks.
[129] Additionally, system 303 includes circuitry and/or software and/or a memory device 311 for storing a reference identification for the object 301. System 303 also includes a processor 380. Processor 380 includes a comparator for comparing the second set of information with the reference identification in device 311, circuitry/device and/or software for determining the signature of the object 301 based at least in part on the first set of information and the second set of information, circuitry/device and/or software for tracking the object 301 based at least in part on the determined signature of the object, and circuitry/device and/or software for providing a third set of information to the object 301. The third set of information may be, but is not limited to, one or more of location information, navigation information, sales promotion information, advertising information, and a mobile device application.
[130] Below are some additional non-limiting examples of the disclosed subject matter. Some of these examples make use of a Mobile Location by Dynamic Clustering ("MLDC") system, such as the one disclosed in U.S. Patent No. 8,526,968, the entirety of which is hereby incorporated herein by reference. In certain embodiments, the MLDC system is used to determine a location of a mobile device associated with an object using network measurement reports or similar information (which may include calibration data), clustering the information and comparing it with information received from the object's mobile device, and determining a location of the object/mobile device based on the comparison.
[131] As an example, an MLDC system places a mobile device associated with an object in some region {x,y,z} of location space. A video-based monitoring system, such as a video camera network ("VCN"), shows only one object in the {x,y,z} region where that object has coordinates (xv,yv,zv). The MLDC system then uses the (xv,yv,zv) coordinates as the location estimate of the object.
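This refinement rule can be sketched as follows, with the MLDC's coarse region represented as per-axis bounds. The fallback to the region center when the VCN view is ambiguous is an assumption added here for completeness, not part of the example in the text.

```python
def in_region(point, region):
    """True if the point lies within the per-axis (lo, hi) bounds."""
    return all(lo <= c <= hi for c, (lo, hi) in zip(point, region))

def region_center(region):
    """Coarse fallback estimate: the midpoint of the MLDC region."""
    return tuple((lo + hi) / 2 for lo, hi in region)

def refine_location(mldc_region, vcn_objects):
    """If the VCN sees exactly one object inside the MLDC's coarse region,
    adopt that object's video coordinates (xv, yv, zv) as the estimate."""
    inside = [p for p in vcn_objects if in_region(p, mldc_region)]
    if len(inside) == 1:
        return inside[0]
    return region_center(mldc_region)   # ambiguous view: keep coarse estimate
```

Note that the VCN contributes only coordinates; as the following paragraphs explain, it cannot by itself supply the identity of the object.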
[132] As another example, a system, such as system 303 in Figure 3, uses a first type of observation system to locate and track an object, and a second type of observation system to refine the location estimate from the first type of system, where the second type of observation system on its own is incapable of tracking the user. In an embodiment, system 303 uses Wi-Fi information to locate a user and then refines that location estimate using a VCN, where the VCN on its own cannot track the user.
[133] As a further non-limiting example, the MLDC system uses Wi-Fi data to place an object's mobile device in some region {x,y,z} of location space. Data from a VCN shows only one object in the {x,y,z} region where that object has coordinates (xv,yv,zv). The MLDC system then uses the (xv,yv,zv) coordinates as its location estimate of the object. However, the VCN on its own can only tell how many objects (e.g., blobs, human like images) exist in one or more regions of the location space. The VCN cannot provide the identity of the object.
[134] In a still further example, if a VCN cannot provide continuous coverage over the locatable region (e.g., if the region includes an area which is out of range of the video cameras in the VCN), a situation may arise where the VCN cannot track an object. However, the MLDC may be able to track the object where the VCN cannot, and where both the MLDC and the VCN are capable of tracking the object, the MLDC may use information from the VCN to improve its location estimate of the object.
[135] In yet another non-limiting example, consider the case where an object is identified by a VCN when the object enters a region and the object is tracked using an algorithm A which does not rely on continuously updating the object
identification. For example, algorithm A could be a generic "blob-following" algorithm. Now, when the object enters a region R where the VCN is inactive (say, e.g., a washroom), algorithm A fails and cannot recover. Any entry into a non-VCN-covered region is a potential failure point. Now if the tracking of the object was being undertaken in conjunction with an MLDC system or any other location/tracking scheme using, e.g., Wi-Fi or other RF signals, the latter technique could track the object into and out of the region R. After the object has cleared region R, the latter technique could then pass tracking control back to the VCN. Such a scheme would have the potential to utilize the VCN while not exhibiting complete failure points in the location space.
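The handoff between the VCN and the RF-based tracker can be sketched as a per-instant preference: use the VCN fix when the object is in camera coverage, and fall back to the RF-based (e.g., MLDC/Wi-Fi) track while the object is inside region R. The dictionary-keyed-by-timestamp representation is an illustrative assumption.

```python
def fuse_tracks(timestamps, vcn_fixes, rf_fixes):
    """VCN-first handoff: the RF tracker covers instants where the camera
    network is inactive (e.g., region R), so the fused track has no gaps."""
    fused = []
    for t in timestamps:
        fix = vcn_fixes.get(t)          # None while the object is in region R
        if fix is not None:
            fused.append(("vcn", fix))
        else:
            fused.append(("rf", rf_fixes[t]))
    return fused
```

Tagging each fix with its source also records where control passed between the two techniques, which could be useful for diagnosing coverage holes.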
[136] In still another non-limiting example using the principles discussed in the disclosed subject matter, a region in a 2-D or 3-D space may be calibrated where the space has one or more distinguishing features {F} with known co-ordinates and where a mobile device being calibrated has no knowledge of ground truth (i.e., where exactly it is in space).

[137] The desired calibration data will include an observation set (i.e., measurements at a mobile device of whatever type) at a set of locations. These observations are assumed to be a function of the (x,y,z) coordinates of the 2-D or 3-D space and may sometimes be dependent on the first or higher derivatives with respect to time of the (x,y,z) coordinates, representing velocity, acceleration, etc. A mobile device in the space records observation data and stores the observation data along with a time stamp. Any arbitrary form of motion of the mobile is permissible in order to obtain the observation data.
[138] A VCN may observe and record the motion of the mobile device.
The VCN uses a clock with some known relationship to the clock used by the mobile device. On completion of the movement of the mobile device through the space, corresponding instants of time from the VCN and the mobile observation record are matched. Given the video record, the set {F} can permit accurate calculation of the (x,y,z) coordinates of the mobile device. Successive frames also permit the calculation of higher derivatives representing velocity and acceleration. Now the mobile location and higher derivatives can be matched to the observations. Calibration is achieved.
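The matching of corresponding instants can be sketched as nearest-timestamp pairing between the mobile's observation record and the VCN-derived positions (the clocks being assumed already related, per the text). The `max_skew` tolerance and the simple linear search are illustrative assumptions.

```python
def build_calibration(mobile_log, vcn_log, max_skew=0.1):
    """Pair each timestamped mobile observation with the VCN position whose
    timestamp is closest, yielding (position, observation) calibration pairs."""
    pairs = []
    for t_m, obs in mobile_log:
        # nearest VCN record in time; vcn_log entries are (t, position)
        t_v, pos = min(vcn_log, key=lambda rec: abs(rec[0] - t_m))
        if abs(t_v - t_m) <= max_skew:
            pairs.append((pos, obs))    # observation now has pseudo-ground truth
    return pairs
```

Each pair associates a video-derived location with a measurement made at that location, which is exactly the calibration data set described in paragraph [137].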
[139] Certain embodiments of the present disclosure may be implemented by a general purpose computer programmed in accordance with the principles discussed herein. It may be emphasized that the above-described embodiments, particularly any "preferred" embodiments, are merely possible examples of implementations, merely set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiments of the disclosure without departing substantially from the spirit and principles of the disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.
[140] Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible program carrier for execution by, or to control the operation of, data processing apparatus. The tangible program carrier can be a computer readable medium. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, or a combination of one or more of them.
[141] The term "processor" encompasses all apparatus, devices, and machines for processing data, including by way of example, a programmable processor, a computer, or multiple processors or computers. The processor can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
[142] A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a standalone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
[143] The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be
implemented as, special purpose logic circuitry, e.g., a field programmable gate array ("FPGA") or an application specific integrated circuit ("ASIC").
[144] Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more data memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant ("PDA"), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, to name just a few.
[145] Computer readable media suitable for storing computer program instructions and data include all forms of data memory, including non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
[146] To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a cathode ray tube (CRT) or liquid crystal display (LCD) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, input from the user can be received in any form, including acoustic, speech, or tactile input.
[147] Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a
communication network. Examples of communication networks include a local area network ("LAN") and a wide area network ("WAN"), e.g., the Internet.
[148] The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a
communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
[149] While this specification contains many specifics, these should not be construed as limitations on the scope of the claimed subject matter, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
[150] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
[151] While some embodiments of the present subject matter have been described, it is to be understood that the embodiments described are illustrative only and that the scope of the invention is to be defined solely by the appended claims when accorded a full range of equivalents, many variations and modifications naturally occurring to those of skill in the art from a perusal hereof.

Claims

We Claim:
1. A method for determining a signature of an object where the signature includes a first set of information and a second set of information, the method comprising the steps of:
(a) determining a reference identification for the object;
(b) determining the first set of information for the object from a first monitoring system;
(c) determining the second set of information for the object from a second monitoring system;
(d) comparing the second set of information with the reference identification; and
(e) determining the signature of the object based at least in part on the first set of information and the second set of information.
2. The method of Claim 1 wherein the first set of information comprises location information for the object.
3. The method of Claim 2 wherein the second set of information comprises identification information for the object.
4. The method of Claim 1 wherein the first monitoring system is selected from the group consisting of: video-based system, sound-based system, optics-based system, and combinations thereof.
5. The method of Claim 4 wherein the second monitoring system is selected from the group consisting of: local area network, Wi-Fi network, RF ranging system, proximity detection system, RF pattern matching system, RFID system, pseudolite system, GPS rebroadcast system, GNSS rebroadcast system, sound detection system, light modulation system, magnetic anomaly detection system, and combinations thereof.
6. The method of Claim 1 further comprising the step of:
(f) tracking the object based at least in part on the determined signature of the object.
7. The method of Claim 6 further comprising the step of:
(g) providing a third set of information to the object wherein the third set of information is selected from the group consisting of: location information, navigation information, sales promotion information, advertising information, a mobile device application, and combinations thereof.
8. The method of Claim 1 wherein the object is located outside.
9. The method of Claim 1 wherein the object is located in an enclosure selected from the group consisting of: a store, a shopping mall, a sports arena, and a convention hall.
10. The method of Claim 1 wherein the object is selected from the group consisting of: a person, a group of people, a vehicle, and an item in an inventory of items.
11. A method for determining a signature of an object where the signature includes a first set of information and a second set of information, the method comprising the steps of:
(a) determining a reference identification for the object;
(b) determining the first set of information for the object from a first monitoring system;
(c) determining a first portion of the second set of information for the object from a second monitoring system and determining a second portion of the second set of information from a third monitoring system;
(d) comparing the first and second portions of the second set of information with the reference identification;
(e) selecting either the first or second portion of the second set of information based on the comparison; and
(f) determining the signature of the object based at least in part on the first set of information and the selected portion of the second set of information.
12. The method of Claim 11 wherein the first set of information comprises location information for the object.
13. The method of Claim 12 wherein the second set of information comprises identification information for the object.
14. The method of Claim 11 wherein the first monitoring system is selected from the group consisting of: video-based system, sound-based system, optics-based system, and combinations thereof.
15. The method of Claim 14 wherein the second monitoring system is selected from the group consisting of: local area network, Wi-Fi network, RF ranging system, proximity detection system, RF pattern matching system, RFID system, pseudolite system, GPS rebroadcast system, GNSS rebroadcast system, sound detection system, light modulation system, magnetic anomaly detection system, and combinations thereof.
16. The method of Claim 11 further comprising the step of:
(g) tracking the object based at least in part on the determined signature of the object.
17. The method of Claim 16 further comprising the step of:
(h) providing a third set of information to the object wherein the third set of information is selected from the group consisting of: location information, navigation information, sales promotion information, advertising information, a mobile device application, and combinations thereof.
18. The method of Claim 11 wherein the object is located outside.
19. The method of Claim 11 wherein the object is located in an enclosure selected from the group consisting of: a store, a shopping mall, a sports arena, and a convention hall.
20. The method of Claim 11 wherein the object is selected from the group consisting of: a person, a group of people, a vehicle, and an item in an inventory of items.
21. A system for determining a signature of an object where the signature includes a first set of information and a second set of information, the system comprising:
circuitry for storing a reference identification for the object;
a first monitoring system for determining the first set of information for the object;
a second monitoring system for determining the second set of information for the object;
a comparator for comparing the second set of information with the reference identification;
circuitry for determining the signature of the object based at least in part on the first set of information and the second set of information;
circuitry for tracking the object based at least in part on the determined signature of the object; and
circuitry for providing a third set of information to the object wherein the third set of information is selected from the group consisting of: location information, navigation information, sales promotion information, advertising information, a mobile device application, and combinations thereof.
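The method of Claim 1 can be illustrated schematically as follows. The classes and identifiers below are hypothetical stand-ins for the claimed monitoring systems, not an implementation from the disclosure:

```python
# Schematic illustration of Claim 1, steps (a)-(e): a signature combines
# location information from a first monitoring system (e.g., video-based)
# with identification information from a second (e.g., Wi-Fi), validated
# against a previously determined reference identification.

class VideoSystem:                 # hypothetical first monitoring system
    def __init__(self, fixes): self.fixes = fixes
    def locate(self, obj): return self.fixes.get(obj)

class WifiSystem:                  # hypothetical second monitoring system
    def __init__(self, ids): self.ids = ids
    def identify(self, obj): return self.ids.get(obj)

def determine_signature(reference_id, first_system, second_system, obj):
    first_info = first_system.locate(obj)      # step (b): location information
    second_info = second_system.identify(obj)  # step (c): identification information
    if second_info != reference_id:            # step (d): compare with reference
        return None                            # identity not confirmed
    return {"location": first_info, "id": second_info}  # step (e): the signature

video = VideoSystem({"shopper-7": (12.0, 3.5)})
wifi = WifiSystem({"shopper-7": "aa:bb:cc:dd:ee:ff"})
print(determine_signature("aa:bb:cc:dd:ee:ff", video, wifi, "shopper-7"))
```

Once a signature is determined, it can serve as the key for the tracking and information-delivery steps of the dependent claims.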
PCT/US2014/038806 2013-05-31 2014-05-20 System and method for mobile identification and tracking in location systems WO2015069320A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201361829793P 2013-05-31 2013-05-31
US61/829,793 2013-05-31

Publications (2)

Publication Number Publication Date
WO2015069320A2 true WO2015069320A2 (en) 2015-05-14
WO2015069320A3 WO2015069320A3 (en) 2015-07-02

Family

ID=53042282

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2014/038806 WO2015069320A2 (en) 2013-05-31 2014-05-20 System and method for mobile identification and tracking in location systems

Country Status (1)

Country Link
WO (1) WO2015069320A2 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1732247A4 (en) * 2004-03-03 2011-05-04 Nec Corp Positioning system, positioning method, and program thereof
KR100994840B1 (en) * 2009-11-27 2010-11-16 주식회사 케이티 Position determination method and system based on wlan rssi value
US8615254B2 (en) * 2010-08-18 2013-12-24 Nearbuy Systems, Inc. Target localization utilizing wireless and camera sensor fusion
KR20120072253A (en) * 2010-12-23 2012-07-03 한국전자통신연구원 Localization device and localization method
US8938257B2 (en) * 2011-08-19 2015-01-20 Qualcomm, Incorporated Logo detection for indoor positioning

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109691193B (en) * 2015-11-16 2021-02-09 埃森哲环球解决方案有限公司 Method and system for matching identifiers
US9973889B2 (en) 2015-11-16 2018-05-15 Accenture Global Solutions Limited Telecommunication network signal analysis for matching a mobile device cellular identifier with a mobile device network identifier
CN109691193A (en) * 2015-11-16 2019-04-26 埃森哲环球解决方案有限公司 Telecommunications network signal analysis for matching mobile device cellular identifiers to mobile device network identifiers
EP3169121A1 (en) * 2015-11-16 2017-05-17 Accenture Global Solutions Limited Telecommunication network signal analysis for matching a mobile device cellular identifier with a mobile device network identifier
CN107889215A (en) * 2017-12-01 2018-04-06 重庆邮电大学 Multistage positioning method and system based on mark management
CN107889215B (en) * 2017-12-01 2020-08-18 重庆邮电大学 Multilevel positioning method and system based on identification management
CN110109087A (en) * 2019-05-07 2019-08-09 中国科学院声学研究所 A kind of irregular investigative range display methods of sonar and system
WO2020240690A1 (en) * 2019-05-28 2020-12-03 日本電信電話株式会社 Data analysis system and data analysis method
JPWO2020240690A1 (en) * 2019-05-28 2020-12-03
JP7422755B2 (en) 2019-05-28 2024-01-26 日本電信電話株式会社 Data analysis system and data analysis method
EP3771995A1 (en) * 2019-07-31 2021-02-03 Palantir Technologies Inc. Determining object geolocations based on heterogeneous data sources
US11586660B2 (en) 2019-07-31 2023-02-21 Palantir Technologies Inc. Determining object geolocations based on heterogeneous data sources
EP3771994A1 (en) * 2019-07-31 2021-02-03 Palantir Technologies Inc. Determining geolocations of composite entities based on heterogeneous data sources
EP4343577A1 (en) * 2019-07-31 2024-03-27 Palantir Technologies Inc. Determining object geolocations based on heterogeneous data sources
US11966430B2 (en) 2019-07-31 2024-04-23 Palantir Technologies Inc. Determining geolocations of composite entities based on heterogeneous data sources
US12111862B2 (en) 2019-07-31 2024-10-08 Palantir Technologies Inc. Determining object geolocations based on heterogeneous data sources
US12361045B2 (en) 2019-07-31 2025-07-15 Palantir Technologies Inc. Determining geolocations of composite entities based on heterogeneous data sources
WO2021142017A1 (en) * 2020-01-06 2021-07-15 Misapplied Sciences, Inc. Transportation hub information system
US11315526B2 (en) 2020-01-06 2022-04-26 Misapplied Sciences, Inc. Transportation hub information system

Also Published As

Publication number Publication date
WO2015069320A3 (en) 2015-07-02

Similar Documents

Publication Publication Date Title
Farahsari et al. A survey on indoor positioning systems for IoT-based applications
EP3432653B1 (en) Method, system, and apparatus for determining and provisioning location information of wireless devices
Basiri et al. Indoor location based services challenges, requirements and usability of current solutions
US10750470B2 (en) Systems and methods for determining if a receiver is inside or outside a building or area
WO2015069320A2 (en) System and method for mobile identification and tracking in location systems
Hightower et al. Location systems for ubiquitous computing
US9420423B1 (en) RF beacon deployment and method of use
US9491584B1 (en) Hospitality venue navigation, guide, and local based services applications utilizing RF beacons
US9749780B2 (en) Method and apparatus for mobile location determination
CN106461752B (en) Adaptive location
US20130115969A1 (en) System and method for cell phone targeting and tracking
Varshavsky et al. Location in Ubiquitous Computing
KR20080035955A (en) Mobile Positioning Service System and Method Using RFI and Communication Network
US20170055118A1 (en) Location and activity aware content delivery system
CA2940966C (en) Systems and methods for tracking, marketing, and/or attributing interest in one or more real estate properties
KR101981465B1 (en) A method and platform for sending a message to a communication device associated with a moving object
Chen et al. Centimeter-Level Indoor Positioning With Facing Direction Detection for Microlocation-Aware Services
US9002376B2 (en) Systems and methods for gathering information about discrete wireless terminals
Shien et al. A secure mobile crowdsensing (MCS) location tracker for elderly in smart city
Lin et al. A RFID-Based Personal Navigation System for Multi-story Indoor Environments
Matshego Asset tracking, monitoring and recovery system based on hybrid radio frequency identification and global positioning system technologies
US9295098B1 (en) Methods and systems for facilitating data communication
Habib A survey on location systems for ubiquitous computing
Ray Bernard Applying Indoor Positioning Systems: A Primer for Integrators and Security Specialists
Farooq et al. Smart Phone Based Indoor Environment Awareness System

Legal Events

Date Code Title Description
122 Ep: pct application non-entry in european phase

Ref document number: 14860518

Country of ref document: EP

Kind code of ref document: A2