US20190289263A1 - Notifications by a network-connected security system based on content analysis - Google Patents
- Publication number
- US20190289263A1 (U.S. application Ser. No. 16/239,343)
- Authority
- US
- United States
- Prior art keywords
- network
- content
- event
- detected
- electronic device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B25/00—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
- G08B25/01—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium
- G08B25/10—Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems characterised by the transmission medium using wireless transmission systems
-
- G06K9/00718—
-
- G06K9/00771—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19654—Details concerning communication with a camera
- G08B13/19656—Network used to communicate with a camera, e.g. WAN, LAN, Internet
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0806—Configuration setting for initial configuration or provisioning, e.g. plug-and-play
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L41/00—Arrangements for maintenance, administration or management of data switching networks, e.g. of packet switching networks
- H04L41/08—Configuration management of networks or network elements
- H04L41/0803—Configuration setting
- H04L41/0813—Configuration setting characterised by the conditions triggering a change of settings
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L43/00—Arrangements for monitoring or testing data switching networks
- H04L43/08—Monitoring or testing based on specific metrics, e.g. QoS, energy consumption or environmental parameters
-
- G06K2009/00738—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/44—Event detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/183—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
Definitions
- Various embodiments concern computer programs and associated computer-implemented techniques for intelligently processing content generated by electronic devices such as security cameras, security lights, etc.
- Surveillance is the monitoring of behavior, activities, or other changing information for the purpose of influencing, managing, or protecting people/items in a given environment.
- surveillance requires that the given environment be monitored by means of electronic devices such as security cameras, security lights, etc.
- a variety of electronic devices may be distributed through the home environment to detect activities performed in/around the home.
- Wireless security cameras have proved to be very popular among modern consumers due to their low installation costs and flexible installation options. Moreover, many wireless security cameras can be mounted in locations that were previously unavailable to wired security cameras. Thus, consumers can readily set up home security systems for seasonal monitoring/surveillance (e.g., of pools, yards, garages, etc.).
- FIG. 1 is a diagram illustrating an example environment in which at least some operations described herein can be implemented;
- FIG. 2 is a diagram illustrating various functional components of an example electronic device configured to monitor various aspects of a surveilled environment;
- FIG. 3 is a diagram illustrating various functional components of an example base station associated with a network-connected security system configured to monitor various aspects of a surveilled environment;
- FIG. 4 is a plan view of a surveilled environment (e.g., a home) illustrating an example arrangement of devices associated with a network-connected security system;
- FIG. 5A is a diagram illustrating a network environment that includes a base station designed to receive content generated by one or more electronic devices arranged throughout a surveilled environment;
- FIG. 5B is a diagram illustrating a network environment that includes a security management platform that is supported by the network-accessible server system;
- FIG. 6 is an architecture flow diagram illustrating an environment including an analytics system for presenting security notifications at a client device based on analysis of content generated at electronic devices in a network-connected security system;
- FIG. 7 is a flow diagram illustrating an example process for detecting objects in captured image or video content;
- FIG. 8 is a flow diagram illustrating an example process for classifying objects detected in captured image or video content;
- FIG. 9 is a diagram illustrating how a distributed computing cluster can be utilized to process content;
- FIG. 10 is a diagram illustrating how MapReduce™ can be utilized in combination with Apache Hadoop™ in the distributed computing cluster depicted in FIG. 9;
- FIG. 11 is a diagram illustrating how content can be processed in batches;
- FIG. 12 is a flow diagram illustrating an example process for presenting notifications at a client device based on analysis of content generated at electronic devices in a network-connected security system; and
- FIG. 13 is a diagram illustrating an example of a computer processing system in which at least some operations described herein can be implemented.
- surveillance systems are connected to a computer server via a network. Some content generated by a security system may be examined locally (i.e., by the security system itself), while other content generated by the security system may be examined remotely (e.g., by the computer server).
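The local/remote split described above can be sketched as a simple routing rule: lightweight metadata is examined on the security system itself, while heavier content is forwarded to the computer server. The rule and the size threshold below are illustrative assumptions, not taken from the patent.

```python
def route_for_analysis(content_type, size_bytes, local_limit=1_000_000):
    """Return 'local' or 'remote' for a piece of generated content.

    Assumed rule: metadata and small payloads are examined locally by the
    security system; larger payloads (e.g., video) go to the server.
    """
    if content_type == "metadata" or size_bytes <= local_limit:
        return "local"
    return "remote"
```

For example, a multi-megabyte video clip would be routed to the server, while an illumination-event record would stay on the base station.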
- a network-connected surveillance system (also referred to as a “security system”) includes a base station and one or more electronic devices.
- the electronic device(s) can be configured to monitor various aspects of a surveilled physical environment (also referred to herein as a “surveilled environment”).
- security cameras may be configured to record video upon detecting movement, while security lights may be configured to illuminate the surveilled environment upon detecting movement.
- Different types of electronic devices can create different types of content.
- the security cameras may generate audio data and/or video data, while the security lights may generate metadata specifying a time at which each illumination event occurred, a duration of each illumination event, etc.
- the base station may be responsible for transmitting the content generated by the electronic device(s) to a network-accessible computer server.
- each electronic device may provide data to the base station, which in turn provides at least some of the data to the network-accessible computer server.
- security systems support features such as high-quality video recording, live video streaming, two-way audio transmission, cloud-based storage of recordings, instant alerts, etc. These features enable individuals to gain an in-depth understanding of what activities are occurring within the environment being surveilled.
- security systems having these features also experience drawbacks.
- the security system may alert an administrator (e.g., a home owner). Certain alerts are not necessary in that the detected event does not pose any security risk. For instance, if the administrator observes that motion detection is triggered by movement of a bird, the administrator may determine that an alert is not needed. Conversely, if the administrator observes that motion detection is triggered by movement of a coyote, the administrator may determine that an alert is needed. Similar conclusions may be drawn for other routine events (e.g., mail delivery by postal worker). Administrators may simply ignore those alerts that are not needed (e.g., by simply deleting the corresponding notifications); however, an abundance of false positive notifications will tend to reduce the effectiveness of the security system as the administrator becomes overwhelmed by notifications and is not able to respond effectively.
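The alert-suppression decision above (notify for a coyote, not for a bird) can be sketched as a filter over classifier output. The class names, benign set, and confidence threshold are hypothetical stand-ins for whatever the deployed classifier produces.

```python
# Assumed benign classes; a real deployment would make these configurable.
BENIGN = {"bird", "squirrel", "postal_worker"}

def should_notify(detections):
    """Return True if any confident detection is not a known-benign class.

    `detections` is a list of (label, confidence) pairs produced by an
    upstream object classifier.
    """
    for label, confidence in detections:
        if confidence < 0.5:
            continue  # too uncertain to act on
        if label not in BENIGN:
            return True  # e.g., "coyote" or "person" warrants an alert
    return False
```

Under this rule, `should_notify([("bird", 0.9)])` suppresses the alert while `should_notify([("coyote", 0.8)])` lets it through.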
- a base station may employ cloud-based analytics to verify content (e.g., a video clip) before generating a notification or initiating a peer-to-peer stream to deliver the content to a user device.
- the base station may be required to contact the network-connected computer server, which can perform the processing needed to filter out unnecessary alerts.
- the network-connected computer server is one of multiple network-connected computer servers that form a server system.
- the server system may balance the load amongst the multiple network-connected computer servers (e.g., by intelligently distributing images for processing) to ensure the verification process is completed with low latency.
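One way to realize the load balancing described above is to route each image to the server with the fewest pending jobs. This is a minimal sketch of that idea; the patent does not specify a balancing algorithm, and the server names are hypothetical.

```python
import heapq

def distribute(images, servers):
    """Assign images to servers, always picking the least-loaded server.

    A min-heap of (pending_jobs, server_name) pairs makes each pick O(log n).
    """
    heap = [(0, name) for name in sorted(servers)]
    heapq.heapify(heap)
    assignment = {}
    for image in images:
        pending, name = heapq.heappop(heap)
        assignment[image] = name
        heapq.heappush(heap, (pending + 1, name))
    return assignment
```

With two servers and three images, the third image goes back to the first server once both have equal load.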
- Similar cloud-based analytics can be employed on content generated by electronic devices to visually detect intruders, audibly detect sounds indicative of a break-in (e.g., an unrecognized voice or a window breaking), audibly detect sounds indicative of catastrophic events such as fire or earthquakes, etc. Further, such analysis may be performed in real time (or near real time) as content is generated so that an administrator is able to quickly respond to notifications of detected events.
- references in this description to “an embodiment” or “one embodiment” means that the particular feature, function, structure, or characteristic being described is included in at least one embodiment. Occurrences of such phrases do not necessarily refer to the same embodiment, nor are they necessarily referring to alternative embodiments that are mutually exclusive of one another.
- the words “comprise” and “comprising” are to be construed in an inclusive sense rather than an exclusive or exhaustive sense (i.e., in the sense of “including but not limited to”).
- the terms “connected,” “coupled,” or any variant thereof are intended to include any connection or coupling between two or more elements, either direct or indirect.
- the coupling/connection can be physical, logical, or a combination thereof.
- devices may be electrically or communicatively coupled to one another despite not sharing a physical connection.
- module refers broadly to software components, hardware components, and/or firmware components. Modules are typically functional components that can generate useful data or other output(s) based on specified input(s). A module may be self-contained.
- a computer program may include one or more modules. Thus, a computer program may include multiple modules responsible for completing different tasks or a single module responsible for completing all tasks.
- FIG. 1 is a block diagram illustrating an example environment in which the introduced technique for analysis of content can be implemented.
- the example environment 100 includes a network-connected security system that includes base station 105 and one or more electronic devices 110 such as cameras 110 a , audio recorder devices 110 b , security lights 110 c , or any other types of security devices.
- the base station 105 and the one or more electronic devices 110 can be connected to each other via a local network 125 .
- the local network 125 can be a local area network (LAN).
- the local network 125 is a WLAN, such as a home Wi-Fi network, created by one or more wireless access points (APs) 120 .
- functionality associated with the base station 105 and/or wireless AP 120 is implemented in software instantiated at a wireless networking device.
- the system may include multiple wireless networking devices as nodes, wherein each of the wireless networking devices is operable as a wireless AP 120 and/or base station 105 .
- the one or more electronic devices 110 and the base station 105 can be connected to each other wirelessly, e.g., over Wi-Fi, or using wired means.
- the base station 105 and the one or more electronic devices 110 can be connected to each other wirelessly via the one or more wireless APs 120 , or directly with each other without the wireless AP 120 , e.g., using Wi-Fi direct, Wi-Fi ad hoc or similar wireless connection technologies or via wired connections.
- the base station 105 can be connected to the local network 125 using a wired means or wirelessly.
- the one or more electronic devices 110 can be battery powered or powered from a wall outlet.
- the one or more electronic devices 110 can include one or more sensors such as motion sensors that can activate, for example, the capturing of audio or video, the encoding of captured audio or video, and/or transmission of an encoded audio or video stream when motion is detected.
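The trigger behavior above (motion activates capture, encoding, and transmission) can be sketched as a small pipeline. All function names here are hypothetical stand-ins for device firmware, used only to illustrate the sequencing.

```python
def on_motion_detected(capture, encode, transmit):
    """Run the capture -> encode -> transmit pipeline for one motion event."""
    raw = capture()          # e.g., read frames from the optical sensor
    stream = encode(raw)     # e.g., encode to H.264
    transmit(stream)         # e.g., send to the base station over Wi-Fi
    return stream

# Example wiring with stub callables standing in for real hardware:
sent = []
stream = on_motion_detected(
    capture=lambda: "raw-frames",
    encode=lambda raw: f"h264({raw})",
    transmit=sent.append,
)
```

The same skeleton applies whether the triggered action is video capture, audio capture, or illumination.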
- Cameras 110 a may capture video, encode the video as a video stream, and wirelessly transmit the video stream via local network 125 for delivery to a user device 102 .
- certain cameras may include integrated encoder components.
- the encoder component may be a separate device coupled to the camera 110 a .
- an analog camera may be communicatively coupled to the base station 105 and/or wireless AP 120 via an analog to digital encoder device (not shown in FIG. 1 ).
- the base station 105 and/or wireless APs 120 may include encoding components to encode and/or transcode video.
- Encoder components may include any combination of software and/or hardware configured to encode video information.
- Such encoders may be based on any number of different standards such as H.264, H.265, VP8, VP9, Daala, MJPEG, MPEG4, Windows Media Video (WMV), etc. for encoding video information.
- the video stream from a given camera 110 a may be one of several different formats such as .AVI, .MP4, .MOV, .WMA, .MKV, etc.
- the video stream can include audio as well if the camera 110 a includes or is communicatively coupled to an audio device 110 b (e.g., a microphone).
- cameras 110 a can include infrared (IR) light emitting diode (LED) sensors, which can provide night-vision capabilities.
- audio recording devices 110 b may capture audio, encode the audio as an audio stream, and wirelessly transmit the audio stream via local network 125 for delivery to a user device 102 .
- certain audio recording devices may include integrated encoder components.
- the encoder component may be a separate device coupled to the audio recording device 110 b .
- an analog audio recording device may be communicatively coupled to the base station 105 and/or wireless AP 120 via an analog to digital encoder device (not shown in FIG. 1 ).
- the base station 105 and/or wireless APs 120 may include encoding components to encode and/or transcode audio.
- Encoder components may include any combination of software and/or hardware configured to encode audio information.
- Such encoders may be based on any number of different standards such as Free Lossless Audio Codec (FLAC), MPEG-4 Audio, Windows Media Audio (WMA), etc. for encoding audio information. Accordingly, depending on the codec used, the audio stream from a given audio recording device 110 b may be in one of several different formats such as .FLAC, .WMA, .AAC, etc.
- the security system can include just a single type of electronic device (e.g., cameras 110 a ) or two or more different types of electronic devices 110 which can be installed at various locations of a building.
- the various electronic devices 110 of the security system may include varying features and capabilities.
- some electronic devices 110 may be battery powered while another may be powered from the wall outlet.
- some electronic devices 110 may connect wirelessly to the base station 105 while others rely on wired connections.
- electronic devices of a particular type (e.g., cameras 110 a ) included in the security system may also include varying features and capabilities.
- a first camera 110 a may include integrated night vision, audio recording, and motion sensing capabilities while a second camera 110 a only includes video capture capabilities.
- the base station 105 can be a computer system that serves as a gateway to securely connect the one or more electronic devices 110 to an external network 135 , for example, via one or more wireless APs 120 .
- the external network 135 may comprise one or more networks of any type including packet switched communications networks, such as the Internet, World Wide Web portion of the Internet, extranets, intranets, and/or various other types of telecommunications networks such as cellular phone and data networks, plain old telephone system (POTS) networks, etc.
- the base station 105 can provide various features such as long-range wireless connectivity to the electronic devices 110 , a local storage device 115 , a siren, connectivity to network-attached storage (NAS), and enhanced battery life for certain electronic devices 110 , e.g., by configuring certain electronic devices 110 for efficient operation and/or by maintaining efficient communications between the base station 105 and such electronic devices 110 .
- the base station 105 can be configured to store the content (e.g., audio and/or video) captured by some electronic devices 110 in either the local storage device 115 or a network-accessible storage 148 .
- the base station 105 can be configured to generate a sound alarm from the siren when an intrusion is detected by the base station 105 based on the video streams received from the cameras 110 a .
- the base station 105 can create its own network within the local network 125 , so that the one or more electronic devices 110 do not overload or consume the network bandwidth of the local network 125 .
- the local network 125 can include multiple access points 120 to increase wireless coverage of the base station 105 , which may be beneficial or required in cases where the electronic devices 110 are wirelessly connected and are spread over a large area.
- the local network 125 can provide wired and/or wireless coverage to user devices (e.g., user device 102 ), for example, via APs 120 .
- a user device 102 can connect to the base station 105 , for example, via the local network 125 if located close to the base station 105 and/or wireless AP 120 .
- the user device 102 can connect to the base station 105 via network 135 (e.g., the Internet).
- the user device 102 can be any computing device that can connect to a network and play video content, such as a smartphone, a laptop, a desktop, a tablet personal computer (PC), or a smart TV.
- the base station 105 receives the request and, in response, obtains the encoded stream(s) from one or more of the electronic devices 110 and transmits the encoded stream(s) to the user device 102 for presentation.
- a playback application in the user device 102 decodes the encoded stream and plays the audio and/or video to the user 103 , for example, via speakers and/or a display of the user device 102 .
- the base station 105 may include an encoding/transcoding component that performs a coding process on audio and/or video received from the electronic devices 110 before streaming to the user device 102 .
- a transcoder at the base station 105 transcodes a stream received from an electronic device 110 (e.g., a video stream from a camera 110 a ), for example, by decoding the encoded stream and re-encoding it into another format to generate a transcoded stream that it then streams to the user device 102 .
- the audio and/or video stream received at the user device 102 may be a real-time stream and/or a recorded stream.
- a transcoder may transcode an encoded stream received from an electronic device 110 and stream the transcoded stream to the user device 102 in real time or near real time (i.e., within several seconds) as the audio and/or video is captured at the electronic device 110 .
- audio and/or video streamed by base station 105 to the user device 102 may be retrieved from storage such as local storage 115 or a network-accessible storage 148 .
- the base station 105 can stream audio and/or video to the user device 102 in multiple ways.
- the base station 105 can stream to the user device 102 using a peer-to-peer (P2P) streaming technique.
- P2P streaming when the playback application on the user device 102 requests the stream, the base station 105 and the user device 102 may exchange signaling information, for example via network 135 or a network-accessible server system 145 , to determine location information of the base station 105 and the user device 102 , to find a best path and establish a P2P connection to route the stream from the base station 105 to the user device 102 .
- After establishing the connection, the base station 105 streams the audio and/or video to the user device 102 , eliminating the additional bandwidth cost of delivering the audio and/or video stream from the base station 105 to a network-accessible server computer 146 in a network-accessible server system 145 and of streaming from the network-accessible server computer 146 to the user device 102 .
- a network-accessible server computer 146 in the network-accessible server system 145 may keep a log of available peer node servers to route streams and establish the connection between the user device 102 and other peers.
- the server 146 may function as a signaling server or can include signaling software whose function is to maintain and manage a list of peers and handle the signaling between the base station 105 and the user device 102 .
- the server 146 can dynamically select the best peers based on geography and network topology.
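Selecting the "best peers based on geography and network topology" could be implemented, for example, by scoring each candidate relay on measured latency and hop count. The scoring weights below are assumptions for illustration; the patent does not prescribe a metric.

```python
def best_peer(peers):
    """Pick the peer with the lowest composite score.

    `peers` maps a peer name to a (latency_ms, hop_count) measurement.
    The 10x hop weighting is an assumed tuning parameter.
    """
    def score(item):
        _, (latency_ms, hops) = item
        return latency_ms + 10 * hops
    name, _ = min(peers.items(), key=score)
    return name
```

A signaling server maintaining the peer list could run such a ranking each time a stream is requested.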
- the network-accessible server system 145 is a network of resources from a centralized third-party provider using Wide Area Networking (WAN) or Internet-based access technologies.
- the network-accessible server system 145 is configured as or operates as part of a cloud network, in which the network and/or computing resources are shared across various customers or clients. Such a cloud network is distinct, independent, and different from that of the local network 125 .
- the local network 125 may include a multi-band wireless network comprising one or more wireless networking devices (also referred to herein as nodes) that function as wireless APs 120 and/or a base station 105 .
- base station 105 may be implemented at a first wireless networking device that functions as a gateway and/or router. That first wireless networking device may also function as a wireless AP.
- Other wireless networking devices may function as satellite wireless APs that are wirelessly connected to each other via a backhaul link.
- the multiple wireless networking devices provide wireless network connections (e.g., using Wi-Fi) to one or more wireless client devices such as one or more wireless electronic devices 110 or any other devices such as desktop computers, laptop computers, tablet computers, mobile phones, wearable smart devices, game consoles, smart home devices, etc.
- the wireless networking devices together provide a single wireless network (e.g., network 125 ) configured to provide broad coverage to the client devices.
- the system of wireless networking devices can dynamically optimize the wireless connections of the client devices without the need of reconnecting.
- An example of the multi-band wireless networking system is the NETGEAR® Orbi® system.
- Such systems are exemplified in U.S. patent application Ser. No. 15/287,711, filed Oct. 6, 2016, and Ser. No. 15/271,912, filed Sep. 21, 2016, now issued as U.S. Pat. No. 9,967,884, both of which are hereby incorporated by reference in their entireties for all purposes.
- the wireless networking devices of a multi-band wireless networking system can include radio components for multiple wireless bands, such as a 2.4 GHz frequency band, a low 5 GHz frequency band, and a high 5 GHz frequency band.
- at least one of the bands can be dedicated to the wireless communications among the wireless networking devices of the system.
- Such wireless communications among the wireless networking devices of the system are referred to herein as “backhaul” communications.
- Any other bands can be used for wireless communications between the wireless networking devices of the system and client devices such as cameras 110 connecting to the system.
- the wireless communications between the wireless networking devices of the system and client devices are referred to as “fronthaul” communications.
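The backhaul/fronthaul split described above amounts to reserving one band for node-to-node traffic and leaving the rest for clients. This sketch makes that assignment explicit; the band names and the choice of dedicated band are illustrative assumptions.

```python
BANDS = ["2.4GHz", "low-5GHz", "high-5GHz"]
BACKHAUL_BAND = "high-5GHz"  # assumed dedicated backhaul band

def assign_band(is_networking_node):
    """Return the band a device should use.

    Networking nodes (satellite APs, base station) use the backhaul band;
    client devices such as cameras get a remaining fronthaul band.
    """
    if is_networking_node:
        return BACKHAUL_BAND
    return next(b for b in BANDS if b != BACKHAUL_BAND)
```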
- FIG. 2 shows a high-level functional block diagram illustrating the architecture of an example electronic device 200 (e.g., similar to electronic devices 110 described with respect to FIG. 1 ) that monitors various aspects of a surveilled environment.
- the electronic device 200 may generate content while monitoring the surveilled environment, and then transmit the content to a base station for further processing.
- the electronic device 200 (also referred to as a “recording device”) can include one or more processors 202 , a communication module 204 , an optical sensor 206 , a motion sensing module 208 , a microphone 210 , a speaker 212 , a light source 214 , and one or more storage modules 216 .
- the processor(s) 202 can execute instructions stored in the storage module(s) 216 , which can be any device or mechanism capable of storing information.
- a single storage module includes multiple computer programs for performing different operations (e.g., image recognition, noise reduction, filtering), while in other embodiments each computer program is hosted within a separate storage module.
- the communication module 204 can manage communication between various components of the electronic device 200 .
- the communication module 204 can also manage communications between the electronic device 200 and a base station, another electronic device, etc.
- the communication module 204 may facilitate communication with a mobile phone, tablet computer, wireless access point (WAP), etc.
- the communication module 204 may facilitate communication with a base station responsible for communicating with a network-connected computer server; more specifically, the communication module 204 may be configured to transmit content generated by the electronic device 200 to the base station for processing.
- the base station may examine the content itself or transmit the content to the network-connected computer server for examination.
- the optical sensor 206 can be configured to generate optical data related to the surveilled environment.
- optical sensors include charged-coupled devices (CCDs), complementary metal-oxide-semiconductors (CMOSs), infrared detectors, etc.
- the optical sensor 206 is configured to generate a video recording of the surveilled environment responsive to, for example, determining that movement has been detected within the surveilled environment.
- the optical data generated by the optical sensor 206 is used by the motion sensing module 208 to determine whether movement has occurred.
- the motion sensing module 208 may also consider data generated by other components (e.g., the microphone) as input.
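One simple way the motion sensing module could use the optical data is frame differencing: flag motion when enough pixels change between consecutive frames. The thresholds below are illustrative assumptions, not values from the patent.

```python
def motion_detected(prev_frame, curr_frame, pixel_delta=25, min_changed=3):
    """Compare two equal-length grayscale pixel sequences.

    Returns True when at least `min_changed` pixels differ by more than
    `pixel_delta` between the two frames.
    """
    changed = sum(
        1 for a, b in zip(prev_frame, curr_frame) if abs(a - b) > pixel_delta
    )
    return changed >= min_changed
```

A production detector would also suppress noise (e.g., lighting changes), which is one reason downstream content analysis is still needed to filter false positives.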
- an electronic device 200 may include multiple optical sensors of different types (e.g., visible light sensors and/or IR sensors for night vision).
- the microphone 210 can be configured to record sounds within the surveilled environment.
- the electronic device 200 may include multiple microphones.
- the microphones may be omnidirectional microphones designed to pick up sound from all directions.
- the microphones may be directional microphones designed to pick up sounds coming from a specific direction. For example, if the electronic device 200 is intended to be mounted in a certain orientation (e.g., such that the optical sensor 206 is facing a doorway), then the electronic device 200 may include at least one microphone arranged to pick up sounds originating from near the point of focus.
- the speaker 212 can be configured to convert an electrical audio signal into a corresponding sound that is projected into the surveilled environment. Together with the microphone 210 , the speaker 212 enables an individual located within the surveilled environment to converse with another individual located outside of the surveilled environment. For example, the other individual may be a homeowner who has a computer program (e.g., a mobile application) installed on her mobile phone for monitoring the surveilled environment.
- the light source 214 can be configured to illuminate the surveilled environment.
- the light source 214 may illuminate the surveilled environment responsive to a determination that movement has been detected within the surveilled environment.
- the light source 214 may generate metadata specifying a time at which each illumination event occurred, a duration of each illumination event, etc. This metadata can be examined by the processor(s) 202 and/or transmitted by the communication module 204 to the base station for further processing.
- electronic devices 110 may be configured as different types of devices such as cameras 110 a , audio recording devices 110 b , security lights 110 c , and other types of devices. Accordingly, embodiments of the electronic device 200 may include some or all of these components, as well as other components not shown here. For example, if the electronic device 200 is a security camera 110 a , then some components (e.g., the microphone 210 , speaker 212 , and/or light source 214 ) may not be included. As another example, if the electronic device 200 is a security light 110 c , then other components (e.g., the camera 208 , microphone 210 , and/or speaker 212 ) may not be included.
- FIG. 3 is a high-level functional block diagram illustrating an example base station 300 configured to process content generated by electronic devices (e.g., electronic device 200 of FIG. 2 ) and forward the content to other computing devices such as a network-connected computer server, etc.
- the base station 300 can include one or more processors 302 , a communication module 304 , and one or more storage modules 306 .
- a single storage module includes multiple computer programs for performing different operations (e.g., image recognition, noise reduction, filtering), while in other embodiments each computer program is hosted within a separate storage module.
- the base station 300 may include a separate storage module for each electronic device within its corresponding surveillance environment, each type of electronic device within its corresponding surveillance environment, etc.
- Such a categorization enables the base station 300 to readily identify the content/data generated by security cameras, security lights, etc.
- the content/data generated by each type of electronic device may be treated differently by the base station 300 .
- the base station 300 may locally process sensitive content/data but transmit less sensitive content/data for processing by a network-connected computer server.
- the base station 300 processes content/data generated by the electronic devices, for example, to analyze the content to understand what events are occurring within the surveilled environment, while in other embodiments the base station 300 transmits the content/data to a network-connected computer server responsible for performing such analysis.
- the communication module 304 can manage communication with electronic device(s) within the surveilled environment and/or the network-connected computer server. In some embodiments, different communication modules handle these communications.
- the base station 300 may include one communication module for communicating with the electronic device(s) via a short-range communication protocol, such as Bluetooth® or Near Field Communication, and another communication module for communicating with the network-connected computer server via a cellular network or the Internet.
- FIG. 4 depicts a network security system that includes a variety of electronic devices configured to collectively monitor a surveilled environment 400 (e.g., the interior and exterior of a home).
- the variety of electronic devices includes multiple security lights 402 a - b , multiple external security cameras 404 a - b , and multiple internal security cameras 406 a - b .
- the network security system could include any number of security lights, security cameras, and other types of electronic devices.
- Some or all of these electronic devices are communicatively coupled to a base station 408 that can be located in or near the surveilled environment 400 .
- Each electronic device can be connected to the base station 408 via a wired communication channel or a wireless communication channel.
- FIG. 5A illustrates an example network environment 500 a that includes a base station 502 designed to receive content generated by one or more electronic devices arranged throughout a surveilled environment.
- the base station 502 can transmit at least some of the content to a network-accessible server system 506 .
- the network-accessible server system 506 may supplement the content based on information inferred from content uploaded by other base stations corresponding to other surveilled environments.
- the base station 502 and the network-accessible server system 506 can be connected to one another via a computer network 504 a .
- the computer network 504 a may include a personal area network (PAN), local area network (LAN), wide area network (WAN), metropolitan area network (MAN), cellular network, the Internet, or any combination thereof.
- FIG. 5B illustrates an example network environment 500 b that includes a security management platform 508 that is supported by the network-accessible server system 506 .
- Users can interface with the security management platform 508 via an interface 510 .
- a homeowner may examine content generated by electronic devices arranged proximate her home via the interface 510 .
- the security management platform 508 may be responsible for parsing content/data generated by electronic device(s) arranged throughout a surveilled environment to detect occurrences of events within the surveilled environment.
- the security management platform 508 may also be responsible for creating interfaces through which an individual can view content (e.g., video clips and audio clips), initiate an interaction with someone located in the surveilled environment, manage preferences, etc.
- the security management platform 508 may reside in a network environment 500 b .
- the security management platform 508 may be connected to one or more networks 504 b - c .
- networks 504 b - c can include PANs, LANs, WANs, MANs, cellular networks, the Internet, etc.
- the security management platform 508 can be communicatively coupled to computing device(s) over a short-range communication protocol, such as Bluetooth® or NFC.
- the interface 510 is preferably accessible via a web browser, desktop application, mobile application, or over-the-top (OTT) application. Accordingly, the interface 510 may be viewed on a personal computer, tablet computer, personal digital assistant (PDA), mobile phone, game console, music player, wearable electronic device (e.g., a watch or fitness accessory), network-connected (“smart”) electronic device, (e.g., a television or home assistant device), virtual/augmented reality system (e.g., a head-mounted display), or some other electronic device.
- a security system may detect too many false instances of motion because it relies on a signal generated by an overly sensitive passive infrared sensor (PIS).
- a security system may detect too many false instances of audio because it relies on an overly sensitive audio sensor (which is configured to prompt recording by the security camera).
- a network-connected security system can be configured to filter those notifications deemed likely to be unnecessary.
- the “filtering” of notifications may include receiving notifications and only forwarding a portion of the received notifications deemed necessary to a user.
- “filtering” notifications may refer to detecting multiple events that would otherwise result in notifications to a user (e.g., detected motion) and only generating notifications based on a subset of the detected events for presentation to a user.
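- the filtering described above can be illustrated as a simple predicate over detected events. The following sketch is a hypothetical illustration only; the event fields (`object_class`, `known_identity`) and the set of notification-worthy classes are assumptions, not part of the disclosed system:

```python
# Hypothetical sketch of notification "filtering": multiple events are
# detected, but only a subset generates user-facing notifications.
# Field names and the class set are illustrative assumptions.

NOTIFY_CLASSES = {"person", "vehicle"}  # classes deemed notification-worthy

def filter_events(events):
    """Return only the detected events that should produce notifications."""
    return [e for e in events
            if e["object_class"] in NOTIFY_CLASSES
            and not e.get("known_identity", False)]

events = [
    {"object_class": "cat"},                              # stray animal: drop
    {"object_class": "person", "known_identity": True},   # household member: drop
    {"object_class": "person", "known_identity": False},  # unknown person: notify
]
notifications = filter_events(events)
```

Under this sketch, three detected-motion events collapse to a single notification, matching the goal of notifying only about significant events.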
- the base station can apply an algorithm that allows it to detect objects included in a video clip.
- the base station can apply an algorithm that allows it to detect the scene depicted in the video clip. The base station can then remove undesired motion from the video clip.
- the base station can ignore those movements that are indicative of events the corresponding individual does not wish to be notified about. Thereafter, the base station can generate notifications only for those events that survive the “filtering” process. Such action ensures that the corresponding individual is only notified about significant events.
- FIG. 6 shows a flow diagram of a technique for processing content generated by electronic devices 110 before generating a notification and/or initiating a stream for delivery to a client device 102 .
- Some or all of the steps described with respect to FIG. 6 may be executed at least in part by an analytics system 604 deployed on a base station 105 , at a network-accessible server system 145 , at one or more electronic devices 110 , or any combination thereof.
- the analytics system 604 depicted in FIG. 6 refers to a functional entity that may include hardware and/or software elements at any one or more of the components depicted in the example operation environment 100 depicted in FIG. 1 .
- the embodiment is described in the context of a security camera, those skilled in the art will recognize that similar techniques could also be employed with other types of electronic devices.
- one or more security cameras 110 a generate content 602 , for example, by capturing video and encoding the captured video into digital information.
- the content 602 may include, for example, one or more digital files including the encoded video.
- the content 602 is then fed into an analytics system 604 for processing according to the introduced technique.
- the step of feeding the content 602 into the analytics system 604 may include a camera 110 a transmitting the generated content 602 over a computer network (e.g., a wired or wireless local network 125 ) to a base station 105 .
- the base station 105 may then forward the received content 602 to a network-accessible server system 145 that implements the analytics system 604 .
- the camera 110 a and/or base station 105 may include processing components that implement at least a portion of the analytics system 604 .
- content 602 is fed into the analytics system 604 continually as it is generated.
- the camera 110 a may generate a digital video stream that is transmitted to the analytics system 604 for processing by way of the base station 105 .
- content 602 is continually generated by the camera 110 a .
- a camera 110 a that is powered by a wall outlet may continually capture video, encode the captured video into a digital stream, and transmit that digital stream for processing by the analytics system.
- the camera 110 a may be configured to generate content 602 at periodic intervals and/or in response to detecting certain conditions or events.
- the camera 110 a may be equipped with, or in communication with, a motion detector that triggers the capturing and encoding of video when motion in the surveilled environment is detected.
- the camera 110 a may begin generating content 602 by capturing video and encoding the captured video.
- the video camera 110 a may transmit small portions of content (e.g., short video clips or still images) at periodic intervals (e.g., every few seconds).
- Generating content 602 at periodic intervals and/or in response to detected events may conserve energy at the camera 110 a , which may be particularly beneficial for battery-powered cameras 110 a . Generating content 602 at periodic intervals and/or in response to detected events may also reduce resource requirements to process the content, for example, when generating notifications. For example, in the case of a surveilled environment, the system may be configured based on an assumption that the video of the surveilled environment is of no interest to an administrator unless the video captures an object in motion.
- content 602 is fed into the analytics system 604 periodically (e.g., daily, weekly, or monthly) or in response to detected events. For example, even if the content 602 is continually generated, such content 602 may be held in storage (e.g., at local storage 115 or a NAS 148 ) before being released (periodically or in response to detected events) for analysis by the analytics system 604 .
- the analytics system 604 processes the received content 602 to perform the notification filtering technique described herein.
- the analytics system 604 may process the received content 602 to detect whether an event has occurred that necessitates a notification to a user.
- processing of the received content 602 may be carried out by processors located at the base station 105 , a network-accessible server system 145 , or any combination thereof.
- processing of content 602 may include a content recognition process 606 to gain some level of understanding of the information captured in the content 602 .
- the content recognition process 606 may apply computer vision techniques to detect physical objects captured in the content 602 .
- FIG. 7 shows a flow diagram that illustrates an example high-level process 700 for image processing-based object detection that involves, for example, processing content 602 to detect identifiable feature points (step 702 ), identifying putative point matches (step 704 ), and detecting an object based on the putative point matches (step 706 ).
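- the three steps of process 700 can be sketched with a toy ratio-test matcher. The descriptors below are 2-D tuples and the thresholds are assumptions; a real implementation would use learned or engineered feature descriptors:

```python
# Illustrative sketch of steps 702-706: detect feature points (here,
# given as toy descriptors), identify putative point matches via a
# nearest-neighbor ratio test, and declare an object detected when
# enough putative matches survive. All values are assumptions.

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def putative_matches(query_desc, scene_desc, ratio=0.8):
    """Accept a match only if the best scene descriptor is clearly
    closer than the second best (a Lowe-style ratio test)."""
    matches = []
    for qi, q in enumerate(query_desc):
        dists = sorted((distance(q, s), si) for si, s in enumerate(scene_desc))
        if len(dists) > 1 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1]))
    return matches

def object_detected(matches, min_matches=3):
    return len(matches) >= min_matches

query = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5), (3.0, 3.0)]   # object features
scene = [(0.1, 0.0), (1.0, 1.1), (2.1, 0.5), (3.0, 2.9), (9.0, 9.0)]
matches = putative_matches(query, scene)
```

Here all four query points find unambiguous counterparts in the scene, so the object is reported as detected.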
- the content recognition process 606 may further classify such detected objects. For example, given one or more classes of objects (e.g., humans, buildings, cars, animals, etc.), the content recognition process 606 may process the video content 602 to identify instances of various classes of physical objects occurring in the captured video of the surveilled environment.
- the content recognition process 606 may employ deep learning-based video recognition to classify detected objects.
- raw image data is input as a matrix of pixels.
- a first representational layer may abstract the pixels and encode edges.
- a second layer may compose and encode arrangements of edges, for example, to detect objects.
- a third layer may encode identifiable features such as a nose and eyes.
- a fourth layer may recognize that the image includes a face based on the arrangement of identifiable features.
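- the layered abstraction above (pixels, then edges, then features, then a face decision) can be mimicked with a chain of stand-in functions. These are not a real neural network; each "layer" is a toy transformation chosen only to show the pipeline shape:

```python
# Conceptual sketch of the representational layers described above.
# Each function stands in for one layer: pixels -> edges -> features
# -> face decision. All thresholds are illustrative assumptions.

def encode_edges(pixels):
    # toy "edge" layer: adjacent-pixel differences along each row
    return [[row[i + 1] - row[i] for i in range(len(row) - 1)]
            for row in pixels]

def detect_features(edges):
    # toy "feature" layer: count of strong edges per row
    return [sum(1 for e in row if abs(e) > 0.5) for row in edges]

def recognize_face(features, min_feature_rows=2):
    # toy decision layer: enough rows contain strong features
    return sum(1 for f in features if f > 0) >= min_feature_rows

pixels = [[0.0, 0.9, 0.1],
          [0.8, 0.1, 0.9],
          [0.2, 0.2, 0.2]]
decision = recognize_face(detect_features(encode_edges(pixels)))
```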
- FIG. 8 shows a flow diagram that illustrates an example high-level process 800 applied by a Haar Cascade classifier, specifically for classifying an object in a piece of content 602 as a face.
- the content 602 (or a portion thereof) is fed into a first level process 802 which determines whether an object that can be classified as a face is present in the content 602 . If, based on the processing at the first stage 802 , it is determined that content 602 does not include an object that can be classified as a face, that object is immediately eliminated as an instance of a face.
- if the object is not eliminated at the first stage 802, the process 800 proceeds to the next stage 804 for further processing. Similar processes are applied at each stage 804, 806, and so on, through some final stage 808.
- each stage in the example process 800 may apply increasing levels of processing, with each successive stage requiring more computational resources.
- a benefit of this cascade technique is that objects that are not faces are eliminated at the early stages with relatively little processing.
- to be classified as a particular type of object (e.g., a face), the content must pass each of the stages 802-808 of the classifier.
- the example Haar Cascade classifier process 800 depicted in FIG. 8 is for classifying detected objects as faces; however, similar classifiers may be trained to detect other classes of objects (e.g., car, building, cat, tree, etc.).
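- the cascade structure of process 800 can be sketched as a chain of stage functions where failing any stage rejects the candidate immediately. The stage tests below are toy stand-ins for real Haar feature evaluations:

```python
# Minimal sketch of the cascade idea: cheap early stages reject
# non-faces immediately, and only candidates that pass every stage
# are classified as faces. Stage tests are illustrative stand-ins
# for actual Haar feature evaluations.

def make_cascade(stages):
    def classify(candidate):
        for stage in stages:       # stages ordered cheapest first
            if not stage(candidate):
                return False       # rejected early, no further work
        return True                # survived all stages: "face"
    return classify

# Toy stages operating on a dict of precomputed region intensities.
stages = [
    lambda c: c["eye_region"] < c["cheek_region"],  # eyes darker than cheeks
    lambda c: c["bridge"] > c["eye_region"],        # nose bridge brighter
    lambda c: abs(c["symmetry"]) < 0.2,             # roughly symmetric
]
is_face = make_cascade(stages)

face = {"eye_region": 0.3, "cheek_region": 0.7, "bridge": 0.6, "symmetry": 0.05}
wall = {"eye_region": 0.5, "cheek_region": 0.5, "bridge": 0.5, "symmetry": 0.0}
```

The `wall` candidate fails the very first stage, illustrating why most non-face content incurs almost no processing cost.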
- the content recognition process 606 may also include distinguishing between instances of detected objects.
- a grouping method may be applied to associate pixels corresponding to a particular class of objects to a particular instance of that class by selecting pixels that are substantially similar to certain other pixels corresponding to that instance, pixels that are spatially clustered, pixel clusters that fit an appearance-based model for the object class, etc.
- this process may involve applying deep learning (e.g., a convolutional neural network) to distinguish individual instances of detected objects.
- Some example techniques that can be applied for identifying multiple objects include Regions with Convolutional Neural Network Features (R-CNN), Fast R-CNN, Single Shot Detector (SSD), You Only Look Once (YOLO), etc.
- the content recognition process 606 may also include recognizing the identity of detected objects (e.g., specific people).
- the analytics system 604 may receive inputs (e.g., captured images/video) to learn the appearances of instances of certain objects (e.g., specific people) by building machine-learning appearance-based models. Instance segmentations identified based on processing of content 602 can then be compared against such appearance-based models to resolve unique identities for one or more of the detected objects.
- Identity recognition can be particularly useful in this context, as it allows the system to ignore the detection of certain known individuals in captured images (e.g., members of a household) while focusing notifications on unknown individuals and/or known unwanted individuals that more likely pose a security threat.
- the content recognition process 606 may also include fusing information related to detected objects to gain a semantic understanding of the captured scene.
- the content recognition process 606 may include fusing semantic information associated with a detected object with geometry and/or motion information of the detected object to infer certain information regarding the scene.
- Information that may be fused may include, for example, an object's category (i.e., class), identity, location, shape, size, scale, pixel segmentation, orientation, inter-class appearance, activity, and pose.
- the content recognition process 606 may fuse information pertaining to one or more detected objects to determine that a clip of video is capturing a known person (e.g., a neighbor) walking their dog past a house.
- the same process may be applied to another clip to determine that the other clip is capturing an unknown individual peering into a window of a surveilled house.
- the analytics system 604 can then use such information to generate notifications only for the scene that presents a heightened security risk (i.e., the unknown person looking in the window) despite motion being detected in both.
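- the dog-walker versus window-peering comparison above can be sketched as a rule over fused per-object attributes. The field names and the rule are illustrative assumptions only, not the disclosed fusion method:

```python
# Hedged sketch of fusing per-object semantics (category, identity,
# location) into a scene-level threat decision. Fields and the single
# rule are illustrative assumptions.

def assess_scene(objects):
    """Return True if the fused object information suggests a threat."""
    for obj in objects:
        unknown = obj.get("identity") is None
        near_entry = obj.get("location") in {"window", "door"}
        if obj["category"] == "person" and unknown and near_entry:
            return True
    return False

dog_walk = [
    {"category": "person", "identity": "neighbor", "location": "sidewalk"},
    {"category": "dog", "identity": None, "location": "sidewalk"},
]
window_peer = [{"category": "person", "identity": None, "location": "window"}]
```

Both scenes contain motion, but only the unknown person at the window is assessed as a threat, so only that scene would generate a notification.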
- labeled image data may be input to train a neural network (or other machine-learning based models) as part of the content recognition process 606 .
- security experts may input previously captured video from a number of different sources as examples of certain classes of objects (e.g., car, building, cat, tree, etc.) to inform the content recognition process 606 .
- Event detection may include detecting recognizable events (e.g., a person walking to the front door) and analyzing certain specifics regarding the detected event (e.g., person's identity, time of day, proximity to other detected events, and other contextual information) to determine if the detected event is indicative of a security threat that warrants a notification.
- determining that a detected event is indicative of a security threat may include comparing data associated with the detected event against a database of data associated with candidate threats, for example, defined based on input from industry security experts.
- the analytics system 604 may detect an event characterized by an unknown individual approaching the doorway to a residence. The analytics system 604 may then compare this semantic information regarding the detected event to a database of candidate threats. Based on the comparison, the analytics system 604 may identify a particular candidate threat that matches (within some threshold level of certainty) the detected event.
- the process of comparing may include generating, by the analytics system 604 , a pattern matching score (e.g., a value between 0 and 10) and identifying the detected event as indicative of the particular candidate security threat if the generated score satisfies a threshold criterion (e.g., above 7 on a scale of 0-10).
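- the pattern matching score and threshold comparison described above can be sketched as follows. The attribute-overlap scoring function is a toy stand-in; a real system would use learned models or expert-defined rules:

```python
# Sketch of comparing a detected event against a database of candidate
# threats using a 0-10 pattern matching score and a threshold of 7.
# The overlap-based scoring is an illustrative assumption.

def match_score(event, candidate):
    """Toy score: 10 * fraction of candidate attributes the event shares."""
    shared = sum(1 for k, v in candidate.items() if event.get(k) == v)
    return 10 * shared / len(candidate)

def best_threat(event, candidates, threshold=7):
    """Return the best-matching candidate threat, or None if no score
    satisfies the threshold criterion."""
    scored = [(match_score(event, c), c) for c in candidates]
    score, threat = max(scored, key=lambda s: s[0])
    return threat if score > threshold else None

candidates = [
    {"category": "person", "identity": "unknown", "location": "doorway"},
    {"category": "vehicle", "state": "idling", "location": "street"},
]
event = {"category": "person", "identity": "unknown", "location": "doorway"}
threat = best_threat(event, candidates)
```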
- the analytics system 604 may employ machine learning to analyze received content 602 to determine if the content is indicative of a security threat.
- the analytics system 604 may apply machine-learning-based behavioral analytics to learn the behavior of objects captured in video images and identify when the behavior of such objects is indicative of a security threat. Applying a machine-learning-based approach may be beneficial in certain instances, as it may alleviate the need to develop complex threat detection rules that rely on preexisting knowledge and are prone to incorrectly identifying unexpected or rare behavior.
- the analytics system 604 may apply a notification generation process 610 to generate one or more notifications for delivery to an administrative user at a user device 102 .
- notification generation 610 may include generating one or more notifications 614 , for example, in the form of messages that are then transmitted over a computer network to a user device 102 associated with an administrative user. Notifications may include emails, text messages (e.g., SMS, MMS, etc.), automated phone calls, alerts within interface 510 , or any other communications medium appropriate for delivery at user device 102 .
- notification generation may include transmitting processed and/or filtered content 616 for delivery at the client device.
- the analytics system may process the received video content 602 and forward content 616 based on the processing to the user device 102 .
- Processed content 616 may include, for example, a shortened video clip that specifically depicts the activity upon which the security threat was identified.
- Processed content 616 may also include transformations to the original content to highlight the activity upon which the security threat was identified.
- the analytics system 604 may be configured to process content 602 to remove detected motion from the content that is not indicative of a security threat.
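- removing non-threat motion to produce processed content 616 can be sketched by trimming a clip to the segments flagged as threats. The segment representation is an assumption made for illustration:

```python
# Hypothetical sketch of producing processed content 616: keep only the
# segments of a clip whose detected motion was flagged as a potential
# threat, dropping the rest. The (start, end, is_threat) segment
# structure is an illustrative assumption.

def trim_to_threats(segments):
    """segments: list of (start_s, end_s, is_threat) tuples."""
    return [(start, end) for start, end, is_threat in segments if is_threat]

clip_segments = [
    (0, 10, False),    # tree branches moving in wind: dropped
    (10, 25, True),    # unknown person at the door: kept
    (25, 40, False),   # passing car headlights: dropped
]
highlight = trim_to_threats(clip_segments)
```

The result is a shortened clip that specifically depicts the activity upon which the security threat was identified.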
- Processed content 616 may be transmitted to the user device 102 in real time (or near real time) as the content 602 is generated by the camera 110 a .
- a camera 110 a may transmit content 602 in the form of a continuous stream of video to analytics system 604 .
- the analytics system 604 may process the received video stream as it is received and only forward portions of the video stream (i.e., processed/filtered content 616 ) as events are detected that are indicative of a security threat. This processing may occur in real time or near real time as the video is captured at the camera 110 a so that an administrator user can effectively respond to the security threats.
- processed/filtered content 616 may represent time-shifted recordings that are delivered to the user device 102 after the events underlying the recordings have already occurred.
- an administrator user that does not want to be bothered with the delivery of live streams throughout the day may elect instead to, for example, review the recordings for the day once at the end of the day.
- one or more of the electronic devices 110 may be configured to individually analyze certain content (e.g., captured video) and generate notifications.
- an analytics system 604 operating apart from such an electronic device may be configured to process such notifications as content 602 as part of a notification filtering process. Notifications received by the analytics system 604 for processing may be referred to as provisional notifications in that they are subject to filtering processes which may result in being forwarded to a user device or discarded/ignored.
- the content 602 depicted in FIG. 6 as being input to analytics system 604 may include a provisional notification from a camera 110 a .
- the camera 110 a may be configured to independently analyze captured video to detect motion and generate notifications based on the detected motion.
- the processing of content 602 by the analytics system 604 to detect an event may include analyzing the received provisional notification to interpret or otherwise identify an event (e.g., motion detection) as detected at the camera 110 a .
- the process of causing a notification to be presented to a user may include generating a new notification based on the received provisional notification and/or simply forwarding the received provisional notification for delivery to a user device 102 .
- the analytics system 604 may consider user feedback 618 provided by one or more users. For example, if a user indicates that a certain video clip is undesirable, uninteresting, or otherwise not worthy of a notification, then the analytics system 604 may reduce/eliminate notifications related to future video clips having similar characteristics (e.g., generated at the same time, generated based on the same trigger, including the same visual/audible objects, etc.).
- the analytics system 604 may also consider feedback provided by a cohort that includes the corresponding user.
- the cohort may include users that share a characteristic in common, such as geographical location, notification frequency, etc. For example, if the analytics system 604 considers feedback from users within a neighborhood, the analytics system 604 may know to filter those notifications pertaining to a cat that lives in the neighborhood. As another example, if the analytics system 604 considers feedback from users within a city, the analytics system 604 may know to filter those notifications pertaining to events triggered by weather (e.g., wind or rain).
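- the user and cohort feedback mechanism described above can be sketched as building suppression rules from votes. The feedback schema and vote threshold are illustrative assumptions:

```python
# Illustrative sketch of feedback-driven suppression: if enough users
# in a cohort mark notifications with a given characteristic as
# unwanted, future events sharing that characteristic are filtered.
# The schema and the min_votes threshold are assumptions.

from collections import Counter

def build_suppression_rules(feedback, min_votes=2):
    """Characteristics flagged as unwanted by at least min_votes users."""
    votes = Counter(fb["characteristic"] for fb in feedback if fb["unwanted"])
    return {char for char, n in votes.items() if n >= min_votes}

def should_notify(event, rules):
    return not any(c in rules for c in event["characteristics"])

neighborhood_feedback = [
    {"user": "a", "characteristic": "neighborhood_cat", "unwanted": True},
    {"user": "b", "characteristic": "neighborhood_cat", "unwanted": True},
    {"user": "c", "characteristic": "wind_motion", "unwanted": True},
]
rules = build_suppression_rules(neighborhood_feedback)
cat_event = {"characteristics": ["motion", "neighborhood_cat"]}
intruder_event = {"characteristics": ["motion", "unknown_person"]}
```

With two votes, the neighborhood cat becomes a suppression rule, while a single wind-motion vote does not, so only the intruder event still produces a notification.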
- FIG. 9 illustrates how various inputs such as content 602 (e.g., video clips, keystrokes) and session metadata may be received from base station(s) 105 , for example, via a network-accessible server 146 , and fed into a distributed computing cluster 902 .
- input data from a development cycle 904 such as ticketing/monitoring information and/or information stored in a knowledge base may also be input to the distributed computing cluster 902 .
- the distributed computing cluster 902 may represent a logical entity that includes sets of host machines (not shown in FIG. 9 ) that run instances of services configured for distributed processing of data.
- the distributed computing cluster 902 may comprise an Apache Hadoop™ deployment.
- Apache Hadoop™ is an open-source software framework for reliable, scalable, and distributed processing of large data sets across clusters of commodity machines.
- Examples of services/utilities that can be deployed in an Apache Hadoop™ cluster include the Apache Hadoop™ Distributed File System (HDFS), MapReduce™, Apache Hadoop™ YARN, and/or the like.
- the host computing devices comprising the computing cluster 902 can include physical and/or virtual machines that run instances of roles for the various services/utilities.
- the Apache™ HDFS service can have the following example roles: a NameNode, a secondary NameNode, DataNode, and balancer.
- one service may run on multiple host machines.
- Apache Hadoop™ software utilities can be employed to facilitate the development of filtering algorithm(s), the acquisition of data pertaining to surveilled environments, and the application of the filtering algorithm(s) to improve real-time analytics.
- the Apache Hadoop™ software utilities may consider content 602 (e.g., video clips) generated by electronic devices deployed in surveilled environments, as well as user feedback specifying which notifications are desired.
- the Apache Hadoop™ software utilities can also develop a classification model for classifying content by training a supervised machine learning algorithm.
- Various machine learning and/or artificial intelligence technologies can be employed to facilitate the development of the classification model.
- FIG. 10 illustrates how MapReduce™ can be utilized in combination with Apache Hadoop™ in a distributed computing cluster 902 to process various sources of information.
- MapReduce™ is a programming model for processing/generating big data sets with a parallel, distributed algorithm on a cluster.
- MapReduce™ usually splits an input data set (e.g., content 602 comprising video clips) into independent chunks that are processed by the map tasks in a parallel manner.
- the framework sorts the outputs of the map tasks, which are then input to the reduce tasks.
- the output of the reduce tasks may be a classification of the content or an event determination that can be utilized by the analytics system 604 to generate notifications and/or filter content being delivered to a user device 102 .
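- the map/sort/reduce flow described above can be sketched in miniature. The clip representation and the stand-in classifier are illustrative assumptions; a real deployment would run these tasks across cluster nodes:

```python
# Toy map/reduce sketch of the described flow: split the input set of
# clips into independent chunks, map each chunk to (class, count)
# pairs, sort the mapped output (the framework's shuffle step), then
# reduce to per-class totals. The classifier is a stand-in.

from itertools import groupby
from operator import itemgetter

def classify(clip):                 # stand-in for the real classifier
    return "threat" if "unknown_person" in clip else "benign"

def map_task(chunk):
    return [(classify(clip), 1) for clip in chunk]

def reduce_tasks(mapped):
    mapped.sort(key=itemgetter(0))  # sort/shuffle before reducing
    return {k: sum(v for _, v in grp)
            for k, grp in groupby(mapped, key=itemgetter(0))}

clips = [["cat"], ["unknown_person"], ["wind"], ["unknown_person", "door"]]
chunks = [clips[:2], clips[2:]]     # independent chunks for map tasks
mapped = [pair for chunk in chunks for pair in map_task(chunk)]
totals = reduce_tasks(mapped)
```

The per-class totals are the kind of event determination the analytics system 604 could then use to generate notifications or filter content.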
- FIG. 11 illustrates how content 602 can be processed in batches by the analytics system 604 .
- video clips generated by security cameras may be processed in groups.
- for example, all of the video clips corresponding to a certain segment of surveilled environments (e.g., a particular group of homes) may be processed together.
- video clips may be collected every 15 minutes, 30 minutes, 60 minutes, 120 minutes, etc.
- each batch of video clips can be processed.
- notifications can be generated by the analytics system 604 and transmitted substantially simultaneously.
- users may periodically receive reports including one or more notifications rather than a steady stream of notifications throughout the day. Users may be permitted to manually specify the cadence at which they receive these reports.
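- batching clips into fixed collection windows (e.g., every 30 minutes) can be sketched as follows; the timestamp representation is an assumption made for illustration:

```python
# Sketch of batch processing: group clip timestamps into fixed time
# windows so each batch yields one consolidated report rather than a
# steady stream of notifications. Timestamps are toy epoch seconds.

def batch_clips(clips, window_seconds=1800):
    """Group (timestamp, clip_id) pairs into fixed 30-minute windows."""
    batches = {}
    for ts, clip_id in clips:
        batches.setdefault(ts // window_seconds, []).append(clip_id)
    return [batches[k] for k in sorted(batches)]

clips = [(0, "a"), (600, "b"), (1900, "c"), (2000, "d"), (4000, "e")]
reports = batch_clips(clips)
```

Changing `window_seconds` corresponds to the user-specified cadence at which consolidated reports are delivered.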
- FIG. 12 shows a flow chart of an example process 1200 for filtering and/or generating notifications based on analysis of content, according to some embodiments.
- One or more steps of the example process 1200 may be performed by any one or more of the components of the example computer system 1300 described with respect to FIG. 13 .
- the example process 1200 depicted in FIG. 12 may be represented in instructions stored in memory that are then executed by a processing unit.
- the process 1200 described with respect to FIG. 12 is an example provided for illustrative purposes and is not to be construed as limiting. Other processes may include more or fewer steps than depicted while remaining within the scope of the present disclosure. Further, the steps depicted in example process 1200 may be performed in a different order than is shown.
- Example process 1200 begins at step 1202 with receiving content 602 generated by an electronic device 110 located in a physical environment (e.g., a surveilled environment 400 ).
- the electronic device 110 may be one of several electronic devices 110 associated with a network-connected security system.
- the content 602 is received at step 1202 via a computer network, from a base station 105 associated with the network-connected security system.
- the network-connected security system is a video surveillance system and the electronic device 110 is a network-connected video camera 110 a .
- the content 602 may include video files.
- such a parameter may be any of an optical parameter, an image processing parameter, or an encoding parameter.
- the content 602 received from the electronic device 110 at step 1202 may include a provisional notification generated by the electronic device 110 .
- the electronic device 110 may include processing resources to detect an event based on sensory information (e.g., video) and generate a notification based on the detected event.
- Example process 1200 continues at step 1204 with processing the received content 602 to detect an event in the surveilled physical environment.
- the processing of content at step 1204 to detect an event may include processing the video to detect one or more instances of physical objects in the physical environment and analyzing data associated with the detected one or more instances of physical objects to detect a scene captured by the video camera 110 a .
- a content recognition process 606 may apply computer vision techniques to, for example, detect various instances of physical objects, resolve object identities, and may fuse various sources of information to gain a semantic understanding of a scene captured by a video camera.
- the event detected at step 1204 may be based on this scene understanding.
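One way to picture the fusion of object detections into a scene-level event at step 1204 is the following sketch; the object labels and fusion rules are hypothetical stand-ins for the semantic scene understanding described above:

```python
def detect_event(detected_objects):
    # Fuse per-frame object detections into a coarse scene-level event
    # label. A real content recognition process would resolve object
    # identities and fuse multiple information sources; these rules are
    # illustrative only.
    labels = {obj["label"] for obj in detected_objects}
    if "person" in labels and "door" in labels:
        return "person_at_door"
    if "person" in labels:
        return "person_in_scene"
    if labels & {"cat", "dog", "bird"}:
        return "animal_in_scene"
    return "no_event"

event = detect_event([{"label": "person"}, {"label": "door"}])
```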
- the step of detecting the event at step 1204 may include processing the received provisional notification (e.g., by reading or interpreting a message included in the notification) and identifying the event (as detected by the electronic device 110 ) based on the processing.
- Example process 1200 continues at step 1206 with determining if the detected event satisfies a specified criterion.
- a purpose of processing content generated by a network-connected security system may be to determine if a notification to a user is necessary.
- a notification is generally understood to be necessary when an event that has occurred in a surveilled environment is abnormal or, more specifically, indicative of a security risk.
- the specified criterion may therefore differ in various embodiments, but is generally established based on a need to selectively notify a user of activity in the surveilled environment that may be of interest to the user, whether that activity is merely outside of a normal baseline or more specifically indicative of a security risk or threat.
- the step of determining if the detected event satisfies a specified criterion includes comparing data associated with the detected event against a database of data associated with a plurality of candidate security threats, generating a pattern matching score based on the detected event and a particular candidate security threat of the plurality of candidate security threats, and identifying the detected event as indicative of the particular candidate security threat if the generated pattern matching score satisfies a threshold criterion.
- pattern matching scores may be generated on a scale of 0-10 with a threshold criterion set at 7 to indicate that a detected event is indicative of a particular candidate security threat based on the comparison.
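The 0-10 scale and threshold of 7 come from the passage above; the threat signatures and scoring rule below are hypothetical illustrations of one way such pattern matching could work:

```python
THREAT_SIGNATURES = {
    # Hypothetical candidate security threats and characteristic features.
    "break_in": {"person", "window", "night"},
    "package_theft": {"person", "package", "porch"},
}

def match_threat(event_features, threshold=7):
    # Score the detected event against each candidate threat on a 0-10
    # scale (here, the fraction of signature features present, scaled to
    # 10) and report the first threat whose score meets the threshold.
    for threat, signature in THREAT_SIGNATURES.items():
        score = 10 * len(event_features & signature) / len(signature)
        if score >= threshold:
            return threat, score
    return None, 0

threat, score = match_threat({"person", "window", "night", "noise"})
```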
- steps 1204 and 1206 may include transmitting the content 602 to another computing system for processing.
- steps 1204 and 1206 may include transmitting, by the base station 105 , the content 602 , via an external network, to an external computing system such as a network-accessible server system 145 to process the content 602 to detect an event and determine if the detected event satisfies a specified criterion.
- Example process 1200 concludes at step 1208 with causing a notification to be presented at a user device 102 communicatively coupled to the network-connected security system if the detected event satisfies the specified criterion.
- the notification is presented at the user device 102 in real time or near real time as the content 602 is generated by the electronic device 110 .
- a notification may be presented at a user device 102 within seconds or fractions of a second after a portion of video content is captured at a camera 110 a associated with a network-connected security system.
- the latency between content generation and presentation of the notification will depend on certain limitations in the system (e.g., processing resources, network speed, etc.).
- the notification presented at the user device 102 at step 1208 may include an alert message (e.g., emails, text messages (e.g., SMS, MMS, etc.), automated phone calls, alerts within interface 510 ) informing a user of the event.
- step 1208 may include causing the alert message to be transmitted, via a computer network, to the client device 102 .
- the notification presented at the user device 102 at step 1208 may include at least a portion of the content 602 generated by an electronic device 110 .
- step 1208 may include causing at least a portion of the content 602 received from the electronic device 110 to be transmitted, via a computer network, to the client device 102 .
- causing the at least a portion of the content 602 to be transmitted to the client device 102 may include initiating a peer-to-peer connection between the electronic device 110 and the client device 102 and causing the portion of content 602 to be transmitted via the peer-to-peer connection.
- step 1208 may include forwarding the received provisional notification if the event (e.g., as detected by the electronic device 110 ) satisfies the specified criterion.
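The dispatch logic of step 1208 can be sketched as follows; the message structure is an illustrative assumption, and a real implementation would transmit over a computer network or a peer-to-peer connection rather than return a value:

```python
def notify(event, criterion_satisfied, provisional=None):
    # Step 1208 sketch: a notification reaches the user device only if
    # the detected event satisfies the specified criterion. If the
    # electronic device already generated a provisional notification,
    # it is forwarded; otherwise a new alert message is composed.
    if not criterion_satisfied:
        return None  # filtered out: no alert reaches the user device
    if provisional is not None:
        return provisional  # forward the device-generated notification
    return {"type": "alert", "message": f"Detected: {event}"}

suppressed = notify("bird_in_scene", criterion_satisfied=False)
alert = notify("person_at_door", criterion_satisfied=True)
```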
- FIG. 13 is a block diagram illustrating an example of a computer system 1300 in which at least some operations described herein can be implemented.
- some components of the computer system 1300 may be hosted on any one or more of the devices described with respect to operating environment 100 in FIG. 1, such as electronic devices 110, base station 105, APs 120, local storage 115, network-accessible server system 145, and user devices 102.
- the computer system 1300 may include one or more central processing units (“processors”) 1302 , main memory 1306 , non-volatile memory 1310 , network adapter 1312 (e.g., network interface), video display 1318 , input/output devices 1320 , control device 1322 (e.g., keyboard and pointing devices), drive unit 1324 including a storage medium 1326 , and signal generation device 1330 that are communicatively connected to a bus 1316 .
- the bus 1316 is illustrated as an abstraction that represents one or more physical buses and/or point-to-point connections that are connected by appropriate bridges, adapters, or controllers.
- the bus 1316 can include a system bus, a Peripheral Component Interconnect (PCI) bus or PCI-Express bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), IIC (I2C) bus, or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (also referred to as “Firewire”).
- the computer system 1300 may share a similar computer processor architecture as that of a desktop computer, tablet computer, personal digital assistant (PDA), mobile phone, game console, music player, wearable electronic device (e.g., a watch or fitness tracker), network-connected (“smart”) device (e.g., a television or home assistant device), virtual/augmented reality systems (e.g., a head-mounted display), or another electronic device capable of executing a set of instructions (sequential or otherwise) that specify action(s) to be taken by the computer system 1300 .
- while the main memory 1306 , non-volatile memory 1310 , and storage medium 1326 (also called a “machine-readable medium”) are shown to be a single medium, the terms “machine-readable medium” and “storage medium” should be taken to include a single medium or multiple media (e.g., a centralized/distributed database and/or associated caches and servers) that store one or more sets of instructions 1328 .
- the terms “machine-readable medium” and “storage medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the computer system 1300 .
- routines executed to implement the embodiments of the disclosure may be implemented as part of an operating system or a specific application, component, program, object, module, or sequence of instructions (collectively referred to as “computer programs”).
- the computer programs typically comprise one or more instructions (e.g., instructions 1304 , 1308 , 1328 ) set at various times in various memory and storage devices in a computing device.
- when read and executed by the one or more processors 1302 , the instruction(s) cause the computer system 1300 to perform operations to execute elements involving the various aspects of the disclosure.
- further examples of machine-readable storage media include recordable-type media such as volatile and non-volatile memory devices 1310 , floppy and other removable disks, hard disk drives, optical disks (e.g., Compact Disc Read-Only Memory (CD-ROMs), Digital Versatile Disks (DVDs)), and transmission-type media such as digital and analog communication links.
- the network adapter 1312 enables the computer system 1300 to mediate data in a network 1314 with an entity that is external to the computer system 1300 through any communication protocol supported by the computer system 1300 and the external entity.
- the network adapter 1312 can include a network adapter card, a wireless network interface card, a router, an access point, a wireless router, a switch, a multilayer switch, a protocol converter, a gateway, a bridge, a bridge router, a hub, a digital media receiver, and/or a repeater.
- the network adapter 1312 may include a firewall that governs and/or manages permission to access/proxy data in a computer network and tracks varying levels of trust between different machines and/or applications.
- the firewall can be any number of modules having any combination of hardware and/or software components able to enforce a predetermined set of access rights between a particular set of machines and applications, machines and machines, and/or applications and applications (e.g., to regulate the flow of traffic and resource sharing between these entities).
- the firewall may additionally manage and/or have access to an access control list that details permissions including the access and operation rights of an object by an individual, a machine, and/or an application, and the circumstances under which the permission rights stand.
- the techniques introduced here can be implemented by programmable circuitry (e.g., one or more microprocessors) programmed with software and/or firmware, by special-purpose hardwired (i.e., non-programmable) circuitry, or by a combination of such forms.
- Special-purpose circuitry can be in the form of one or more application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), etc.
Abstract
Description
- This application is entitled to the benefit and/or right of priority of U.S. Provisional Application No. 62/644,847 (Attorney Docket No. 110729-8095.US00), titled, “ELASTIC PROCESSING FOR VIDEO ANALYSIS AND NOTIFICATION ENHANCEMENTS,” filed Mar. 19, 2018, the contents of which are hereby incorporated by reference in their entirety for all purposes. This application is therefore entitled to a priority date of Mar. 19, 2018.
- Various embodiments concern computer programs and associated computer-implemented techniques for intelligently processing content generated by electronic devices such as security cameras, security lights, etc.
- Surveillance is the monitoring of behavior, activities, or other changing information for the purpose of influencing, managing, or protecting people/items in a given environment. Generally, surveillance requires that the given environment be monitored by means of electronic devices such as security cameras, security lights, etc. For example, a variety of electronic devices may be distributed through the home environment to detect activities performed in/around the home.
- Wireless security cameras have proved to be very popular among modern consumers due to their low installation costs and flexible installation options. Moreover, many wireless security cameras can be mounted in locations that were previously unavailable to wired security cameras. Thus, consumers can readily set up home security systems for seasonal monitoring/surveillance (e.g., of pools, yards, garages, etc.).
- Various features and characteristics of the technology will become more apparent to those skilled in the art from a study of the Detailed Description in conjunction with the drawings. Embodiments of the technology are illustrated by way of example and not limitation in the drawings, in which like references may indicate similar elements.
- FIG. 1 is a diagram illustrating an example environment in which at least some operations described herein can be implemented;
- FIG. 2 is a diagram illustrating various functional components of an example electronic device configured to monitor various aspects of a surveilled environment;
- FIG. 3 is a diagram illustrating various functional components of an example base station associated with a network-connected security system configured to monitor various aspects of a surveilled environment;
- FIG. 4 is a plan view of a surveilled environment (e.g., a home) illustrating an example arrangement of devices associated with a network-connected security system;
- FIG. 5A is a diagram illustrating a network environment that includes a base station designed to receive content generated by one or more electronic devices arranged throughout a surveilled environment;
- FIG. 5B is a diagram illustrating a network environment that includes a security management platform that is supported by the network-accessible server system;
- FIG. 6 is an architecture flow diagram illustrating an environment including an analytics system for presenting security notifications at a client device based on analysis of content generated at electronic devices in a network-connected security system;
- FIG. 7 is a flow diagram illustrating an example process for detecting objects in captured image or video content;
- FIG. 8 is a flow diagram illustrating an example process for classifying objects detected in captured image or video content;
- FIG. 9 is a diagram illustrating how a distributed computing cluster can be utilized to process content;
- FIG. 10 is a diagram illustrating how MapReduce™ can be utilized in combination with Apache Hadoop™ in the distributed computing cluster depicted in FIG. 9;
- FIG. 11 is a diagram illustrating how content can be processed in batches;
- FIG. 12 is a flow diagram illustrating an example process for presenting notifications at a client device based on analysis of content generated at electronic devices in a network-connected security system; and
- FIG. 13 is a diagram illustrating an example of a computer processing system in which at least some operations described herein can be implemented.
- The drawings depict various embodiments for the purpose of illustration only. Those skilled in the art will recognize that alternative embodiments may be employed without departing from the principles of the technology. Accordingly, while specific embodiments are shown in the drawings, the technology is amenable to various modifications.
- Surveillance is the monitoring of behavior, activities, or other changing information for the purpose of influencing, managing, or protecting people/items in a given environment. Surveillance often requires that the given environment be monitored by means of various electronic devices such as security cameras, security lights, etc. In some instances, surveillance systems (also referred to as “security systems”) are connected to a computer server via a network. Some content generated by a security system may be examined locally (i.e., by the security system itself), while other content generated by the security system may be examined remotely (e.g., by the computer server).
- Generally, a network-connected surveillance system (also referred to as a “security system”) includes a base station and one or more electronic devices. The electronic device(s) can be configured to monitor various aspects of a surveilled physical environment (also referred to herein as a “surveilled environment”). For example, security cameras may be configured to record video upon detecting movement, while security lights may be configured to illuminate the surveilled environment upon detecting movement. Different types of electronic devices can create different types of content. Here, for example, the security cameras may generate audio data and/or video data, while the security lights may generate metadata specifying a time at which each illumination event occurred, a duration of each illumination event, etc.
- The base station, meanwhile, may be responsible for transmitting the content generated by the electronic device(s) to a network-accessible computer server. Thus, each electronic device may provide data to the base station, which in turn provides at least some of the data to the network-accessible computer server.
- Nowadays, security systems support features such as high-quality video recording, live video streaming, two-way audio transmission, cloud-based storage of recordings, instant alerts, etc. These features enable individuals to gain an in-depth understanding of what activities are occurring within the environment being surveilled. However, security systems having these features also experience drawbacks.
- For example, once an event is detected by an electronic device, the security system may alert an administrator (e.g., a home owner). Certain alerts are not necessary in that the detected event does not pose any security risk. For instance, if the administrator observes that motion detection is triggered by movement of a bird, the administrator may determine that an alert is not needed. Conversely, if the administrator observes that motion detection is triggered by movement of a coyote, the administrator may determine that an alert is needed. Similar conclusions may be drawn for other routine events (e.g., mail delivery by postal worker). Administrators may simply ignore those alerts that are not needed (e.g., by simply deleting the corresponding notifications); however, an abundance of false positive notifications will tend to reduce the effectiveness of the security system as the administrator becomes overwhelmed by notifications and is not able to respond effectively.
- Introduced herein is a technique for analyzing content generated by electronic devices in a network-connected security system to detect events and generate notifications based on the detected events that addresses the challenges discussed above. For example, a base station may employ cloud-based analytics to verify content (e.g., a video clip) before generating a notification or initiating a peer-to-peer stream to deliver the content to a user device. To verify the content, the base station may be required to contact the network-connected computer server, which can perform the processing needed to filter out unnecessary alerts. In some embodiments, the network-connected computer server is one of multiple network-connected computer servers that form a server system. The server system may balance the load amongst the multiple network-connected computer servers (e.g., by intelligently distributing images for processing) to ensure the verification process is completed with low latency. Similar cloud-based analytics can be employed on content generated by electronic devices to visually detect intruders, audibly detect sounds indicative of a break-in (e.g., an unrecognized voice or a window breaking), audibly detect sounds indicative of catastrophic events such as fire or earthquakes, etc. Further, such analysis may be performed in real time (or near real time) as content is generated so that an administrator is able to quickly respond to notifications of detected events.
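The verification flow described above can be sketched as follows; the trigger labels and the verification rule are illustrative stand-ins for the server-side analytics, not the actual recognition pipeline:

```python
def cloud_verify(clip):
    # Stand-in for the server-side analytics that filter out false
    # positives (e.g., motion triggered by a bird or a routine mail
    # delivery) before any alert is generated. A real system would run
    # the content recognition pipeline here.
    return clip["trigger"] not in {"bird", "mail_delivery"}

def base_station_handle(clip):
    # Base-station flow: have the server verify the content first, and
    # only then generate a notification / initiate a peer-to-peer stream
    # to deliver the content to a user device.
    if cloud_verify(clip):
        return {"notify": True, "clip_id": clip["clip_id"]}
    return {"notify": False, "clip_id": clip["clip_id"]}

result = base_station_handle({"clip_id": "c1", "trigger": "coyote"})
```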
- References in this description to “an embodiment” or “one embodiment” means that the particular feature, function, structure, or characteristic being described is included in at least one embodiment. Occurrences of such phrases do not necessarily refer to the same embodiment, nor are they necessarily referring to alternative embodiments that are mutually exclusive of one another.
- Unless the context clearly requires otherwise, the words “comprise” and “comprising” are to be construed in an inclusive sense rather than an exclusive or exhaustive sense (i.e., in the sense of “including but not limited to”). The terms “connected,” “coupled,” or any variant thereof is intended to include any connection or coupling between two or more elements, either direct or indirect. The coupling/connection can be physical, logical, or a combination thereof. For example, devices may be electrically or communicatively coupled to one another despite not sharing a physical connection.
- The term “based on” is also to be construed in an inclusive sense rather than an exclusive or exhaustive sense. Thus, unless otherwise noted, the term “based on” is intended to mean “based at least in part on.”
- The term “module” refers broadly to software components, hardware components, and/or firmware components. Modules are typically functional components that can generate useful data or other output(s) based on specified input(s). A module may be self-contained. A computer program may include one or more modules. Thus, a computer program may include multiple modules responsible for completing different tasks or a single module responsible for completing all tasks.
- When used in reference to a list of multiple items, the word “or” is intended to cover all of the following interpretations: any of the items in the list, all of the items in the list, and any combination of items in the list.
- The sequences of steps performed in any of the processes described here are exemplary. However, unless contrary to physical possibility, the steps may be performed in various sequences and combinations. For example, steps could be added to, or removed from, the processes described here. Similarly, steps could be replaced or reordered. Thus, descriptions of any processes are intended to be open-ended.
-
FIG. 1 is a block diagram illustrating an example environment in which the introduced technique for analysis of content can be implemented. Theexample environment 100 includes a network-connected security system that includesbase station 105 and one or moreelectronic devices 110 such ascameras 110 a,audio recorder devices 110 b,security lights 110 c, or any other types of security devices. - The
base station 105 and the one or moreelectronic devices 110 can be connected to each other via alocal network 125. Thelocal network 125 can be a local area network (LAN). In some embodiments, thelocal network 125 is a WLAN, such as a home Wi-Fi, created by one or more wireless accesses points (APs) 120. In some embodiments, functionality associated withbase station 105 and/orwireless AP 120 are implemented in software instantiated at a wireless networking device. In other words, the system may include multiple wireless networking devices as nodes, wherein each of the wireless networking devices is operable as awireless AP 120 and/orbase station 105. The one or moreelectronic devices 110 and thebase station 105 can be connected to each other wirelessly, e.g., over Wi-Fi, or using wired means. Thebase station 105 and the one or moreelectronic devices 110 can be connected to each other wirelessly via the one ormore wireless APs 120, or directly with each other without thewireless AP 120, e.g., using Wi-Fi direct, Wi-Fi ad hoc or similar wireless connection technologies or via wired connections. Further, thebase station 105 can be connected to thelocal network 125 using a wired means or wirelessly. - The one or more
electronic devices 110 can be battery powered or powered from a wall outlet. In some embodiments, the one or moreelectronic devices 110 can include one or more sensors such as motion sensors that can activate, for example, the capturing of audio or video, the encoding of captured audio or video, and/or transmission of an encoded audio or video stream when motion is detected. -
Cameras 110 a may capture video, encode the video as a video stream, and wirelessly transmit the video stream vialocal network 125 for delivery to auser device 102. In some embodiments, certain cameras may include integrated encoder components. Alternatively, or in addition, the encoder component may be a separate device coupled to thecamera 110 a. For example, an analog camera may be communicatively coupled to thebase station 105 and/orwireless AP 120 via an analog to digital encoder device (not shown inFIG. 1 ). In some embodiments, thebase station 105 and/orwireless APs 120 may include encoding components to encode and/or transcode video. Encoder components may include any combination of software and/or hardware configured to encode video information. Such encoders may be based on any number of different standards such as H.264, H.265, VP8, VP9, Daala, MJPEG, MPEG4, Windows Media Video (WMV), etc. for encoding video information. Accordingly, depending on the codec used, the video stream from a givencamera 110 a may be one of several different formats such as .AVI, .MP4, .MOV, .WMA, .MKV, etc. The video stream can include audio as well if thecamera 110 a includes or is communicatively coupled to anaudio device 110 b (e.g., a microphone). In some embodiments,cameras 110 a can include infrared (IR) light emitting diode (LED) sensors, which can provide night-vision capabilities. - Similarly,
audio recording devices 110 b may capture audio, encode the audio as an audio stream, and wirelessly transmit the audio stream vialocal network 125 for delivery to auser device 102. In some embodiments, certain audio recording devices may include integrated encoder components. Alternatively, or in addition, the encoder component may be a separate device coupled to theaudio recording device 110 b. For example, an analog audio recording device may be communicatively coupled to thebase station 105 and/orwireless AP 120 via an analog to digital encoder device (not shown inFIG. 1 ). In some embodiments, thebase station 105 and/orwireless APs 120 may include encoding components to encode and/or transcode audio. Encoder components may include any combination of software and/or hardware configured to encode audio information. Such encoders may be based on any number of different standards such as Free Lossless Audio Codec (FLAC), MPEG-4 Audio, Windows Media Audio (WMA), etc. for encoding audio information. Accordingly, depending on the codec used, the audio stream from a givencamera 110 a may be one of several different formats such as .FLAC, .WMA, .AAC, etc. - Although the
example environment 100 illustrates various types ofelectronic devices 110 a-d, the security system can include just a single type of electronic device (e.g.,cameras 110 a) or two or more different types ofelectronic devices 110 which can be installed at various locations of a building. The variouselectronic devices 110 of the security system may include varying features and capabilities. For example, someelectronic devices 110 may be battery powered while another may be powered from the wall outlet. Similarly, someelectronic devices 110 may connect wirelessly to thebase station 105 while others rely on wired connections. In some embodiments, electronic devices of a particular type (e.g.,cameras 110 a) included in the security system may also include varying features and capabilities. For example, in a given security system, afirst camera 110 a may include an integrated night vision, audio recording, and motion sensing capabilities while a second camera 100 a only includes video capture capabilities. - The
base station 105 can be a computer system that serves as a gateway to securely connect the one or moreelectronic devices 110 to anexternal network 135, for example, via one ormore wireless APs 120. Theexternal network 135 may comprise one or more networks of any type including packet switched communications networks, such as the Internet, World Wide Web portion of the Internet, extranets, intranets, and/or various other types of telecommunications networks such as cellular phone and data networks, plain old telephone system (POTS) networks, etc. - The
base station 105 can provide various features such as long range wireless connectivity to theelectronic devices 110, alocal storage device 115, a siren, connectivity to network attached storage (NAS), and enhance battery life for certainelectronic devices 110, e.g., by configuring certainelectronic devices 110 for efficient operation and/or by maintaining efficient communications between thebase station 105 and suchelectronic devices 110. Thebase station 105 can be configured to store the content (e.g., audio and/or video) captured by someelectronic devices 110 in any of thelocal storage device 115 or a network-accessible storage 148. Thebase station 105 can be configured to generate a sound alarm from the siren when an intrusion is detected by thebase station 105 based on the video streams receive fromcameras 110/112. - In some embodiments, the
base station 105 can create its own network within thelocal network 125, so that the one or moreelectronic devices 110 do not overload or consume the network bandwidth of thelocal network 125. In some embodiments, thelocal network 125 can includemultiple access points 120 to increase wireless coverage of thebase station 105, which may be beneficial or required in cases where theelectronic devices 110 are wirelessly connected and are spread over a large area. - In some embodiments the
local network 125 can provide wired and/or wireless coverage to user devices (e.g., user device 102), for example, via APs 120. In the example environment 100 depicted in FIG. 1, a user device 102 can connect to the base station 105, for example, via the local network 125 if located close to the base station 105 and/or wireless AP 120. Alternatively, the user device 102 can connect to the base station 105 via network 135 (e.g., the Internet). The user device 102 can be any computing device that can connect to a network and play video content, such as a smartphone, a laptop, a desktop, a tablet personal computer (PC), or a smart TV. - In an example embodiment, when a
user 103 sends a request (e.g., from user device 102) to access content (e.g., audio and/or video) captured by any of the electronic devices 110, the base station 105 receives the request and, in response, obtains the encoded stream(s) from one or more of the electronic devices 110 and transmits the encoded stream to the user device 102 for presentation. Upon receiving the encoded stream at the user device 102, a playback application in the user device 102 decodes the encoded stream and plays the audio and/or video to the user 103, for example, via speakers and/or a display of the user device 102. - As previously mentioned, in some embodiments, the
base station 105 may include an encoding/transcoding component that performs a coding process on audio and/or video received from the electronic devices 110 before streaming to the user device 102. In an example embodiment, a transcoder at the base station 105 transcodes a stream received from an electronic device 110 (e.g., a video stream from a camera 110 a), for example, by decoding the encoded stream and re-encoding the stream into another format to generate a transcoded stream that it then streams to the user device 102. - The audio and/or video stream received at the
user device 102 may be a real-time stream and/or a recorded stream. For example, in some embodiments, a transcoder may transcode an encoded stream received from an electronic device 110 and stream the transcoded stream to the user device 102 in real time or near real time (i.e., within several seconds) as the audio and/or video is captured at the electronic device 110. Alternatively, or in addition, audio and/or video streamed by the base station 105 to the user device 102 may be retrieved from storage such as local storage 115 or a network-accessible storage 148. - The
base station 105 can stream audio and/or video to the user device 102 in multiple ways. For example, the base station 105 can stream to the user device 102 using a peer-to-peer (P2P) streaming technique. In P2P streaming, when the playback application on the user device 102 requests the stream, the base station 105 and the user device 102 may exchange signaling information, for example via network 135 or a network-accessible server system 145, to determine location information of the base station 105 and the user device 102, find the best path, and establish a P2P connection to route the stream from the base station 105 to the user device 102. After establishing the connection, the base station 105 streams the audio and/or video to the user device 102, eliminating the additional bandwidth cost of delivering the audio and/or video stream from the base station 105 to a network-accessible server computer 146 in a network-accessible server system 145 and of streaming from the network-accessible server computer 146 to the user device 102. In some embodiments, a network-accessible server computer 146 in the network-accessible server system 145 may keep a log of available peer node servers to route streams and establish the connection between the user device 102 and other peers. In such embodiments, instead of streaming content, the server 146 may function as a signaling server or can include signaling software whose function is to maintain and manage a list of peers and handle the signaling between the base station 105 and the user device 102. In some embodiments, the server 146 can dynamically select the best peers based on geography and network topology. - In some embodiments, the network-
accessible server system 145 is a network of resources from a centralized third-party provider using Wide Area Networking (WAN) or Internet-based access technologies. In some embodiments, the network-accessible server system 145 is configured as or operates as part of a cloud network, in which the network and/or computing resources are shared across various customers or clients. Such a cloud network is distinct, independent, and different from that of the local network 125. - In some embodiments, the
local network 125 may include a multi-band wireless network comprising one or more wireless networking devices (also referred to herein as nodes) that function as wireless APs 120 and/or a base station 105. For example, with respect to the example environment 100 depicted in FIG. 1, base station 105 may be implemented at a first wireless networking device that functions as a gateway and/or router. That first wireless networking device may also function as a wireless AP. Other wireless networking devices may function as satellite wireless APs that are wirelessly connected to each other via a backhaul link. The multiple wireless networking devices provide wireless network connections (e.g., using Wi-Fi) to one or more wireless client devices such as one or more wireless electronic devices 110 or any other devices such as desktop computers, laptop computers, tablet computers, mobile phones, wearable smart devices, game consoles, smart home devices, etc. The wireless networking devices together provide a single wireless network (e.g., network 125) configured to provide broad coverage to the client devices. The system of wireless networking devices can dynamically optimize the wireless connections of the client devices without the need to reconnect. An example of the multi-band wireless networking system is the NETGEAR® Orbi® system. Such systems are exemplified in U.S. patent application Ser. No. 15/287,711, filed Oct. 6, 2016, and Ser. No. 15/271,912, filed Sep. 21, 2016, now issued as U.S. Pat. No. 9,967,884, both of which are hereby incorporated by reference in their entireties for all purposes. - The wireless networking devices of a multi-band wireless networking system can include radio components for multiple wireless bands, such as the 2.4 GHz frequency band, low 5 GHz frequency band, and high 5 GHz frequency band. In some embodiments, at least one of the bands can be dedicated to the wireless communications among the wireless networking devices of the system.
Such wireless communications among the wireless networking devices of the system are referred to herein as “backhaul” communications. Any other bands can be used for wireless communications between the wireless networking devices of the system and client devices such as
cameras 110 connecting to the system. The wireless communications between the wireless networking devices of the system and client devices are referred to as “fronthaul” communications. -
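The backhaul/fronthaul split described above can be sketched as a simple band-assignment table. The band names, roles, and selection logic below are illustrative assumptions for discussion, not details taken from the disclosure:

```python
# Sketch of a tri-band node reserving one band for backhaul traffic.
# Band names and the reservation policy are illustrative assumptions.
BANDS = {
    "2.4GHz": "fronthaul",    # client devices (e.g., cameras)
    "5GHz-low": "fronthaul",  # client devices
    "5GHz-high": "backhaul",  # dedicated node-to-node links
}

def select_band(traffic_type: str) -> str:
    """Return a band whose assigned role matches the traffic type."""
    for band, role in BANDS.items():
        if role == traffic_type:
            return band
    raise ValueError(f"no band assigned to {traffic_type!r}")

# A camera association uses a fronthaul band; inter-node sync uses backhaul.
camera_band = select_band("fronthaul")
ap_link_band = select_band("backhaul")
```

Dedicating one band to backhaul keeps node-to-node traffic from competing with camera streams on the fronthaul bands.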
FIG. 2 shows a high-level functional block diagram illustrating the architecture of an example electronic device 200 (e.g., similar to electronic devices 110 described with respect to FIG. 1) that monitors various aspects of a surveilled environment. As further described below, the electronic device 200 may generate content while monitoring the surveilled environment, and then transmit the content to a base station for further processing. - The electronic device 200 (also referred to as a “recording device”) can include one or
more processors 202, a communication module 204, an optical sensor 206, a motion sensing module 208, a microphone 210, a speaker 212, a light source 214, and one or more storage modules 216. - The processor(s) 202 can execute instructions stored in the storage module(s) 216, which can be any device or mechanism capable of storing information. In some embodiments, a single storage module includes multiple computer programs for performing different operations (e.g., image recognition, noise reduction, filtering), while in other embodiments each computer program is hosted within a separate storage module.
- The
communication module 204 can manage communication between various components of the electronic device 200. The communication module 204 can also manage communications between the electronic device 200 and a base station, another electronic device, etc. For example, the communication module 204 may facilitate communication with a mobile phone, tablet computer, wireless access point (WAP), etc. As another example, the communication module 204 may facilitate communication with a base station responsible for communicating with a network-connected computer server; more specifically, the communication module 204 may be configured to transmit content generated by the electronic device 200 to the base station for processing. As further described below, the base station may examine the content itself or transmit the content to the network-connected computer server for examination. - The optical sensor 206 (also referred to as an “image sensor”) can be configured to generate optical data related to the surveilled environment. Examples of optical sensors include charge-coupled devices (CCDs), complementary metal-oxide-semiconductor (CMOS) sensors, infrared detectors, etc. In some embodiments, the
optical sensor 206 is configured to generate a video recording of the surveilled environment responsive to, for example, determining that movement has been detected within the surveilled environment. In other embodiments, the optical data generated by the optical sensor 206 is used by the motion sensing module 208 to determine whether movement has occurred. The motion sensing module 208 may also consider data generated by other components (e.g., the microphone) as input. Thus, an electronic device 200 may include multiple optical sensors of different types (e.g., visible light sensors and/or IR sensors for night vision). - The
microphone 210 can be configured to record sounds within the surveilled environment. The electronic device 200 may include multiple microphones. In such embodiments, the microphones may be omnidirectional microphones designed to pick up sound from all directions. Alternatively, the microphones may be directional microphones designed to pick up sounds coming from a specific direction. For example, if the electronic device 200 is intended to be mounted in a certain orientation (e.g., such that the optical sensor 206 is facing a doorway), then the electronic device 200 may include at least one microphone arranged to pick up sounds originating from near the point of focus. - The
speaker 212, meanwhile, can be configured to convert an electrical audio signal into a corresponding sound that is projected into the surveilled environment. Together with the microphone 210, the speaker 212 enables an individual located within the surveilled environment to converse with another individual located outside of the surveilled environment. For example, the other individual may be a homeowner who has a computer program (e.g., a mobile application) installed on her mobile phone for monitoring the surveilled environment. - The
light source 214 can be configured to illuminate the surveilled environment. For example, the light source 214 may illuminate the surveilled environment responsive to a determination that movement has been detected within the surveilled environment. The light source 214 may generate metadata specifying a time at which each illumination event occurred, a duration of each illumination event, etc. This metadata can be examined by the processor(s) 202 and/or transmitted by the communication module 204 to the base station for further processing. - As previously discussed with respect to
FIG. 1, electronic devices 110 may be configured as different types of devices such as cameras 110 a, audio recording devices 110 b, security lights 110 c, and other types of devices. Accordingly, embodiments of the electronic device 200 may include some or all of these components, as well as other components not shown here. For example, if the electronic device 200 is a security camera 110 a, then some components (e.g., the microphone 210, speaker 212, and/or light source 214) may not be included. As another example, if the electronic device 200 is a security light 110 c, then other components (e.g., the optical sensor 206, microphone 210, and/or speaker 212) may not be included. -
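The per-device-type component mix described above can be sketched as a simple configuration table. The type and component names below are hypothetical labels for illustration, not reference numerals from the disclosure:

```python
# Illustrative mapping of device types to onboard components, mirroring the
# examples above: a camera may omit the microphone/speaker/light source,
# and a security light may omit the optical sensor/microphone/speaker.
DEVICE_COMPONENTS = {
    "camera": {"processor", "communication", "optical_sensor", "motion_sensing"},
    "audio_recorder": {"processor", "communication", "microphone"},
    "security_light": {"processor", "communication", "motion_sensing", "light_source"},
}

def has_component(device_type: str, component: str) -> bool:
    """Check whether a given device type includes a given component."""
    return component in DEVICE_COMPONENTS.get(device_type, set())
```

A base station could consult such a table to decide, for instance, which devices can be asked for audio versus video content.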
FIG. 3 is a high-level functional block diagram illustrating an example base station 300 configured to process content generated by electronic devices (e.g., electronic device 200 of FIG. 2) and forward the content to other computing devices such as a network-connected computer server, etc. - The
base station 300 can include one or more processors 302, a communication module 304, and one or more storage modules 306. In some embodiments, a single storage module includes multiple computer programs for performing different operations (e.g., image recognition, noise reduction, filtering), while in other embodiments each computer program is hosted within a separate storage module. Moreover, the base station 300 may include a separate storage module for each electronic device within its corresponding surveillance environment, each type of electronic device within its corresponding surveillance environment, etc. - Such a categorization enables the
base station 300 to readily identify the content/data generated by security cameras, security lights, etc. The content/data generated by each type of electronic device may be treated differently by the base station 300. For example, the base station 300 may locally process sensitive content/data but transmit less sensitive content/data for processing by a network-connected computer server. - Thus, in some embodiments, the
base station 300 processes content/data generated by the electronic devices, for example, to analyze the content to understand what events are occurring within the surveilled environment, while in other embodiments the base station 300 transmits the content/data to a network-connected computer server responsible for performing such analysis. - The
communication module 304 can manage communication with electronic device(s) within the surveilled environment and/or the network-connected computer server. In some embodiments, different communication modules handle these communications. For example, the base station 300 may include one communication module for communicating with the electronic device(s) via a short-range communication protocol, such as Bluetooth® or Near Field Communication, and another communication module for communicating with the network-connected computer server via a cellular network or the Internet. -
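The dual-module arrangement described above can be sketched as a simple dispatcher. The class name, addressing scheme, and protocol labels below are illustrative assumptions, not part of the disclosure:

```python
# Sketch of a base station routing messages over one of two communication
# modules: a short-range module for local electronic devices and a
# wide-area module for the network-connected server.
class BaseStationComms:
    def __init__(self):
        self.log = []  # records (channel, destination, payload)

    def _send_short_range(self, dest, payload):
        self.log.append(("bluetooth", dest, payload))

    def _send_wide_area(self, dest, payload):
        self.log.append(("internet", dest, payload))

    def send(self, dest, payload):
        # Local devices are addressed by a "device:" prefix (an assumption
        # for this sketch); anything else goes to the remote server.
        if dest.startswith("device:"):
            self._send_short_range(dest, payload)
        else:
            self._send_wide_area(dest, payload)

comms = BaseStationComms()
comms.send("device:camera-1", b"arm")
comms.send("server.example.net", b"event-report")
```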
FIG. 4 depicts a network security system that includes a variety of electronic devices configured to collectively monitor a surveilled environment 400 (e.g., the interior and exterior of a home). Here, the variety of electronic devices includes multiple security lights 402 a-b, multiple external security cameras 404 a-b, and multiple internal security cameras 406 a-b. However, those skilled in the art will recognize that the network security system could include any number of security lights, security cameras, and other types of electronic devices. Some or all of these electronic devices are communicatively coupled to a base station 408 that can be located in or near the surveilled environment 400. Each electronic device can be connected to the base station 408 via a wired communication channel or a wireless communication channel. -
FIG. 5A illustrates an example network environment 500 a that includes a base station 502 designed to receive content generated by one or more electronic devices arranged throughout a surveilled environment. The base station 502 can transmit at least some of the content to a network-accessible server system 506. The network-accessible server system 506 may supplement the content based on information inferred from content uploaded by other base stations corresponding to other surveilled environments. - The
base station 502 and the network-accessible server system 506 can be connected to one another via a computer network 504 a. The computer network 504 a may include a personal area network (PAN), local area network (LAN), wide area network (WAN), metropolitan area network (MAN), cellular network, the Internet, or any combination thereof. -
FIG. 5B illustrates an example network environment 500 b that includes a security management platform 508 that is supported by the network-accessible server system 506. Users can interface with the security management platform 508 via an interface 510. For example, a homeowner may examine content generated by electronic devices arranged proximate to her home via the interface 510. - The
security management platform 508 may be responsible for parsing content/data generated by electronic device(s) arranged throughout a surveilled environment to detect occurrences of events within the surveilled environment. The security management platform 508 may also be responsible for creating interfaces through which an individual can view content (e.g., video clips and audio clips), initiate an interaction with someone located in the surveilled environment, manage preferences, etc. - As noted above, the
security management platform 508 may reside in a network environment 500 b. Thus, the security management platform 508 may be connected to one or more networks 504 b-c. Similar to network 504 a, networks 504 b-c can include PANs, LANs, WANs, MANs, cellular networks, the Internet, etc. Additionally, or alternatively, the security management platform 508 can be communicatively coupled to computing device(s) over a short-range communication protocol, such as Bluetooth® or NFC. - The
interface 510 is preferably accessible via a web browser, desktop application, mobile application, or over-the-top (OTT) application. Accordingly, the interface 510 may be viewed on a personal computer, tablet computer, personal digital assistant (PDA), mobile phone, game console, music player, wearable electronic device (e.g., a watch or fitness accessory), network-connected (“smart”) electronic device (e.g., a television or home assistant device), virtual/augmented reality system (e.g., a head-mounted display), or some other electronic device. - As described above, one issue with security systems is the overabundance of alerts generated by the security systems. Consequently, individuals (e.g., administrators of the security systems) may lose interest in the security capabilities of these security systems due to too many undesired notifications.
- These undesired notifications may be derived from several different sources. For example, in some embodiments, a security system may detect too many false instances of motion because it relies on a signal generated by an overly sensitive passive infrared (PIR) sensor. As another example, in some embodiments a security system may detect too many false instances of audio because it relies on an overly sensitive audio sensor (which is configured to prompt recording by the security camera).
- To reduce the quantity of notifications, a network-connected security system can be configured to filter those notifications deemed likely to be unnecessary. In some cases, the “filtering” of notifications may include receiving notifications and only forwarding a portion of the received notifications deemed necessary to a user. Alternatively, or in addition, “filtering” notifications may refer to detecting multiple events that would otherwise result in notifications to a user (e.g., detected motion) and only generating notifications based on a subset of the detected events for presentation to a user. For example, the base station can apply an algorithm that allows it to detect objects included in a video clip. Moreover, the base station can apply an algorithm that allows it to detect the scene depicted in the video clip. The base station can then remove undesired motion from the video clip. Said another way, the base station can ignore those movements that are indicative of events the corresponding individual does not wish to be notified about. Thereafter, the base station can generate notifications only for those events that survive the “filtering” process. Such action ensures that the corresponding individual is only notified about significant events.
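The event-level “filtering” described above can be sketched as follows. The event fields and the ignore rule are illustrative assumptions, not part of the disclosure:

```python
# Minimal sketch of notification filtering: detected events are checked
# against simple rules and only a subset produces notifications.
def filter_events(events, ignore_identities=("household_member",)):
    """Return only the detected events that warrant a notification."""
    notifications = []
    for event in events:
        if not event.get("motion"):
            continue  # no movement detected -> no notification
        if event.get("identity") in ignore_identities:
            continue  # known, trusted individual -> ignore
        notifications.append(event)
    return notifications

events = [
    {"motion": True, "identity": "household_member"},  # filtered out
    {"motion": True, "identity": "unknown"},           # survives filtering
    {"motion": False, "identity": None},               # filtered out
]
alerts = filter_events(events)
```

Only the event involving an unknown individual survives the filter, matching the goal of notifying the user about significant events alone.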
-
FIG. 6 shows a flow diagram of a technique for processing content generated by electronic devices 110 before generating a notification and/or initiating a stream for delivery to a client device 102. Some or all of the steps described with respect to FIG. 6 may be executed at least in part by an analytics system 604 deployed on a base station 105, at a network-accessible server system 145, at one or more electronic devices 110, or any combination thereof. In other words, the analytics system 604 depicted in FIG. 6 refers to a functional entity that may include hardware and/or software elements at any one or more of the components depicted in the example operation environment 100 depicted in FIG. 1. Further, while the embodiment is described in the context of a security camera, those skilled in the art will recognize that similar techniques could also be employed with other types of electronic devices. - Initially, one or
more security cameras 110 a generate content 602, for example, by capturing video and encoding the captured video into digital information. The content 602 may include, for example, one or more digital files including the encoded video. - The
content 602 is then fed into an analytics system 604 for processing according to the introduced technique. In some embodiments, the step of feeding the content 602 into the analytics system 604 may include a camera 110 a transmitting the generated content 602 over a computer network (e.g., a wired or wireless local network 125) to a base station 105. The base station 105 may then forward the received content 602 to a network-accessible server system 145 that implements the analytics system 604. Alternatively, or in addition, the camera 110 a and/or base station 105 may include processing components that implement at least a portion of the analytics system 604. - In some embodiments,
content 602 is fed into the analytics system 604 continually as it is generated. For example, in some embodiments, the camera 110 a may generate a digital video stream that is transmitted to the analytics system 604 for processing by way of the base station 105. - In some embodiments,
content 602 is continually generated by the camera 110 a. For example, a camera 110 a that is powered by a wall outlet may continually capture video, encode the captured video into a digital stream, and transmit that digital stream for processing by the analytics system. - Alternatively, the
camera 110 a may be configured to generate content 602 at periodic intervals and/or in response to detecting certain conditions or events. For example, the camera 110 a may be equipped with, or in communication with, a motion detector that triggers the capturing and encoding of video when motion in the surveilled environment is detected. In response to receiving an indication of detected motion, the camera 110 a may begin generating content 602 by capturing video and encoding the captured video. As another illustrative example, instead of transmitting a continuous stream of content 602, the video camera 110 a may transmit small portions of content (e.g., short video clips or still images) at periodic intervals (e.g., every few seconds). -
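The trigger logic described above (capture when motion is detected, or when a periodic interval has elapsed) can be sketched as follows. Timestamps are simulated integers and the interval value is an illustrative assumption:

```python
# Sketch of event- and interval-triggered capture scheduling.
def capture_schedule(samples, interval=10):
    """samples: list of (timestamp, motion_detected) tuples.
    Returns the timestamps at which content would be captured."""
    captures = []
    last_capture = None
    for t, motion in samples:
        interval_due = last_capture is None or t - last_capture >= interval
        if motion or interval_due:
            captures.append(t)   # capture and encode a clip at time t
            last_capture = t
    return captures

# Motion at t=3 triggers a capture; t=5 is skipped (no motion, interval
# not yet elapsed); t=14 is captured because the interval has elapsed.
times = capture_schedule([(0, False), (3, True), (5, False), (14, False)])
```

Skipping the quiet sample at t=5 is what saves battery and downstream processing, per the rationale in the surrounding text.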
Generating content 602 at periodic intervals and/or in response to detected events may conserve energy at the camera 110 a, which may be particularly beneficial for battery-powered cameras 110 a. Generating content 602 at periodic intervals and/or in response to detected events may also reduce resource requirements to process the content, for example, when generating notifications. For example, in the case of a surveilled environment, the system may be configured based on an assumption that the video of the surveilled environment is of no interest to an administrator unless the video captures an object in motion. - In some embodiments,
content 602 is fed into the analytics system 604 periodically (e.g., daily, weekly, or monthly) or in response to detected events. For example, even if the content 602 is continually generated, such content 602 may be held in storage (e.g., at local storage 115 or a NAS 148) before being released (periodically or in response to detected events) for analysis by the analytics system 604. - Thereafter, the
analytics system 604 processes the received content 602 to perform the notification filtering technique described herein. For example, the analytics system 604 may process the received content 602 to detect whether an event has occurred that necessitates a notification to a user. As previously mentioned, processing of the received content 602 may be carried out by processors located at the base station 105, a network-accessible server system 145, or any combination thereof. - In some embodiments, processing of
content 602 may include a content recognition process 606 to gain some level of understanding of the information captured in the content 602. For example, the content recognition process 606 may apply computer vision techniques to detect physical objects captured in the content 602. FIG. 7 shows a flow diagram that illustrates an example high-level process 700 for image processing-based object detection that involves, for example, processing content 602 to detect identifiable feature points (step 702), identifying putative point matches (step 704), and detecting an object based on the putative point matches (step 706). - The
content recognition process 606 may further classify such detected objects. For example, given one or more classes of objects (e.g., humans, buildings, cars, animals, etc.), the content recognition process 606 may process the video content 602 to identify instances of various classes of physical objects occurring in the captured video of the surveilled environment. - In some embodiments, the
content recognition process 606 may employ deep learning-based video recognition to classify detected objects. In an example deep learning-based video recognition process for detecting a face, raw image data is input as a matrix of pixels. A first representational layer may abstract the pixels and encode edges. A second layer may compose and encode arrangements of edges, for example, to detect objects. A third layer may encode identifiable features such as a nose and eyes. A fourth layer may recognize that the image includes a face based on the arrangement of identifiable features. - An example technique for classifying objects detected in images or video is the Haar Cascade classifier.
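The cascade idea can be sketched as a chain of increasingly expensive stage tests, where cheap stages reject most non-faces immediately. The stage checks below are toy stand-ins for trained Haar-feature stages, and the feature names are assumptions for illustration:

```python
# Sketch of a cascade classifier: a candidate is accepted only if it
# passes every stage; any stage failure rejects it immediately, so most
# non-faces never reach the expensive later stages.
def make_cascade(stages):
    def classify(candidate):
        for stage in stages:        # ordered cheapest -> most expensive
            if not stage(candidate):
                return False        # rejected early, no further work
        return True                 # survived all stages
    return classify

# Toy stages operating on a dict of precomputed features.
stages = [
    lambda c: c["edge_density"] > 0.1,   # stage 1: cheap edge test
    lambda c: c["symmetry"] > 0.5,       # stage 2: rough symmetry test
    lambda c: c["eye_response"] > 0.8,   # final stage: detailed features
]
is_face = make_cascade(stages)

face = {"edge_density": 0.4, "symmetry": 0.7, "eye_response": 0.9}
wall = {"edge_density": 0.02, "symmetry": 0.9, "eye_response": 0.0}
```

Here the `wall` candidate fails the cheap first stage and is discarded without ever running the later, costlier checks, which is the efficiency benefit the cascade structure provides.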
FIG. 8 shows a flow diagram that illustrates an example high-level process 800 applied by a Haar Cascade classifier, specifically for classifying an object in a piece of content 602 as a face. As shown in FIG. 8, the content 602 (or a portion thereof) is fed into a first stage process 802 which determines whether an object that can be classified as a face is present in the content 602. If, based on the processing at the first stage 802, it is determined that content 602 does not include an object that can be classified as a face, that object is immediately eliminated as an instance of a face. If, based on the processing at the first stage 802, it is determined that content 602 does include an object that can be classified as a face, the process 800 proceeds to the next stage 804 for further processing. Similar processes are applied at each stage 804, 806, and so on, to some final stage 808. - Notably, each stage in the
example process 800 may apply increasing levels of processing which require increasingly more computational resources. A benefit of this cascade technique is that objects that are not faces are immediately eliminated as such at earlier stages with relatively little processing. To be classified as a particular type of object (e.g., a face), the content must pass each of the stages 802-808 of the classifier. - Note that the example Haar
Cascade classifier process 800 depicted in FIG. 8 is for classifying detected objects as faces; however, similar classifiers may be trained to detect other classes of objects (e.g., car, building, cat, tree, etc.). - Returning to
FIG. 6, the content recognition process 606 may also include distinguishing between instances of detected objects. For example, a grouping method may be applied to associate pixels corresponding to a particular class of objects to a particular instance of that class by selecting pixels that are substantially similar to certain other pixels corresponding to that instance, pixels that are spatially clustered, pixel clusters that fit an appearance-based model for the object class, etc. Again, this process may involve applying deep learning (e.g., a convolutional neural network) to distinguish individual instances of detected objects. Some example techniques that can be applied for identifying multiple objects include Regions with Convolutional Neural Network Features (R-CNN), Fast R-CNN, Single Shot Detector (SSD), You Only Look Once (YOLO), etc. - The
content recognition process 606 may also include recognizing the identity of detected objects (e.g., specific people). For example, the analytics system 604 may receive inputs (e.g., captured images/video) to learn the appearances of instances of certain objects (e.g., specific people) by building machine-learning appearance-based models. Instance segmentations identified based on processing of content 602 can then be compared against such appearance-based models to resolve unique identities for one or more of the detected objects. Identity recognition can be particularly useful in this context as it allows the system to ignore the detection of certain known individuals in captured images (e.g., members of a household) while focusing notifications on unknown individuals and/or known unwanted individuals that more likely pose a security threat. - The
content recognition process 606 may also include fusing information related to detected objects to gain a semantic understanding of the captured scene. For example, the content recognition process 606 may include fusing semantic information associated with a detected object with geometry and/or motion information of the detected object to infer certain information regarding the scene. Information that may be fused may include, for example, an object's category (i.e., class), identity, location, shape, size, scale, pixel segmentation, orientation, inter-class appearance, activity, and pose. As an illustrative example, the content recognition process 606 may fuse information pertaining to one or more detected objects to determine that a clip of video is capturing a known person (e.g., a neighbor) walking their dog past a house. The same process may be applied to another clip to determine that the other clip is capturing an unknown individual peering into a window of a surveilled house. The analytics system 604 can then use such information to generate notifications only for the scene that presents a heightened security risk (i.e., the unknown person looking in the window) despite motion being detected in both. - In some embodiments, labeled image data (e.g., historical video from one or more sources) may be input to train a neural network (or other machine-learning-based models) as part of the
content recognition process 606. For example, security experts may input previously captured video from a number of different sources as examples of certain classes of objects (e.g., car, building, cat, tree, etc.) to inform the content recognition process 606. - As alluded to above, after performing content recognition to, for example, detect objects or further to gain a semantic understanding of a captured scene, the
analytics system 604 will utilize this information as part of an event detection process 608. Event detection may include detecting recognizable events (e.g., a person walking to the front door) and analyzing certain specifics regarding the detected event (e.g., the person's identity, time of day, proximity to other detected events, and other contextual information) to determine if the detected event is indicative of a security threat that warrants a notification. - In some embodiments, determining that a detected event is indicative of a security threat may include comparing data associated with the detected event against a database of data associated with candidate threats, for example, defined based on input from industry security experts. As an illustrative example, by processing
video content 602, the analytics system 604 may detect an event characterized by an unknown individual approaching the doorway to a residence. The analytics system 604 may then compare this semantic information regarding the detected event to a database of candidate threats. Based on the comparison, the analytics system 604 may identify a particular candidate threat that matches (within some threshold level of certainty) the detected event. In some embodiments, the process of comparing may include generating, by the analytics system 604, a pattern matching score (e.g., a value between 0 and 10) and identifying the detected event as indicative of the particular candidate security threat if the generated score satisfies a threshold criterion (e.g., above 7 on a scale of 0-10). - Alternatively, or in addition, the
analytics system 604 may employ machine learning to analyze received content 602 to determine if the content is indicative of a security threat. For example, as previously discussed, the analytics system 604 may apply machine-learning-based behavioral analytics to learn the behavior of objects captured in video images and identify when the behavior of such objects is indicative of a security threat. Applying a machine-learning-based approach may be beneficial in certain instances as it may alleviate the need to develop complex threat detection rules that rely on preexisting knowledge and that are prone to incorrectly identifying unexpected or rare behavior. - If, based on the analysis of the received
content 602, the analytics system 604 determines that the content is indicative of a security threat, the analytics system may apply a notification generation process 610 to generate one or more notifications for delivery to an administrative user at a user device 102. - In some embodiments,
notification generation 610 may include generating one or more notifications 614, for example, in the form of messages that are then transmitted over a computer network to a user device 102 associated with an administrative user. Notifications may include emails, text messages (e.g., SMS, MMS, etc.), automated phone calls, alerts within interface 510, or any other communications medium appropriate for delivery at user device 102. - In some embodiments, notification generation may include transmitting processed and/or filtered
content 616 for delivery at the client device. For example, in the case of video content 602, the analytics system may process the received video content 602 and forward content 616 based on the processing to the user device 102. Processed content 616 may include, for example, a shortened video clip that specifically depicts the activity upon which the security threat was identified. Processed content 616 may also include transformations to the original content to highlight the activity upon which the security threat was identified. For example, the analytics system 604 may be configured to process content 602 to remove detected motion from the content that is not indicative of a security threat. -
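This kind of segment-level filtering can be sketched as a pure function over motion segments. The (start, end, is_threat) tuple layout, the padding value, and the function name are illustrative assumptions for the sketch, not details taken from the disclosure:

```python
def filter_threat_segments(segments, padding=2.0):
    """Keep only the time ranges flagged as threat-indicative, pad them
    with a little surrounding context, and merge overlapping ranges so
    the forwarded clip covers each incident exactly once."""
    ranges = sorted(
        (max(0.0, start - padding), end + padding)
        for start, end, is_threat in segments
        if is_threat
    )
    merged = []
    for start, end in ranges:
        if merged and start <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

A clip trimmer could then cut the source video to the returned ranges, dropping the motion that was not indicative of a threat.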
Processed content 616 may be transmitted to the user device 102 in real time (or near real time) as the content 602 is generated by the camera 110a. For example, a camera 110a may transmit content 602 in the form of a continuous stream of video to the analytics system 604. The analytics system 604 may process the received video stream as it is received and only forward portions of the video stream (i.e., processed/filtered content 616) as events are detected that are indicative of a security threat. This processing may occur in real time or near real time as the video is captured at the camera 110a so that an administrator user can effectively respond to the security threats. - Alternatively, or in addition, processed/filtered
content 616 may represent time-shifted recordings that are delivered to the user device 102 after the events underlying the recordings have already occurred. For example, an administrator user that does not want to be bothered with the delivery of live streams throughout the day may elect instead to, for example, review the recordings for the day once at the end of the day. - In some embodiments, one or more of the
electronic devices 110 may be configured to individually analyze certain content (e.g., captured video) and generate notifications. In such embodiments, an analytics system 604 operating apart from such an electronic device may be configured to process such notifications as content 602 as part of a notification filtering process. Notifications received by the analytics system 604 for processing may be referred to as provisional notifications in that they are subject to filtering processes which may result in the notifications being forwarded to a user device or discarded/ignored. - As an illustrative example, the
content 602 depicted in FIG. 6 as being input to analytics system 604 may include a provisional notification from a camera 110a. For example, the camera 110a may be configured to independently analyze captured video to detect motion and generate notifications based on the detected motion. In that case, the processing of content 602 by the analytics system 604 to detect an event (i.e., event detection process 608) may include analyzing the received provisional notification to interpret or otherwise identify an event (e.g., motion detection) as detected at the camera 110a. Further, the process of causing a notification to be presented to a user (i.e., notification generation 610) if the detected event satisfies a specified criterion may include generating a new notification based on the received provisional notification and/or simply forwarding the received provisional notification for delivery to a user device 102. - In some embodiments, the
analytics system 604 may consider user feedback 618 provided by one or more users. For example, if a user indicates that a certain video clip is undesirable, uninteresting, or otherwise not worthy of a notification, then the analytics system 604 may reduce/eliminate notifications related to future video clips having similar characteristics (e.g., generated at the same time, generated based on the same trigger, including the same visual/audible objects, etc.). - The
analytics system 604 may also consider feedback provided by a cohort that includes the corresponding user. The cohort may include users that share a characteristic in common, such as geographical location, notification frequency, etc. For example, if the analytics system 604 considers feedback from users within a neighborhood, the analytics system 604 may know to filter those notifications pertaining to a cat that lives in the neighborhood. As another example, if the analytics system 604 considers feedback from users within a city, the analytics system 604 may know to filter those notifications pertaining to events triggered by weather (e.g., wind or rain). - Various programming models and associated techniques for processing and generating data can be applied by the
analytics system 604 to process content 602 and generate notifications. For example, in some embodiments, analytics system 604 may utilize a distributed computing cluster to process content 602. Utilizing a distributed computing architecture can be particularly beneficial when processing large amounts of data such as content received from a security system or multiple security systems. FIG. 9 illustrates how various inputs such as content 602 (e.g., video clips, keystrokes) and session metadata may be received from base station(s) 105, for example, via a network-accessible server 146, and fed into a distributed computing cluster 902. In some embodiments, input data from a development cycle 904 such as ticketing/monitoring information and/or information stored in a knowledge base may also be input to the distributed computing cluster 902. - The distributed computing cluster 902 may represent a logical entity that includes sets of host machines (not shown in
FIG. 9) that run instances of services configured for distributed processing of data. In an example embodiment, the distributed computing cluster 902 may comprise an Apache Hadoop™ deployment. Apache Hadoop™ is an open-source software framework for reliable, scalable, and distributed processing of large data sets across clusters of commodity machines. Examples of services/utilities that can be deployed in an Apache Hadoop™ cluster include the Apache Hadoop™ Distributed File System (HDFS), MapReduce™, Apache Hadoop™ YARN, and/or the like. The host computing devices comprising the computing cluster 902 can include physical and/or virtual machines that run instances of roles for the various services/utilities. For example, the Apache™ HDFS service can have the following example roles: a NameNode, a secondary NameNode, a DataNode, and a balancer. In a distributed system such as computing cluster 902, one service may run on multiple host machines. - Apache Hadoop™ software utilities can be employed to facilitate the development of filtering algorithm(s), the acquisition of data pertaining to surveilled environments, and the application of the filtering algorithm(s) to improve real-time analytics. Here, for example, the Apache Hadoop™ software utilities may consider content 602 (e.g., video clips) generated by electronic devices deployed in surveilled environments, as well as user feedback specifying which notifications are desired. The Apache Hadoop™ software utilities can also develop a classification model for classifying content by training a supervised machine learning algorithm. Various machine learning and/or artificial intelligence technologies can be employed to facilitate the development of the classification model.
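- As one minimal, self-contained illustration of the kind of supervised classification model described above, a nearest-centroid classifier over per-clip feature vectors is sketched below. The feature layout and labels are invented for this sketch; a production system would more likely train a neural network or use an established machine-learning library:

```python
import math

def train_centroids(labeled_clips):
    """labeled_clips: iterable of (feature_vector, label) pairs.
    Returns the mean feature vector (centroid) per label."""
    sums, counts = {}, {}
    for features, label in labeled_clips:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc] for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))
```

Training corresponds to the labeled historical video described earlier; classification of a new clip's features yields the label used downstream for filtering.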
-
FIG. 10 illustrates how MapReduce™ can be utilized in combination with Apache Hadoop™ in a distributed computing cluster 902 to process various sources of information. MapReduce™ is a programming model for processing/generating big data sets with a parallel, distributed algorithm on a cluster. As shown in FIG. 10, MapReduce™ usually splits an input data set (e.g., content 602 comprising video clips) into independent chunks that are processed by the map tasks in a parallel manner. The framework sorts the outputs of the map tasks, which are then input to the reduce tasks. Ultimately, the output of the reduce tasks may be a classification of the content or an event determination that can be utilized by the analytics system 604 to generate notifications and/or filter content being delivered to a user device 102. -
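The split/map/sort/reduce flow just described can be mimicked in a few lines of plain Python. The clip records and the counting reducer are placeholders for illustration; an actual deployment would run MapReduce™ jobs over data in HDFS rather than in-process lists:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(clip):
    """Map task: emit (classification, 1) pairs; the label is assumed to
    have been produced by an upstream content recognition stage."""
    return [(clip["label"], 1)]

def reduce_phase(label, counts):
    """Reduce task: aggregate the counts emitted for one classification."""
    return label, sum(counts)

def map_reduce(clips):
    # Map over (notionally parallel) chunks, then sort/shuffle by key,
    # then group and reduce -- mirroring the flow described above.
    mapped = [pair for clip in clips for pair in map_phase(clip)]
    mapped.sort(key=itemgetter(0))
    return dict(
        reduce_phase(label, [count for _, count in group])
        for label, group in groupby(mapped, key=itemgetter(0))
    )
```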
FIG. 11 illustrates how content 602 can be processed in batches by the analytics system 604. Here, for example, video clips generated by security cameras may be processed in groups. In some embodiments, all of the video clips corresponding to a certain segment of surveilled environments (e.g., a particular group of homes) are collected on a periodic basis. For example, video clips may be collected every 15 minutes, 30 minutes, 60 minutes, 120 minutes, etc. Thereafter, each batch of video clips can be processed. After processing has been completed, notifications can be generated by the analytics system 604 and transmitted substantially simultaneously. Thus, users may periodically receive reports including one or more notifications rather than a steady stream of notifications throughout the day. Users may be permitted to manually specify the cadence at which they receive these reports. -
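The periodic collection might be sketched as below, keying clips into fixed windows so that one report per window can be sent. The timestamp units and record shape are assumptions made for the sketch:

```python
def batch_reports(clips, period_minutes=30):
    """Group (timestamp_minutes, clip_id) records into fixed collection
    windows so a single report per window can be sent, rather than a
    steady stream of individual notifications."""
    batches = {}
    for timestamp, clip_id in clips:
        window = timestamp // period_minutes
        batches.setdefault(window, []).append(clip_id)
    return batches
```

A user-specified cadence would simply change `period_minutes`.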
FIG. 12 shows a flow chart of an example process 1200 for filtering and/or generating notifications based on analysis of content, according to some embodiments. One or more steps of the example process 1200 may be performed by any one or more of the components of the example computer system 1300 described with respect to FIG. 13. For example, the example process 1200 depicted in FIG. 12 may be represented in instructions stored in memory that are then executed by a processing unit. The process 1200 described with respect to FIG. 12 is an example provided for illustrative purposes and is not to be construed as limiting. Other processes may include more or fewer steps than depicted while remaining within the scope of the present disclosure. Further, the steps depicted in example process 1200 may be performed in a different order than is shown. -
Example process 1200 begins at step 1202 with receiving content 602 generated by an electronic device 110 located in a physical environment (e.g., a surveilled environment 400). As previously discussed, the electronic device 110 may be one of several electronic devices 110 associated with a network-connected security system. In some embodiments, the content 602 is received at step 1202 via a computer network, from a base station 105 associated with the network-connected security system. - In some embodiments, the network-connected security system is a video surveillance system and the
electronic device 110 is a network-connected video camera 110a. In such embodiments, the content 602 may include video files. Further, a parameter of the video camera 110a may be any of an optical parameter, an image processing parameter, or an encoding parameter. - In some embodiments, the
content 602 received from the electronic device 110 at step 1202 may include a provisional notification generated by the electronic device 110. For example, the electronic device 110 may include processing resources to detect an event based on sensory information (e.g., video) and generate a notification based on the detected event. -
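A provisional notification of this kind might be filtered as in the sketch below. The field names and the motion/identity rules are hypothetical, chosen only to illustrate the forward-or-discard decision the analytics system makes:

```python
def filter_provisional(notification, known_identities):
    """Decide whether a device-generated provisional notification should
    be forwarded to the user (True) or discarded (False)."""
    event = notification.get("event", {})
    if event.get("type") != "motion":
        # Only motion events are notification candidates in this sketch.
        return False
    if event.get("identity") in known_identities:
        # Motion attributed to a known household member is ignored.
        return False
    return True
```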
Example process 1200 continues at step 1204 with processing the received content 602 to detect an event in the surveilled physical environment. - In the case of video content (e.g., from a
camera 110a), the processing of content at step 1204 to detect an event may include processing the video to detect one or more instances of physical objects in the physical environment and analyzing data associated with the detected one or more instances of physical objects to detect a scene captured by the video camera 110a. For example, as previously discussed, a content recognition process 606 (described with respect to FIG. 6) may apply computer vision techniques to, for example, detect various instances of physical objects, resolve object identities, and fuse various sources of information to gain a semantic understanding of a scene captured by a video camera. The event detected at step 1204 may be based on this scene understanding. - If the received
content 602 includes a provisional notification (e.g., generated by an electronic device), the step of detecting the event at step 1204 may include processing the received provisional notification (e.g., by reading or interpreting a message included in the notification) and identifying the event (as detected by the electronic device 110) based on the processing. -
Example process 1200 continues at step 1206 with determining if the detected event satisfies a specified criterion. As previously discussed, a purpose of processing content generated by a network-connected security system may be to determine if a notification to a user is necessary. In a security context, a notification is generally understood to be necessary when an event that has occurred in a surveilled environment is abnormal or, more specifically, indicative of a security risk. The specified criterion may therefore differ in various embodiments, but is generally established based on a need to selectively notify a user of activity in the surveilled environment that may be of interest to the user, whether that activity is merely outside of a normal baseline or more specifically indicative of a security risk or threat. - In an illustrative embodiment, the step of determining if the detected event satisfies a specified criterion includes comparing data associated with the detected event against a database of data associated with a plurality of candidate security threats, generating a pattern matching score based on the detected event and a particular candidate security threat of the plurality of candidate security threats, and identifying the detected event as indicative of the particular candidate security threat if the generated pattern matching score satisfies a threshold criterion. For example, pattern matching scores may be generated on a scale of 0-10 with a threshold criterion set at 7 to indicate that a detected event is indicative of a particular candidate security threat based on the comparison.
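- The scoring step in this illustrative embodiment might be sketched as follows, with the score computed as the scaled fraction of profile attributes the detected event shares. The attribute-overlap scoring rule and the profile fields are assumptions made for illustration; the disclosure does not specify how the pattern matching score is computed:

```python
def matches_candidate_threat(event_features, threat_profile, threshold=7.0):
    """Score the detected event against one candidate threat profile on a
    0-10 scale and test the score against the threshold criterion."""
    shared = sum(
        1 for key, expected in threat_profile.items()
        if event_features.get(key) == expected
    )
    score = 10.0 * shared / len(threat_profile)
    return score, score >= threshold
```

An event matching three of four profile attributes would score 7.5 and clear the example threshold of 7.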
- Depending on the computer system performing
example process 1200, steps 1204 and 1206 may include transmitting the content 602 to another computing system for processing. For example, if a base station 105 is performing process 1200, steps 1204 and 1206 may include transmitting, by the base station 105, the content 602, via an external network, to an external computing system such as a network-accessible server system 145 to process the content 602 to detect an event and determine if the detected event satisfies a specified criterion. -
Example process 1200 concludes at step 1208 with causing a notification to be presented at a user device 102 communicatively coupled to the network-connected security system if the detected event satisfies the specified criterion. In some embodiments, the notification is presented at the user device 102 in real time or near real time as the content 602 is generated by the electronic device 110. For example, a notification may be presented at a user device 102 within seconds or fractions of a second after a portion of video content is captured at a camera 110a associated with a network-connected security system. The latency between content generation and presentation of the notification will depend on certain limitations in the system (e.g., processing resources, network speed, etc.). - In some embodiments, the notification presented at the
user device 102 at step 1208 may include an alert message (e.g., emails, text messages (e.g., SMS, MMS, etc.), automated phone calls, alerts within interface 510) informing a user of the event. In other words, in some embodiments, step 1208 may include causing the alert message to be transmitted, via a computer network, to the client device 102. - In some embodiments, the notification presented at the
user device 102 at step 1208 may include at least a portion of the content 602 generated by an electronic device 110. In such embodiments, step 1208 may include causing at least a portion of the content 602 received from the electronic device 110 to be transmitted, via a computer network, to the client device 102. In some embodiments, causing the at least a portion of the content 602 to be transmitted to the client device 102 may include initiating a peer-to-peer connection between the electronic device 110 and the client device 102 and causing the portion of content 602 to be transmitted via the peer-to-peer connection. - In cases where the
content 602 received at step 1202 includes a provisional notification, step 1208 may include forwarding the received provisional notification if the event (e.g., as detected by the electronic device 110) satisfies the specified criterion. -
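Steps 1202 through 1208 can be summarized in a minimal end-to-end sketch, with the detection, criterion, and notification stages passed in as placeholder callables rather than concrete implementations:

```python
def process_content(content, detect_event, criterion, notify):
    """Steps 1202-1208 in miniature: receive content, detect an event,
    test the specified criterion, and notify only when it is satisfied.
    Returns True if a notification was caused to be presented."""
    event = detect_event(content)               # step 1204
    if event is not None and criterion(event):  # step 1206
        notify(event)                           # step 1208
        return True
    return False
```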
FIG. 13 is a block diagram illustrating an example of a computer system 1300 in which at least some operations described herein can be implemented. For example, some components of the computer system 1300 may be hosted on any one or more of the devices described with respect to operating environment 100 in FIG. 1, such as electronic devices 110, base station 105, APs 120, local storage 115, network-accessible server system 145, and user devices 102. - The
computer system 1300 may include one or more central processing units ("processors") 1302, main memory 1306, non-volatile memory 1310, network adapter 1312 (e.g., network interface), video display 1318, input/output devices 1320, control device 1322 (e.g., keyboard and pointing devices), drive unit 1324 including a storage medium 1326, and signal generation device 1330 that are communicatively connected to a bus 1316. The bus 1316 is illustrated as an abstraction that represents one or more physical buses and/or point-to-point connections that are connected by appropriate bridges, adapters, or controllers. The bus 1316, therefore, can include a system bus, a Peripheral Component Interconnect (PCI) bus or PCI-Express bus, a HyperTransport or industry standard architecture (ISA) bus, a small computer system interface (SCSI) bus, a universal serial bus (USB), an IIC (I2C) bus, or an Institute of Electrical and Electronics Engineers (IEEE) standard 1394 bus (also referred to as "Firewire"). - The
computer system 1300 may share a similar computer processor architecture as that of a desktop computer, tablet computer, personal digital assistant (PDA), mobile phone, game console, music player, wearable electronic device (e.g., a watch or fitness tracker), network-connected ("smart") device (e.g., a television or home assistant device), virtual/augmented reality systems (e.g., a head-mounted display), or another electronic device capable of executing a set of instructions (sequential or otherwise) that specify action(s) to be taken by the computer system 1300. - While the
main memory 1306, non-volatile memory 1310, and storage medium 1326 (also called a "machine-readable medium") are shown to be a single medium, the terms "machine-readable medium" and "storage medium" should be taken to include a single medium or multiple media (e.g., a centralized/distributed database and/or associated caches and servers) that store one or more sets of instructions 1328. The terms "machine-readable medium" and "storage medium" shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the computer system 1300. - In general, the routines executed to implement the embodiments of the disclosure may be implemented as part of an operating system or a specific application, component, program, object, module, or sequence of instructions (collectively referred to as "computer programs"). The computer programs typically comprise one or more instructions (e.g.,
instructions 1304, 1308, 1328) set at various times in various memory and storage devices in a computing device. When read and executed by the one or more processors 1302, the instruction(s) cause the computer system 1300 to perform operations to execute elements involving the various aspects of the disclosure. - Moreover, while embodiments have been described in the context of fully functioning computing devices, those skilled in the art will appreciate that the various embodiments are capable of being distributed as a program product in a variety of forms. The disclosure applies regardless of the particular type of machine or computer-readable media used to actually effect the distribution.
- Further examples of machine-readable storage media, machine-readable media, or computer-readable media include recordable-type media such as volatile and
non-volatile memory devices 1310, floppy and other removable disks, hard disk drives, optical disks (e.g., Compact Disk Read-Only Memory (CD-ROMS), Digital Versatile Disks (DVDs)), and transmission-type media such as digital and analog communication links. - The
network adapter 1312 enables the computer system 1300 to mediate data in a network 1314 with an entity that is external to the computer system 1300 through any communication protocol supported by the computer system 1300 and the external entity. The network adapter 1312 can include a network adapter card, a wireless network interface card, a router, an access point, a wireless router, a switch, a multilayer switch, a protocol converter, a gateway, a bridge, a bridge router, a hub, a digital media receiver, and/or a repeater. - The
network adapter 1312 may include a firewall that governs and/or manages permission to access/proxy data in a computer network and tracks varying levels of trust between different machines and/or applications. The firewall can be any number of modules having any combination of hardware and/or software components able to enforce a predetermined set of access rights between a particular set of machines and applications, machines and machines, and/or applications and applications (e.g., to regulate the flow of traffic and resource sharing between these entities). The firewall may additionally manage and/or have access to an access control list that details permissions including the access and operation rights of an object by an individual, a machine, and/or an application, and the circumstances under which the permission rights stand. - The techniques introduced here can be implemented by programmable circuitry (e.g., one or more microprocessors), software and/or firmware, special-purpose hardwired (i.e., non-programmable) circuitry, or a combination of such forms. Special-purpose circuitry can be in the form of one or more application-specific integrated circuits (ASICs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), etc.
- The foregoing description of various embodiments of the claimed subject matter has been provided for the purposes of illustration and description. It is not intended to be exhaustive or to limit the claimed subject matter to the precise forms disclosed. Many modifications and variations will be apparent to one skilled in the art. Embodiments were chosen and described in order to best describe the principles of the invention and its practical applications, thereby enabling those skilled in the relevant art to understand the claimed subject matter, the various embodiments, and the various modifications that are suited to the particular uses contemplated.
- Although the Detailed Description describes certain embodiments and the best mode contemplated, the technology can be practiced in many ways no matter how detailed the Detailed Description appears. Embodiments may vary considerably in their implementation details, while still being encompassed by the specification. Particular terminology used when describing certain features or aspects of various embodiments should not be taken to imply that the terminology is being redefined herein to be restricted to any specific characteristics, features, or aspects of the technology with which that terminology is associated. In general, the terms used in the following claims should not be construed to limit the technology to the specific embodiments disclosed in the specification, unless those terms are explicitly defined herein. Accordingly, the actual scope of the technology encompasses not only the disclosed embodiments, but also all equivalent ways of practicing or implementing the embodiments.
- The language used in the specification has been principally selected for readability and instructional purposes. It may not have been selected to delineate or circumscribe the subject matter. It is therefore intended that the scope of the technology be limited not by this Detailed Description, but rather by any claims that issue on an application based hereon. Accordingly, the disclosure of various embodiments is intended to be illustrative, but not limiting, of the scope of the technology as set forth in the following claims.
Claims (29)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US16/239,343 US20190289263A1 (en) | 2018-03-19 | 2019-01-03 | Notifications by a network-connected security system based on content analysis |
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201862644847P | 2018-03-19 | 2018-03-19 | |
| US16/239,343 US20190289263A1 (en) | 2018-03-19 | 2019-01-03 | Notifications by a network-connected security system based on content analysis |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20190289263A1 true US20190289263A1 (en) | 2019-09-19 |
Family
ID=67906268
Family Applications (3)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/239,343 Abandoned US20190289263A1 (en) | 2018-03-19 | 2019-01-03 | Notifications by a network-connected security system based on content analysis |
| US16/239,307 Active US10938649B2 (en) | 2018-03-19 | 2019-01-03 | Adjusting parameters in a network-connected security system based on content analysis |
| US17/188,841 Active 2039-05-15 US11665056B2 (en) | 2018-03-19 | 2021-03-01 | Adjusting parameters in a network-connected security system based on content analysis |
Family Applications After (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US16/239,307 Active US10938649B2 (en) | 2018-03-19 | 2019-01-03 | Adjusting parameters in a network-connected security system based on content analysis |
| US17/188,841 Active 2039-05-15 US11665056B2 (en) | 2018-03-19 | 2021-03-01 | Adjusting parameters in a network-connected security system based on content analysis |
Country Status (1)
| Country | Link |
|---|---|
| US (3) | US20190289263A1 (en) |
Cited By (21)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111372043A (en) * | 2020-02-06 | 2020-07-03 | 浙江大华技术股份有限公司 | Abnormity detection method and related equipment and device |
Families Citing this family (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11019349B2 (en) * | 2017-01-20 | 2021-05-25 | Snap Inc. | Content-based client side video transcoding |
| GB2570449B (en) * | 2018-01-23 | 2022-05-18 | Canon Kk | Method and system for auto-setting of cameras |
| US11394863B2 (en) * | 2018-09-06 | 2022-07-19 | Arlo Technologies, Inc. | No-reference image quality assessment for iterative batch video analysis |
| US10891514B2 (en) * | 2018-12-17 | 2021-01-12 | Microsoft Technology Licensing, Llc | Image classification pipeline |
| US11507677B2 (en) | 2019-02-15 | 2022-11-22 | Microsoft Technology Licensing, Llc | Image classification modeling while maintaining data privacy compliance |
| WO2022050855A1 (en) * | 2020-09-04 | 2022-03-10 | Motorola Solutions, Inc | Computer-implemented method for recommending changes within a security system |
| US11792501B2 (en) * | 2020-12-17 | 2023-10-17 | Motorola Solutions, Inc. | Device, method and system for installing video analytics parameters at a video analytics engine |
| US11620888B2 (en) | 2021-04-19 | 2023-04-04 | Bank Of America Corporation | System for detecting and tracking an unauthorized person |
| US11769324B2 (en) | 2021-04-19 | 2023-09-26 | Bank Of America Corporation | System for detecting unauthorized activity |
| US11354994B1 (en) | 2021-05-04 | 2022-06-07 | Motorola Solutions, Inc. | Analytics for planning an upgrade to a video camera surveillance system |
| WO2023003928A1 (en) * | 2021-07-20 | 2023-01-26 | Nishant Shah | Context-controlled video quality camera system |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20150379848A1 (en) * | 2014-06-25 | 2015-12-31 | Allied Telesis, Inc. | Alert system for sensor based detection system |
| US20180232895A1 (en) * | 2016-02-26 | 2018-08-16 | Ring Inc. | Neighborhood alert mode for triggering multi-device recording, multi-camera motion tracking, and multi-camera event stitching for audio/video recording and communication devices |
| US20180268674A1 (en) * | 2017-03-20 | 2018-09-20 | Ring Inc. | Dynamic Identification of Threat Level Associated With a Person Using an Audio/Video Recording and Communication Device |
| US20180286201A1 (en) * | 2017-03-28 | 2018-10-04 | Ring Inc. | Adjustable alert tones and operational modes for audio/video recording and communication devices based upon user location |
| US20180357871A1 (en) * | 2017-06-07 | 2018-12-13 | Amazon Technologies, Inc. | Informative Image Data Generation Using Audio/Video Recording and Communication Devices |
| US20190221090A1 (en) * | 2018-01-12 | 2019-07-18 | Qognify Ltd. | System and method for dynamically ordering video channels according to rank of abnormal detection |
| US20190327453A1 (en) * | 2016-12-22 | 2019-10-24 | Sony Semiconductor Solutions Corporation | Information transmission device, information transmission method, and information transmission system |
| US20190327128A1 (en) * | 2018-04-24 | 2019-10-24 | Amazon Technologies, Inc. | Using a local hub device as a substitute for an unavailable backend device |
Family Cites Families (33)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7148914B2 (en) * | 2001-04-25 | 2006-12-12 | Hewlett-Packard Development Company, L.P. | Method and apparatus for providing subimages to remote sites |
| US8232909B2 (en) | 2008-09-30 | 2012-07-31 | Cooper Technologies Company | Doppler radar motion detector for an outdoor light fixture |
| US9191431B2 (en) | 2011-07-05 | 2015-11-17 | Verizon Patent And Licensing Inc. | Systems and methods for sharing media content between users |
| US8740793B2 (en) | 2011-08-29 | 2014-06-03 | General Electric Company | Radar based systems and methods for monitoring a subject |
| US9942580B2 (en) | 2011-11-18 | 2018-04-10 | AT&T Intellectual Property I, L.P. | System and method for automatically selecting encoding/decoding for streaming media |
| WO2014144628A2 (en) | 2013-03-15 | 2014-09-18 | Master Lock Company | Cameras and networked security systems and methods |
| US10013318B2 (en) | 2013-04-16 | 2018-07-03 | Entit Software Llc | Distributed event correlation system |
| US9106887B1 (en) | 2014-03-13 | 2015-08-11 | Wowza Media Systems, LLC | Adjusting encoding parameters at a mobile device based on a change in available network bandwidth |
| US20160125714A1 (en) | 2014-11-04 | 2016-05-05 | Canary Connect, Inc. | Video recording with security/safety monitoring device |
| GB201501510D0 (en) * | 2015-01-29 | 2015-03-18 | Apical Ltd | System |
| US9826149B2 (en) * | 2015-03-27 | 2017-11-21 | Intel Corporation | Machine learning of real-time image capture parameters |
| US10178533B2 (en) * | 2015-05-29 | 2019-01-08 | Resolution Products, Inc. | Security systems |
| US9996749B2 (en) * | 2015-05-29 | 2018-06-12 | Accenture Global Solutions Limited | Detecting contextual trends in digital video content |
| US10192117B2 (en) | 2015-06-25 | 2019-01-29 | Kodak Alaris Inc. | Graph-based framework for video object segmentation and extraction in feature space |
| US20180213267A1 (en) | 2015-07-31 | 2018-07-26 | The Khoshbin Company | Distributed surveillance |
| US10193919B2 (en) | 2015-08-24 | 2019-01-29 | Empow Cyber Security, Ltd | Risk-chain generation of cyber-threats |
| KR101777238B1 (en) | 2015-10-28 | 2017-09-11 | 네이버 주식회사 | Method and system for image trend detection and curation of image |
| US10437831B2 (en) * | 2015-10-29 | 2019-10-08 | EMC IP Holding Company LLC | Identifying insider-threat security incidents via recursive anomaly detection of user behavior |
| EP3764281B1 (en) | 2016-02-22 | 2024-09-18 | Rapiscan Systems, Inc. | Methods of identifying firearms in radiographic images |
| CN107370983B (en) * | 2016-05-13 | 2019-12-17 | 腾讯科技(深圳)有限公司 | method and device for acquiring track of video monitoring system |
| CN106211359B (en) | 2016-07-18 | 2020-01-03 | 上海小蚁科技有限公司 | Method and device for enabling device to obtain service |
| US10249047B2 (en) | 2016-09-13 | 2019-04-02 | Intelligent Fusion Technology, Inc. | System and method for detecting and tracking multiple moving targets based on wide-area motion imagery |
| US20180121610A1 (en) | 2016-10-28 | 2018-05-03 | Always In Touch, Inc. | Selecting a healthcare data processing approach |
| US10366263B2 (en) * | 2016-12-30 | 2019-07-30 | Accenture Global Solutions Limited | Object detection for video camera self-calibration |
| US10530991B2 (en) * | 2017-01-28 | 2020-01-07 | Microsoft Technology Licensing, Llc | Real-time semantic-aware camera exposure control |
| US10586433B2 (en) | 2017-02-13 | 2020-03-10 | Google Llc | Automatic detection of zones of interest in a video |
| US10769448B2 (en) * | 2017-05-31 | 2020-09-08 | Panasonic I-Pro Sensing Solutions Co., Ltd. | Surveillance system and surveillance method |
| WO2019007919A1 (en) * | 2017-07-03 | 2019-01-10 | Canon Kabushiki Kaisha | Method and system for auto-setting cameras |
| US11188783B2 (en) * | 2017-10-19 | 2021-11-30 | Nokia Technologies Oy | Reverse neural network for object re-identification |
| US10855996B2 (en) | 2018-02-20 | 2020-12-01 | Arlo Technologies, Inc. | Encoder selection based on camera system deployment characteristics |
| US20190261243A1 (en) | 2018-02-20 | 2019-08-22 | Netgear, Inc. | Video-based channel selection in a wireless network-connected camera system |
| US11064208B2 (en) | 2018-02-20 | 2021-07-13 | Arlo Technologies, Inc. | Transcoding in security camera applications |
| US20210084382A1 (en) | 2019-09-13 | 2021-03-18 | Wowza Media Systems, LLC | Video Stream Analytics |
- 2019-01-03 US US16/239,343 patent/US20190289263A1/en not_active Abandoned
- 2019-01-03 US US16/239,307 patent/US10938649B2/en active Active
- 2021-03-01 US US17/188,841 patent/US11665056B2/en active Active
Cited By (31)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US11616992B2 (en) | 2010-04-23 | 2023-03-28 | Time Warner Cable Enterprises Llc | Apparatus and methods for dynamic secondary content and data insertion and delivery |
| US11669595B2 (en) | 2016-04-21 | 2023-06-06 | Time Warner Cable Enterprises Llc | Methods and apparatus for secondary content management and fraud prevention |
| US12321422B2 (en) | 2016-04-21 | 2025-06-03 | Time Warner Cable Enterprises Llc | Methods and apparatus for secondary content management and fraud prevention |
| US10896593B1 (en) * | 2018-06-10 | 2021-01-19 | Frequentis Ag | System and method for brokering mission critical communication between parties having non-uniform communication resources |
| US11403849B2 (en) * | 2019-09-25 | 2022-08-02 | Charter Communications Operating, Llc | Methods and apparatus for characterization of digital content |
| US12293584B2 (en) | 2019-09-25 | 2025-05-06 | Charter Communications Operating, Llc | Methods and apparatus for characterization of digital content |
| CN111372043A (en) * | 2020-02-06 | 2020-07-03 | 浙江大华技术股份有限公司 | Abnormity detection method and related equipment and device |
| US20210357657A1 (en) * | 2020-05-14 | 2021-11-18 | Matchx Gmbh | Methods, systems, apparatuses, and devices for facilitating managing incidents occurring in areas monitored by low data-rate monitoring devices using the low data-rate monitoring devices |
| US11508155B2 (en) * | 2020-05-14 | 2022-11-22 | Matchx Gmbh | Methods, systems, apparatuses, and devices for facilitating managing incidents occurring in areas monitored by low data-rate monitoring devices using the low data-rate monitoring devices |
| US20220067090A1 (en) * | 2020-08-28 | 2022-03-03 | Comcast Cable Communications, Llc | Systems, methods, and apparatuses for intelligent audio event detection |
| US11887448B2 (en) | 2021-02-18 | 2024-01-30 | Dice Corporation | Digital video alarm guard tour monitoring computer system |
| US11741825B2 (en) | 2021-04-16 | 2023-08-29 | Dice Corporation | Digital video alarm temporal monitoring computer system |
| US12307878B2 (en) * | 2021-04-16 | 2025-05-20 | Dice Corporation | Digital video alarm analytics computer system |
| US11688273B2 (en) | 2021-04-16 | 2023-06-27 | Dice Corporation | Digital video alarm monitoring computer system |
| US11790764B2 (en) | 2021-04-16 | 2023-10-17 | Dice Corporation | Digital video alarm situational monitoring computer system |
| US12361805B2 (en) | 2021-04-16 | 2025-07-15 | Dice Corporation | Hyperlinked digital video alarm electronic document |
| US12323744B2 (en) | 2021-04-16 | 2025-06-03 | Dice Corporation | Digital video alarm human monitoring computer system |
| US20220335816A1 (en) * | 2021-04-16 | 2022-10-20 | Dice Corporation | Digital video alarm analytics computer system |
| US12148277B2 (en) | 2021-04-30 | 2024-11-19 | Arlo Technologies, Inc. | Electronic monitoring system using push notifications with custom audio alerts |
| EP4083952A1 (en) * | 2021-04-30 | 2022-11-02 | Arlo Technologies, Inc. | Electronic monitoring system using push notifications with custom audio alerts |
| US20230103335A1 (en) * | 2021-07-30 | 2023-04-06 | AT&T Intellectual Property I, L.P. | Hyperlocal edge cache |
| US12501002B2 (en) * | 2021-08-31 | 2025-12-16 | Micron Technology, Inc. | Distributed camera system |
| US20230069768A1 (en) * | 2021-08-31 | 2023-03-02 | Micron Technology, Inc. | Distributed Camera System |
| US20240221477A1 (en) * | 2021-11-01 | 2024-07-04 | Jpmorgan Chase Bank, N.A. | Systems and methods for wayfinding in hazardous environments |
| US20230132523A1 (en) * | 2021-11-01 | 2023-05-04 | Jpmorgan Chase Bank, N.A. | Systems and methods for wayfinding in hazardous environments |
| US11972681B2 (en) * | 2021-11-01 | 2024-04-30 | Jpmorgan Chase Bank, N.A. | Systems and methods for wayfinding in hazardous environments |
| US12333928B2 (en) * | 2021-11-01 | 2025-06-17 | Jpmorgan Chase Bank, N.A. | Systems and methods for wayfinding in hazardous environments |
| GB2619016A (en) * | 2022-05-20 | 2023-11-29 | Intellicam360 Ltd | A security system and a method of securing a location |
| US20230394938A1 (en) * | 2022-06-07 | 2023-12-07 | Voyance Technologies Inc. | System and method for providing security analytics from surveillance systems using artificial intelligence |
| US12417682B2 (en) * | 2022-06-07 | 2025-09-16 | Voyance Technologies Inc. | System and method for providing security analytics from surveillance systems using artificial intelligence |
| ES3035407A1 (en) * | 2024-03-01 | 2025-09-02 | Gallardo Agustin Paraiso | Comprehensive security and alert system based on artificial intelligence |
Also Published As
| Publication number | Publication date |
|---|---|
| US10938649B2 (en) | 2021-03-02 |
| US20210281476A1 (en) | 2021-09-09 |
| US20190288911A1 (en) | 2019-09-19 |
| US11665056B2 (en) | 2023-05-30 |
Similar Documents
| Publication | Title |
|---|---|
| US20190289263A1 (en) | Notifications by a network-connected security system based on content analysis |
| US11386285B2 (en) | Systems and methods of person recognition in video streams |
| US20220245396A1 (en) | Systems and Methods of Person Recognition in Video Streams |
| US12052494B2 (en) | Systems and methods of power-management on smart devices |
| US11576127B2 (en) | Mesh-based home security system |
| US10555393B1 (en) | Face recognition systems with external stimulus |
| US10936655B2 (en) | Security video searching systems and associated methods |
| US9633548B2 (en) | Leveraging a user's geo-location to arm and disarm a network enabled device |
| US20190356588A1 (en) | Network routing of media streams based upon semantic contents |
| US20190260987A1 (en) | Encoder selection based on camera system deployment characteristics |
| CN117496643A (en) | System and method for detecting and responding to visitor of smart home environment |
| US11704908B1 (en) | Computer vision enabled smart snooze home security cameras |
| US10212778B1 (en) | Face recognition systems with external stimulus |
| US11783010B2 (en) | Systems and methods of person recognition in video streams |
| US11908196B1 (en) | Security event processing |
| US12136294B1 (en) | Biometric data processing for a security system |
| US11651456B1 (en) | Rental property monitoring solution using computer vision and audio analytics to detect parties and pets while preserving renter privacy |
| EP3410343A1 (en) | Systems and methods of person recognition in video streams |
| WO2021180004A1 (en) | Video analysis method, video analysis management method, and related device |
| WO2025064209A1 (en) | Biometric data processing for a security system |
| WO2024163106A1 (en) | Security event processing |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION COUNTED, NOT YET MAILED |
| | AS | Assignment | Owner name: NETGEAR, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:AMINI, PEIMAN;EMMANUEL, JOSEPH AMALAN ARUL;REEL/FRAME:054353/0879. Effective date: 20190411 |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE AFTER FINAL ACTION FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STCB | Information on status: application discontinuation | Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |