GB2585919A - Method and system for reviewing and analysing video alarms - Google Patents
- Publication number
- GB2585919A (Application GB1910561.8A)
- Authority
- GB
- United Kingdom
- Prior art keywords
- frames
- limited number
- alarm
- computer
- video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B29/00—Checking or monitoring of signalling or alarm systems; Prevention or correction of operating errors, e.g. preventing unauthorised operation
- G08B29/18—Prevention or correction of operating errors
- G08B29/185—Signal analysis techniques for reducing or preventing false alarms or for enhancing the reliability of the system
- G08B29/186—Fuzzy logic; neural networks
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
- G08B21/0202—Child monitoring systems using a transmitter-receiver system carried by the parent and the child
- G08B21/0205—Specific application combined with child monitoring using a transmitter-receiver system
- G08B21/0208—Combination with audio or video communication, e.g. combination with "baby phone" function
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19695—Arrangements wherein non-video detectors start video recording or forwarding but do not generate an alarm themselves
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/188—Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Child & Adolescent Psychology (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Signal Processing (AREA)
- Automation & Control Theory (AREA)
- Emergency Management (AREA)
- Artificial Intelligence (AREA)
- Business, Economics & Management (AREA)
- Evolutionary Computation (AREA)
- Fuzzy Systems (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Computer Security & Cryptography (AREA)
- Alarm Systems (AREA)
Abstract
A method for reviewing and analysing video alarms comprises: following an alarm signal, receiving video data of an event at a monitored location from video camera 1 associated with an alarm sensor; isolating a limited number of frames from the video data in a segment of interest of the video data; sending the limited number of frames via a data connection to analysis system 2 that is located geographically remotely to the monitored location; upon receipt by the analysis system of the limited number of frames, automatically analysing the limited number of frames by the computer; and, in dependence on the analysis, validating or dismissing the alarm. Machine learning techniques may be used to analyse the limited number of frames. Upon validation of the alarm, the limited number of frames may be automatically forwarded to a central monitoring station or a personal computing device that is linked to a user account associated with the video source. The analysis system may include translation modules 6 configured to convert received alarm signals into a common data format for subsequent analysis by the computer.
Description
Intellectual Property Office Application No. GB1910561.8 RTM Date: 24 January 2020. The following terms are registered trade marks and should be read as such wherever they occur in this document: Hikvision, Dahua, Videofied, Adpro. Intellectual Property Office is an operating name of the Patent Office (www.gov.uk/ipo).

Method and system for reviewing and analysing video alarms

The present disclosure relates to a method and system for reviewing and analysing video alarms, in particular to such a system and method for autonomously reviewing and filtering false alarms in the context of video verification or remote guarding.
A conventional video verification service works in conjunction with burglar alarms to validate an incident. When a locally installed sensor detects an anomaly, a locally installed video camera records a short clip, which is sent to a centralised/remote control room managed by a security company. An operator in the control room manually reviews the clip and, if the operator sees a problem, contacts the police or another relevant emergency response service.
The locally installed sensor may be a hardware sensor (such as an infrared sensor). Alternatively or additionally, some form of local video analytics may be performed. Such local video analytics requires a server to be installed locally to process real-time video. Processed video is sent to the control room for manual review.
There are a number of limitations with the prior art approach:
1. High cost of installation - each install requires an engineer to visit the site and spend several hours configuring the device, and the site must be revisited for maintenance. This is in addition to the cost of the server.
2. Old-fashioned computer vision techniques - prior art systems rely on hand-crafted rules such as size, shape and colour to detect and classify objects. These systems are extremely sensitive to weather and environment and hence lead to a large number of false alarms. Because the system is a local installation, it does not improve or evolve over time.
3. Inability to scale - traditional analytics systems depend on analysing live video. Due to bandwidth limitations this must be done locally: it is impossible to stream the outputs from thousands of cameras to a remote location for analysis.
The present invention arose in a bid to provide an improved method and system of reviewing and analysing video alarms capable of reducing false alarms.

According to the present invention in a first aspect, there is provided a method for reviewing and analysing video, comprising: receiving video data of an event at a monitored location from a video source associated with an alarm sensor, following an alarm signal; isolating a limited number of frames from the video data in a segment of interest of the video data; sending the limited number of frames via a data connection to a computer that is located geographically remotely to the monitored location; upon receipt by the computer of the limited number of frames, automatically analysing the limited number of frames by the computer; and, in dependence on the analysis, validating or dismissing the alarm.
By such an arrangement a solution is provided that sits between the alarm device and a manual reviewer (such as an operator at a central monitoring station or a user of a residential security system) and acts like a firewall. Alarms generated at the monitored location are intercepted and automatically reviewed before they are sent for manual review. False alarms may be discarded and true alarms may be forwarded for manual review.
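The firewall-like behaviour described above can be sketched as a simple filter. The function name and callback signatures here are illustrative assumptions, not details from the patent:

```python
def alarm_firewall(frames, analyse, forward, discard):
    """Sits between the alarm device and the manual reviewer: alarms are
    automatically analysed first, and only validated alarms are forwarded."""
    if analyse(frames):      # automatic analysis validates the alarm
        forward(frames)      # true alarm: send on for manual review
        return "forwarded"
    discard(frames)          # false alarm: never reaches an operator
    return "discarded"
```

Any alarm the analysis dismisses is dropped before a human ever sees it, which is what reduces the manual review load.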
The solution is unique, as compared to the prior art, since automatic analysis is based on the limited number of frames and not on a live stream. Such analysis allows for the offsite/centralised analysis of video and scalability of the system.
Most preferably, upon validation of the alarm, the limited number of frames are automatically forwarded to a further computer for manual review. In the event the alarm is dismissed, which will be the case with a false alarm, the limited number of frames will not be forwarded.
The further computer may be a computer at a central monitoring station (also known as an alarm receiving centre) or may be a personal computing device, including but not limited to a smartphone, tablet computer or other such device. The video source will be linked to the central monitoring station or personal computing device through a user account associated with the video source.
According to the present invention in a further aspect, there is provided an analysis system for use in implementing the method as detailed above. The analysis system comprises a computer that is arranged to automatically analyse a limited number of frames from the video data and, in dependence on the analysis, validate or dismiss an alarm.
Further, preferred, features are presented in the dependent claims.
The present invention will now be described by way of example with reference to the accompanying drawing, in which: Figure 1 shows, schematically, a system implementing a method for reviewing and analysing video alarms according to an embodiment of the present invention.
In accordance with a first embodiment, there is provided a method for reviewing and analysing video alarms, comprising: receiving video data of an event at a monitored location from a video source associated with an alarm sensor, following an alarm signal; isolating a limited number of frames from the video data in a segment of interest of the video data; sending the limited number of frames via a data connection to a computer that is located geographically remotely to the monitored location; upon receipt by the computer of the limited number of frames, automatically analysing the limited number of frames by the computer; and in dependence on the analysis, validating or dismissing the alarm.
A system implementing the method is shown, schematically, in Figure 1. The system comprises a plurality of video cameras 1, an analysis system 2, which comprises one or more computers, and one or more further computers 3. The arrow 4 indicates the data connection between any one or more of the cameras and the analysis system 2 and the arrow 5 indicates a data connection between the analysis system 2 and any one or more of the further computers 3. The data connections may be wired or wireless, as will be appreciated by those skilled in the art. The analysis system 2 is most preferably cloud based, i.e. implemented using cloud computing.
It is to be noted, as will be readily appreciated by those skilled in the art, that there may be any number of video cameras 1 at any number of locations. The present invention is not to be limited in this regard. There could, for example, be tens of thousands of video cameras at tens of thousands of different locations. Similarly, there may be any number of further computers 3 at any number of locations and the invention is again not to be limited in this regard. A key advantage of the present invention is its scalability. The scalability is possible since limited data is sent from the video cameras to the analysis system 2. There may be 9 or less frames of video sent, more preferably 6 or less frames of video sent and most preferably 3 or less frames of video sent. With a limited number of frames sent, there is no bandwidth issue.
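By way of illustration, selecting a small, bandwidth-friendly set of frames from a segment of interest might look like the following sketch. The even-spacing policy is an assumption for illustration; the patent only requires that a limited number of frames (e.g. 3 or fewer) is isolated:

```python
def isolate_frames(frames, max_frames=3):
    """Pick at most max_frames evenly spaced frames from a segment of interest,
    so only a small handful is sent to the remote analysis system."""
    if len(frames) <= max_frames:
        return list(frames)
    if max_frames == 1:
        return [frames[len(frames) // 2]]  # middle frame of the segment
    step = (len(frames) - 1) / (max_frames - 1)
    return [frames[round(i * step)] for i in range(max_frames)]
```

With at most a few frames per alarm, tens of thousands of cameras can share one centralised analysis system without the bandwidth cost of live streaming.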
The video cameras 1 will each comprise, or at least be associated with, an alarm sensor that triggers with motion. The alarm sensor may operate on the basis of infrared or otherwise, as will be appreciated by those skilled in the art.
Core to the present invention is the analysis system 2, which may be integrated into existing security camera system infrastructure. That is to say, the analysis system 2 may be configured to receive data from, and send data to, existing alarm system hardware components. The analysis system 2 may, for example, receive alarm signals from existing/third party video cameras and forward such signals, following validation, to existing/third party alarm monitoring systems/user hardware for subsequent manual analysis. As discussed, the analysis system 2 is preferably cloud based.
The analysis system 2 receives the limited number of frames of video from the video cameras 1 and the automatic analysis by the analysis system 2 is based on the limited number of frames (9 or less, more preferably 6 or less, and most preferably 3 or less) and not on a live stream. Such analysis allows for the offsite/centralised analysis of video and scalability of the system, as discussed.
The analysis system 2 is preferably camera agnostic and capable of operating with the cameras of any existing manufacturers. There is presently no common alarm protocol. Each manufacturer generally uses its own protocol to emit alarms. The analysis system 2 preferably allows for integration with any known camera brands/manufacturers.
As shown in Figure 1, the analysis system comprises a plurality of adaptors 6. Each adaptor is arranged to handle one alarm protocol only. The adaptors will be implemented in suitable software. The adaptors are essentially translation modules that are each arranged to receive alarms according to a specific protocol/data format and to convert those alarms into a suitable (common/standard) data format to be understood by the other software components of the analysis system 2. By making the heterogeneous alarms uniform using the adaptors, it becomes possible to decompose the software into micro services, each of which is responsible for one thing only.
All cameras are built around standard protocols, which include SMTP (Simple Mail Transfer Protocol), HTTP (HyperText Transfer Protocol) and TCP (Transmission Control Protocol). Whilst different camera brands can usually support more than one of those protocols, the analysis system 2 is preferably configured to associate each brand with only one protocol, with that protocol being used for the integration.
The following table, by way of non-limiting example only, illustrates several camera brands against the protocols implemented in the analysis system 2:
| Protocol | Description | Camera brands |
|---|---|---|
| SMTP | Alarms are sent by email, although the format of those emails can vary greatly. | Hikvision, Dahua, Videcon, etc. |
| HTTP | An HTTP request is sent with the alarm. | Rialto |
| TCP | Alarms are streamed bit by bit to the server. | Videofied |
| TCP (proprietary) | A custom SDK is used. | Adpro |

In practice, an alarm will be directed by the analysis system 2 to the appropriate adaptor dependent on its protocol. According to the nature of the alarm, various communications may happen with different services 7, 8, 9 within the analysis system 2.
In the depicted arrangement, as is preferable, the following services are provided:
- Alarm intelligence 7 - a service responsible for analysing the images and determining the validity of the alarm.
- API server 8 - a service responsible for communication with a database of the analysis system 2.
- Storage 9 - cloud storage where the alarm is stored for a predetermined period of time, such as 7 days, 30 days or otherwise, as desired.
Each adaptor preferably performs four tasks: 1. Identify the camera's unique identifier.
2. Extract the images from the alarm.
3. Communicate with the services within the analysis system.
4. Forward the alarm to the further computer 3.
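A minimal sketch of one such adaptor, assuming a hypothetical email-style payload with a camera-id header and image attachments. The field names and the common dictionary format are illustrative assumptions, not details from the patent:

```python
from abc import ABC, abstractmethod

class Adaptor(ABC):
    """Translates one vendor alarm protocol into the common data format."""
    @abstractmethod
    def to_common(self, raw_alarm):
        ...

class SmtpAdaptor(Adaptor):
    """Handles email-style alarms (hypothetical payload layout)."""
    def to_common(self, raw_alarm):
        return {
            "camera_id": raw_alarm["headers"]["X-Camera-Id"],  # task 1
            "images": raw_alarm["attachments"],                # task 2
            "original": raw_alarm,  # kept so the alarm can be forwarded unchanged
        }

# one adaptor per protocol; an alarm is routed by its protocol name
ADAPTORS = {"smtp": SmtpAdaptor()}

def route_alarm(protocol, raw_alarm):
    return ADAPTORS[protocol].to_common(raw_alarm)
```

Because every adaptor emits the same common format, the downstream services (alarm intelligence, API server, storage) never need to know which vendor protocol an alarm arrived in.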
The deployment, monitoring and maintenance of the services is preferably done using Kubernetes, the open-source container-orchestration platform. All of the analysis system 2 services are part of a single cluster and are orchestrated into applications (pods), preferably over multiple machines (nodes). Such an implementation allows the underpinning applications to scale up and down in terms of both function and alarm load, whilst ensuring that no data is lost and costs are saved.
It must be appreciated that in alternative embodiments, the services provided may be different. For example, not all of the above services need be provided and/or the services may be modified.
The system is preferably arranged such that, upon validation of an alarm, the analysis system 2 forwards on the respective alarm data in its original data format. Accordingly, the system 2 has no impact on the functioning of the alarm system infrastructure it is integrated into; the existing elements of the alarm system infrastructure operate as before.

For the automatic analysis of the limited number of frames, advanced machine learning techniques are preferably implemented. A number of different machine learning techniques may be implemented, and the present invention is not specifically limited in this regard. Several non-limiting examples of possible machine learning techniques are presented below.
In one arrangement, an RCNN (region-proposal-based Convolutional Neural Network) object detector is implemented. Such an object detector takes an image as an input and is trained to output a set of bounding boxes representing the locations of objects, together with the probabilities of the classifications of those objects. The algorithm is composed of a neural network divided into three parts. The first part is a standard classifier, preferably trained on a large dataset composed of millions of images; it is used to create an internal representation of the image. This representation is used by the second part, which proposes the 300 most likely regions where objects should be. The third part evaluates each of these region proposals with a basic classifier in order to determine its probability of being an actual object.
This object detector is combined with a basic motion detector, and the IOU (Intersection over Union) of the two sets of detections is used to determine the validity of an alarm: if the combination contains at least one element, meaning an object has been detected and that object is moving, the alarm is flagged as true; otherwise, the alarm is flagged as false.
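A minimal sketch of this validation step, assuming axis-aligned boxes in (x1, y1, x2, y2) form. The box representation and the zero-overlap threshold are illustrative assumptions, not details from the patent:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    if inter == 0:
        return 0.0
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def validate_alarm(object_boxes, motion_boxes):
    """Flag the alarm as true if any detected object overlaps a motion region,
    i.e. an object is present and that object is moving."""
    return any(iou(o, m) > 0.0 for o in object_boxes for m in motion_boxes)
```

For example, an object box at (0, 0, 10, 10) with a motion region at (5, 5, 15, 15) overlaps, so the alarm is validated; disjoint boxes dismiss it.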
An alternative technique, labelled as a moving object detector for ease of reference, which offers increased accuracy, implements a neural network that is specially designed to overlook static detections and only focus on the moving detections. Instead of getting detections frame by frame, this algorithm takes as its input a pair of consecutive images and returns the bounding boxes of moving objects with a set of probabilities for determining their associated classifications.
The architecture of the neural network is based on the 'DSOD: Learning Deeply Supervised Object Detectors from Scratch' paper, available at http://openaccess.thecvf.com/content_ICCV_2017/papers/Shen_DSOD_Learning_Deeply_ICCV_2017_paper.pdf.
There are a number of benefits conferred by this architecture, including but not limited to the following: 1. Proposal Free: this network directly predicts detection boxes and their classes. Other approaches that require a separate or integrated network to propose regions of interest also require pre-training on other datasets. The proposal-free method allows the network to focus exclusively on domain-specific data.
2. Deep Supervision: Useful learning signals are provided to multiple layers of the network rather than just the final output layer. Doing so allows the provision of more feedback to the model.
3. Stem Block: This is a simple block that allows efficient compression of information throughout the network. By repeatedly using this block, information loss is kept to a minimum while still ensuring that the network doesn't get too large (which would increase both training and inference time).
4. Dense Prediction Structure: Here, dense blocks (as proposed in the 'Densely Connected Convolutional Networks' paper, available at http://openaccess.thecvf.com/content_cvpr_2017/papers/Huang_Densely_Connected_Convolutional_CVPR_2017_paper.pdf) are used. Dense connections join layers of a network even if they are not directly next to each other. These extra connections enable earlier feature maps of the network to have a stronger influence on later layers, if they are deemed useful. This is especially helpful in an object detection setting as it allows the network to identify objects at a wide range of scales.
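The effect of dense connections on feature-map width can be illustrated with simple channel bookkeeping. The growth-rate formulation follows the DenseNet paper; the specific numbers below are illustrative only:

```python
def dense_block_channels(in_channels, growth_rate, num_layers):
    """Track channel counts through a dense block: each layer sees every
    earlier feature map concatenated, and contributes growth_rate new channels."""
    inputs_per_layer = []
    channels = in_channels
    for _ in range(num_layers):
        inputs_per_layer.append(channels)  # this layer's (concatenated) input width
        channels += growth_rate            # its output is appended to the stack
    return channels, inputs_per_layer
```

For instance, a 4-layer block with 16 input channels and growth rate 12 exposes every earlier feature map to every later layer, ending at 64 channels.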
In a yet further and preferred technique, the moving object detector is combined with a Convolutional Neural Network (CNN). It takes a pair of images as its input and outputs the pixel locations of where the moving object should be. This class of algorithm is usually called a segmentation neural network and aims at giving a classification of the image pixel by pixel. Hence, instead of having bounding boxes image by image, blobs of pixels are presented where movements are seen.
This model preferably has an encoder-decoder architecture composed of: 1. An encoder which takes as its input two images, corresponding to two consecutive frames, concatenated at the channel level, and normalized. The encoder learns a useful representation of the input, and consists of 8 convolutional layers, increasing the number of filters for each layer.
2. A decoder which takes as its input the representation from the encoder and emits two maps of the same width and height as the input images, with 1s at the pixel locations where the model predicts a movement has occurred and 0s where the model predicts no movement has occurred. The decoder consists of 5 deconvolutional layers, with a decreasing number of filters for each layer. The last deconvolutional layer is initialized with a prior so that it predicts a lower probability for the rare class (foreground) at the start of training. This initialization improves training stability.
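The encoder's input preparation, two consecutive frames concatenated at the channel level and normalized, can be sketched as follows. Nested Python lists stand in for image tensors here; a real implementation would use arrays:

```python
def prepare_pair(frame_a, frame_b, max_val=255.0):
    """Concatenate two consecutive H x W x C frames at the channel level
    (giving H x W x 2C) and normalize pixel values to [0, 1]."""
    assert len(frame_a) == len(frame_b), "frames must share a height"
    paired = []
    for row_a, row_b in zip(frame_a, frame_b):
        paired.append([[v / max_val for v in px_a + px_b]   # channel concat + scale
                       for px_a, px_b in zip(row_a, row_b)])
    return paired
```

A pair of RGB frames thus becomes a single 6-channel input, which is what lets the network reason about change between the two frames rather than a single static image.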
The loss function consists of two terms for each image, one for each class in the output (motion present, motion absent). Both the terms are normalized focal loss values where the normalization is over the total number of ground truth pixels belonging to the relevant class. The normalization is done so as to capture every motion blob irrespective of size (if the normalization is absent, the loss will reward the network to get most pixels right by compromising on small blobs of motion).
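A per-class normalized focal loss of the kind described can be sketched over a flattened list of pixels. The gamma value and the flattened representation are illustrative assumptions; the patent does not fix these details:

```python
import math

def normalized_focal_loss(pred, target, gamma=2.0, eps=1e-7):
    """Two loss terms per image, one per class (1 = motion, 0 = no motion),
    each normalized by that class's ground-truth pixel count so small motion
    blobs are not drowned out by the abundant background pixels."""
    terms = []
    for cls in (1, 0):
        pixels = [i for i, t in enumerate(target) if t == cls]
        if not pixels:
            continue  # class absent from this image's ground truth
        total = 0.0
        for i in pixels:
            p = pred[i] if cls == 1 else 1.0 - pred[i]   # prob of the true class
            p = min(max(p, eps), 1.0 - eps)              # numerical safety
            total += -((1.0 - p) ** gamma) * math.log(p)  # focal term
        terms.append(total / len(pixels))  # normalize by class pixel count
    return sum(terms)
```

Without the per-class normalization, the loss would reward the network for getting the many background pixels right while ignoring small blobs of motion, which is exactly the failure mode the description warns against.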
As will be appreciated by those skilled in the art, alternative machine learning techniques may be implemented than those discussed for automatically detecting motion in the analysis system 2.
Numerous modifications and alterations will be readily appreciated by those skilled in the art, within the scope of the claims that follow.
Claims (10)
- Claims 1. A method for reviewing and analysing video alarms, comprising: receiving video data of an event at a monitored location from a video source associated with an alarm sensor, following an alarm signal; isolating a limited number of frames from the video data in a segment of interest of the video data; sending the limited number of frames via a data connection to a computer that is located geographically remotely to the monitored location; upon receipt by the computer of the limited number of frames, automatically analysing the limited number of frames by the computer; and in dependence on the analysis, validating or dismissing the alarm.
- 2. A method as claimed in Claim 1, wherein the limited number of frames comprises 9 or less frames.
- 3. A method as claimed in Claim 1, wherein the limited number of frames comprises 6 or less frames.
- 4. A method as claimed in Claim 1, wherein the limited number of frames comprises 3 or less frames.
- 5. A method as claimed in any preceding claim, wherein the computer implements machine learning techniques to analyse the limited number of frames.
- 6. A method as claimed in any preceding claim, wherein upon validation of the alarm, the limited number of frames are automatically forwarded to a further computer.
- 7. A method as claimed in Claim 6, wherein the further computer is a computer at a central monitoring station or a personal computing device that is linked to a user account associated with the video source.
- 8. A method as claimed in any preceding claim, wherein the computer comprises a plurality of translation modules, each translation module being configured to handle alarm data in a different data format to the remaining translation modules, the translation modules each being configured to convert received alarm signals into a common data format for subsequent analysis by the computer.
- 9. A method as claimed in any preceding claim, wherein the computer is arranged, upon validation of an alarm, to forward on the respective alarm data in its original data format.
- 10. An analysis system for use in implementing the method as claimed in any preceding claim, the analysis system comprising a computer that is arranged to automatically analyse a limited number of frames from video data, and, in dependence on the analysis, validate or dismiss an alarm.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB1910561.8A GB2585919B (en) | 2019-07-24 | 2019-07-24 | Method and system for reviewing and analysing video alarms |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| GB1910561.8A GB2585919B (en) | 2019-07-24 | 2019-07-24 | Method and system for reviewing and analysing video alarms |
Publications (3)
| Publication Number | Publication Date |
|---|---|
| GB201910561D0 GB201910561D0 (en) | 2019-09-04 |
| GB2585919A true GB2585919A (en) | 2021-01-27 |
| GB2585919B GB2585919B (en) | 2022-09-14 |
Family
ID=67839766
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| GB1910561.8A Active GB2585919B (en) | 2019-07-24 | 2019-07-24 | Method and system for reviewing and analysing video alarms |
Country Status (1)
| Country | Link |
|---|---|
| GB (1) | GB2585919B (en) |
Cited By (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20240233137A9 (en) * | 2021-02-11 | 2024-07-11 | Calipsa Limited | A data-generating procedure from raw tracking inputs |
Families Citing this family (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN110556129B (en) * | 2019-09-09 | 2022-04-19 | 北京大学深圳研究生院 | Bimodal emotion recognition model training method and bimodal emotion recognition method |
| CN114071060B (en) * | 2021-11-11 | 2023-06-09 | 华能招标有限公司 | Intelligent remote comment supervision system |
Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20040201471A1 (en) * | 2003-04-14 | 2004-10-14 | Netbotz, Inc. | Extensible sensor monitoring, alert processing and notification system and method |
| WO2006101477A1 (en) * | 2005-03-15 | 2006-09-28 | Chubb International Holdings Limited | Nuisance alarm filter |
| US10055961B1 (en) * | 2017-07-10 | 2018-08-21 | Careview Communications, Inc. | Surveillance system and method for predicting patient falls using motion feature patterns |
| US20190066472A1 (en) * | 2007-07-16 | 2019-02-28 | Checkvideo Llc | Apparatus and methods for video alarm verification |
| US20190141297A1 (en) * | 2017-11-07 | 2019-05-09 | Ooma, Inc. | Systems and Methods of Activity Based Recording for Camera Applications |
Also Published As
| Publication number | Publication date |
|---|---|
| GB2585919B (en) | 2022-09-14 |
| GB201910561D0 (en) | 2019-09-04 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11069214B2 (en) | Event entity monitoring network and method | |
| US10802942B2 (en) | Methods and apparatus to detect anomalies of a monitored system | |
| US10524027B2 (en) | Sensor based system and method for premises safety and operational profiling based on drift analysis | |
| US10769448B2 (en) | Surveillance system and surveillance method | |
| US10825310B2 (en) | 3D monitoring of sensors physical location in a reduced bandwidth platform | |
| JP2018207222A (en) | Camera and parameter registration method | |
| US10121080B2 (en) | Systems and methods for controlling the recording, storing and transmitting of video surveillance content | |
| CN117312801A (en) | AI-based smart city monitoring system and method | |
| WO2014121340A9 (en) | A surveillance system | |
| JP2010519608A (en) | Monitoring system and method | |
| US11188758B2 (en) | Tracking sequences of events | |
| CN101329804A (en) | A security device and system | |
| GB2585919A (en) | Method and system for reviewing and analysing video alarms | |
| JP6989294B2 (en) | Monitoring system and monitoring method | |
| KR101804358B1 (en) | Equipment monitoring system using image analysis | |
| KR102247359B1 (en) | Image analysis system and method for remote monitoring | |
| US20250068928A1 (en) | Decentralized Federated Learning Systems, Devices, and Methods for Security Threat Detection and Reaction | |
| US20220277217A1 (en) | System and method for multimodal neuro-symbolic scene understanding | |
| US11954750B2 (en) | Systems and methods for multi-factor validation of information, dynamically supporting common operational picture and decision making, in real time | |
| KR102411209B1 (en) | System and Method for Image Classification Based on Object Detection Events by Edge Device | |
| CN111291597A (en) | Image-based crowd situation analysis method, device, equipment and system | |
| CN114143532A (en) | Camera abnormity diagnosis method and device | |
| TW202111670A (en) | Method and system for detecting image object | |
| Micheloni et al. | Adaptive video communication for an intelligent distributed system: Tuning sensors parameters for surveillance purposes | |
| KR20220031258A (en) | A method for providing active security control service based on learning data corresponding to counseling event |