WO2014042514A2 - A surveillance system and a method for tampering detection and correction - Google Patents
- Publication number
- WO2014042514A2 (PCT/MY2013/000159)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- tampering
- camera
- feature points
- reference image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B29/00—Checking or monitoring of signalling or alarm systems; Prevention or correction of operating errors, e.g. preventing unauthorised operation
- G08B29/02—Monitoring continuously signalling or alarm systems
- G08B29/04—Monitoring of the detection circuits
- G08B29/046—Monitoring of the detection circuits prevention of tampering with detection circuits
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
- G08B13/19604—Image analysis to detect motion of the intruder, e.g. by frame subtraction involving reference image or background adaptation with time to compensate for changing conditions, e.g. reference image update on detection of light level change
Definitions
- the present invention generally relates to a surveillance system and more particularly to a surveillance system having a method for detecting a tampered camera view and correcting video analytics parameters for the tampered camera view.
- a surveillance system includes at least one camera which provides video images of a monitored area or a region of interest (ROI) for video analytics.
- the video analytics is performed by a video analytics component to automatically detect suspicious, unauthorised or illegal events occurring within the ROI.
- the camera is strategically located and oriented so that the camera view is able to capture images of the ROI.
- the camera view may be tampered with by changing the camera orientation, changing the camera focus, blocking the camera lens, etc. If the tampering is intentional, it may indicate that an unauthorised or illegal event is happening within the ROI and thus, the responsible personnel should be alerted. If the tampering is unintentional, such as a strong wind blowing at the camera or a bird hitting the camera, it may still affect the performance of event detection as the camera view has been changed.
- European Patent No. 1936576 discloses a method and a module for identifying possible tampering of a camera view. The method comprises receiving an image for analysis from an image sequence, converting the received image into an edge image, generating a similarity value indicating a level of similarity between said edge image and a reference edge image, indicating possible tampering of the camera view if the similarity value is within a specified tampering range, and updating the reference edge image by combining a recently analysed edge image with the reference edge image when each of a predetermined number of consecutively analysed images does not result in an indication of possible tampering.
- US 2007/0247526 discloses a camera tamper detection, wherein a security system includes at least one camera that provides a reference image regarding an area within a field of vision of the camera. A controller determines whether a difference between at least a portion of a test image obtained by the camera and a corresponding portion of the reference image indicates tampering with the camera. Disclosed examples detect a variety of tampering conditions and provide an indication of camera tampering so that corrective or preventative measures may be taken.
- the video analytics component of the surveillance system may not be able to recognize a tampered camera view and thus, the video analytics component is unable to detect suspicious, unauthorised or illegal events occurring within the ROI.
- in particular, the video analytics component is unable to detect global spatial pixel changes in an image with respect to a reference image, which causes video analytics based on a static background to fail, since the image of the tampered camera view is changed globally from the reference or background image.
- thus, the video analytics is interrupted even though there is only a minor change to the orientation of the camera.
- the surveillance system should also be able to correct its video analytics parameters to perform video analytics for a camera view tampered with by a change in camera orientation.
- a method for detecting camera tampering of a surveillance system is provided.
- the method is characterised by the steps of detecting global changes in an image captured by a camera (10) of the surveillance system with respect to a reference image; if there are no global changes detected in the image, identifying an object in the image and performing an event analysis on the object; and if there are global changes detected in the image, adjusting video analytics configuration parameters based on the image and determining the level of tampering, wherein if the level of tampering is severe, triggering an alarm to notify on the tampering occurrence, and wherein if the level of tampering is not severe, performing an event analysis on an object identified in the image based on the adjusted video analytics configuration parameters.
- the step of detecting global changes in the image captured by a camera (10) of the surveillance system with respect to the reference image further comprising the steps of extracting feature points in the image captured by the camera (10); finding each feature point in the captured image that matches within the proximity of the same spatial coordinate in the reference image; computing an average similarity value and a percentage value of the matched feature points; and comparing the average similarity value and the percentage value of the matched feature points with predetermined threshold values, wherein indicating that there are global changes in the image captured by the camera with respect to the reference image if the average similarity value and the percentage value are lower than the threshold values, and wherein indicating that there are no global changes in the image captured by the camera with respect to the reference image if the average similarity value and the percentage value are equal or greater than the threshold values.
- the step of finding each feature point in the captured image that matches within the proximity of the same spatial coordinate in the reference image preferably includes the steps of comparing a feature point descriptor of a feature point in the captured image with a feature point descriptor of a feature point in the reference image; computing a similarity value for each pair of feature points; and comparing the similarity value with a predetermined threshold to determine whether the two feature points are a match or not.
- the step of adjusting video analytics configuration parameters based on the image comprising the steps of aligning the captured image with the reference image; if the captured image cannot be aligned, indicating that the level of tampering is severe; defining a new location of the ROI or LOI; computing the difference between the new ROI or LOI and the ROI or LOI of the reference image; if the difference is less than a predetermined threshold, indicating that there is no tampering of the camera view and performing event analysis on the captured image; and if the difference is greater than the predetermined threshold, computing image similarity frame count between subsequent images and comparing the computed image similarity frame count with a predetermined threshold, wherein if the image similarity frame count is greater than the predetermined threshold, updating the reference image with the captured image and reconstructing the significance map based on the updated reference image.
- the step of aligning the captured image with the reference image preferably comprising the steps of matching the feature points of the captured image with the feature points of the reference image; and computing a transformation matrix by using the matched feature points.
- a method for initializing video analytics configuration parameters of a surveillance system to detect camera tampering is provided.
- the method is characterised by the steps of defining a region of interest (ROI) or line of interest (LOI) within the camera view; extracting feature points from a reference image; constructing a significance map based on the extracted feature points; and modelling a background scene.
- the significance map is constructed by expanding the area surrounding the feature points and summing all of the expanded feature points.
- a surveillance system comprises at least one camera (10), a video acquisition module (20), a storage device (30), an image processing module (40), a display module (50) and a post detection module (60).
- the image processing module (40) includes a tampering detection unit (41) to detect tampering of camera view and initialize video analytics configuration parameters, a tampering adaptation unit (42) to perform correction on the video analytics configuration parameters based on a tampered camera view, and an event detection unit (43) to perform event analysis and detection.
- FIG. 1 shows a block diagram of a surveillance system for tampering detection and correction according to an embodiment of the present invention.
- FIGS. 2(a-b) show flowcharts of a method for tampering detection and correction for a surveillance system according to an embodiment of the present invention.
- FIG. 3 shows a flowchart of a method for detecting tampering of a camera view according to an embodiment of the present invention.
- FIG. 4 shows a flowchart of a method for correcting video analytics configuration parameters based on a tampered camera view according to an embodiment of the present invention.
- FIGS. 5(a-c) show an image being constructed to a significance map according to the method as shown in FIG. 2a.
- FIG. 6 shows an image of a tampered camera view matched to a reference image.
- the surveillance system is used to detect a suspicious event, such as an intruder, by analysing video images captured by a camera. Moreover, the surveillance system is able to detect tampering of the camera view. If tampering is detected, the surveillance system is able to adjust its video analytics configuration parameters to perform video analytics even though the orientation of the camera has been changed, as long as a part of the ROI is within the tampered camera view.
- the surveillance system comprises at least one camera (10), a video acquisition module (20), a storage device (30), an image processing module (40), a display module (50) and a post detection module (60).
- the camera (10) is used to capture video images of a monitored area.
- the camera (10) is connected to the video acquisition module (20).
- the video acquisition module (20) is used for interfacing communication of the at least one camera (10) with the storage device (30), the image processing module (40) and the display module (50).
- the storage device (30) is used to store video analytics parameters and images acquired from the camera (10) for performing video analytics. Moreover, the storage device (30) stores system configuration data, such as scheduling data for event detection.
- the image processing module (40) includes a tampering detection unit (41), a tampering adaptation unit (42), and an event detection unit (43).
- the tampering detection unit (41) is used to detect tampering of the camera view. Moreover, the tampering detection unit (41) is used to initialize video analytics configuration parameters of the surveillance system.
- the tampering adaptation unit (42) is used to perform correction on the video analytics configuration parameters based on the tampered camera view. Thus, this enables the surveillance system to perform video analytics based on the tampered camera view.
- the event detection unit (43) is used to perform event analysis and detection based on a user-specified event-of-interest such as intrusion, loitering, etc. The event detection unit (43) performs the event analysis and detection on the current image captured by the camera (10).
- the display module (50) is used for displaying images from either the camera (10) or the image processing module (40).
- the post detection module (60) is used for triggering an alarm once it receives an event detection alert from the image processing module (40).
- FIGS. 2(a-b) show flowcharts of a method for tampering detection and correction of the surveillance system.
- the method allows the surveillance system to perform video analytics even though the orientation of the camera has been changed, as long as a part of the ROI is within the tampered camera view.
- the method can be divided into two phases: an initialization phase as shown in FIG. 2a, and a detection and correction phase as shown in FIG. 2b.
- in the initialization phase, the system is set up prior to performing video analytics on the images captured by the camera (10).
- each input image from the camera (10) is analysed for event detection.
- the input images are also analysed to determine whether the camera view has been tampered with or not.
- an administrator defines a region of interest (ROI) or line of interest (LOI) within the camera view.
- ROI or LOI is used to indicate an area for monitoring of any suspicious event.
- the suspicious event is automatically detected by the event detection unit (43).
- the defined ROI or LOI is stored in the storage device (30).
- the tampering detection unit (41) extracts feature points from a reference image by using a feature extraction technique such as scale-invariant feature transform (SIFT) or speeded up robust features (SURF). These feature points and their corresponding feature point descriptors are stored for the subsequent image matching process.
- the tampering detection unit (41) constructs a significance map based on the extracted feature points of the reference image.
- the significance map is a binary map to indicate a significance area in a reference image.
- the significance map is constructed by expanding the area surrounding the feature points and thereon, summing all of the expanded feature points.
- an example of a reference image having a plurality of expanded feature points is shown in FIG. 5b.
- an example of a significance map based on the expanded feature points is shown in FIG. 5c.
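The significance-map construction described above can be sketched in stdlib Python. The details below are assumptions for illustration: the map is modelled as a 2-D binary grid the size of the reference image, and each feature point is "expanded" into a square neighbourhood of a chosen radius before the expansions are summed (unioned).

```python
# Sketch of significance-map construction (significance map = binary map
# marking the significant area around each extracted feature point).

def build_significance_map(width, height, feature_points, radius=2):
    """Return a binary map with 1s in the expanded area of each point."""
    sig = [[0] * width for _ in range(height)]
    for (x, y) in feature_points:
        # Expand the area surrounding the feature point...
        for yy in range(max(0, y - radius), min(height, y + radius + 1)):
            for xx in range(max(0, x - radius), min(width, x + radius + 1)):
                sig[yy][xx] = 1  # ...and sum (union) all expansions.
    return sig

sig = build_significance_map(8, 6, [(2, 2), (6, 1)], radius=1)
print(sum(map(sum, sig)))  # number of "significant" cells -> 18
```

The square neighbourhood and the radius value are illustrative choices; the patent only states that the areas surrounding the feature points are expanded and summed.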
- a background scene is modelled by the tampering detection unit (41), wherein a background image is stored to represent the surrounding of the ROI.
- the background image is referred to as the reference image hereinbelow.
- the tampering detection unit (41) detects global changes in a current image captured by the camera (10) with respect to the reference image.
- the detection method is as shown in FIG. 3 which will be described later on. If there are no global changes detected in the current image, this indicates that there is no tampering of the camera view and thus, the event detection unit (43) identifies an object in the current image and performs event analysis on the object (decision 202, steps 203 and 204). If there are global changes detected in the current image, the tampering adaptation unit (42) adjusts its video analytics configuration parameters based on the current camera view as in decision 202 and step 205. The method for adjusting the video analytics configuration parameters is as shown in FIG. 4 and will be described later on.
- the surveillance system determines the level of tampering. If the tampering is severe, the surveillance system triggers an alarm to notify an administrator of the tampering occurrence and thus, the administrator would be able to rectify the camera view and investigate the monitored area (decision 206 and step 207). Otherwise, the event detection unit (43) performs event analysis on an object identified in the current image based on adjusted video analytics configuration parameters (decision 206, steps 203 and 204).
- referring to FIG. 3, there is shown a flowchart of a method for detecting tampering of a camera view as provided in step 201 of FIG. 2b.
- a plurality of feature points is extracted from the current image captured by the camera (10).
- the tampering detection unit (41) finds each feature point in the current image that matches within the proximity of the same spatial coordinate in the reference image. For instance, for a feature point in the current image having the coordinate of (x1, y1), the tampering detection unit (41) searches within the proximity of the same spatial coordinate (x1, y1) in the reference image to determine whether there is a similar feature point. In order to find the matching feature points, the tampering detection unit (41) compares a feature point descriptor of a feature point in the current image with a feature point descriptor of a feature point in the reference image.
- the feature point descriptor is a feature point vector describing properties of a feature point such as colour and edge properties of the area surrounding the feature point.
- the tampering detection unit (41) computes a similarity value for each pair of feature points and thereon, compares the similarity value to a predetermined threshold to determine whether the two feature points are a match.
- the similarity value quantifies the similarity of two feature point descriptors of two feature points in comparison.
- the similarity value is computed by using the Euclidean distance or the Bhattacharyya distance.
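The descriptor comparison can be sketched as follows. The mapping of a Euclidean distance onto a bounded [0, 1] similarity score (1 meaning identical) is an assumption for illustration; the patent names the distance measures but not the normalisation.

```python
import math

# Sketch of the feature-descriptor similarity test: two descriptors are
# compared via Euclidean distance, converted to a similarity in [0, 1],
# and matched against a predetermined threshold (value assumed here).

def similarity(desc_a, desc_b):
    """Similarity in [0, 1]; 1 indicates identical descriptors."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(desc_a, desc_b)))
    return 1.0 / (1.0 + dist)

def is_match(desc_a, desc_b, threshold=0.5):
    return similarity(desc_a, desc_b) >= threshold

print(similarity([0.1, 0.2], [0.1, 0.2]))  # identical -> 1.0
print(is_match([0.0, 0.0], [3.0, 4.0]))    # distance 5 -> low similarity
```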
- FIG. 6 shows an image of a tampered camera view matched to a reference image.
- the tampering detection unit (41) averages the similarity values for all matched feature points and thereon, computes a percentage value of the matched feature points as in steps 303 and 304. For example, suppose there are 100 feature points in the current image and 120 feature points in the reference image, but only 50 of the 100 feature points in the current image match feature points in the reference image.
- the similarity value for each matched pair of feature points is computed, wherein the similarity value is a value between 0 and 1, whereby 1 indicates most similar. Thereon, the 50 similarity values of the 50 matched feature points are averaged.
- the percentage value of matched feature points is then computed by dividing the number of matched points over the total number of feature points in the current and reference images, wherein the percentage is [(50 × 2) / (100 + 120)] × 100%, which equals approximately 45%.
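The arithmetic of steps 303 and 304 can be checked with a short sketch (function names are illustrative, not from the patent):

```python
# Worked example of the matched-feature statistics: 100 feature points
# in the current image, 120 in the reference image, 50 matches.

def match_percentage(n_matched, n_current, n_reference):
    """Matched points counted once per image, over the combined total."""
    return (n_matched * 2) / (n_current + n_reference) * 100.0

def average_similarity(similarities):
    """Mean of the per-pair similarity values (each in [0, 1])."""
    return sum(similarities) / len(similarities)

pct = match_percentage(50, 100, 120)
print(round(pct, 1))  # (50 * 2) / (100 + 120) * 100 -> 45.5
```

Note the exact value is 45.45…%, which the text rounds to 45%.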
- the average similarity value and the percentage value of the matched feature points are compared with predetermined threshold values as in decision 305. If the average similarity value and the percentage value are lower than the threshold values, this indicates that there are global changes in the current image captured by the camera (10) with respect to the reference image. If the average similarity value and the percentage value are equal to or greater than the threshold values, this indicates that there are no global changes in the current image captured by the camera (10) with respect to the reference image.
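Decision 305 can be sketched as below. One assumption is made for illustration: a global change is flagged only when both measures fall below their thresholds, since the text leaves the mixed case (one value below, one above) unspecified. The threshold values are also illustrative.

```python
# Sketch of decision 305: compare the averaged similarity and the
# matched-point percentage against predetermined thresholds.

def has_global_change(avg_similarity, match_pct,
                      sim_threshold=0.7, pct_threshold=50.0):
    """True if the current view differs globally from the reference."""
    return avg_similarity < sim_threshold and match_pct < pct_threshold

print(has_global_change(0.4, 45.0))  # weak match -> global change
print(has_global_change(0.9, 80.0))  # strong match -> no global change
```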
- referring to FIG. 4, there is shown a flowchart of a method for correcting video analytics configuration parameters based on a tampered camera view as provided in step 205 of FIG. 2b.
- at step 401, the current image is aligned with the reference image.
- the alignment of the current image is done by matching the feature points of the current image with the feature points of the reference image and computing a transformation matrix by using the coordinates of the matched feature points.
- the transformation matrix indicates whether the current image can be aligned with the reference image.
- the transformation matrix defines how the current images should be aligned with the reference image.
- such a transformation matrix includes a translation matrix, an affine transform matrix, etc.
- the tampering adaptation unit (42) determines whether the current image can be aligned with the reference image. If the current image cannot be aligned, the tampering adaptation unit (42) indicates that the level of tampering is severe (decision 402 and step 403).
- the tampering adaptation unit (42) defines a new location or coordinates of the ROI or LOI (decision 402 and step 404).
- the new location of the ROI or LOI is defined by using the computed transformation matrix to align the ROI or LOI in the current image with respect to the ROI or LOI of the reference image.
- the new coordinates of the vertices in the current image are computed by multiplying their corresponding homogeneous points with the transformation matrix computed earlier, whereby the homogeneous point of (x1, y1) is (x1, y1, 1) and so on.
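The homogeneous-coordinate step above can be sketched as follows; the translation matrix used in the example is an assumption, standing in for whatever 3×3 transformation was estimated from the matched feature points.

```python
# Sketch of re-locating an ROI vertex (step 404): the vertex (x, y)
# becomes the homogeneous point (x, y, 1) and is multiplied by the
# 3x3 transformation matrix.

def transform_point(matrix, x, y):
    """Apply a 3x3 transformation matrix to the point (x, y)."""
    px, py, pw = (
        sum(m * v for m, v in zip(row, (x, y, 1.0))) for row in matrix
    )
    return px / pw, py / pw  # divide out the homogeneous coordinate

# Illustrative pure translation by (5, -3), e.g. from a small camera shift.
T = [[1, 0, 5],
     [0, 1, -3],
     [0, 0, 1]]

print(transform_point(T, 10, 10))  # -> (15.0, 7.0)
```

Applying the same matrix to every vertex of the ROI or LOI yields its new location in the current camera view.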
- the tampering adaptation unit (42) computes the difference between the new ROI or LOI and the ROI or LOI of the reference image.
- the difference between the two ROIs or LOIs can be determined by using a method such as checking the overlapping area of the ROIs or LOIs, or computing the total distance between all points of the ROIs or LOIs. If the difference is less than a predetermined threshold, the tampering adaptation unit (42) indicates that there is no tampering of the camera view and thereon, the surveillance system proceeds to perform event analysis on the current image (decision 405 and step 406). In other words, this indicates that the global changes detected are due to a moving object blocking some part of the current image and thus, the global changes are not due to the camera (10) being tampered with.
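Both difference measures named above can be sketched in a few lines. The use of axis-aligned bounding boxes for the overlap check is a simplifying assumption; the patent does not restrict the ROI shape.

```python
# Sketch of decision 405: two ways to measure the difference between
# the re-located ROI and the reference ROI.

def bbox_overlap(a, b):
    """Overlap area of two axis-aligned (x1, y1, x2, y2) boxes."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(0, w) * max(0, h)

def total_vertex_distance(roi_a, roi_b):
    """Sum of distances between corresponding ROI vertices."""
    return sum(
        ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
        for (xa, ya), (xb, yb) in zip(roi_a, roi_b)
    )

print(bbox_overlap((0, 0, 10, 10), (5, 5, 15, 15)))               # -> 25
print(total_vertex_distance([(0, 0), (10, 0)], [(3, 4), (10, 0)]))  # -> 5.0
```

A large overlap (or a small total distance) relative to the predetermined threshold indicates that the view is effectively unchanged.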
- the tampering adaptation unit (42) computes image similarity frame count between subsequent images (decision 405 and step 407).
- the image similarity frame count is computed by counting the number of frames that are similar in terms of appearance. This is to ensure that the tampered camera view is consistent for a few subsequent frames.
- the tampering adaptation unit (42) compares the computed image similarity frame count with a predetermined threshold. If the image similarity frame count is greater than or equal to the predetermined threshold, the surveillance system updates its reference image (decision 408 and step 409). Thereon, as in steps 410 and 411, the tampering adaptation unit (42) reconstructs the significance map based on the updated reference image and the tampering adaptation unit (42) indicates that the camera view has been tampered with.
- the tampering adaptation unit (42) indicates that the camera view has been tampered with but the camera view is still shifting (decision 408 and step 411). Thus, there is no suitable image that can be adapted as the new reference image.
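The frame-count logic of steps 407-409 can be sketched as below. The `frames_similar` predicate and the threshold value are assumptions; in the described system they would correspond to the appearance-similarity test between subsequent frames and the predetermined frame-count threshold.

```python
# Sketch of the image-similarity frame count: adopt the candidate
# tampered view as the new reference image only once enough
# consecutive frames are similar in appearance (i.e. the view is
# stable rather than still shifting).

def confirm_new_reference(frames, frames_similar, min_count=3):
    """Return True if min_count consecutive similar frame pairs occur."""
    count = 0
    for prev, cur in zip(frames, frames[1:]):
        if frames_similar(prev, cur):
            count += 1
            if count >= min_count:
                return True   # stable view: update the reference image
        else:
            count = 0         # view still shifting: restart the count
    return False

same = lambda a, b: a == b  # stand-in appearance comparison
print(confirm_new_reference(["v", "v", "v", "v"], same))  # stable
print(confirm_new_reference(["v", "w", "v", "w"], same))  # still shifting
```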
Description
A SURVEILLANCE SYSTEM AND A METHOD FOR TAMPERING DETECTION AND CORRECTION
FIELD OF INVENTION
The present invention generally relates to a surveillance system and more particularly to a surveillance system having a method for detecting a tampered camera view and correcting video analytics parameters for the tampered camera view.

BACKGROUND OF THE INVENTION
A surveillance system includes at least one camera which provides video images of a monitored area or a region of interest (ROI) for video analytics. The video analytics is performed by a video analytics component to automatically detect suspicious, unauthorised or illegal events occurring within the ROI. The camera is strategically located and oriented so that the camera view is able to capture images of the ROI.
However, the camera view may be tampered with by changing the camera orientation, changing the camera focus, blocking the camera lens, etc. If the tampering is intentional, it may indicate that an unauthorised or illegal event is happening within the ROI and thus, the responsible personnel should be alerted. If the tampering is unintentional, such as a strong wind blowing at the camera or a bird hitting the camera, it may still affect the performance of event detection as the camera view has been changed.
In regard to this, European Patent No. 1936576 discloses a method and a module for identifying possible tampering of a camera view. The method comprises receiving an image for analysis from an image sequence, converting the received image into an edge image, generating a similarity value indicating a level of similarity between said edge image and a reference edge image, indicating possible tampering of the camera view if the similarity value is within a specified tampering range, and updating the reference edge image by combining a recently analysed edge image with the reference edge image when each of a predetermined number of consecutively analysed images does not result in an indication of possible tampering.
US Patent Publication No. US 2007/0247526 discloses camera tamper detection, wherein a security system includes at least one camera that provides a reference image regarding an area within a field of vision of the camera. A controller determines whether a difference between at least a portion of a test image obtained by the camera and a corresponding portion of the reference image indicates tampering with the camera. Disclosed examples detect a variety of tampering conditions and provide an indication of camera tampering so that corrective or preventative measures may be taken.
When the camera view is tampered with by changing the camera orientation, the video analytics component of the surveillance system may not be able to recognize the tampered camera view and thus, the video analytics component is unable to detect suspicious, unauthorised or illegal events occurring within the ROI. In particular, the video analytics component is unable to detect global spatial pixel changes in an image with respect to a reference image, which causes video analytics based on a static background to fail, since the image of the tampered camera view is changed globally from the reference or background image. Thus, the video analytics is interrupted even though there is only a minor change to the orientation of the camera.
Therefore, there is a need to provide a surveillance system capable of detecting tampering of the camera view. Moreover, the surveillance system should be able to correct its video analytics parameters to perform video analytics for a camera view tampered with by a change in camera orientation.
SUMMARY OF INVENTION
In one aspect of the present invention, a method for detecting camera tampering of a surveillance system is provided. The method is characterised by the steps of detecting global changes in an image captured by a camera (10) of the surveillance system with respect to a reference image; if there are no global changes detected in the image, identifying an object in the image and performing an event analysis on the object; and if there are global changes detected in the image, adjusting video analytics configuration parameters based on the image and determining the level of tampering, wherein if the level of tampering is severe, triggering an alarm to notify on the tampering occurrence, and wherein if the level of tampering is not severe, performing an event analysis on an object identified in the image based on the adjusted video analytics configuration parameters.

Preferably, the step of detecting global changes in the image captured by a camera (10) of the surveillance system with respect to the reference image further comprising the steps of extracting feature points in the image captured by the camera (10); finding each feature point in the captured image that matches within the proximity of the same spatial coordinate in the reference image; computing an average similarity value and a percentage value of the matched feature points; and comparing the average similarity value and the percentage value of the matched feature points with predetermined threshold values, wherein indicating that there are global changes in the image captured by the camera with respect to the reference image if the average similarity value and the percentage value are lower than the threshold values, and wherein indicating that there are no global changes in the image captured by the camera with respect to the reference image if the average similarity value and the percentage value are equal or greater than the threshold values.

Moreover, the step of finding each feature point in the captured image that matches within the proximity of the same spatial coordinate in the reference image preferably includes the steps of comparing a feature point descriptor of a feature point in the captured image with a feature point descriptor of a feature point in the reference image; computing a similarity value for each pair of feature points; and comparing the similarity value with a predetermined threshold to determine whether the two feature points are a match or not.
Preferably, the step of adjusting video analytics configuration parameters based on the image comprising the steps of aligning the captured image with the reference image; if the captured image cannot be aligned, indicating that the level of tampering is severe; defining a new location of the ROI or LOI; computing the difference between the new ROI or LOI and the ROI or LOI of the reference image; if the difference is less than a predetermined threshold, indicating that there is no tampering of the camera view and performing event analysis on the captured image; and if the difference is greater than the predetermined threshold, computing image similarity frame count between subsequent images and comparing the computed image similarity frame count with a predetermined threshold, wherein if the image similarity frame count is greater than the predetermined threshold, updating the reference image with the captured image and reconstructing the significance map based on the updated reference image.

Moreover, the step of aligning the captured image with the reference image preferably comprising the steps of matching the feature points of the captured image with the feature points of the reference image; and computing a transformation matrix by using the matched feature points.
In another aspect of the present invention, a method for initializing video analytics configuration parameters of a surveillance system to detect camera tampering is provided. The method is characterised by the steps of defining a region of interest (ROI) or line of interest (LOI) within the camera view; extracting feature points from a reference image; constructing a significance map based on the extracted feature points; and modelling a background scene. Preferably, the significance map is constructed by expanding the area surrounding the feature points and summing all of the expanded feature points.
In yet another aspect of the present invention, a surveillance system is provided. The surveillance system comprises at least one camera (10), a video acquisition module (20), a storage device (30), an image processing module (40), a display module (50) and a post detection module (60). The image processing module (40) includes a tampering detection unit (41) to detect tampering of the camera view and initialize video analytics configuration parameters, a tampering adaptation unit (42) to perform correction on the video analytics configuration parameters based on a tampered camera view, and an event detection unit (43) to perform event analysis and detection.
BRIEF DESCRIPTION OF THE DRAWINGS
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
FIG. 1 shows a block diagram of a surveillance system for tampering detection and correction according to an embodiment of the present invention.
FIGS. 2(a-b) show flowcharts of a method for tampering detection and correction for a surveillance system according to an embodiment of the present invention.
FIG. 3 shows a flowchart of a method for detecting tampering of a camera view according to an embodiment of the present invention.
FIG. 4 shows a flowchart of a method for correcting video analytics configuration parameters based on a tampered camera view according to an embodiment of the present invention.
FIGS. 5(a-c) show an image being constructed to a significance map according to the method as shown in FIG. 2a.
FIG. 6 shows an image of a tampered camera view matched to a reference image.
DESCRIPTION OF THE PREFERRED EMBODIMENT
A preferred embodiment of the present invention will be described herein below with reference to the accompanying drawings. In the following description, well known functions or constructions are not described in detail since they would obscure the description with unnecessary detail.
Referring to FIG. 1, there is shown a surveillance system according to an embodiment of the present invention. The surveillance system is used to detect suspicious events, such as an intruder, by analysing video images captured by a camera. Moreover, the surveillance system is able to detect tampering of the camera view. If tampering is detected, the surveillance system is able to adjust its video analytics configuration parameters to perform video analytics even though the orientation of the camera has been changed, as long as a portion of the ROI is within the tampered camera view.
The surveillance system comprises at least one camera (10), a video acquisition module (20), a storage device (30), an image processing module (40), a display module (50) and a post detection module (60). The camera (10) is used to capture video images of a monitored area. The camera (10) is connected to the video acquisition module (20).
The video acquisition module (20) is used for interfacing communication of the at least one camera (10) with the storage device (30), the image processing module (40) and the display module (50).
The storage device (30) is used to store video analytics parameters and images acquired from the camera (10) for performing video analytics. Moreover, the storage device (30) stores system configuration data, such as scheduling data for event detection.
The image processing module (40) includes a tampering detection unit (41), a tampering adaptation unit (42), and an event detection unit (43). The tampering detection unit (41) is used to detect tampering of the camera view. Moreover, the tampering detection unit (41) is used to initialize the video analytics configuration parameters of the surveillance system. The tampering adaptation unit (42) is used to perform correction on the video analytics configuration parameters based on the tampered camera view. This enables the surveillance system to perform video analytics based on the tampered camera view. The event detection unit (43) is used to perform event analysis and detection based on user-specified events-of-interest, such as intrusion and loitering. The event detection unit (43) performs the event analysis and detection on the current image captured by the camera (10).
The display module (50) is used for displaying images from either the camera (10) or the image processing module (40).
The post detection module (60) is used for triggering an alarm once it receives an event detection alert from the image processing module (40).
FIGS. 2(a-b) show flowcharts of a method for tampering detection and correction of the surveillance system. The method allows the surveillance system to perform video analytics even though the orientation of the camera has been changed, as long as a portion of the ROI is within the tampered camera view. The method can be divided into two phases: an initialization phase, as shown in FIG. 2a, and a detection and correction phase, as shown in FIG. 2b. During the initialization phase, the system is set up prior to performing video analytics on the images captured by the
camera (10). During the detection and correction phase, each input image from the camera (10) is analysed for event detection. Moreover, the input images are analysed to determine whether the camera view has been tampered with or not. Referring to FIG. 2a, there is shown a flowchart of a method for initializing the video analytics configuration parameters of the surveillance system. Initially, as in step 101, an administrator defines a region of interest (ROI) or line of interest (LOI) within the camera view. This ROI or LOI is used to indicate an area to be monitored for any suspicious event. The suspicious event is automatically detected by the event detection unit (43). The defined ROI or LOI is stored in the storage device (30).
Thereon, in step 102, the tampering detection unit (41) extracts feature points from a reference image by using a feature extraction technique such as scale-invariant feature transform (SIFT) or speeded-up robust features (SURF). These feature points and their corresponding feature point descriptors are stored for the subsequent image matching process. FIG. 5a shows an example of a plurality of feature points in a reference image.
Next, in step 103, the tampering detection unit (41) constructs a significance map based on the extracted feature points of the reference image. The significance map is a binary map to indicate a significance area in a reference image. The significance map is constructed by expanding the area surrounding the feature points and thereon, summing all of the expanded feature points. An example of a reference image having a plurality of expanded feature points is shown in FIG. 5b. Moreover, an example of a significance map based on the expanded feature points is shown in FIG. 5c.
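The expansion-and-summation procedure of step 103 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the square neighbourhood, its radius, the image size and the coordinates are all assumptions chosen for clarity.

```python
# Hypothetical sketch of significance-map construction (step 103):
# each feature point is expanded into a square neighbourhood, and all
# neighbourhoods are merged (summed into a binary map) to mark the
# significant area of the reference image.

def build_significance_map(feature_points, width, height, radius=2):
    """Return a binary map marking the expanded area around each feature point."""
    sig = [[0] * width for _ in range(height)]
    for (x, y) in feature_points:
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                px, py = x + dx, y + dy
                if 0 <= px < width and 0 <= py < height:
                    sig[py][px] = 1  # union of all expanded points
    return sig

# Two nearby feature points whose expanded areas overlap into one region.
sig = build_significance_map([(3, 3), (4, 3)], width=10, height=8, radius=1)
```

Summing overlapping expansions into a single binary map (rather than counting overlaps) matches the description of the map as a binary indicator of significant areas.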
Next, as in step 104, a background scene is modelled by the tampering detection unit (41), wherein a background image is stored to represent the surroundings of the ROI. The background image is referred to as the reference image hereinbelow.
Referring to FIG. 2b, there is shown a flowchart of the detection and correction phase. In step 201, the tampering detection unit (41) detects global
changes in a current image captured by the camera (10) with respect to the reference image. The detection method is shown in FIG. 3 and will be described later on. If there are no global changes detected in the current image, this indicates that there is no tampering of the camera view; thus, the event detection unit (43) identifies an object in the current image and performs event analysis on the object (decision 202, steps 203 and 204). If there are global changes detected in the current image, the tampering adaptation unit (42) adjusts the video analytics configuration parameters based on the current camera view, as in decision 202 and step 205. The method for adjusting the video analytics configuration parameters is shown in FIG. 4 and will be described later on.
Thereon, the surveillance system determines the level of tampering. If the tampering is severe, the surveillance system triggers an alarm to notify an administrator of the tampering occurrence so that the administrator can rectify the camera view and investigate the monitored area (decision 206 and step 207). Otherwise, the event detection unit (43) performs event analysis on an object identified in the current image based on the adjusted video analytics configuration parameters (decision 206, steps 203 and 204).
Referring to FIG. 3, there is shown a flowchart of a method for detecting tampering of a camera view, as provided in step 201 of FIG. 2b. Initially, as in step 301, a plurality of feature points is extracted from the current image captured by the camera (10).
In step 302, the tampering detection unit (41) finds each feature point in the current image that matches within the proximity of the same spatial coordinate in the reference image. For instance, for a feature point in the current image having the coordinate (x1, y1), the tampering detection unit (41) searches within the proximity of the same spatial coordinate (x1, y1) in the reference image to determine whether there is a similar feature point. In order to find the matching feature points, the tampering detection unit (41) compares a feature point descriptor of a feature
point in the current image with a feature point descriptor of a feature point in the reference image. The feature point descriptor is a feature vector describing properties of a feature point, such as the colour and edge properties of the area surrounding the feature point. Based on the comparison, the tampering detection unit (41) computes a similarity value for each pair of feature points and then compares the similarity value with a predetermined threshold to determine whether the two feature points are a match. The similarity value quantifies the similarity of the two feature point descriptors in comparison. Suitably, the similarity value is computed by using the Euclidean distance or the Bhattacharyya distance. FIG. 6 shows an image of a tampered camera view matched to a reference image.
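The descriptor comparison of step 302 can be sketched as below. The patent names the Euclidean distance as one suitable measure; the mapping of distance to a [0, 1] similarity value via 1/(1+d), and the 0.5 match threshold, are illustrative assumptions, not values taken from the patent.

```python
import math

# Hypothetical sketch of step 302: compare two feature point
# descriptors with the Euclidean distance and map the distance to a
# similarity value in [0, 1], where 1 indicates most similar.

def similarity(desc_a, desc_b):
    """Similarity of two descriptors: 1/(1 + Euclidean distance)."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(desc_a, desc_b)))
    return 1.0 / (1.0 + dist)

def is_match(desc_a, desc_b, threshold=0.5):
    """Two feature points match when their similarity clears the threshold."""
    return similarity(desc_a, desc_b) >= threshold

identical = similarity([0.1, 0.2, 0.3], [0.1, 0.2, 0.3])   # distance 0 -> 1.0
different = similarity([0.0, 0.0, 0.0], [3.0, 4.0, 0.0])   # distance 5 -> 1/6
```

In a real system the descriptors would be the 64- or 128-dimensional vectors produced by SURF or SIFT, and the threshold would be tuned on representative footage.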
Thereon, the tampering detection unit (41) averages the similarity values of all matched feature points and computes a percentage value of the matched feature points, as in steps 303 and 304. For example, suppose there are 100 feature points in the current image and 120 feature points in the reference image, but only 50 of the 100 feature points in the current image match feature points in the reference image. The similarity value for each matched pair of feature points is computed, wherein the similarity value is a value between 0 and 1, with 1 indicating most similar. The 50 similarity values of the 50 matched feature points are then averaged. The percentage value of matched feature points is computed by dividing the number of matched points by the total number of feature points in the current and reference images: [(50 * 2) / (100 + 120)] x 100%, which is approximately 45%. The average similarity value and the percentage value of the matched feature points are compared with predetermined threshold values, as in decision 305. If the average similarity value and the percentage value are lower than the threshold values, this indicates that there are global changes in the current image captured by the camera (10) with respect to the reference image. If the average similarity value and the percentage value are equal to or greater than the threshold values, this indicates that there are no global changes in the current image captured by the camera (10) with respect to the reference image.
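Steps 303 and 304, including the worked example above, reduce to two short formulas. This sketch follows the patent's own numbers (50 matches, 100 current-image points, 120 reference-image points); the sample similarity values fed to the averaging function are invented for illustration.

```python
# Sketch of steps 303-304: average the similarity values of the matched
# feature points, and compute the matched-point percentage over the
# combined point counts of the current and reference images.

def average_similarity(similarities):
    """Mean of the per-pair similarity values (0 if nothing matched)."""
    return sum(similarities) / len(similarities) if similarities else 0.0

def matched_percentage(n_matched, n_current, n_reference):
    """Each match accounts for one point in each image, hence the *2."""
    return (n_matched * 2) / (n_current + n_reference) * 100.0

pct = matched_percentage(50, 100, 120)     # the patent's worked example
avg = average_similarity([0.9, 0.8, 0.7])  # illustrative similarity values
```

Note that the exact value of the worked example is 45.45...%, which the text rounds to roughly 45%.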
Referring now to FIG. 4, there is shown a flowchart of a method for correcting video analytics configuration parameters based on a tampered camera view, as provided in step 205 of FIG. 2b. Initially, as in step 401, the current image is aligned with the reference image.
The alignment of the current image is done by matching the feature points of the current image with the feature points of the reference image and computing a transformation matrix by using the coordinates of the matched feature points. The transformation matrix indicates whether the current image can be aligned with the reference image. Moreover, the transformation matrix defines how the current image should be aligned with the reference image. Examples of such transformation matrices include a translation matrix and an affine transformation matrix.
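One way to compute an affine transformation matrix from matched point coordinates is a least-squares fit, sketched below under stated assumptions: the patent does not specify the estimation method, and a production system would typically add an outlier-rejection step such as RANSAC, which this minimal version omits.

```python
import numpy as np

# Hypothetical sketch of the transformation-matrix computation in step
# 401: fit a 2x3 affine matrix M so that M @ [x, y, 1] maps a current-
# image feature point onto its matched reference-image point.

def estimate_affine(src_pts, dst_pts):
    """Least-squares affine fit from matched point pairs (>= 3 needed)."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])  # homogeneous source points
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)   # solve A @ X ~= dst
    return X.T                                    # 2x3 affine matrix

# Matched points related by a pure translation of (+5, -3).
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
dst = [(5, -3), (15, -3), (5, 7), (15, 7)]
M = estimate_affine(src, dst)
```

When too few reliable matches exist, the fit is degenerate, which corresponds to the "cannot be aligned" outcome in decision 402.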
Thereon, the tampering adaptation unit (42) determines whether the current image can be aligned with the reference image. If the current image cannot be aligned, the tampering adaptation unit (42) indicates that the level of tampering is severe (decision 402 and step 403).
However, if the current image can be aligned with the reference image, the tampering adaptation unit (42) defines a new location, or new coordinates, of the ROI or LOI (decision 402 and step 404). The new location of the ROI or LOI is defined by using the computed transformation matrix to align the ROI or LOI in the current image with respect to the ROI or LOI of the reference image. For example, for a rectangular ROI with four vertices (x1, y1), (x2, y2), (x3, y3) and (x4, y4) in the reference image, the new coordinates of the vertices in the current image are computed by multiplying their corresponding homogeneous points with the transformation matrix computed earlier, where the homogeneous point of (x1, y1) is (x1, y1, 1) and so on. Thereon, the tampering adaptation unit (42) computes the difference between the new ROI or LOI and the ROI or LOI of the reference image. The difference between the two ROIs or LOIs can be determined by using methods such as checking the overlapping area of the ROIs or LOIs, or computing the total distance between all corresponding points of the ROIs or LOIs. If the difference is less than a predetermined threshold, the tampering adaptation unit (42) indicates that there is no tampering of
the camera view and the surveillance system proceeds to perform event analysis on the current image (decision 405 and step 406). In other words, this indicates that the global changes detected are due to a moving object blocking some part of the current image, and not due to the camera (10) being tampered with.
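The ROI relocation of step 404 and the difference check of decision 405 can be sketched as below. The homogeneous-coordinate multiplication follows the text's example; the total-vertex-distance measure is one of the two comparison methods the text mentions, and the concrete matrix and ROI values are illustrative assumptions.

```python
import math

# Sketch of step 404: map the reference-image ROI vertices through the
# 2x3 transformation matrix using homogeneous coordinates, then measure
# the ROI difference as the total distance between old and new vertices.

def transform_roi(vertices, M):
    """Apply a 2x3 affine matrix M to each (x, y) vertex of the ROI."""
    return [
        (M[0][0] * x + M[0][1] * y + M[0][2],   # homogeneous point (x, y, 1)
         M[1][0] * x + M[1][1] * y + M[1][2])
        for (x, y) in vertices
    ]

def roi_distance(roi_a, roi_b):
    """Total Euclidean distance between corresponding vertices."""
    return sum(math.dist(a, b) for a, b in zip(roi_a, roi_b))

roi = [(0, 0), (20, 0), (20, 10), (0, 10)]   # rectangular ROI, 4 vertices
M = [[1, 0, 3], [0, 1, 4]]                   # camera view shifted by (3, 4)
new_roi = transform_roi(roi, M)
diff = roi_distance(roi, new_roi)
```

Comparing `diff` against the predetermined threshold then separates a small disturbance (no tampering, decision 405 taking the "less than" branch) from a genuine view change.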
However, if the difference is greater than the predetermined threshold, the tampering adaptation unit (42) computes an image similarity frame count between subsequent images (decision 405 and step 407). The image similarity frame count is computed by counting the number of frames that are similar in terms of appearance. This is to ensure that the tampered camera view is consistent for a few subsequent frames. Thereafter, the tampering adaptation unit (42) compares the computed image similarity frame count with a predetermined threshold. If the image similarity frame count is greater than or equal to the predetermined threshold, the surveillance system updates its reference image (decision 408 and step 409). Then, as in steps 410 and 411, the tampering adaptation unit (42) reconstructs the significance map based on the updated reference image and indicates that the camera view has been tampered with.
However, if the image similarity frame count is less than the predetermined threshold, the tampering adaptation unit (42) indicates that the camera view has been tampered with but is still shifting (decision 408 and step 411). Thus, there is no suitable image that can be adopted as the new reference image.
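The stability check of steps 407 to 409 can be sketched as follows. Everything concrete here is an assumption: the mean-absolute-difference similarity test, the tolerance, the threshold of three stable frames, and the flattened pixel-list "frames" are all illustrative stand-ins for whatever criterion the real system would use.

```python
# Hypothetical sketch of steps 407-409: count how many consecutive
# frames stay similar to the candidate (tampered) view, and adopt the
# view as the new reference only once the count reaches a threshold.

STABLE_FRAMES = 3  # illustrative stability threshold

def frame_similar(frame_a, frame_b, tol=10):
    """Toy similarity test: mean absolute pixel difference below tol."""
    diffs = [abs(a - b) for a, b in zip(frame_a, frame_b)]
    return sum(diffs) / len(diffs) < tol

def update_reference(frames, reference):
    """Return (reference, tampered_flag) after processing a run of frames."""
    count, candidate = 0, None
    for frame in frames:
        if candidate is not None and frame_similar(frame, candidate):
            count += 1
            if count >= STABLE_FRAMES:   # view is stable: adopt new reference
                return frame, True
        else:
            candidate, count = frame, 1  # view still shifting: restart count
    return reference, True               # no stable view: keep old reference

# Four near-identical "frames" arriving after the camera has been moved.
shifted = [[100, 100, 100]] * 4
ref, tampered = update_reference(shifted, reference=[0, 0, 0])
```

The final `return reference, True` branch corresponds to step 411's outcome in which tampering is flagged but no new reference image is adopted because the view is still shifting.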
While embodiments of the invention have been illustrated and described, it is not intended that these embodiments illustrate and describe all possible forms of the invention. Rather, the words used in the specification are words of description rather than limitation, and various changes may be made without departing from the scope of the invention.
Claims
1. A method for detecting camera tampering of a surveillance system is characterised by the steps of:
a) detecting global changes in an image captured by a camera (10) of the surveillance system with respect to a reference image;
b) if there are no global changes detected in the image, identifying an object in the image and performing an event analysis on the object; and
c) if there are global changes detected in the image, adjusting video analytics configuration parameters based on the image and determining the level of tampering, wherein if the level of tampering is severe, triggering an alarm to notify on the tampering occurrence, and wherein if the level of tampering is not severe, performing an event analysis on an object identified in the image based on the adjusted video analytics configuration parameters.
2. The method as claimed in claim 1 , wherein detecting global changes in the image captured by a camera (10) of the surveillance system with respect to the reference image comprising the steps of:
a) extracting feature points in the image captured by the camera (10); b) finding each feature point in the captured image that matches within the proximity of the same spatial coordinate in the reference image; c) computing an average similarity value and a percentage value of the matched feature points; and
d) comparing the average similarity value and the percentage value of the matched feature points with predetermined threshold values, wherein indicating that there are global changes in the image captured by the camera with respect to the reference image if the average similarity value and the percentage value are lower than the threshold values, and wherein indicating that there are no global changes in the image captured by the camera with respect to the reference image if the average similarity value and the percentage value are equal or greater than the threshold values.

3. The method as claimed in claim 2, wherein step (b) includes the steps of:
a) comparing a feature point descriptor of a feature point in the captured image with a feature point descriptor of a feature point in the reference image;
b) computing a similarity value for each pair of feature points; and c) comparing the similarity value with a predetermined threshold to determine whether the two feature points are a match or not.
4. The method as claimed in claim 1, wherein adjusting video analytics configuration parameters based on the image comprising the steps of:
a) aligning the captured image with the reference image;
b) if the captured image cannot be aligned, indicating that the level of tampering is severe;
c) defining a new location of the ROI or LOI;
d) computing the difference between the new ROI or LOI and the ROI or LOI of the reference image;
e) if the difference is less than a predetermined threshold, indicating that there is no tampering of the camera view and performing event analysis on the captured image; and
f) if the difference is greater than the predetermined threshold, computing image similarity frame count between subsequent images and comparing the computed image similarity frame count with a predetermined threshold, wherein if the image similarity frame count is greater than the predetermined threshold, updating the reference image with the captured image and reconstructing the significance map based on the updated reference image.
5. The method as claimed in claim 4, wherein aligning the captured image with the reference image comprising the steps of:
a) matching the feature points of the captured image with the feature points of the reference image; and
b) computing a transformation matrix by using the matched feature points.
6. A method for initializing video analytics configuration parameters of a surveillance system to detect camera tampering is characterised by the steps of:
a) defining a region of interest (ROI) or line of interest (LOI) within the camera view;
b) extracting feature point from a reference image;
c) constructing a significance map based on the extracted feature points; and
d) modelling a background scene.
7. The method as claimed in claim 6, wherein the significance map is constructed by expanding the area surrounding the feature points and summing all of the expanded feature points.
8. A surveillance system comprising at least one camera (10), a video acquisition module (20), a storage device (30), an image processing module (40), a display module (50) and a post detection module (60); and wherein the surveillance system is characterised in that said image processing module (40) includes:
a) a tampering detection unit (41) to detect tampering of camera view and initialize video analytics configuration parameters, b) a tampering adaptation unit (42) to perform correction on the video analytics configuration parameters based on a tampered camera view, and
c) an event detection unit (43) to perform event analysis and detection.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| MYPI2012700635 | 2012-09-12 | ||
| MYPI2012700635A MY159122A (en) | 2012-09-12 | 2012-09-12 | A surveillance system and a method for tampering detection and correction |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| WO2014042514A2 true WO2014042514A2 (en) | 2014-03-20 |
| WO2014042514A3 WO2014042514A3 (en) | 2014-05-08 |
Family
ID=49447767
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/MY2013/000159 Ceased WO2014042514A2 (en) | 2012-09-12 | 2013-09-06 | A surveillance system and a method for tampering detection and correction |
Country Status (2)
| Country | Link |
|---|---|
| MY (1) | MY159122A (en) |
| WO (1) | WO2014042514A2 (en) |
Cited By (10)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2015157289A1 (en) * | 2014-04-08 | 2015-10-15 | Lawrence Glaser | Video image verification system utilizing integrated wireless router and wire-based communications |
| CN106375756A (en) * | 2016-09-28 | 2017-02-01 | 宁波大学 | A Detection Method for Single Object Removal and Tampering in Surveillance Video |
| WO2018050644A1 (en) * | 2016-09-13 | 2018-03-22 | Davantis Technologies, S.L. | Method, computer system and program product for detecting video surveillance camera tampering |
| US10539412B2 (en) | 2014-07-31 | 2020-01-21 | Hewlett-Packard Development Company, L.P. | Measuring and correcting optical misalignment |
| CN111080628A (en) * | 2019-12-20 | 2020-04-28 | 湖南大学 | Image tampering detection method and device, computer equipment and storage medium |
| CN112040219A (en) * | 2020-07-28 | 2020-12-04 | 北京旷视科技有限公司 | Camera picture detection method and device, electronic equipment and readable storage medium |
| CN113474792A (en) * | 2019-03-28 | 2021-10-01 | 康蒂-特米克微电子有限公司 | Automatic identification and classification against attacks |
| CN114004886A (en) * | 2021-10-29 | 2022-02-01 | 中远海运科技股份有限公司 | Camera displacement judging method and system for analyzing high-frequency stable points of image |
| CN116012365A (en) * | 2023-01-19 | 2023-04-25 | 蔚来汽车科技(安徽)有限公司 | Method and fault detection device for determining a display fault of an intelligent cockpit |
| CN116156217A (en) * | 2022-12-16 | 2023-05-23 | 杭州当虹科技股份有限公司 | Method for Consistency Verification of Video Content Based on Intelligent Recognition |
Citations (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070247526A1 (en) | 2004-04-30 | 2007-10-25 | Flook Ronald A | Camera Tamper Detection |
| EP1936576A1 (en) | 2006-12-20 | 2008-06-25 | Axis AB | Camera tampering detection |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI417813B (en) * | 2010-12-16 | 2013-12-01 | Ind Tech Res Inst | Cascadable camera tampering detection transceiver module |
- 2012-09-12 MY MYPI2012700635A patent/MY159122A/en unknown
- 2013-09-06 WO PCT/MY2013/000159 patent/WO2014042514A2/en not_active Ceased
Cited By (17)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| WO2015157289A1 (en) * | 2014-04-08 | 2015-10-15 | Lawrence Glaser | Video image verification system utilizing integrated wireless router and wire-based communications |
| US20170323543A1 (en) * | 2014-04-08 | 2017-11-09 | Lawrence F Glaser | Video image verification system utilizing integrated wireless router and wire-based communications |
| US10539412B2 (en) | 2014-07-31 | 2020-01-21 | Hewlett-Packard Development Company, L.P. | Measuring and correcting optical misalignment |
| WO2018050644A1 (en) * | 2016-09-13 | 2018-03-22 | Davantis Technologies, S.L. | Method, computer system and program product for detecting video surveillance camera tampering |
| CN106375756A (en) * | 2016-09-28 | 2017-02-01 | 宁波大学 | A Detection Method for Single Object Removal and Tampering in Surveillance Video |
| CN106375756B (en) * | 2016-09-28 | 2017-12-19 | 宁波大学 | It is a kind of to remove the detection method distorted for the single object of monitor video |
| CN113474792A (en) * | 2019-03-28 | 2021-10-01 | 康蒂-特米克微电子有限公司 | Automatic identification and classification against attacks |
| JP2022519868A (en) * | 2019-03-28 | 2022-03-25 | コンティ テミック マイクロエレクトロニック ゲゼルシャフト ミット ベシュレンクテル ハフツング | Automatic recognition and classification of hostile attacks |
| JP7248807B2 (en) | 2019-03-28 | 2023-03-29 | コンティ テミック マイクロエレクトロニック ゲゼルシャフト ミット ベシュレンクテル ハフツング | Automatic recognition and classification of hostile attacks |
| US12217176B2 (en) | 2019-03-28 | 2025-02-04 | Conti Temic Microelectronic Gmbh | Automatic identification and classification of adversarial attacks |
| CN111080628A (en) * | 2019-12-20 | 2020-04-28 | 湖南大学 | Image tampering detection method and device, computer equipment and storage medium |
| CN111080628B (en) * | 2019-12-20 | 2023-06-20 | 湖南大学 | Image tampering detection method, apparatus, computer device and storage medium |
| CN112040219A (en) * | 2020-07-28 | 2020-12-04 | 北京旷视科技有限公司 | Camera picture detection method and device, electronic equipment and readable storage medium |
| CN114004886A (en) * | 2021-10-29 | 2022-02-01 | 中远海运科技股份有限公司 | Camera displacement judging method and system for analyzing high-frequency stable points of image |
| CN114004886B (en) * | 2021-10-29 | 2024-04-09 | 中远海运科技股份有限公司 | Camera shift discrimination method and system for analyzing high-frequency stable points of image |
| CN116156217A (en) * | 2022-12-16 | 2023-05-23 | 杭州当虹科技股份有限公司 | Method for Consistency Verification of Video Content Based on Intelligent Recognition |
| CN116012365A (en) * | 2023-01-19 | 2023-04-25 | 蔚来汽车科技(安徽)有限公司 | Method and fault detection device for determining a display fault of an intelligent cockpit |
Also Published As
| Publication number | Publication date |
|---|---|
| WO2014042514A3 (en) | 2014-05-08 |
| MY159122A (en) | 2016-12-15 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| WO2014042514A2 (en) | A surveillance system and a method for tampering detection and correction | |
| AU2011201953B2 (en) | Fault tolerant background modelling | |
| CN110807377B (en) | Target tracking and intrusion detection method, device and storage medium | |
| US9098748B2 (en) | Object detection apparatus, object detection method, monitoring camera system and storage medium | |
| US11176382B2 (en) | System and method for person re-identification using overhead view images | |
| US9922423B2 (en) | Image angle variation detection device, image angle variation detection method and image angle variation detection program | |
| CN106851049B (en) | A method and device for scene change detection based on video analysis | |
| CN104966304B (en) | Multi-target detection tracking based on Kalman filtering and nonparametric background model | |
| US7778445B2 (en) | Method and system for the detection of removed objects in video images | |
| US20140003710A1 (en) | Unsupervised learning of feature anomalies for a video surveillance system | |
| US20100157070A1 (en) | Video stabilization in real-time using computationally efficient corner detection and correspondence | |
| US10043105B2 (en) | Method and system to characterize video background changes as abandoned or removed objects | |
| US20140270362A1 (en) | Fast edge-based object relocalization and detection using contextual filtering | |
| US20140341474A1 (en) | Motion stabilization and detection of articulated objects | |
| KR20120020008A (en) | Method for reconstructing super-resolution image, and system for detecting illegally parked vehicles therewith | |
| US10475191B2 (en) | System and method for identification and suppression of time varying background objects | |
| US20150220782A1 (en) | Apparatus and method for detecting camera tampering using edge image | |
| US7545417B2 (en) | Image processing apparatus, image processing method, and image processing program | |
| US20180322332A1 (en) | Method and apparatus for identifying pupil in image | |
| US7982774B2 (en) | Image processing apparatus and image processing method | |
| McIvor et al. | The background subtraction problem for video surveillance systems | |
| JP3486229B2 (en) | Image change detection device | |
| KR101861245B1 (en) | Movement detection system and method for multi sensor cctv panorama video | |
| KR101395666B1 (en) | Surveillance apparatus and method using change of video image | |
| US10916016B2 (en) | Image processing apparatus and method and monitoring system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13779643; Country of ref document: EP; Kind code of ref document: A2 |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 13779643; Country of ref document: EP; Kind code of ref document: A2 |