
CN113033504B - A multi-scale video anomaly detection method - Google Patents

A multi-scale video anomaly detection method

Info

Publication number
CN113033504B
CN113033504B (application CN202110542929.7A)
Authority
CN
China
Prior art keywords
scale
video
anomaly detection
test
score
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN202110542929.7A
Other languages
Chinese (zh)
Other versions
CN113033504A
Inventor
房体品
韩忠义
杨光远
张凯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lingxin Huizhi Shandong Intelligent Technology Co ltd
Original Assignee
Guangdong Zhongju Artificial Intelligence Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Zhongju Artificial Intelligence Technology Co ltd
Priority to CN202110542929.7A
Publication of CN113033504A
Application granted
Publication of CN113033504B
Legal status: Expired - Fee Related


Classifications

    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00: Scenes; Scene-specific elements
    • G06V20/40: Scenes; Scene-specific elements in video content
    • G06V20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING OR CALCULATING; COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Multimedia (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a multi-scale video anomaly detection method comprising the following steps. Step S1: acquire video sample data and apply multi-scale variations to the video samples. Step S2: construct an anomaly detection model and train it. Step S3: test the multi-scale anomaly detection model. By partitioning the predicted frame and the real frame into blocks at different scales, the invention improves detection sensitivity to local anomalies.

Description

A multi-scale video anomaly detection method

Technical Field

The invention belongs to the technical field of security, and in particular relates to a multi-scale video anomaly detection method.

Background Art

Video anomaly detection refers to detecting abnormal behaviors that occur in video. With the growing prevalence of surveillance video, automatically identifying anomalous events in video has become increasingly necessary, since manual inspection wastes substantial resources (e.g., labor). However, video anomaly detection is challenging because anomalous events are rare and diverse. More specifically, anomalous events occur seldom and may never have been seen before; collecting all types of anomalous events is therefore quite difficult, which makes traditional binary classification unsuitable. Furthermore, anomalies are hard to define precisely: because outliers are usually context-dependent, an anomalous event in one scene can count as a normal event in another. An anomaly is generally taken to be behavior that fails expectations and deviates from the normal distribution; for example, in an ordinary traffic scene, cycling on the sidewalk can be considered abnormal.

Based on these facts, many semi-supervised video anomaly detection methods have been proposed. They generally assume that the training set contains only normal data and attempt to learn the normal distribution; at test time, events that deviate from the norm are flagged as anomalies. Such semi-supervised methods fit the practical conditions of video anomaly detection well. By how they model the temporal character of video sequences, they fall roughly into two categories: reconstruction-based methods and prediction-based methods. Reconstruction-based methods typically feed normal frames into a deep neural network and try to reconstruct them with small error; some studies also feed trajectory or skeleton features into the network for reconstruction. At test time, anomalies are then expected to exhibit large reconstruction error because they deviate from normal visual patterns. However, because of their large capacity and generality, deep networks can sometimes reconstruct even anomalous events well, so the anomaly goes undetected. Prediction-based methods typically feed consecutive frames into a neural network and try to predict the future frame, with small prediction error on normal data. Since anomaly detection is about identifying events that do not match expectations, it is natural to exploit the gap between prediction and expectation to detect anomalous events. Built on generative adversarial networks, future-frame prediction produces more realistic outputs and improves anomaly detection performance in video.

All of these schemes learn normal patterns and detect anomalies in a single space. For example, deep-learning reconstruction and prediction methods usually compare the original and generated frames pixel by pixel in the original image space. We observe that anomalies often occur in only one region of a video frame; simply computing a single peak signal-to-noise ratio (PSNR) score between the generated frame and the real frame is insensitive to such local anomalies and cannot detect them well. How to improve sensitivity to local anomalies is the technical problem to be solved. The present invention improves detection sensitivity to local anomalies by partitioning the predicted frame and the real frame into blocks at different scales. Specifically: multi-scale variations of the samples are generated from local changes, which increases the sample count while enhancing locally differing samples, improving local detection accuracy while greatly improving model training efficiency; by bounding the number of tests, the number of training iterations, and the error range, pre-trained weight files are selected in a principled way, and row-by-row convolution over a set window reduces the network's computational load and greatly speeds up training; multi-scale testing reduces local prediction errors and eliminates significant local prediction mistakes, ultimately improving the model's regional prediction ability.

Summary of the Invention

To solve the above problems in the prior art, the present invention proposes a multi-scale video anomaly detection method, the method comprising:

Step S1: acquire video sample data and apply multi-scale variations to the video samples. Specifically: grid the video samples, determine the grid differences between the gridded video sample data, track the grid changes of the video sample data based on those differences, derive scale changes from the grid changes, select a target scale, and vary the video samples based on the target scale.

Step S2: construct an anomaly detection model and train it. Specifically: construct the anomaly detection model and train it on the video sample data I1~It, It+1.

Step S3: test the multi-scale anomaly detection model. Use the anomaly detection model to predict the next frame I't+1, compute the error value between the predicted next frame and the true next frame It+1, derive a test score from the error value, and determine from the test score whether an anomaly is present.

Step S3 specifically includes the following steps:

Step S31: perform test initialization, setting the test scale Scale to 1, where the test scale is the level (number of times) of scale partitioning.

Step S32: partition the video samples by the current test scale to obtain one or more scale-partitioned video sample sequences; feed each sequence in turn into the anomaly detection model to obtain the prediction output corresponding to each sequence; compute the error value of each prediction output and obtain the score SC corresponding to that error value;

[The score formula is rendered only as an image in the source and is not reproduced.] Here ci indexes the grid cells; VN'ci is the predicted value of cell ci and VNci its true value; d(VN'ci, VNci) is the Euclidean distance between VN'ci and VNci; Scale is the test scale; dmax and dmin are the maximum and minimum of d(VN'ci, VNci) over all cells.

Step S33: check whether the partition termination condition is satisfied; if so, go to step S34; otherwise, increment the test scale and return to step S32.

Step S34: derive a total score from the individual scores; if the total score exceeds the total-score threshold, or any individual score exceeds the per-score threshold, determine that the test video sample is anomalous.

Further, obtaining the score corresponding to an error value specifically comprises looking up the score in an error-value-to-score lookup table.

Further, the partition termination condition is that the test scale equals a preset value and/or some score exceeds the score threshold.

Further, the partition termination condition is that the test scale equals the attention scale plus 1.

Further, deriving the total score from the scores specifically comprises computing a weighted sum of the scores.

Further, in step S3 the grid cells produced by partitioning are of equal size.

Further, the anomaly detection model is a neural network model.

The beneficial effects of the present invention include: (1) multi-scale variations of the samples are generated from local changes, which increases the sample count while enhancing locally differing samples, improving local detection accuracy while greatly improving model training efficiency; (2) by bounding the number of tests, the number of training iterations, and the error range, pre-trained weight files are selected in a principled way, and row-by-row convolution over a set window reduces the network's computational load and greatly speeds up training; (3) multi-scale testing reduces local prediction errors and eliminates significant local prediction mistakes, ultimately improving the model's regional prediction ability.

Brief Description of the Drawings

The accompanying drawings described here provide a further understanding of the present invention and form a part of this application, but do not unduly limit the present invention. In the drawings:

FIG. 1 is a schematic diagram of the multi-scale video anomaly detection method of the present invention.

Detailed Description of Embodiments

The present invention is described in detail below with reference to the accompanying drawings and specific embodiments; the illustrative embodiments and descriptions are used only to explain the invention and do not limit it.

Video anomaly detection is a challenging task: collecting all types of anomalous events is quite difficult, which makes traditional binary classification unsuitable. Furthermore, anomalies are hard to define precisely; because outliers are usually context-dependent, an anomalous event in one scene can count as a normal event in another.

In the prior art, anomaly detection either trains on anomaly types and flags anomalies directly with a binary classifier, or feeds normal frames into a deep neural network during training and tries to reconstruct them with small error, sometimes also feeding trajectory or skeleton features into the network for reconstruction. The binary approach obviously cannot enumerate all anomaly types; anomaly detection trained directly on normal frames, in turn, misses anomalies because anomalies tend to occur in a relatively local region. The present invention improves anomaly detection efficiency through multi-scale detection.

The multi-scale video anomaly detection method of the present invention comprises the following steps:

Step S1: acquire video sample data and apply multi-scale variations to the video samples. Specifically: grid the video samples, determine the grid differences between the gridded video sample data, track the grid changes of the video sample data based on those differences, derive scale changes from the grid changes, select a target scale, and vary the video samples based on the target scale.

Step S1 specifically includes the following steps:

Step S11: when the gridding scale is smaller than the scale threshold, grid the video sample data; specifically, grid the video sample data according to the gridding scale.

Preferably, the initial gridding scale is 1, and each time step S11 is entered the scale value increases by 1 (one unit). The increase may also follow powers of 2, i.e. J = 2^(JS-1), where JS is the minimum number of partitions corresponding to the scale value, and the scale value corresponds to the number of grid cells. At scale 2, a video frame can be split directly into top and bottom halves, or into left and right halves.

Preferably, the grid cells produced by gridding are of equal size.

Step S12: determine the grid differences between the gridded video samples. Specifically, for each pair of adjacent video frames Ii and Ii+1 in the video samples I1~It, It+1, compute the difference value CRAi,j between the corresponding grid cells VNi,j and VNi+1,j, for 1 ≤ j ≤ J and 1 ≤ i ≤ t, yielding the difference matrix [CRAi,j]; CRAi,j is the difference of the j-th grid cell between the (i+1)-th and i-th video frames, where J is the gridding scale value.
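Step S12 leaves the per-cell difference metric unspecified; a minimal sketch, assuming frames are 2-D NumPy arrays and taking CRAi,j as the mean absolute pixel difference of cell j between adjacent frames (all function and variable names here are illustrative):

```python
import numpy as np

def grid_cells(frame, cells_per_side):
    """Split a frame into cells_per_side x cells_per_side equal cells."""
    h, w = frame.shape
    ch, cw = h // cells_per_side, w // cells_per_side
    return [frame[r*ch:(r+1)*ch, c*cw:(c+1)*cw]
            for r in range(cells_per_side) for c in range(cells_per_side)]

def grid_diffs(frames, cells_per_side):
    """Difference matrix CRA[i][j]: mean absolute pixel change of cell j
    between frame i and frame i+1 (an assumed metric)."""
    cra = []
    for f_prev, f_next in zip(frames, frames[1:]):
        cells_prev = grid_cells(f_prev, cells_per_side)
        cells_next = grid_cells(f_next, cells_per_side)
        cra.append([float(np.abs(b - a).mean())
                    for a, b in zip(cells_prev, cells_next)])
    return cra

# Toy example: 4x4 frames, 2x2 grid; only the top-left cell changes.
f0 = np.zeros((4, 4))
f1 = np.zeros((4, 4))
f1[:2, :2] = 8.0  # local change confined to cell 0
cra = grid_diffs([f0, f1], cells_per_side=2)
print(cra)  # [[8.0, 0.0, 0.0, 0.0]]
```

The localized change shows up in exactly one entry of the difference matrix, which is what the later target-cell selection steps rely on.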

Step S13: if the difference matrix contains grid cells whose difference values fall within the preset difference range, take the current gridding scale as the target scale and those cells as the target grid cells, and go to step S16; otherwise, go to step S14.

Step S14: compute the ridge change entropy SH according to formula (1); if the ridge change entropy is less than the preset value, go to step S11; otherwise, go to step S15.

[Formula (1) for the ridge change entropy SH is rendered only as an image in the source and is not reproduced.] (1)

where im and jm are the grid-cell index and video-frame index at which CRAi,j attains its maximum.

Step S15: compute the grid difference sums [the formula is rendered only as an image in the source and is not reproduced]; classify the grid cells by the magnitude of their difference sums such that the gap between each class and every other class exceeds a preset threshold. If more than 2 classes are obtained, take all grid cells in the class with the largest difference sums as the target grid cells and the current gridding scale as the target scale, and go to step S16; otherwise, go to step S11.

Distinguishing grid cells by their difference sums yields classes with clear mutual differences, separating the static parts from the strongly changing parts. By introducing the time position of a cell as a weight, the invention considers the change trend as well as the difference itself, carefully selecting the basis for sample augmentation.

Step S16: vary the video samples based on the target scale and target grid cells. Specifically, generate changed video samples RI1~RIt, RIt+1, where the target grid cells of RIi copy the corresponding target-cell data of Ii, and the non-target grid cells of RIi copy the corresponding non-target-cell data of Ii-1. This sample variation both increases the number of samples and amplifies the variation of local regions within the same video frame, facilitating fast local training later.

Alternatively, the target-cell data of RIi is computed from the corresponding target-cell data in the t+1 video frames RI1~RIt, RIt+1, for example by averaging the pixel values.

Alternatively, the non-target-cell data of RIi is computed from the corresponding non-target-cell data in the t+1 video frames RI1~RIt, RIt+1, for example by averaging the pixel values.

Preferably, the number of changed video samples generated is determined from the target scale: when the difference between the target scale and the attention size is small (for example, when their size comparison falls within a threshold range), more video samples are generated, and vice versa. The attention size relates to what the anomaly detection targets; for example, if anomaly detection concerns a target person, the attention size relates to the person's size in the video samples. This makes scale-change detection most efficient.

Prior-art sample augmentation considers neither the video content and its changes nor local changes, because handling local changes requires image segmentation, which for sample augmentation is clearly not worth the cost. The present invention finds a reasonable granularity range through grid splitting and generates multi-scale variations of the samples from their local changes; while increasing the sample count, it ensures the samples enhance local differences, thereby improving local detection accuracy while greatly improving model training efficiency.
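The cell-wise copy of step S16 can be sketched as follows, assuming NumPy frames; the fallback for the first frame (which has no predecessor Ii-1) is an assumption, and all names are illustrative:

```python
import numpy as np

def cell_slice(idx, cells_per_side, shape):
    """Row/column slices of cell idx in a frame of the given shape."""
    h, w = shape
    ch, cw = h // cells_per_side, w // cells_per_side
    r, c = divmod(idx, cells_per_side)
    return slice(r*ch, (r+1)*ch), slice(c*cw, (c+1)*cw)

def vary_samples(frames, target_cells, cells_per_side):
    """Step S16 sketch: RI_i keeps frame i's data in the target cells and
    frame i-1's data elsewhere (frame 0 falls back to itself, an assumed
    boundary handling not specified in the text)."""
    varied = []
    for i, frame in enumerate(frames):
        base = frames[i - 1] if i > 0 else frame
        ri = base.copy()
        for idx in target_cells:
            rs, cs = cell_slice(idx, cells_per_side, frame.shape)
            ri[rs, cs] = frame[rs, cs]
        varied.append(ri)
    return varied

f0 = np.zeros((4, 4))
f1 = np.full((4, 4), 5.0)
ri = vary_samples([f0, f1], target_cells=[0], cells_per_side=2)
# RI_1: cell 0 comes from f1 (5.0), the rest from f0 (0.0)
print(ri[1][0, 0], ri[1][3, 3])  # 5.0 0.0
```

The generated RI_1 changes only in the target cell relative to its predecessor, which is the amplified local variation the step describes.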

Step S2: construct an anomaly detection model and train it. Specifically: construct the anomaly detection model and train it on the video sample data I1~It, It+1.

Preferably, the anomaly detection model is a neural network model.

Constructing the anomaly detection model, namely constructing an anomaly detection model based on a neural network, specifically includes the following steps:

Step SA1: map the video sample data through the mapping layer of a convolutional neural network, generating an input matrix from each video sample frame, one input matrix per frame.

Step SA2: set up a convolution kernel with window length W and apply consecutive convolution operations to W input matrices.

Preferably, a row-by-row convolution is applied to the W input matrices. Specifically, each row of an input matrix is treated as an input vector, and the convolution operates on these vectors: Kerk = f(W * Vk:k+w + b), where Kerk is the result of the k-th convolution, f() is the activation function, b is a convolution parameter, W is the window width, and Vk:k+w denotes the input vectors from the k-th through the (k+w-1)-th.

Preferably, W ≤ t.

Step SA3: apply a pooling operation to the convolution result.

Preferably, the convolution is a mean (averaging) convolution.

Alternatively, the neural network model is a U-net model. Video samples suffer simultaneously from sparse feature vectors and high dimensionality, and the massive computation a neural network then requires can crash the machine; by performing row-by-row convolution over a set window, the invention reduces the network's computational load and improves anomaly detection efficiency.
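The row-by-row windowed convolution of step SA2 can be sketched as below, with an assumed ReLU activation and illustrative weights; note the patent overloads W as both window length and kernel weights, so here `weights` plays the kernel role and its length is the window:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def rowwise_conv(matrix, weights, b=0.0, act=relu):
    """Row-by-row convolution sketch: each window of w consecutive rows
    (input vectors) is combined as f(sum_i weights[i] * row[k+i] + b).
    weights, b, and the ReLU activation are illustrative choices."""
    w = len(weights)
    windows = [matrix[k:k + w] for k in range(matrix.shape[0] - w + 1)]
    return np.stack([act(np.tensordot(weights, win, axes=1) + b)
                     for win in windows])

m = np.arange(12, dtype=float).reshape(4, 3)  # 4 input vectors of length 3
out = rowwise_conv(m, weights=np.array([1.0, -1.0]))  # window of 2 rows
print(out)
```

Sliding a short window over rows rather than convolving whole matrices keeps the per-step cost linear in the number of rows, which is the computational saving the text claims.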

Constructing the anomaly detection model further includes a weight-file selection step, specifically:

Step SB1: select a weight file from the weight-file set as the model's current weight file and train the model for a first given number of iterations.

When no weight file in the set satisfies the requirements, select from the set the weight file with the smallest error value in step SB4 as the selected weight file.

Preferably, the weight files are pre-trained weight files.

Step SB2: run a first test with the anomaly detection model and obtain the error value; if the error value is within the first reasonable range, go to step SB3; otherwise, go to step SB1.

Preferably, the error value is the average error over the first number of tests.

Preferably, the first number of tests is 0.1 × the first given number of iterations.

Step SB3: continue training the model with the current weight file for a second given number of iterations.

Step SB4: run a second test with the model and obtain the error value; if the error value is within the second reasonable range, take the current weight file as the selected weight file; otherwise, go to step SB1.

Preferably, the first given number NG1 is less than or equal to the second given number NG2.

Preferably, as the number of training iterations grows, the reasonable ranges narrow accordingly. Specifically:

the first reasonable range D1 = [DD1, DU1] satisfies the following relation:

[The relation for D1 is rendered only as an image in the source and is not reproduced.]

The second reasonable range D2 = [DD2, DU2] satisfies the following relation:

[The relation for D2 is rendered only as an image in the source and is not reproduced.]

Preferably, the second number of tests equals the first number of tests. Although using weight files from other models can speed up training, how to choose them is rarely addressed in the prior art; by bounding the number of tests, the number of training iterations, and the error range, the invention selects pre-trained weight files in a principled way, greatly speeding up training.
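Steps SB1-SB4 describe a selection loop over candidate weight files; a sketch under the assumption that training and testing are supplied as callables, with illustrative names and toy error dynamics:

```python
def select_weights(weight_files, train, test_error,
                   ng1, ng2, range1, range2):
    """Sketch of steps SB1-SB4: try each candidate pre-trained weight file,
    train briefly, and accept it only if its test error stays within the
    two (narrowing) reasonable ranges; otherwise fall back to the candidate
    with the smallest recorded error. All names here are illustrative."""
    errors = {}
    for wf in weight_files:
        train(wf, ng1)                      # SB1: first training run
        e1 = test_error(wf)                 # SB2: first test
        if not (range1[0] <= e1 <= range1[1]):
            errors[wf] = e1
            continue
        train(wf, ng2)                      # SB3: continued training
        e2 = test_error(wf)                 # SB4: second test
        if range2[0] <= e2 <= range2[1]:
            return wf                       # accepted weight file
        errors[wf] = e2
    # No file passed both checks: pick the smallest-error candidate.
    return min(errors, key=errors.get)

# Toy run: pretend each training run halves a file's error.
state = {"a": 0.9, "b": 0.5}
chosen = select_weights(
    ["a", "b"],
    train=lambda wf, n: state.__setitem__(wf, state[wf] * 0.5),
    test_error=lambda wf: state[wf],
    ng1=10, ng2=20, range1=(0.0, 0.3), range2=(0.0, 0.2))
print(chosen)  # b
```

The two-stage check mirrors the text: a cheap first test gates a candidate before the longer second training run is spent on it.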

Step S3: test the multi-scale anomaly detection model. Use the anomaly detection model to predict the next frame I't+1, compute the error value between the predicted next frame and the true next frame It+1, derive a test score from the error value, and determine from the test score whether an anomaly is present.

Preferably, the error value is the peak signal-to-noise ratio; when the error value is large, the test score is high, and when the test score exceeds the anomaly score, an anomaly is determined.

The error is specifically the intensity loss and/or gradient loss between the predicted next frame and the true next frame.

Step S3 specifically includes the following steps:

Step S31: perform test initialization, setting the test scale Scale to 1, where the test scale is the level (number of times) of scale partitioning.

Step S32: partition the video samples by the current test scale to obtain one or more scale-partitioned video sample sequences; feed each sequence in turn into the anomaly detection model to obtain the prediction output corresponding to each sequence; compute the error value of each prediction output and obtain the score SC corresponding to that error value;

SC = (1/Scale) × Σci [ (d(VN'ci, VNci) − dmin) / (dmax − dmin) ]

wherein: ci is the ci-th grid; VN'ci is the predicted value of the ci-th grid; VNci is the real value of the ci-th grid; d(VN'ci, VNci) is the Euclidean distance between VN'ci and VNci; Scale is the test scale; dmax and dmin are the maximum and minimum of d(VN'ci, VNci) over the grids;
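The per-scale score SC could be computed along these lines. Since the original formula is rendered as an image, the min-max normalisation over grid cells and the mapping from the test scale to a Scale x Scale grid are assumptions inferred from the surrounding definitions:

```python
import numpy as np

def grid_score(pred, real, scale):
    """Per-scale score SC: Euclidean distances d(VN'ci, VNci) between
    predicted and real grid cells, min-max normalised and averaged.
    ASSUMPTION: test scale `scale` maps to a (scale x scale) grid."""
    h, w = real.shape
    gh, gw = h // scale, w // scale           # cell height and width
    dists = []
    for r in range(scale):
        for c in range(scale):
            p = pred[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            t = real[r * gh:(r + 1) * gh, c * gw:(c + 1) * gw]
            dists.append(np.linalg.norm(p - t))   # d(VN'ci, VNci)
    dists = np.asarray(dists)
    dmin, dmax = dists.min(), dists.max()
    if dmax == dmin:
        return 0.0                            # all cells equally far apart
    return float(np.mean((dists - dmin) / (dmax - dmin)))
```

A single badly predicted cell then dominates the normalised distances, which matches the goal of exposing localized prediction errors.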

The obtaining of the score corresponding to the error value is specifically: looking up the score corresponding to the error value in a comparison table of error values and scores;

Step S33: judge whether the division termination condition is satisfied; if so, proceed to step S34; otherwise, increment the test scale and return to step S32;

Preferably: the division termination condition is that the test scale equals a preset value and/or that some score exceeds the score threshold;

Alternatively: the division termination condition is that the test scale equals the attention scale + 1;

Step S34: obtain a total score based on the scores; if the total score exceeds the total-score threshold, or some individual score exceeds the score threshold, determine that the test video sample is anomalous. The present invention reduces local prediction errors through multi-scale testing and eliminates significant local prediction errors, thereby improving the regional prediction ability of the model and achieving the ultimate goal of efficient anomaly detection;

The obtaining of the total score based on the scores is specifically: weighted summation of the above scores to obtain the total score; SCALL = ΣSC;
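Steps S31-S34 taken together can be sketched as a single test loop; `score_fn` stands in for the per-scale scoring of step S32, the weights are the (unspecified) weights of the weighted summation, and the early exit mirrors the "one score exceeds the score threshold" condition:

```python
def multiscale_anomaly_test(pred, real, max_scale, score_fn,
                            score_threshold, total_threshold, weights=None):
    """Steps S31-S34: iterate the test scale from 1 upward, score each
    scale, exit early when any single score exceeds score_threshold,
    otherwise compare the weighted total against total_threshold.
    score_fn(pred, real, scale) is a hypothetical per-scale scorer."""
    scores = []
    scale = 1                                     # S31: initialise the test scale
    while scale <= max_scale:                     # S33: division termination condition
        sc = score_fn(pred, real, scale)          # S32: score at this scale
        scores.append(sc)
        if sc > score_threshold:                  # one score over threshold -> anomaly
            return True, scores
        scale += 1                                # increment the test scale
    if weights is None:
        weights = [1.0] * len(scores)             # unweighted sum by default
    total = sum(w * s for w, s in zip(weights, scores))   # S34: SCALL = sum of SC
    return total > total_threshold, scores
```

The returned score list makes it possible to see which scale triggered the detection.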

Software environments can be divided into two categories: system software and application software executing on one or more hardware environments. In one embodiment, the methods and processes disclosed herein may be implemented as system software, application software, or a combination thereof. System software may include control programs, such as an operating system (OS) and an information management system, that instruct one or more processors (e.g., microprocessors) in a hardware environment how to operate and process information. Application software may include, but is not limited to, program code, data structures, firmware, resident software, microcode, or any other form of information or routine that can be read, analyzed, or executed by a processor.

In other words, application software may be implemented as program code embedded, in the form of a machine-usable or computer-readable storage medium, in a computer program product that provides the program code for use by or in connection with a machine, computer, or any instruction execution system. Furthermore, the application software may comprise one or more computer programs that execute on top of the system software after being loaded from a storage medium into local memory. In a client-server architecture, the application software may include client software and server software. For example, in one embodiment, client software may execute on a client computing system that is distinct from and independent of the server computing system executing the server software.

The software environment may also include browser software for accessing data provided over a local or remote computing network. Further, the software environment may include a user interface (e.g., a graphical user interface (GUI)) for receiving user commands and data. It should be reiterated that the hardware and software architectures and environments described above are for example purposes only. Accordingly, one or more embodiments may be implemented on any type of system architecture, functional or logical platform, or processing environment.

The above descriptions are only preferred embodiments of the present invention; therefore, all equivalent changes or modifications made according to the structures, features and principles described in the scope of this patent application are included within the scope of this patent application.

Claims (6)

1. A multi-scale video anomaly detection method, the method comprising:
step S1: acquiring video sample data, and carrying out multi-scale change on the video sample; specifically, the method comprises the following steps: gridding the video samples, determining grid difference among gridded video sample data, tracking the grid change condition of the video sample data based on the grid difference, carrying out scale change according to the change condition of the grid, selecting a target scale and changing the video samples based on the target scale;
step S2: constructing an anomaly detection model and carrying out model training; specifically, the method comprises the following steps: constructing an anomaly detection model and carrying out model training by adopting video sample data I1~It, It+1;
the method for constructing the anomaly detection model comprises the following steps of:
step SA1: performing input mapping on the video sample data through a mapping layer of a convolutional neural network, and generating input matrixes based on video sample frames, wherein each sample frame corresponds to one input matrix;
step SA2: setting a convolution kernel with a window length of W, and performing continuous convolution operations on W input matrixes;
performing a row-by-row convolution operation on the W input matrixes, specifically: taking a row of elements in the input matrix as an input vector, and performing the convolution operation on the input vector: Kerk = f(W * Vk:k+w + b); wherein Kerk is the convolution kernel of the kth convolution, f() represents the activation function, b is a convolution parameter, W is the window width, and Vk:k+w denotes the input vectors from the kth input vector to the (k+w-1)th input vector;
step SA3: performing a pooling operation on the convolution result;
the method for constructing the anomaly detection model further comprises a weight file selection step, which specifically comprises the following steps:
step SB1: selecting a weight file from the weight file set as the current weight file of the model, and training the model; the number of training iterations is a first given number;
step SB2: performing a first test by adopting the anomaly detection model and obtaining an error value; if the error value is within a first reasonable range, go to step SB3; otherwise, go to step SB1;
step SB3: continuing to train the model with the current weight file, the number of training iterations being a second given number;
step SB4: performing a second test by using the model and obtaining an error value; if the error value is within a second reasonable range, taking the current weight file as the selected weight file; otherwise, go to step SB1;
step S3: testing the multi-scale anomaly detection model; predicting with the anomaly detection model to obtain a predicted value I't+1 of the next frame, and calculating an error value between the predicted value and the real value It+1 of the next frame; obtaining a test score based on the error value, and determining whether an anomaly exists according to the test score;
step S3 specifically includes the following steps:
step S31: carrying out test initialization; initializing the test scale Scale to 1; wherein: the test scale is the level or number of times of scale division;
step S32: carrying out scale division on the video samples according to the test scale to obtain one or more scale-divided video sample sequences; respectively and sequentially inputting the video sample sequences into the anomaly detection model to obtain the prediction outputs corresponding to the one or more video sample sequences; calculating the error value of each prediction output, and obtaining the score SC corresponding to the error value;
SC = (1/Scale) × Σci [ (d(VN'ci, VNci) − dmin) / (dmax − dmin) ]
wherein: ci is the ci-th grid; VN'ci is the predicted value of the ci-th grid; VNci is the real value of the ci-th grid; d(VN'ci, VNci) is the Euclidean distance between VN'ci and VNci; Scale is the test scale; dmax is the maximum value of d(VN'ci, VNci), and dmin is the minimum value of d(VN'ci, VNci);
step S33: judging whether the division termination condition is met; if so, entering step S34; otherwise, performing an incremental operation on the test scale and returning to step S32;
step S34: obtaining a total score based on the scores, and determining that the test video sample has an anomaly if the total score exceeds a total score threshold or if any individual score exceeds the score threshold.
2. The multi-scale video anomaly detection method according to claim 1, wherein obtaining the score corresponding to the error value specifically comprises: looking up the score corresponding to the error value in a comparison table of error values and scores.
3. The multi-scale video anomaly detection method according to claim 2, wherein the division termination condition is that the test scale equals a preset value and/or that some score exceeds the score threshold.
4. The multi-scale video anomaly detection method according to claim 3, wherein obtaining the total score based on the scores specifically comprises: weighting and summing the scores to obtain the total score.
5. The method for detecting anomaly based on multi-scale video according to claim 4, wherein the step S3 further comprises: the grids divided after gridding are the same in size.
6. The multi-scale video based anomaly detection method according to claim 5, wherein the anomaly detection model is a neural network model.
CN202110542929.7A 2021-05-19 2021-05-19 A multi-scale video anomaly detection method Expired - Fee Related CN113033504B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110542929.7A CN113033504B (en) 2021-05-19 2021-05-19 A multi-scale video anomaly detection method


Publications (2)

Publication Number Publication Date
CN113033504A CN113033504A (en) 2021-06-25
CN113033504B true CN113033504B (en) 2021-08-27

Family

ID=76455571

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110542929.7A Expired - Fee Related CN113033504B (en) 2021-05-19 2021-05-19 A multi-scale video anomaly detection method

Country Status (1)

Country Link
CN (1) CN113033504B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106951827A (en) * 2017-02-21 2017-07-14 南京邮电大学 A Global Abnormal Behavior Detection Method Based on Object Motion Characteristics

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6819790B2 (en) * 2002-04-12 2004-11-16 The University Of Chicago Massive training artificial neural network (MTANN) for detecting abnormalities in medical images
US8189905B2 (en) * 2007-07-11 2012-05-29 Behavioral Recognition Systems, Inc. Cognitive model for a machine-learning engine in a video analysis system
CN106097353B (en) * 2016-06-15 2018-06-22 北京市商汤科技开发有限公司 Method for segmenting objects and device, computing device based on the fusion of multi-level regional area
CN106548153B (en) * 2016-10-27 2019-05-28 杭州电子科技大学 Video abnormality detection method based on graph structure under multi-scale transform
CN107818302A (en) * 2017-10-20 2018-03-20 中国科学院光电技术研究所 Non-rigid multi-scale object detection method based on convolutional neural network
CN108052859B (en) * 2017-10-31 2022-02-25 深圳大学 A method, system and device for abnormal behavior detection based on clustered optical flow features
CN109918995B (en) * 2019-01-16 2023-07-28 上海理工大学 A Crowd Anomaly Detection Method Based on Deep Learning
CN112258431B (en) * 2020-09-27 2021-07-20 成都东方天呈智能科技有限公司 Image classification model and classification method based on hybrid depthwise separable dilated convolution
CN112801109A (en) * 2021-04-14 2021-05-14 广东众聚人工智能科技有限公司 Remote sensing image segmentation method and system based on multi-scale feature fusion


Also Published As

Publication number Publication date
CN113033504A (en) 2021-06-25


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Jiang Zhifang

Inventor after: Fang Tipin

Inventor after: Han Zhongyi

Inventor after: Yang Guangyuan

Inventor after: Zhang Kai

Inventor before: Fang Tipin

Inventor before: Han Zhongyi

Inventor before: Yang Guangyuan

Inventor before: Zhang Kai

TR01 Transfer of patent right

Effective date of registration: 20240207

Address after: Room 1609, 16th Floor, Building 2, Xinsheng Building, Northwest Corner of Xinluo Street and Yingxiu Road Intersection, Shunhua Road Street, Jinan Area, China (Shandong) Pilot Free Trade Zone, Jinan City, Shandong Province, 250014

Patentee after: Lingxin Huizhi (Shandong) Intelligent Technology Co.,Ltd.

Country or region after: China

Address before: Room 156-8, No.5 Lingbin Road, Dangan Town, Xiangzhou District, Zhuhai City, Guangdong Province 519000

Patentee before: Guangdong Zhongju Artificial Intelligence Technology Co.,Ltd.

Country or region before: China

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210827