
CN118301287A - Intelligent video data interception and analysis system and method based on artificial intelligence - Google Patents


Info

Publication number
CN118301287A
Authority
CN
China
Prior art keywords
monitoring
target
monitoring equipment
video data
range
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202410386687.0A
Other languages
Chinese (zh)
Other versions
CN118301287B (en)
Inventor
谭为伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Jidian Information Technology Co ltd
Nanjing Cook Network Technology Co ltd
Original Assignee
Nanjing Jidian Information Technology Co ltd
Nanjing Cook Network Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Jidian Information Technology Co ltd, Nanjing Cook Network Technology Co ltd filed Critical Nanjing Jidian Information Technology Co ltd
Priority to CN202410386687.0A
Publication of CN118301287A
Application granted
Publication of CN118301287B
Legal status: Active (anticipated expiration)

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/24Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26291Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists for providing content or additional data updates, e.g. updating software modules, stored at the client
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/458Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules ; time-related management operations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/188Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract


The present invention relates to the technical field of video data analysis, and specifically to an artificial-intelligence-based system and method for intelligent video data interception and analysis. The system comprises a monitoring information acquisition module, a data transmission analysis module, a video interception management module, and a target identification and tracking module. The monitoring information acquisition module collects monitored historical video data, current video data, and monitoring environment information. The data transmission analysis module analyzes the monitoring information to decide whether to link monitoring devices and to select the devices' data transmission mode. After linkage is selected, the video interception management module intercepts the video data monitored by the transmitting-side device, and the target identification and tracking module uses the receiving-side device to receive the video data and to identify and track the target. Performing identification and tracking at the front end in this way accelerates target tracking and relieves the terminal's video data storage pressure.

Description

Intelligent video data interception and analysis system and method based on artificial intelligence
Technical Field
The invention relates to the technical field of video data analysis, in particular to an intelligent video data interception and analysis system and method based on artificial intelligence.
Background
With the development of video monitoring technology, intelligent monitoring equipment has become increasingly popular. Such equipment is embedded with an intelligent chip, giving it functions such as automatically identifying targets in video, storing target data, and automatically alarming. It can identify and track targets in the monitored video at the front end, without needing to transmit the monitored video data to a terminal for further target identification, which effectively relieves video storage pressure and accelerates target identification and tracking.
However, identifying and tracking a dynamic target often requires several monitoring devices to track it simultaneously in order to grasp how the target moves. Although intelligent monitoring equipment is gradually becoming widespread, some monitoring devices lack the recognition function because no intelligent chip is embedded, and in the prior art the video data monitored by such devices is generally transmitted to a terminal for further target identification. On the one hand, when video data from devices without the recognition function is sent to the terminal for identification, the target's dynamic movement data cannot be well connected with the target dynamic data previously monitored by devices that do have the recognition function, which hinders rapid acquisition of the target's dynamics. On the other hand, the terminal's storage space is limited, and receiving too much video data containing no target easily wastes storage space.
Therefore, an intelligent intercepting and analyzing system and method for video data based on artificial intelligence are needed to solve the above problems.
Disclosure of Invention
The invention aims to provide an intelligent video data interception and analysis system and method based on artificial intelligence, which are used to solve the problems described in the background art.
In order to solve the above technical problems, the invention provides the following technical scheme: an artificial-intelligence-based intelligent video data interception and analysis system, the system comprising: a monitoring information acquisition module, a data transmission analysis module, a video interception management module and a target identification tracking module;
the output end of the monitoring information acquisition module is connected with the input ends of the data transmission analysis module and the video interception management module, the output end of the data transmission analysis module is connected with the input end of the video interception management module, and the output end of the video interception management module is connected with the input end of the target identification tracking module;
the monitoring information acquisition module is used for acquiring monitored historical video data, current video data and monitoring environment information;
The data transmission analysis module is used for selecting whether to link the monitoring equipment or not by analyzing the moving data of the target in the video data and the monitoring environment information, and selecting a data transmission mode of the monitoring equipment;
The video interception management module is used for intercepting video data monitored by monitoring equipment representing a transmitting party after the linkage monitoring equipment is selected, and transmitting the intercepted video data to the monitoring equipment representing a receiving party;
The target recognition and tracking module is used for recognizing and tracking the target by using monitoring equipment representing the receiver.
Further, the monitoring information acquisition module comprises a video data acquisition unit, a target data acquisition unit and an environment data acquisition unit;
The video data acquisition unit is used for acquiring video data of different targets monitored and tracked in the past by the monitoring equipment with the target recognition function;
the target data acquisition unit is used for acquiring target characteristic data to be tracked currently, and acquiring current target video data after the monitoring equipment with the target recognition function recognizes a target to be tracked currently;
The environment data acquisition unit is used for acquiring the type, monitoring range and position information of the monitoring equipment closest to the monitoring equipment that identifies the target currently to be tracked; the monitoring equipment types comprise two kinds: monitoring equipment with the target recognition function and monitoring equipment without the target recognition function.
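The environment-data step above amounts to selecting, near the device that identified the target, the closest device lacking the recognition function. A minimal sketch follows; the device records, field layout, and coordinates are illustrative assumptions, not taken from the patent text.

```python
import math

# Hypothetical device records (id, has_recognition_chip, x, y); the field
# layout and coordinates are illustrative, not taken from the patent.
devices = [
    ("cam_A", True,  0.0, 0.0),   # device that identified the target
    ("cam_B", False, 3.0, 4.0),
    ("cam_C", False, 1.0, 1.0),
    ("cam_D", True,  2.0, 2.0),
]

def nearest_without_recognition(origin, candidates):
    """Pick the closest device that lacks the target-recognition function."""
    ox, oy = origin[2], origin[3]
    plain = [d for d in candidates if not d[1]]
    return min(plain, key=lambda d: math.hypot(d[2] - ox, d[3] - oy))

second_device = nearest_without_recognition(devices[0], devices[1:])
print(second_device[0])  # -> cam_C (distance 1.41 vs cam_B's 5.0)
```

Euclidean distance is assumed here; the patent only states that the nearest device is chosen, not the distance metric.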
Further, the data transmission analysis module comprises a mobile probability analysis unit and a video transmission selection unit;
The input end of the mobile probability analysis unit is connected with the output ends of the video data acquisition unit and the environment data acquisition unit, and the output end of the mobile probability analysis unit is connected with the input end of the video transmission selection unit;
The mobile probability analysis unit is used for taking the monitoring equipment that identifies the target currently to be tracked as the first monitoring equipment, and taking the monitoring equipment closest to the first monitoring equipment and without the target recognition function as the second monitoring equipment. The unit calls the video data of the different targets monitored and tracked in the past by the first monitoring equipment, acquires the moving tracks of those historical targets before they disappeared from the monitoring range of the first monitoring equipment, sets sampling points at equal intervals on each moving track, and connects the sampling points to construct the movement vectors of each historical target, two adjacent sampling points being respectively the starting point and the ending point of any one movement vector so formed; the historical targets are the different targets monitored and tracked in the past. The movement vector formed by the end point of the moving track before the target disappeared from the monitoring range of the first monitoring equipment and the sampling point immediately preceding that end point is taken as the first movement vector. The monitoring range information of the second monitoring equipment is acquired and the center point of that monitoring range is confirmed; the vector connecting the end point of the historical target's moving track to that center point is taken as the second movement vector. The included angle between the first movement vector and the second movement vector is calculated and taken as the deflection angle between the historical target and the monitoring range of the second monitoring equipment. The unit then analyzes
the probability that the historical targets corresponding to different deflection angles moved into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment in the past;
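The deflection-angle construction above (the target's last movement vector versus the vector from the trajectory endpoint to the second device's range center) can be sketched as follows; the trajectory samples and range-center coordinates are illustrative.

```python
import math

def deflection_angle(trajectory, range_center):
    """Angle (radians) between the target's last movement vector and the
    vector from the trajectory endpoint to the second device's range center."""
    (x0, y0), (x1, y1) = trajectory[-2], trajectory[-1]
    first = (x1 - x0, y1 - y0)                       # last movement step
    second = (range_center[0] - x1, range_center[1] - y1)
    dot = first[0] * second[0] + first[1] * second[1]
    norm = math.hypot(*first) * math.hypot(*second)
    return math.acos(max(-1.0, min(1.0, dot / norm)))  # clamp for safety

# Target moving along +x; the second device's range center lies straight ahead:
track = [(0, 0), (1, 0), (2, 0)]
print(deflection_angle(track, (5, 0)))  # -> 0.0 (heading directly at center)
```

Intuitively, a small deflection angle means the target was last heading toward the second device's range, so it is more likely to appear there.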
The video transmission selection unit is used for forming training samples from the deflection angles and probabilities, fitting the training samples, and establishing a movement probability judgment model. The unit calls the video data of the current target, acquires the moving track of the current target before it disappears from the monitoring range of the first monitoring equipment, analyzes the deflection angle between the current target and the monitoring range of the second monitoring equipment, and substitutes that deflection angle into the movement probability judgment model to predict the probability that the current target moves into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment. A probability threshold is set and the predicted probability is compared with it. If the predicted probability exceeds the threshold, the current target is predicted to move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, and the video data monitored by the second monitoring equipment is selected to be transmitted directly to the first monitoring equipment; if the predicted probability does not exceed the threshold, the current target is predicted not to move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, and the video data monitored by the second monitoring equipment is selected to be transmitted to the monitoring terminal.
Further, the video interception management module comprises a monitoring equipment connection unit, a moving time prediction unit and a video interception transmission unit;
the input end of the monitoring equipment connecting unit is connected with the output end of the video transmission selecting unit, the input end of the moving time predicting unit is connected with the output ends of the monitoring equipment connecting unit and the target data obtaining unit, and the output end of the moving time predicting unit is connected with the input end of the video intercepting and transmitting unit;
The monitoring equipment connection unit is used for connecting the first monitoring equipment with the second monitoring equipment through a local area network if the current target is predicted to move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment; linking the two devices over the local area network allows the video data monitored by the second monitoring equipment to be transmitted directly to the first monitoring equipment. The first monitoring equipment is the monitoring equipment representing the receiving party, and the second monitoring equipment is the monitoring equipment representing the transmitting party;
The moving time prediction unit is used for analyzing, if the current target is predicted to move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, the interval duration between the time point at which the current target appears in the monitoring range of the second monitoring equipment and the time point at which it disappeared from the monitoring range of the first monitoring equipment, and for selecting the starting time point of video data transmission;
The video interception and transmission unit is used for intercepting and processing video data directly transmitted to the first monitoring equipment: and intercepting video data after the starting time point in the video data monitored by the second monitoring equipment, and transmitting the intercepted video data to the first monitoring equipment.
Further, the target recognition tracking module comprises a video data receiving unit and a target recognition unit;
the input end of the video data receiving unit is connected with the output end of the video intercepting and transmitting unit, and the output end of the video data receiving unit is connected with the input end of the target identifying unit;
the video data receiving unit is used for controlling the first monitoring equipment to receive the intercepted video data;
The target recognition unit is used for analyzing the video data with the first monitoring equipment and extracting the object features appearing in the video data using artificial intelligence technology; the extracted object features are compared with the features of the target currently to be tracked so as to recognize and track the target. The first monitoring equipment has the target recognition function.
An intelligent video data interception and analysis method based on artificial intelligence comprises the following steps:
s10: collecting monitored historical video data, current video data and monitoring environment information;
s20: selecting whether to link the monitoring equipment or not by analyzing the moving data of the target in the video data and the monitoring environment information, and selecting a data transmission mode of the monitoring equipment;
S30: after the linkage monitoring equipment is selected, intercepting video data monitored by the monitoring equipment representing a transmitting party, and transmitting the intercepted video data to the monitoring equipment representing a receiving party;
S40: the target is identified and tracked using a monitoring device on behalf of the recipient.
Further, in S10: the method comprises the steps of acquiring video data of different targets monitored and tracked in the past by monitoring equipment with a target recognition function, acquiring target feature data required to be tracked currently, acquiring current target video data after the monitoring equipment with the target recognition function recognizes the target required to be tracked currently, and acquiring monitoring equipment type, monitoring range and monitoring equipment position information closest to the monitoring equipment recognizing the target required to be tracked currently.
Further, in S20: if the type of the monitoring equipment closest to the monitoring equipment identifying the target currently to be tracked is monitoring equipment without the target recognition function, the monitoring equipment identifying the target currently to be tracked is used as the first monitoring equipment, and the monitoring equipment closest to the first monitoring equipment and without the target recognition function is used as the second monitoring equipment. Video data of the different targets monitored and tracked by the first monitoring equipment in the past are acquired, the moving tracks of those historical targets before they disappeared from the monitoring range of the first monitoring equipment are acquired, sampling points are arranged on the moving tracks at equal intervals, and the sampling points are connected to construct the movement vectors of each historical target, two adjacent sampling points being respectively the starting point and the ending point of any one movement vector so formed. The movement vector formed by the end point of the moving track before the target disappeared from the monitoring range of the first monitoring equipment and the sampling point immediately preceding that end point is taken as the first movement vector, with coordinates (e, f). The monitoring range information of the second monitoring equipment is acquired and the center point of the monitoring range is confirmed; the vector connecting the end point of the historical target's moving track to that center point is taken as the second movement vector, with coordinates (g, h). According to the coordinate formula θ = arccos((e·g + f·h) / (√(e² + f²) · √(g² + h²))), the included angle between the first movement vector and the second movement vector is calculated and taken as the deflection angle between the historical target and the monitoring range of the second monitoring equipment, giving the deflection angle set θ = {θ1, …, θi, …, θn} of n different deflection angles, where i denotes an arbitrary index and θi denotes the i-th deflection angle in the set. For each deflection angle it is checked whether the historical targets appeared in the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment: the number of historical targets whose deflection angle with the monitoring range of the second monitoring equipment is θi is counted as Q, and the number of those Q historical targets that appeared in the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is counted as w. According to the formula Pi = w/Q, the probability Pi that the historical targets corresponding to deflection angle θi moved into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is obtained, giving the probability set P = {P1, P2, …, Pi, …, Pn}. The deflection angles and probabilities form the training samples {(θ1, P1), (θ2, P2), …, (θn, Pn)}, which are fitted to establish the movement probability judgment model y = a·x + b, where a and b represent fitting coefficients, x represents the variable standing for the deflection angle in the model, and y represents the variable standing for the probability. The current target's video data is called, the moving track of the current target before it disappears from the monitoring range of the first monitoring equipment is acquired, and the deflection angle θ′ between the current target and the monitoring range of the second monitoring equipment is obtained by analysis, calculated in the same way as the deflection angles above. Letting x = θ′ and substituting into the movement probability judgment model, the probability that the current target moves into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is predicted as P′ = a·θ′ + b. A probability threshold q is set and P′ is compared with q: if P′ > q, the current target is predicted to move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, linkage of the first monitoring equipment and the second monitoring equipment is selected, and the video data monitored by the second monitoring equipment is selected to be transmitted directly to the first monitoring equipment; if P′ ≤ q, the current target is predicted not to move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, and the video data monitored by the second monitoring equipment is selected to be transmitted to the monitoring terminal;
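The model-fitting and thresholding step above can be sketched as follows. The patent names two fitting coefficients but the exact functional form is garbled in the source text, so a least-squares straight line y = a·x + b is assumed here; the training samples and threshold are illustrative.

```python
# Minimal sketch of the movement-probability judgment model: fit (deflection
# angle, probability) samples with a least-squares line, then compare the
# prediction for the current target against the threshold q.
def fit_line(samples):
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    b = (sy - a * sx) / n                           # intercept
    return a, b

# Illustrative history: smaller deflection angles moved into range more often.
samples = [(0.1, 0.9), (0.5, 0.7), (1.0, 0.45), (1.5, 0.2)]
a, b = fit_line(samples)

theta_now = 0.3          # deflection angle of the current target (assumed)
q = 0.5                  # probability threshold (assumed)
p_pred = a * theta_now + b
link = p_pred > q        # True -> link the devices, transmit directly
print(round(p_pred, 2), link)
```

With these sample values the fit gives a = -0.5 and b = 0.95, so the predicted probability is 0.8 and the devices would be linked.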
When the type of the monitoring equipment closest to the monitoring equipment identifying the current target to be tracked is monitoring equipment without the target recognition function, big data is used to acquire and analyze the probability that historical targets which disappeared at different positions in the monitoring range of the first monitoring equipment moved into the range of the second monitoring equipment. The deflection angles between the historical targets and the monitoring range of the second monitoring equipment are analyzed, and the proportion of targets at each deflection angle that moved into the range of the second monitoring equipment in the past is counted. The deflection angles and probabilities in the historical data form training samples used to establish the movement probability judgment model, and the current target's data is substituted in to predict whether the current target will move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment; if so, the video monitored by the second monitoring equipment is transmitted directly to the first monitoring equipment. Pre-judging in advance whether the current target will move into the monitoring range of the second monitoring equipment helps avoid invalid linkage of monitoring equipment, and transmitting the video data monitored by equipment without the recognition function directly to equipment with the recognition function allows that data to be quickly connected with the target dynamic data monitored earlier, which is conducive to rapidly grasping the target's dynamics.
Further, in S30: the first monitoring equipment and the second monitoring equipment are connected through a local area network, and the video data monitored by the second monitoring equipment is intercepted before being transmitted directly to the first monitoring equipment. The interception processing is as follows: the number of historical targets that appeared in the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is called as k; the set of straight-line distances from the end point positions of the moving tracks of the k historical targets before disappearance from the monitoring range of the first monitoring equipment to the center point of the monitoring range of the second monitoring equipment is collected as d = {d1, d2, …, dk}; the set of moving speeds of the k historical targets monitored by the first monitoring equipment is collected as v = {v1, v2, …, vk}; according to the formula Lj = dj/vj, the moving coefficient Lj of any one historical target is calculated, giving the moving coefficient set L = {L1, L2, …, Lk}; and the set of interval durations between the time points at which the k historical targets appeared in the monitoring range of the second monitoring equipment and the time points at which they disappeared from the monitoring range of the first monitoring equipment is collected as t = {t1, t2, …, tk}. Straight-line fitting is performed on the data points {(L1, t1), (L2, t2), …, (Lk, tk)} to establish the target appearance time pre-judgment model Y = αX + β, where α represents the slope of the target appearance time pre-judgment model, β represents the intercept, X represents the independent variable of the model, namely the moving coefficient, and Y represents the dependent variable of the model, namely the
interval duration. The moving coefficient of the current target is obtained as L′; letting X = L′, the interval duration between the time point at which the current target appears in the monitoring range of the second monitoring equipment and the time point at which it disappeared from the monitoring range of the first monitoring equipment is predicted as t′ = αL′ + β. The time point T at which the current target disappears from the monitoring range of the first monitoring equipment is obtained, and the starting time point of video data transmission is selected as T + t′: the video data after T + t′ in the video data monitored by the second monitoring equipment is intercepted and transmitted;
Considering that the amount of data the first monitoring equipment can receive is limited, if all video data monitored by the second monitoring equipment were transmitted to the first monitoring equipment, excessive invalid video data would exist and the speed of identifying targets in the video would be reduced; therefore, historical movement data of targets that moved into the monitoring range of the second monitoring equipment in the past are analyzed through big data technology, a target appearance time pre-judgment model is established, the current target movement data are substituted to pre-judge the appearance time of the current target in the monitoring range of the second monitoring equipment, and the video monitored by the second monitoring equipment is intercepted in advance before data transmission, so that invalid data in the video data are reduced and the speed of identifying targets in the video is increased.
Further, in S40: controlling a first monitoring device to receive intercepted video data, analyzing the video data by using the first monitoring device, extracting object features appearing in the video data by using a convolutional neural network, comparing the extracted object features with target features needing to be tracked currently, judging whether the object appearing in the video is a target needing to be tracked currently, and identifying and tracking the target;
Target recognition and tracking are completed at the front end by utilizing artificial intelligence technology, which improves the speed and accuracy of target recognition and tracking while reducing the video data storage pressure of the terminal.
Compared with the prior art, the invention has the following beneficial effects:
When the type of the monitoring equipment closest to the monitoring equipment identifying the current target to be tracked is monitoring equipment without the target identification function, the probability that historical targets which disappeared at different positions in the monitoring range of the first monitoring equipment moved to the second monitoring equipment is acquired and analyzed through big data; the deflection angles between the historical targets and the monitoring range of the second monitoring equipment are analyzed, and the deflection angles and probabilities in the historical data form training samples to establish a movement probability judgment model, which predicts whether the current target will move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment; if so, the video monitored by the second monitoring equipment is directly transmitted to the first monitoring equipment; predicting in advance whether the current target will move into the monitoring range of the second monitoring equipment is beneficial to reducing invalid linkage of the monitoring equipment, and directly transmitting the video data monitored by the monitoring equipment without the identification function to the monitoring equipment with the identification function is beneficial to quickly connecting the dynamic data of the target, helping to quickly grasp the dynamic state of the target;
Historical movement data of targets that moved into the monitoring range of the second monitoring equipment in the past are analyzed through big data technology, a target appearance time pre-judgment model is established, the current target movement data are substituted to pre-judge the appearance time of the current target in the monitoring range of the second monitoring equipment, and the video monitored by the second monitoring equipment is intercepted in advance before data transmission, so that the invalid video data received by the monitoring equipment with the identification function are reduced and the speed of identifying targets in the video is increased;
Target recognition and tracking are completed at the front end by utilizing artificial intelligence technology, which improves the speed and accuracy of target recognition and tracking while reducing the video data storage pressure of the terminal.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
FIG. 1 is a block diagram of an artificial intelligence based video data intelligent intercept and analyze system of the present invention;
FIG. 2 is a flow chart of an intelligent intercepting and analyzing method of video data based on artificial intelligence.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
The invention is further described below with reference to fig. 1-2 and the specific embodiments.
Embodiment one:
As shown in fig. 1, the present embodiment provides an artificial intelligence based video data intelligent interception and analysis system, which includes: the system comprises a monitoring information acquisition module, a data transmission analysis module, a video interception management module and a target identification tracking module; the monitoring information acquisition module is used for acquiring monitored historical video data, current video data and monitoring environment information; the data transmission analysis module is used for selecting whether to link the monitoring equipment or not by analyzing the moving data of the target in the video data and the monitoring environment information, and selecting a data transmission mode of the monitoring equipment; the video interception management module is used for intercepting video data monitored by monitoring equipment representing a transmitting party after the linkage monitoring equipment is selected, and transmitting the intercepted video data to the monitoring equipment representing a receiving party; the target recognition and tracking module is used for recognizing and tracking the target by using the monitoring equipment representing the receiver.
The monitoring information acquisition module comprises a video data acquisition unit, a target data acquisition unit and an environment data acquisition unit; the video data acquisition unit is used for acquiring video data of different targets which are monitored and tracked in the past by the monitoring equipment with the target identification function; the target data acquisition unit is used for acquiring target characteristic data to be tracked currently, and acquiring current target video data after the monitoring equipment with the target recognition function recognizes a target to be tracked currently; the environment data acquisition unit is used for acquiring the type, the monitoring range and the position information of the monitoring equipment which are nearest to the monitoring equipment for identifying the target which needs to be tracked currently, and the monitoring equipment type comprises two types of monitoring equipment with an identification function and monitoring equipment without a target identification function.
The data transmission analysis module comprises a mobile probability analysis unit and a video transmission selection unit; the mobile probability analysis unit is used for taking the monitoring equipment which identifies the target to be tracked currently as the first monitoring equipment, and taking the monitoring equipment which is closest to the first monitoring equipment and does not have the target recognition function as the second monitoring equipment; video data of different targets monitored and tracked in the past by the first monitoring equipment are retrieved, the moving tracks of the different targets monitored and tracked in the past before their disappearance in the monitoring range of the first monitoring equipment are acquired, sampling points are set at equal intervals on each moving track, and the sampling points are connected to construct moving vectors of the historical targets, wherein two adjacent sampling points are respectively the starting point and the end point of a random moving vector, and the historical targets are the different targets monitored and tracked in the past; the moving vector formed by the end point of the moving track of a target before it disappears in the monitoring range of the first monitoring equipment and the sampling point immediately preceding that end point is taken as the first moving vector; the center point of the monitoring range of the second monitoring equipment is confirmed, the end point of the moving track is connected with this center point to construct the second moving vector, the included angle between the first moving vector and the second moving vector is calculated and taken as the deflection angle between the historical target and the monitoring range of the second monitoring equipment, and the probability that historical targets corresponding to different deflection angles moved into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment in the past is analyzed; the video transmission selection unit is used for forming training samples from the deflection angles and probabilities, fitting the training samples, and establishing a movement probability judgment model; the video data of the current target are retrieved, the moving track of the current target before its disappearance in the monitoring range of the first monitoring equipment is acquired, the deflection angle between the current target and the monitoring range of the second monitoring equipment is analyzed and substituted into the movement probability judgment model, and the probability that the current target moves into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is predicted; a probability threshold is set and the predicted probability is compared with the probability threshold: if the predicted probability exceeds the probability threshold, it is predicted that the current target will move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, and the video data monitored by the second monitoring equipment are selected to be directly transmitted to the first monitoring equipment; if the predicted probability does not exceed the probability threshold, it is predicted that the current target will not move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, and the video data monitored by the second monitoring equipment are selected to be transmitted to the monitoring terminal.
The video interception management module comprises a monitoring equipment connection unit, a moving time prediction unit and a video interception transmission unit; the monitoring equipment connection unit is used for connecting the first monitoring equipment with the second monitoring equipment through a local area network if the current target is predicted to be moved into the monitoring range of the second monitoring equipment after being disappeared in the monitoring range of the first monitoring equipment, linking the monitoring equipment through the local area network by connecting the first monitoring equipment with the second monitoring equipment, directly transmitting video data monitored by the second monitoring equipment to the first monitoring equipment, wherein the first monitoring equipment is the monitoring equipment representing a receiver, and the second monitoring equipment is the monitoring equipment representing a transmission party; the mobile time prediction unit is used for analyzing the interval duration between the time point of the current target in the monitoring range of the second monitoring device and the time point of the current target in the monitoring range of the first monitoring device if the current target is predicted to be moved to the monitoring range of the second monitoring device after being disappeared in the monitoring range of the first monitoring device, and selecting the starting time point of video data transmission; the video interception and transmission unit is used for intercepting and processing video data directly transmitted to the first monitoring equipment: and intercepting video data after the starting time point in the video data monitored by the second monitoring equipment, and transmitting the intercepted video data to the first monitoring equipment.
The target recognition tracking module comprises a video data receiving unit and a target recognition unit; the video data receiving unit is used for controlling the first monitoring equipment to receive the intercepted video data; the target recognition unit is used for analyzing the video data by using the first monitoring equipment, extracting object features appearing in the video data by using an artificial intelligence technology, comparing the extracted object features with target features needing to be tracked currently, recognizing and tracking the target, and the first monitoring equipment has a target recognition function.
Embodiment two:
As shown in fig. 2, the present embodiment provides an artificial intelligence based video data intelligent interception and analysis method, which is implemented based on the analysis system in the embodiment, and specifically includes the following steps:
S10: collecting monitored historical video data, current video data and monitoring environment information, collecting video data of different targets monitored and tracked by monitoring equipment with a target recognition function in the past, obtaining target feature data required to be tracked currently, obtaining current target video data after the monitoring equipment with the target recognition function recognizes the target required to be tracked currently, and collecting monitoring equipment type, monitoring range and monitoring equipment position information closest to the monitoring equipment recognizing the target required to be tracked currently;
S20: linked monitoring equipment is selected by analyzing the movement data of targets in the video data and the monitoring environment information, and the data transmission mode of the monitoring equipment is selected. If the type of the monitoring equipment closest to the monitoring equipment identifying the target to be tracked currently is monitoring equipment without the target identification function, the monitoring equipment identifying the target to be tracked currently is taken as the first monitoring equipment, and the monitoring equipment closest to the first monitoring equipment without the target identification function is taken as the second monitoring equipment; video data of different targets monitored by the first monitoring equipment in the past are retrieved, the moving tracks of the different targets monitored and tracked in the past before disappearance in the monitoring range of the first monitoring equipment are acquired, sampling points are set at equal intervals on each moving track, and the sampling points are connected to construct moving vectors of the historical targets, wherein two adjacent sampling points are respectively the starting point and the end point of a random moving vector; the moving vector formed by the end point of the moving track of a target before it disappears in the monitoring range of the first monitoring equipment and the sampling point immediately preceding that end point is taken as the first moving vector, whose coordinates are acquired as (E, F); the monitoring range information of the second monitoring equipment is acquired and the center point of the monitoring range is confirmed, wherein the center point of the monitoring range of monitoring equipment refers to the center point of the plane monitored by the equipment, for example: if the monitoring range of the second monitoring equipment is a circular area with a point on the monitored plane as the center and radius r, that point is the center point of the monitoring range; the second moving vector is constructed by connecting the end point of the moving track with the center point of the monitoring range of the second monitoring equipment, and its coordinates are acquired as (e, f); a two-dimensional coordinate system is established with any point on the plane where the target moves as the origin, and the moving vector coordinates are then obtained, for example: if the tracked target is a vehicle, the two-dimensional coordinate system is established with any point on the ground where the vehicle runs as the origin, and the coordinate information is acquired; the included angle θ between the first moving vector and the second moving vector is calculated according to the formula θ = arccos[(E·e + F·f)/(√(E² + F²)·√(e² + f²))], and the calculated included angle is taken as the deflection angle between the historical target and the monitoring range of the second monitoring equipment, giving the deflection angle set θ = {θ1, θ2, …, θi, …, θn} of n different deflection angles, where i represents a random term and θi represents the i-th deflection angle in the set; whether each historical target appeared in the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is checked; the number of historical targets whose deflection angle with the monitoring range of the second monitoring equipment is θi is counted as Q, and the number of those Q historical targets which appeared in the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is counted as w; the probability Pi that the historical targets corresponding to the deflection angle θi moved into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is calculated according to the formula Pi = w/Q, giving the probability set P = {P1, P2, …, Pi, …, Pn}; the deflection angles and probabilities form the training samples {(θ1, P1), (θ2, P2), …, (θn, Pn)}, the training samples are fitted, and a movement probability judgment model is established: y = a·x + b, wherein a and b represent fitting coefficients, x represents the variable in the model representing the deflection angle, and y represents the variable in the model representing the probability; the current target video data are retrieved, the moving track of the current target before its disappearance in the monitoring range of the first monitoring equipment is acquired, and the deflection angle between the current target and the monitoring range of the second monitoring equipment is analyzed as θ′, the calculation mode of which is the same as that of the deflection angles above; θ′ is substituted into the movement probability judgment model: let x = θ′, and the probability that the current target moves into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is predicted as P′; the probability threshold is set as q, and P′ is compared with q: if P′ > q, it is predicted that the current target will move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, the linkage of the first monitoring equipment and the second monitoring equipment is selected, and the video data monitored by the second monitoring equipment are selected to be directly transmitted to the first monitoring equipment; if P′ ≤ q, it is predicted that the current target will not move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, and the video data monitored by the second monitoring equipment are selected to be transmitted to the monitoring terminal;
for example: the deflection angle set is obtained as θ = {0.35, 0.44, 0.17, 0.91, 0.79}, in radians; the numbers of historical targets counted for the respective deflection angles with the monitoring range of the second monitoring equipment are {15, 20, 7, 12, 9}, and the numbers of corresponding historical targets which appeared in the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment are {12, 15, 6, 4, 5}; the probabilities that the historical targets corresponding to the different deflection angles moved into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment are P = {P1, P2, P3, P4, P5} = {0.80, 0.75, 0.86, 0.33, 0.56}; the deflection angles and probabilities form the training samples {(0.35, 0.80), (0.44, 0.75), (0.17, 0.86), (0.91, 0.33), (0.79, 0.56)}, the training samples are fitted, and the movement probability judgment model y = a·x + b is established with the fitted coefficients a and b; the deflection angle between the current target and the monitoring range of the second monitoring equipment is obtained as θ′; letting x = θ′, the probability that the current target moves into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is predicted as 0.85; the probability threshold is set as q = 0.80, and the predicted probability is compared with q: since 0.85 > 0.80, it is predicted that the current target will move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, the linkage of the first monitoring equipment and the second monitoring equipment is selected, and the video data monitored by the second monitoring equipment are selected to be directly transmitted to the first monitoring equipment.
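The worked example above can be reproduced with an ordinary degree-1 least-squares fit. This is a sketch under stated assumptions: the specification does not name a fitting method, so ordinary least squares is used here, and the current target's deflection angle, which is elided in the text, is assumed to be 0.25 radians for illustration.

```python
import numpy as np

# Training samples from the worked example in S20:
# deflection angle (radians) -> probability of moving into the
# second monitoring equipment's range after disappearing.
angles = np.array([0.35, 0.44, 0.17, 0.91, 0.79])
probs  = np.array([0.80, 0.75, 0.86, 0.33, 0.56])

# Fit the movement probability judgment model y = a*x + b.
a, b = np.polyfit(angles, probs, 1)

def predict_link(current_angle, threshold=0.80):
    """Predict the movement probability for the current target and
    decide whether to link the first and second monitoring equipment."""
    p = a * current_angle + b
    return p, p > threshold

# Assumed deflection angle for the current target (the exact value
# is elided in the specification; 0.25 rad is an assumption).
p, link = predict_link(0.25)
print(round(p, 2), link)  # prints: 0.85 True
```

With these five samples the fit gives a negative slope, so a larger deflection angle lowers the predicted probability; at the assumed 0.25 rad the prediction is about 0.85, exceeding the 0.80 threshold, which matches the linkage decision in the example.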
S30: after the linked monitoring equipment is selected, the video data monitored by the monitoring equipment representing the transmitting party are intercepted, and the intercepted video data are transmitted to the monitoring equipment representing the receiving party; the first monitoring equipment and the second monitoring equipment are connected through a local area network, and the video data monitored by the second monitoring equipment are intercepted and then directly transmitted to the first monitoring equipment, wherein the interception processing mode is as follows: the number of historical targets which appeared in the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is retrieved and recorded as k; the set of straight-line distances from the end point of the moving track of each of the k historical targets before disappearance in the monitoring range of the first monitoring equipment to the center point of the monitoring range of the second monitoring equipment is collected as d = {d1, d2, …, dk}; the set of moving speeds of the k historical targets monitored by the first monitoring equipment is collected as v = {v1, v2, …, vk}; the movement coefficient Lj of a random historical target is calculated according to the formula Lj = dj/vj, giving the movement coefficient set L = {L1, L2, …, Lk}; the set of interval durations between the time point at which each of the k historical targets appeared in the monitoring range of the second monitoring equipment and the time point at which it disappeared from the monitoring range of the first monitoring equipment is collected as t = {t1, t2, …, tk}; straight-line fitting is performed on the data points {(L1, t1), (L2, t2), …, (Lk, tk)} to establish a target appearance time pre-judgment model Y = β1·X + β0, wherein β1 represents the slope of the target appearance time pre-judgment model, β0 represents its intercept, X represents the independent variable of the model, namely the movement coefficient, and Y represents the dependent variable of the model, namely the interval duration; the movement coefficient of the current target is obtained as L′, X = L′ is substituted, and the interval duration between the time point at which the current target appears in the monitoring range of the second monitoring equipment and the time point at which it disappeared from the monitoring range of the first monitoring equipment is predicted as Y′; the time point T at which the current target disappears from the monitoring range of the first monitoring equipment is obtained, and the starting time point of video data transmission is selected as T + Y′: the video data after the time point T + Y′ in the monitoring video data of the second monitoring equipment are intercepted;
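The target appearance time pre-judgment model in S30 can be sketched the same way. All numeric values below are assumptions chosen for illustration — the specification gives no concrete distances, speeds, or interval durations — and ordinary least squares again stands in for the unspecified straight-line fitting.

```python
import numpy as np

# Hypothetical historical data for k = 4 targets that reappeared in
# the second monitoring equipment's range (all values are assumed):
d = np.array([40.0, 25.0, 60.0, 30.0])  # distance to range center (m)
v = np.array([ 5.0,  2.5, 10.0,  3.0])  # moving speed (m/s)
t = np.array([ 9.0, 11.0,  7.0, 11.5])  # observed interval duration (s)

L = d / v  # movement coefficients Lj = dj / vj

# Straight-line fit of interval duration against movement coefficient:
# the target appearance time pre-judgment model Y = b1*X + b0.
b1, b0 = np.polyfit(L, t, 1)

# Current target: assumed movement coefficient L' and disappearance
# time point T within the first monitoring equipment's footage.
L_cur = 36.0 / 4.0        # assumed d' = 36 m, v' = 4 m/s
T = 100.0                 # assumed disappearance time point (s)
Y_cur = b1 * L_cur + b0   # predicted interval duration
start = T + Y_cur         # starting time point of video transmission
```

Only the footage after `start` would then be clipped from the second monitoring equipment's stream, which is what trims the invalid leading video before transmission.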
S40: the target is identified and tracked by using the monitoring equipment representing the receiving party; the first monitoring equipment is controlled to receive the intercepted video data, the video data are analyzed by using the first monitoring equipment, the object features appearing in the video data are extracted by using a convolutional neural network, the extracted object features are compared with the features of the target to be tracked currently, whether an object appearing in the video is the target to be tracked currently is judged, and the target is identified and tracked.
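The feature-comparison step in S40 can be sketched as follows. The specification does not name a comparison metric, so cosine similarity over CNN feature vectors is an assumption, and the vectors themselves are made-up stand-ins for what a convolutional network would produce upstream.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_current_target(object_feat, target_feat, threshold=0.9):
    """Decide whether an object appearing in the video matches the
    target to be tracked currently, comparing feature vectors that a
    CNN would have extracted upstream (threshold is an assumption)."""
    return cosine_similarity(object_feat, target_feat) >= threshold

# Assumed feature vectors (a real system would obtain these from the CNN).
target  = np.array([0.90, 0.10, 0.40])
object1 = np.array([0.88, 0.12, 0.41])  # near-identical appearance
object2 = np.array([0.10, 0.90, 0.20])  # clearly different appearance

print(is_current_target(object1, target))  # prints: True
print(is_current_target(object2, target))  # prints: False
```

Running this comparison on the first monitoring equipment itself is what lets recognition and tracking stay at the front end rather than at the terminal.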
Finally, it should be noted that: the foregoing is merely a preferred embodiment of the present invention, and the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments or make equivalent replacements of some of the technical features. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. An intelligent video data interception and analysis system based on artificial intelligence is characterized in that: the system comprises: the system comprises a monitoring information acquisition module, a data transmission analysis module, a video interception management module and a target identification tracking module;
the output end of the monitoring information acquisition module is connected with the input ends of the data transmission analysis module and the video interception management module, the output end of the data transmission analysis module is connected with the input end of the video interception management module, and the output end of the video interception management module is connected with the input end of the target identification tracking module;
the monitoring information acquisition module is used for acquiring monitored historical video data, current video data and monitoring environment information;
The data transmission analysis module is used for selecting whether to link the monitoring equipment or not by analyzing the moving data of the target in the video data and the monitoring environment information, and selecting a data transmission mode of the monitoring equipment;
The video interception management module is used for intercepting video data monitored by monitoring equipment representing a transmitting party after the linkage monitoring equipment is selected, and transmitting the intercepted video data to the monitoring equipment representing a receiving party;
The target recognition and tracking module is used for recognizing and tracking the target by using monitoring equipment representing the receiver.
2. The intelligent video data interception and analysis system based on artificial intelligence as claimed in claim 1, wherein: the monitoring information acquisition module comprises a video data acquisition unit, a target data acquisition unit and an environment data acquisition unit;
The video data acquisition unit is used for acquiring video data of different targets monitored and tracked in the past by the monitoring equipment with the target recognition function;
the target data acquisition unit is used for acquiring target characteristic data to be tracked currently, and acquiring current target video data after the monitoring equipment with the target recognition function recognizes a target to be tracked currently;
the environment data acquisition unit is used for acquiring the type, the monitoring range and the position information of the monitoring equipment nearest to the monitoring equipment for identifying the target needing to be tracked currently.
3. An artificial intelligence based video data intelligent intercepting and analyzing system according to claim 2, wherein: the data transmission analysis module comprises a mobile probability analysis unit and a video transmission selection unit;
The input end of the mobile probability analysis unit is connected with the output ends of the video data acquisition unit and the environment data acquisition unit, and the output end of the mobile probability analysis unit is connected with the input end of the video transmission selection unit;
The mobile probability analysis unit is used for taking the monitoring equipment which identifies the target to be tracked currently as the first monitoring equipment, and taking the monitoring equipment which is closest to the first monitoring equipment and does not have the target recognition function as the second monitoring equipment; retrieving video data of different targets monitored and tracked in the past by the first monitoring equipment, acquiring the moving tracks of the different targets monitored and tracked in the past before their disappearance in the monitoring range of the first monitoring equipment, setting sampling points at equal intervals on each moving track, connecting the sampling points and constructing moving vectors of the historical targets, wherein two adjacent sampling points are respectively the starting point and the end point of a random moving vector; taking the moving vector formed by the end point of the moving track of a target before it disappears in the monitoring range of the first monitoring equipment and the sampling point immediately preceding that end point as the first moving vector; acquiring the monitoring range information of the second monitoring equipment, connecting the end point of the moving track with the center point of the monitoring range to construct the second moving vector, calculating the included angle between the first moving vector and the second moving vector, taking the included angle as the deflection angle between the historical target and the monitoring range of the second monitoring equipment, and analyzing the probability that historical targets corresponding to different deflection angles moved into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment in the past;
The video transmission selection unit is used for forming training samples from the deflection angles and probabilities, fitting the training samples to establish a movement probability judgment model, calling the video data of the current target, acquiring the moving track of the current target before it disappears from the monitoring range of the first monitoring equipment, analyzing the deflection angle between the current target and the monitoring range of the second monitoring equipment, substituting the deflection angle into the movement probability judgment model, predicting the probability that the current target moves into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, setting a probability threshold, and comparing the predicted probability with the probability threshold: if the predicted probability exceeds the probability threshold, it is predicted that the current target will move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, and the video data monitored by the second monitoring equipment is selected to be transmitted directly to the first monitoring equipment; if the predicted probability does not exceed the probability threshold, it is predicted that the current target will not move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, and the video data monitored by the second monitoring equipment is selected to be transmitted to the monitoring terminal.
4. The intelligent video data interception and analysis system based on artificial intelligence according to claim 3, wherein: the video interception management module comprises a monitoring equipment connection unit, a moving time prediction unit and a video interception transmission unit;
the input end of the monitoring equipment connecting unit is connected with the output end of the video transmission selecting unit, the input end of the moving time predicting unit is connected with the output ends of the monitoring equipment connecting unit and the target data obtaining unit, and the output end of the moving time predicting unit is connected with the input end of the video intercepting and transmitting unit;
The monitoring equipment connection unit is used for connecting the first monitoring equipment with the second monitoring equipment through a local area network if it is predicted that the current target will move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, thereby linking the monitoring equipment so that the video data monitored by the second monitoring equipment can be transmitted directly to the first monitoring equipment, wherein the first monitoring equipment represents the receiving side and the second monitoring equipment represents the transmitting side;
The moving time prediction unit is used for analyzing, if it is predicted that the current target will move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, the interval duration between the time point at which the current target appears within the monitoring range of the second monitoring equipment and the time point at which it disappeared from the monitoring range of the first monitoring equipment, and for selecting the starting time point of video data transmission;
The video interception and transmission unit is used for intercepting and processing video data directly transmitted to the first monitoring equipment: and intercepting video data after the starting time point in the video data monitored by the second monitoring equipment, and transmitting the intercepted video data to the first monitoring equipment.
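The interception step amounts to keeping only the frames recorded at or after the selected starting time point. A minimal Python sketch, with illustrative frame and timestamp values that are not from the patent:

```python
def intercept_after(frames, timestamps, start_time):
    """Keep only the frames whose timestamp is at or after the
    selected starting time point of video data transmission."""
    return [f for f, t in zip(frames, timestamps) if t >= start_time]

# Illustrative monitored stream: four frames with their capture times.
frames = ["f0", "f1", "f2", "f3"]
timestamps = [0.0, 1.0, 2.0, 3.0]

# Intercept everything from t = 1.5 onward before transmission.
clipped = intercept_after(frames, timestamps, 1.5)  # ["f2", "f3"]
```

In a real pipeline the "frames" would be a video segment addressed by timestamp (e.g. a recording buffer), but the selection logic is the same.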
5. The intelligent video data interception and analysis system based on artificial intelligence as claimed in claim 4, wherein: the target recognition tracking module comprises a video data receiving unit and a target recognition unit;
the input end of the video data receiving unit is connected with the output end of the video intercepting and transmitting unit, and the output end of the video data receiving unit is connected with the input end of the target identifying unit;
the video data receiving unit is used for controlling the first monitoring equipment to receive the intercepted video data;
The target recognition unit is used for analyzing the video data by using the first monitoring equipment, extracting object features appearing in the video data by using the convolutional neural network, comparing the extracted object features with target features needing to be tracked currently, judging whether the object appearing in the video is a target needing to be tracked currently, and recognizing and tracking the target.
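The comparison between extracted object features and the tracked target's features can be sketched as a similarity test over feature vectors. This is an assumption for illustration: the patent does not specify the metric, and cosine similarity with a fixed threshold is one common choice; the vectors would come from a CNN feature extractor in practice.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def is_tracked_target(object_features, target_features, threshold=0.9):
    """Decide whether an object appearing in the video matches the
    target currently requiring tracking (threshold is illustrative)."""
    return cosine_similarity(object_features, target_features) >= threshold

# Illustrative feature vectors standing in for CNN embeddings.
match = is_tracked_target([0.2, 0.9, 0.1], [0.21, 0.88, 0.12])
```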
6. An intelligent video data interception and analysis method based on artificial intelligence is characterized in that: the method comprises the following steps:
s10: collecting monitored historical video data, current video data and monitoring environment information;
s20: selecting whether to link the monitoring equipment or not by analyzing the moving data of the target in the video data and the monitoring environment information, and selecting a data transmission mode of the monitoring equipment;
S30: after the linkage monitoring equipment is selected, intercepting video data monitored by the monitoring equipment representing a transmitting party, and transmitting the intercepted video data to the monitoring equipment representing a receiving party;
S40: the target is identified and tracked using a monitoring device on behalf of the recipient.
7. The intelligent video data interception and analysis method based on artificial intelligence according to claim 6, wherein the method comprises the following steps: in S10: video data of different targets monitored and tracked in the past by the monitoring equipment with the target recognition function are called, the feature data of the target currently requiring tracking are acquired, the current target video data are called after the monitoring equipment with the target recognition function recognizes the target currently requiring tracking, and the type, monitoring range and position information of the monitoring equipment closest to the monitoring equipment that recognized the target currently requiring tracking are acquired.
8. The intelligent video data interception and analysis method based on artificial intelligence according to claim 7, wherein the method comprises the following steps: in S20: if the type of the monitoring equipment closest to the monitoring equipment that recognizes the target currently requiring tracking is monitoring equipment without the target recognition function, the monitoring equipment that recognizes the target currently requiring tracking is used as first monitoring equipment, and the monitoring equipment that is closest to the first monitoring equipment and does not have the target recognition function is used as second monitoring equipment; the video data of different targets monitored and tracked in the past by the first monitoring equipment are called, the moving tracks of those historical targets before they disappeared from the monitoring range of the first monitoring equipment are acquired, sampling points are set at equal intervals on each moving track, and the sampling points are connected to construct the movement vectors of the historical targets, each pair of adjacent sampling points respectively forming the starting point and ending point of one movement vector; the movement vector formed by the end point of the moving track before the target disappeared from the monitoring range of the first monitoring equipment and the sampling point immediately preceding that end point is taken as the first movement vector, whose coordinates are acquired as (E, F); the monitoring range information of the second monitoring equipment is called, the centre point of the monitoring range is confirmed, and the end point of the moving track is connected with the centre point of the monitoring range to obtain the second movement vector, whose coordinates are acquired as (G, H); the included angle θ between the first movement vector and the second movement vector is calculated according to the coordinate formula θ = arccos((E·G + F·H) / (√(E² + F²)·√(G² + H²))); the calculated included angle is taken as the deflection angle between the historical target and the monitoring range of the second monitoring equipment, giving the deflection angle set θ = {θ1, θ2, …, θi, …, θn}, where n is the number of different deflection angles, i represents a random term, and θi represents the i-th deflection angle in the set θ; it is checked whether each historical target appeared within the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment; the number of historical targets whose deflection angle with the monitoring range of the second monitoring equipment is θi is counted as Q, the number of those Q historical targets that appeared within the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is counted as w, and the probability P_i that a historical target with deflection angle θi moves into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is calculated according to the formula P_i = w/Q, giving the probability set P = {P_1, P_2, …, P_i, …, P_n}.
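The per-angle statistic P_i = w/Q described above is a simple frequency count over the historical records. A minimal Python sketch, with an illustrative history and bucketed deflection angles (the bucketing itself is an assumption, since the claim counts targets sharing a deflection angle):

```python
from collections import defaultdict

def transition_probabilities(history):
    """history: iterable of (deflection_angle_bucket, appeared) pairs,
    where `appeared` is True if the historical target showed up in the
    second device's range after leaving the first device's range.
    Returns P_i = w / Q for each deflection-angle bucket."""
    totals = defaultdict(int)  # Q: targets per bucket
    hits = defaultdict(int)    # w: targets per bucket that reappeared
    for bucket, appeared in history:
        totals[bucket] += 1
        if appeared:
            hits[bucket] += 1
    return {b: hits[b] / totals[b] for b in totals}

# Illustrative history: angle buckets in degrees.
history = [(30, True), (30, True), (30, False), (60, False), (60, True)]
probs = transition_probabilities(history)  # {30: 2/3, 60: 1/2}
```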
9. The intelligent video data interception and analysis method based on artificial intelligence according to claim 8, wherein: the deflection angles and probabilities are combined into training samples {(θ1, P_1), (θ2, P_2), …, (θn, P_n)}, the training samples are fitted, and a movement probability judgment model is established: y = a·x + b, where a and b represent fitting coefficients, x represents the variable denoting the deflection angle in the model, and y represents the variable denoting the probability in the model; the current target video data are called, the moving track of the current target before it disappears from the monitoring range of the first monitoring equipment is acquired, and the deflection angle between the current target and the monitoring range of the second monitoring equipment is analyzed as θ'; θ' is substituted into the movement probability judgment model: letting x = θ', the probability that the current target moves into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is predicted as P'; a probability threshold q is set, and P' is compared with q: if P' > q, it is predicted that the current target will move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, linkage of the first monitoring equipment and the second monitoring equipment is selected, and the video data monitored by the second monitoring equipment are selected to be transmitted directly to the first monitoring equipment; if P' ≤ q, it is predicted that the current target will not move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, and the video data monitored by the second monitoring equipment are selected to be transmitted to the monitoring terminal.
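The fit-then-threshold step of this claim can be sketched as an ordinary least-squares line over (deflection angle, probability) samples followed by a comparison against q. The sample values, the linear model form, and the threshold are illustrative assumptions, not the patent's data:

```python
def fit_linear(samples):
    """Ordinary least-squares fit y = a*x + b over (x, y) samples."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Illustrative training samples: (deflection angle in degrees, probability).
samples = [(10, 0.9), (30, 0.7), (50, 0.5), (70, 0.3)]
a, b = fit_linear(samples)  # a = -0.01, b = 1.0 for this synthetic line

def predict(angle):
    """Movement probability judgment model: substitute x = angle."""
    return a * angle + b

q = 0.6                          # illustrative probability threshold
link_devices = predict(20) > q   # True -> transmit directly to first device
```

With these synthetic samples the fit recovers the line y = 1 - 0.01x exactly, so a current target with deflection angle 20° gets predicted probability 0.8 and the devices are linked.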
10. The intelligent video data interception and analysis method based on artificial intelligence according to claim 9, wherein: in S30: the first monitoring equipment and the second monitoring equipment are connected through a local area network, and the video data monitored by the second monitoring equipment are intercepted before being transmitted directly to the first monitoring equipment, the interception processing mode being as follows: the number of historical targets that appeared within the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is called as k; the set of straight-line distances from the end point positions of the moving tracks of the k historical targets before they disappeared from the monitoring range of the first monitoring equipment to the centre point of the monitoring range of the second monitoring equipment is collected as d = {d_1, d_2, …, d_k}; the set of moving speeds of the k historical targets monitored by the first monitoring equipment is collected as v = {v_1, v_2, …, v_k}; the movement coefficient L_j of a random historical target is calculated according to the formula L_j = d_j / v_j, giving the movement coefficient set L = {L_1, L_2, …, L_k}; the set of interval durations between the time points at which the k historical targets appeared within the monitoring range of the second monitoring equipment and the time points at which they disappeared from the monitoring range of the first monitoring equipment is collected as t = {t_1, t_2, …, t_k}; straight-line fitting is performed on the data points {(L_1, t_1), (L_2, t_2), …, (L_k, t_k)}, and a target appearance time pre-judgment model is established: Y = α·X + β, where α represents the slope of the target appearance time pre-judgment model, β represents the intercept, X represents the independent variable of the model, namely the movement coefficient, and Y represents the dependent variable of the model, namely the interval duration; the movement coefficient of the current target is obtained as L', and letting X = L', the interval duration between the time point at which the current target appears within the monitoring range of the second monitoring equipment and the time point at which it disappeared from the monitoring range of the first monitoring equipment is predicted as T'; the time point T at which the current target disappeared from the monitoring range of the first monitoring equipment is obtained, and the starting time point of video data transmission is selected as T + T': the video data after T + T' in the video data monitored by the second monitoring equipment are intercepted;
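The appearance-time pre-judgment of this claim, L_j = d_j / v_j followed by a straight-line fit of interval duration against movement coefficient, can be sketched as below. The distances, speeds, and durations are illustrative values, not the patent's data:

```python
def movement_coefficients(distances, speeds):
    """L_j = d_j / v_j: straight-line distance to the second range's
    centre over the target's observed moving speed."""
    return [d / v for d, v in zip(distances, speeds)]

def fit_line(xs, ys):
    """Least-squares line Y = alpha*X + beta (the target appearance
    time pre-judgment model)."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    alpha = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    beta = (sy - alpha * sx) / n
    return alpha, beta

d = [10.0, 20.0, 30.0]   # metres from track end to second range's centre
v = [2.0, 2.0, 2.0]      # observed moving speeds (m/s)
t = [6.0, 11.0, 16.0]    # observed interval durations (s)

L = movement_coefficients(d, v)   # [5.0, 10.0, 15.0]
alpha, beta = fit_line(L, t)      # exact fit here: t = 1.0 * L + 1.0

T_disappear = 100.0               # time target left the first range
# Predicted start of interception: T + T' with X = L' = 12.0
start_time = T_disappear + alpha * 12.0 + beta  # 113.0
```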
in S40: and controlling the first monitoring equipment to receive the intercepted video data, analyzing the video data by using the first monitoring equipment, extracting object features appearing in the video data by using a convolutional neural network, comparing the extracted object features with target features needing to be tracked currently, judging whether the object appearing in the video is a target needing to be tracked currently, and identifying and tracking the target.
CN202410386687.0A 2024-04-01 2024-04-01 Intelligent video data interception and analysis system and method based on artificial intelligence Active CN118301287B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410386687.0A CN118301287B (en) 2024-04-01 2024-04-01 Intelligent video data interception and analysis system and method based on artificial intelligence

Publications (2)

Publication Number Publication Date
CN118301287A true CN118301287A (en) 2024-07-05
CN118301287B CN118301287B (en) 2024-09-10

Family

ID=91675407

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202410386687.0A Active CN118301287B (en) 2024-04-01 2024-04-01 Intelligent video data interception and analysis system and method based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN118301287B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011107839A (en) * 2009-11-13 2011-06-02 Fujitsu Ltd Tracking method, monitoring system, and program
CN111836009A (en) * 2020-06-18 2020-10-27 浙江大华技术股份有限公司 Method for tracking target by multiple cameras, electronic equipment and storage medium
CN111935450A (en) * 2020-07-15 2020-11-13 长江大学 Intelligent suspect tracking method and system and computer readable storage medium
CN112132315A (en) * 2020-08-18 2020-12-25 华为技术有限公司 Escape route prediction method and deployment and control platform of target object
CN112802058A (en) * 2021-01-21 2021-05-14 北京首都机场航空安保有限公司 Method and device for tracking illegal moving target
CN113438450A (en) * 2021-06-11 2021-09-24 深圳市大工创新技术有限公司 Dynamic target tracking monitoring method, monitoring system, electronic device and storage medium
US20220180641A1 (en) * 2020-12-07 2022-06-09 Vivotek Inc. Object counting method and surveillance camera
CN114615474A (en) * 2022-03-24 2022-06-10 杭州登虹科技有限公司 Monitoring system for distributed storage monitoring video
CN114979561A (en) * 2022-04-07 2022-08-30 河南工学院 A monitoring and identification system for hotel front desk
CN115760910A (en) * 2022-10-20 2023-03-07 浙江大华技术股份有限公司 Target tracking method and device of gun and ball linkage equipment, terminal and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邬舒益: "Research and Development of Campus Anti-terrorism System Software Based on Target Detection and Path Prediction", China Master's Theses Full-text Database, 1 March 2021 (2021-03-01) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant