Disclosure of Invention
The invention aims to provide an intelligent video data interception and analysis system and method based on artificial intelligence, so as to solve the problems described in the background art.
In order to solve the above technical problems, the invention provides the following technical scheme: an intelligent video data interception and analysis system based on artificial intelligence, the system comprising: a monitoring information acquisition module, a data transmission analysis module, a video interception management module and a target identification tracking module;
the output end of the monitoring information acquisition module is connected with the input ends of the data transmission analysis module and the video interception management module, the output end of the data transmission analysis module is connected with the input end of the video interception management module, and the output end of the video interception management module is connected with the input end of the target identification tracking module;
the monitoring information acquisition module is used for acquiring monitored historical video data, current video data and monitoring environment information;
The data transmission analysis module is used for selecting whether to link the monitoring equipment or not by analyzing the moving data of the target in the video data and the monitoring environment information, and selecting a data transmission mode of the monitoring equipment;
The video interception management module is used for intercepting video data monitored by monitoring equipment representing a transmitting party after the linkage monitoring equipment is selected, and transmitting the intercepted video data to the monitoring equipment representing a receiving party;
The target recognition and tracking module is used for recognizing and tracking the target by using monitoring equipment representing the receiver.
Further, the monitoring information acquisition module comprises a video data acquisition unit, a target data acquisition unit and an environment data acquisition unit;
The video data acquisition unit is used for acquiring video data of different targets monitored and tracked in the past by the monitoring equipment with the target recognition function;
the target data acquisition unit is used for acquiring target characteristic data to be tracked currently, and acquiring current target video data after the monitoring equipment with the target recognition function recognizes a target to be tracked currently;
The environment data acquisition unit is used for acquiring the type, monitoring range and position information of the monitoring equipment closest to the monitoring equipment identifying the target currently to be tracked, the monitoring equipment type comprising two types: monitoring equipment with a target recognition function and monitoring equipment without a target recognition function.
Further, the data transmission analysis module comprises a mobile probability analysis unit and a video transmission selection unit;
The input end of the mobile probability analysis unit is connected with the output ends of the video data acquisition unit and the environment data acquisition unit, and the output end of the mobile probability analysis unit is connected with the input end of the video transmission selection unit;
The mobile probability analysis unit is used for taking the monitoring equipment that identifies the target currently to be tracked as the first monitoring equipment, and taking the monitoring equipment closest to the first monitoring equipment and without the target recognition function as the second monitoring equipment; retrieving the video data of the different targets monitored and tracked by the first monitoring equipment in the past, and acquiring the movement track of each such target before it disappeared from the monitoring range of the first monitoring equipment; setting sampling points at equal intervals on the movement track and connecting the sampling points to construct the movement vectors of a historical target, wherein two adjacent sampling points are respectively the starting point and the ending point of any one movement vector so formed, and the historical targets are the different targets monitored and tracked in the past; taking the movement vector formed by the end point of the movement track before the target disappeared from the monitoring range of the first monitoring equipment and the sampling point immediately preceding that end point as the first movement vector; confirming the center point of the monitoring range of the second monitoring equipment, and taking the vector connecting the end point of the movement track with that center point as the second movement vector; calculating the included angle between the first movement vector and the second movement vector, and taking this included angle as the deflection angle between the historical target and the monitoring range of the second monitoring equipment;
analyzing the probability that the historical targets corresponding to different deflection angles moved into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment in the past;
The video transmission selection unit is used for forming training samples from the deflection angles and probabilities, fitting the training samples and establishing a movement probability judgment model; retrieving the video data of the current target, acquiring the movement track of the current target up to the point where it disappeared from the monitoring range of the first monitoring device, analyzing the deflection angle between the current target and the monitoring range of the second monitoring device, and substituting the deflection angle into the movement probability judgment model to predict the probability that the current target moves into the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device; setting a probability threshold and comparing the predicted probability with it: if the predicted probability exceeds the probability threshold, it is predicted that the current target will move into the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device, and the video data monitored by the second monitoring device is selected for direct transmission to the first monitoring device; if the predicted probability does not exceed the probability threshold, it is predicted that the current target will not move into the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device, and the video data monitored by the second monitoring device is selected for transmission to the monitoring terminal.
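The fit-and-threshold selection logic described above can be sketched as follows; the linear form of the fitted model, the function names and all sample values are illustrative assumptions rather than part of the claimed system:

```python
def fit_movement_probability_model(angles, probabilities):
    """Least-squares line y = a*x + b fitted to (deflection angle, probability) samples."""
    n = len(angles)
    mean_x = sum(angles) / n
    mean_y = sum(probabilities) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(angles, probabilities)) / \
        sum((x - mean_x) ** 2 for x in angles)
    b = mean_y - a * mean_x
    return a, b

def select_transmission(angle, a, b, threshold):
    """Return 'link' to transmit second-device video directly to the first device,
    or 'terminal' to transmit it to the monitoring terminal instead."""
    predicted = a * angle + b
    return "link" if predicted > threshold else "terminal"

# Illustrative history: smaller deflection angles were more often followed by
# the target entering the second device's monitoring range.
angles = [0.1, 0.4, 0.8, 1.2, 1.5]          # deflection angles, radians
probs  = [0.95, 0.80, 0.55, 0.30, 0.10]     # observed movement probabilities
a, b = fit_movement_probability_model(angles, probs)
print(select_transmission(0.2, a, b, threshold=0.5))   # small angle -> "link"
print(select_transmission(1.4, a, b, threshold=0.5))   # large angle -> "terminal"
```

A logistic or other monotone fit could equally serve; the document only requires that the fitted model map deflection angle to probability.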
Further, the video interception management module comprises a monitoring equipment connection unit, a moving time prediction unit and a video interception transmission unit;
the input end of the monitoring equipment connecting unit is connected with the output end of the video transmission selecting unit, the input end of the moving time predicting unit is connected with the output ends of the monitoring equipment connecting unit and the target data obtaining unit, and the output end of the moving time predicting unit is connected with the input end of the video intercepting and transmitting unit;
The monitoring equipment connection unit is used for connecting the first monitoring equipment with the second monitoring equipment through a local area network if it is predicted that the current target will move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment; by connecting the first monitoring equipment with the second monitoring equipment through the local area network, the monitoring equipment is linked and the video data monitored by the second monitoring equipment is transmitted directly to the first monitoring equipment, wherein the first monitoring equipment is the monitoring equipment representing the receiving party and the second monitoring equipment is the monitoring equipment representing the transmitting party;
The moving time prediction unit is used for analyzing the interval duration between the time point at which the current target appears in the monitoring range of the second monitoring equipment and the time point at which it disappeared from the monitoring range of the first monitoring equipment, if it is predicted that the current target will move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, and for selecting the starting time point of video data transmission;
The video interception and transmission unit is used for intercepting and processing video data directly transmitted to the first monitoring equipment: and intercepting video data after the starting time point in the video data monitored by the second monitoring equipment, and transmitting the intercepted video data to the first monitoring equipment.
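The interception step can be illustrated with a minimal sketch; the representation of footage as (timestamp, payload) pairs and the sample values are assumptions for illustration:

```python
def intercept_video(frames, start_time):
    """Keep only the frames at or after the selected transmission start time.
    `frames` is a list of (timestamp_seconds, frame_payload) pairs, assumed
    sorted by timestamp; this pair representation is an illustrative assumption."""
    return [(t, f) for t, f in frames if t >= start_time]

# Target disappeared from the first device at T = 100 s; the predicted interval
# before it appears in the second device's range is T' = 6 s, so transmission
# starts at T + T' = 106 s.
frames = [(t, f"frame-{t}") for t in range(100, 112)]
clipped = intercept_video(frames, start_time=100 + 6)
print(clipped[0])   # (106, 'frame-106')
```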
Further, the target recognition tracking module comprises a video data receiving unit and a target recognition unit;
the input end of the video data receiving unit is connected with the output end of the video intercepting and transmitting unit, and the output end of the video data receiving unit is connected with the input end of the target identifying unit;
the video data receiving unit is used for controlling the first monitoring equipment to receive the intercepted video data;
The target recognition unit is used for analyzing video data by using first monitoring equipment and extracting object features appearing in the video data by using an artificial intelligence technology, comparing the extracted object features with target features needing to be tracked currently, recognizing and tracking targets, and the first monitoring equipment has a target recognition function.
An intelligent video data interception and analysis method based on artificial intelligence comprises the following steps:
S10: collecting monitored historical video data, current video data and monitoring environment information;
S20: selecting whether to link the monitoring equipment by analyzing the movement data of the target in the video data and the monitoring environment information, and selecting a data transmission mode of the monitoring equipment;
S30: after the linkage monitoring equipment is selected, intercepting video data monitored by the monitoring equipment representing a transmitting party, and transmitting the intercepted video data to the monitoring equipment representing a receiving party;
S40: identifying and tracking the target using the monitoring equipment representing the receiving party.
Further, in S10: video data of the different targets monitored and tracked in the past by the monitoring equipment with the target recognition function is acquired; the feature data of the target currently to be tracked is acquired, and the current target video data is acquired after the monitoring equipment with the target recognition function recognizes the target currently to be tracked; the type, monitoring range and position information of the monitoring equipment closest to the monitoring equipment recognizing the target currently to be tracked are acquired.
Further, in S20: if the type of the monitoring equipment closest to the monitoring equipment identifying the target to be tracked is monitoring equipment without the target recognition function, the monitoring equipment identifying the target to be tracked is taken as the first monitoring equipment, and the monitoring equipment closest to the first monitoring equipment and without the target recognition function is taken as the second monitoring equipment; the video data of the different targets monitored and tracked by the first monitoring equipment in the past is acquired, and the movement track of each such target before it disappeared from the monitoring range of the first monitoring equipment is acquired; sampling points are set at equal intervals on the movement track and connected to construct the movement vectors of the historical target, two adjacent sampling points being respectively the starting point and the ending point of any one movement vector so formed; the movement vector formed by the end point of the movement track before the target disappeared from the monitoring range of the first monitoring equipment and the sampling point immediately preceding that end point is taken as the first movement vector, whose coordinates are acquired as (E, F); the monitoring range information of the second monitoring equipment is acquired and the center point of the monitoring range is confirmed; the vector connecting the end point of the movement track with the center point of the monitoring range is taken as the second movement vector, whose coordinates are acquired as (G, H); according to the coordinate formula θ = arccos[(E·G + F·H)/(√(E² + F²)·√(G² + H²))], the included angle θ between the first movement vector and the second movement vector is calculated; the calculated included angle is taken as the deflection angle between the historical target and the monitoring range of the second monitoring equipment, giving a deflection angle set A = {θ1, θ2, …, θi, …, θn} of n different deflection angles, where i represents a random term and θi represents the i-th deflection angle in the set A; whether each historical target appeared in the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is checked; the number of historical targets whose deflection angle with the monitoring range of the second monitoring equipment is θi is counted as Q, and the number among those Q historical targets that appeared in the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is counted as w; according to the formula Pi = w/Q, the probability Pi that the historical targets corresponding to the deflection angle θi moved into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is calculated, giving the probability set P = {P1, P2, …, Pi, …, Pn}; the deflection angles and probabilities form the training samples {(θ1, P1), (θ2, P2), …, (θn, Pn)}, which are fitted to establish the movement probability judgment model y = a·x + b, where a and b represent fitting coefficients, x represents the variable standing for the deflection angle in the model, and y represents the variable standing for the probability in the model; the current target video data is retrieved, the movement track of the current target up to the point where it disappeared from the monitoring range of the first monitoring equipment is acquired, and the deflection angle θ′ between the current target and the monitoring range of the second monitoring equipment is obtained by analysis, its calculation being the same as the deflection angle calculation above; θ′ is substituted into the movement probability judgment model: letting x = θ′, the probability that the current target moves into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is predicted as P′ = a·θ′ + b; a probability threshold q is set and P′ is compared with q: if P′ > q, it is predicted that the current target will move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, linkage of the first monitoring equipment and the second monitoring equipment is selected, and the video data monitored by the second monitoring equipment is selected for direct transmission to the first monitoring equipment; if P′ ≤ q, it is predicted that the current target will not move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment, and the video data monitored by the second monitoring equipment is selected for transmission to the monitoring terminal;
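The included-angle calculation between the first and second movement vectors can be sketched via the standard dot-product formula; denoting the second vector's coordinates as (G, H) is an assumption for illustration:

```python
import math

def deflection_angle(e, f, g, h):
    """Included angle (radians) between the first movement vector (E, F) and the
    second movement vector, here denoted (G, H), via the dot-product formula:
    theta = arccos[(E*G + F*H) / (|(E, F)| * |(G, H)|)]."""
    dot = e * g + f * h
    norm = math.hypot(e, f) * math.hypot(g, h)
    return math.acos(dot / norm)

# First movement vector points along +x; second vector points 45 degrees above it.
theta = deflection_angle(1.0, 0.0, 1.0, 1.0)
print(round(math.degrees(theta)))   # 45
```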
When the type of the monitoring equipment closest to the monitoring equipment identifying the current target to be tracked is monitoring equipment without the target recognition function, the probability that historical targets which disappeared at different positions in the monitoring range of the first monitoring equipment moved to the second monitoring equipment is acquired and analyzed through big data; the deflection angles between the historical targets and the monitoring range of the second monitoring equipment are analyzed, and the proportion of targets corresponding to different deflection angles that moved to the second monitoring equipment in the past is counted; the deflection angles and probabilities in the historical data form training samples to establish the movement probability judgment model, and the current target data is substituted to predict whether the current target will move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment; if so, the video monitored by the second monitoring equipment is transmitted directly to the first monitoring equipment. Predicting in advance whether the current target will move into the monitoring range of the second monitoring equipment helps reduce invalid linkage of the monitoring equipment, while transmitting the video data monitored by the equipment without the recognition function directly to the equipment with the recognition function helps to quickly connect the dynamic data of the target and thus to quickly grasp the target's movements.
Further, in S30: the first monitoring equipment and the second monitoring equipment are connected through a local area network, and the video data monitored by the second monitoring equipment is intercepted before being transmitted directly to the first monitoring equipment, the interception being processed as follows: the number of historical targets that appeared in the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is retrieved as k; the set of straight-line distances from the end point of the movement track of each of the k historical targets before disappearing from the monitoring range of the first monitoring equipment to the center point of the monitoring range of the second monitoring equipment is collected as d = {d1, d2, …, dk}; the set of movement speeds of the k historical targets monitored by the first monitoring equipment is collected as v = {v1, v2, …, vk}; the movement coefficient Lj of a random historical target is calculated according to the formula Lj = dj/vj, giving the movement coefficient set L = {L1, L2, …, Lk}; the set of interval durations between the time point at which each of the k historical targets appeared in the monitoring range of the second monitoring equipment and the time point at which it disappeared from the monitoring range of the first monitoring equipment is collected as t = {t1, t2, …, tk}; a straight line is fitted to the data points {(L1, t1), (L2, t2), …, (Lk, tk)} to establish the target appearance time pre-judgment model Y = α·X + β, where α and β respectively represent the slope and the intercept of the target appearance time pre-judgment model, X represents the independent variable of the model, namely the movement coefficient, and Y represents the dependent variable of the model, namely the interval duration; the movement coefficient of the current target is obtained as L′; letting X = L′, the interval duration between the time point at which the current target appears in the monitoring range of the second monitoring equipment and the time point at which it disappeared from the monitoring range of the first monitoring equipment is predicted as T′ = α·L′ + β; the time point T at which the current target disappeared from the monitoring range of the first monitoring equipment is acquired, and the starting time point of video data transmission is selected as T + T′: the video data after T + T′ in the video data monitored by the second monitoring equipment is intercepted;
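The movement-coefficient fit and start-time prediction can be sketched as follows; the least-squares fitting routine and all sample values are illustrative assumptions:

```python
def fit_time_model(coeffs, intervals):
    """Least-squares line Y = alpha*X + beta over (movement coefficient, interval) points."""
    n = len(coeffs)
    mx = sum(coeffs) / n
    my = sum(intervals) / n
    alpha = sum((x - mx) * (y - my) for x, y in zip(coeffs, intervals)) / \
            sum((x - mx) ** 2 for x in coeffs)
    beta = my - alpha * mx
    return alpha, beta

# Illustrative history: straight-line distance to the second device's center
# point, and movement speed, for k = 3 historical targets.
distances = [12.0, 20.0, 30.0]                           # metres
speeds    = [2.0, 2.0, 3.0]                              # metres per second
coeffs    = [d / v for d, v in zip(distances, speeds)]   # L_j = d_j / v_j
intervals = [6.5, 10.0, 10.5]                            # seconds until appearance

alpha, beta = fit_time_model(coeffs, intervals)
T = 100.0                      # time the current target disappeared (seconds)
L_current = 16.0 / 2.0         # current target's movement coefficient L'
start_time = T + (alpha * L_current + beta)   # T + T', start of transmission
```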
Considering that the amount of data the first monitoring equipment can receive is limited, if all video data monitored by the second monitoring equipment were transmitted to the first monitoring equipment, there would be excessive invalid video data and the speed of identifying the target in the video would be reduced. Therefore, the movement data of historical targets that moved into the monitoring range of the second monitoring equipment in the past is analyzed through big data technology, the target appearance time pre-judgment model is established, and the current target's movement data is substituted to pre-judge when the current target will appear in the monitoring range of the second monitoring equipment; the video monitored by the second monitoring equipment is intercepted in advance before the data is transmitted, which reduces the invalid data in the video data and increases the speed of identifying the target in the video.
Further, in S40: controlling a first monitoring device to receive intercepted video data, analyzing the video data by using the first monitoring device, extracting object features appearing in the video data by using a convolutional neural network, comparing the extracted object features with target features needing to be tracked currently, judging whether the object appearing in the video is a target needing to be tracked currently, and identifying and tracking the target;
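The feature comparison step can be illustrated with a minimal sketch using cosine similarity between feature vectors; in practice the features would be CNN embeddings, and the vectors and threshold used here are assumptions for illustration:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (e.g. CNN embeddings)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def is_tracked_target(object_features, target_features, threshold=0.9):
    """Judge whether an object appearing in the video is the target to be tracked,
    by comparing its extracted features with the tracked target's features."""
    return cosine_similarity(object_features, target_features) >= threshold

target = [0.9, 0.1, 0.4]                           # features of the tracked target
print(is_tracked_target([0.88, 0.12, 0.41], target))   # True: near-identical features
print(is_tracked_target([0.1, 0.9, 0.2], target))      # False: dissimilar features
```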
The target recognition and tracking is completed at the front end by utilizing the artificial intelligence technology, so that the speed and accuracy of target recognition and tracking are improved, and meanwhile, the video data storage pressure of the terminal is reduced.
Compared with the prior art, the invention has the following beneficial effects:
When the type of the monitoring equipment closest to the monitoring equipment identifying the current target to be tracked is monitoring equipment without the target recognition function, the probability that historical targets which disappeared at different positions in the monitoring range of the first monitoring equipment moved to the second monitoring equipment is acquired and analyzed through big data, the deflection angles between the historical targets and the monitoring range of the second monitoring equipment are analyzed, and the deflection angles and probabilities in the historical data form training samples to establish the movement probability judgment model; whether the current target will move into the monitoring range of the second monitoring equipment after disappearing from the monitoring range of the first monitoring equipment is predicted, and if so, the video monitored by the second monitoring equipment is transmitted directly to the first monitoring equipment; predicting in advance whether the current target will move into the monitoring range of the second monitoring equipment helps reduce invalid linkage of the monitoring equipment, and transmitting the video data monitored by the equipment without the recognition function directly to the equipment with the recognition function helps to quickly connect the dynamic data of the target and thus to quickly grasp the target's movements;
The movement data of historical targets that moved into the monitoring range of the second monitoring equipment in the past is analyzed through big data technology, the target appearance time pre-judgment model is established, and the current target's movement data is substituted to pre-judge when the current target will appear in the monitoring range of the second monitoring equipment; the video monitored by the second monitoring equipment is intercepted in advance before the data is transmitted, which reduces the invalid video data received by the monitoring equipment with the recognition function and increases the speed of identifying the target in the video;
The target recognition and tracking is completed at the front end by utilizing the artificial intelligence technology, so that the speed and accuracy of target recognition and tracking are improved, and meanwhile, the video data storage pressure of the terminal is reduced.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
The invention is further described below with reference to fig. 1-2 and the specific embodiments.
Embodiment one:
As shown in fig. 1, the present embodiment provides an artificial intelligence based video data intelligent interception and analysis system, which includes: the system comprises a monitoring information acquisition module, a data transmission analysis module, a video interception management module and a target identification tracking module; the monitoring information acquisition module is used for acquiring monitored historical video data, current video data and monitoring environment information; the data transmission analysis module is used for selecting whether to link the monitoring equipment or not by analyzing the moving data of the target in the video data and the monitoring environment information, and selecting a data transmission mode of the monitoring equipment; the video interception management module is used for intercepting video data monitored by monitoring equipment representing a transmitting party after the linkage monitoring equipment is selected, and transmitting the intercepted video data to the monitoring equipment representing a receiving party; the target recognition and tracking module is used for recognizing and tracking the target by using the monitoring equipment representing the receiver.
The monitoring information acquisition module comprises a video data acquisition unit, a target data acquisition unit and an environment data acquisition unit; the video data acquisition unit is used for acquiring video data of different targets which are monitored and tracked in the past by the monitoring equipment with the target identification function; the target data acquisition unit is used for acquiring target characteristic data to be tracked currently, and acquiring current target video data after the monitoring equipment with the target recognition function recognizes a target to be tracked currently; the environment data acquisition unit is used for acquiring the type, the monitoring range and the position information of the monitoring equipment which are nearest to the monitoring equipment for identifying the target which needs to be tracked currently, and the monitoring equipment type comprises two types of monitoring equipment with an identification function and monitoring equipment without a target identification function.
The data transmission analysis module comprises a movement probability analysis unit and a video transmission selection unit. The movement probability analysis unit is used for taking the monitoring device that has a target recognition function and has recognized the target currently to be tracked as the first monitoring device, and taking the monitoring device closest to the first monitoring device that does not have a target recognition function as the second monitoring device; calling video data of the different targets monitored and tracked by the first monitoring device in the past; obtaining the movement track of each such target before it disappeared from the monitoring range of the first monitoring device; setting sampling points on the movement track at equal intervals and connecting the sampling points to construct movement vectors of the historical targets, wherein two adjacent sampling points are respectively the start point and end point of one movement vector, and the historical targets are the different targets monitored and tracked in the past; taking the movement vector formed from the sampling point immediately preceding the end point of the movement track to the end point itself as the first movement vector; confirming the center point of the monitoring range of the second monitoring device, and taking the movement vector formed by connecting the end point of the movement track with that center point as the second movement vector; calculating the included angle between the first movement vector and the second movement vector as the deflection angle between the historical target and the monitoring range of the second monitoring device; and analyzing the probability that historical targets corresponding to different deflection angles moved into the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device. The video transmission selection unit is used for forming training samples from the deflection angles and probabilities, fitting the training samples to establish a movement probability judgment model, calling the video data of the current target, obtaining the movement track of the current target before it disappears from the monitoring range of the first monitoring device, analyzing the deflection angle between the current target and the monitoring range of the second monitoring device, substituting this deflection angle into the movement probability judgment model, and predicting the probability that the current target moves into the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device. A probability threshold is set and the predicted probability is compared with it: if the predicted probability exceeds the threshold, the current target is predicted to move into the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device, and the video data monitored by the second monitoring device is selected to be transmitted directly to the first monitoring device; if the predicted probability does not exceed the threshold, the current target is predicted not to move into the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device, and the video data monitored by the second monitoring device is selected to be transmitted to the monitoring terminal.
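As an illustrative sketch only (the method does not prescribe an implementation), the vector construction described above can be expressed as follows; the function names and the point-pair input format are hypothetical:

```python
import math

def movement_vectors(track_points, second_center):
    """Build the two movement vectors from a historical target's track.

    track_points:  equally spaced sampling points of the movement track,
                   ending at the last position observed before the target
                   left the first device's monitoring range.
    second_center: center point of the second device's monitoring range.
    """
    # First movement vector: from the sampling point immediately before
    # the track end point to the end point itself.
    (x1, y1), (x2, y2) = track_points[-2], track_points[-1]
    first_vec = (x2 - x1, y2 - y1)
    # Second movement vector: from the track end point to the center of
    # the second device's monitoring range.
    cx, cy = second_center
    second_vec = (cx - x2, cy - y2)
    return first_vec, second_vec

def deflection_angle(v, w):
    """Included angle (radians) between two movement vectors."""
    dot = v[0] * w[0] + v[1] * w[1]
    norm = math.hypot(*v) * math.hypot(*w)
    # Clamp to [-1, 1] to guard against floating-point drift.
    return math.acos(max(-1.0, min(1.0, dot / norm)))
```

A target moving straight toward the center of the second range yields a deflection angle of 0, while a track perpendicular to that direction yields π/2.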
The video interception management module comprises a monitoring device connection unit, a movement time prediction unit and a video interception transmission unit. The monitoring device connection unit is used for, if the current target is predicted to move into the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device, connecting the first monitoring device with the second monitoring device through a local area network, linking the monitoring devices through this connection, and transmitting the video data monitored by the second monitoring device directly to the first monitoring device, wherein the first monitoring device is the monitoring device representing the receiving party and the second monitoring device is the monitoring device representing the transmitting party. The movement time prediction unit is used for, under the same prediction, analyzing the interval duration between the time point at which the current target appears in the monitoring range of the second monitoring device and the time point at which it disappeared from the monitoring range of the first monitoring device, and selecting the starting time point of video data transmission. The video interception transmission unit is used for intercepting the video data transmitted directly to the first monitoring device: the video data after the starting time point in the video data monitored by the second monitoring device is intercepted, and the intercepted video data is transmitted to the first monitoring device.
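The interception step amounts to keeping only the frames recorded at or after the chosen starting time point. A minimal sketch, assuming video data is represented as (timestamp, frame) pairs (a hypothetical representation, not specified by the method):

```python
def intercept_after(frames, start_time):
    """Keep only the frames recorded at or after the starting time point.

    frames:     iterable of (timestamp, frame) pairs from the second
                monitoring device.
    start_time: starting time point selected by the movement time
                prediction unit.
    """
    return [(t, f) for (t, f) in frames if t >= start_time]
```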
The target recognition tracking module comprises a video data receiving unit and a target recognition unit. The video data receiving unit is used for controlling the first monitoring device to receive the intercepted video data. The target recognition unit is used for analyzing the video data with the first monitoring device, which has a target recognition function: object features appearing in the video data are extracted using artificial intelligence technology and compared with the features of the target currently to be tracked, so as to recognize and track the target.
Embodiment two:
As shown in fig. 2, the present embodiment provides an artificial intelligence based intelligent video data interception and analysis method, which is implemented on the basis of the analysis system in Embodiment 1 and specifically comprises the following steps:
S10: collecting monitored historical video data, current video data and monitoring environment information: collecting video data of the different targets monitored and tracked in the past by monitoring devices with a target recognition function, obtaining the feature data of the target currently to be tracked, obtaining the video data of the current target after a monitoring device with a target recognition function recognizes the target currently to be tracked, and collecting the device type, monitoring range and position information of the monitoring device closest to the monitoring device that recognized the target currently to be tracked;
S20: selecting the linked monitoring device by analyzing the movement data of the target in the video data and the monitoring environment information, and selecting the data transmission mode of the monitoring device. If the monitoring device closest to the monitoring device that recognized the target currently to be tracked is a monitoring device without a target recognition function, the monitoring device that recognized the target currently to be tracked is taken as the first monitoring device, and that closest monitoring device without a target recognition function is taken as the second monitoring device. Video data of the different targets monitored and tracked by the first monitoring device in the past is called, the movement tracks of these targets before they disappeared from the monitoring range of the first monitoring device are obtained, sampling points are set on each movement track at equal intervals, and the sampling points are connected to construct movement vectors of the historical targets, two adjacent sampling points being respectively the start point and end point of one movement vector. The movement vector formed from the sampling point immediately preceding the end point of the movement track to the end point itself is taken as the first movement vector, and its coordinates (E, F) are obtained. The monitoring range information of the second monitoring device is obtained and the center point of its monitoring range is confirmed, the center point of a monitoring range referring to the center point of the plane monitored by the monitoring device; for example, if the monitoring range of the second monitoring device is a circular area with a point on the monitored plane as center and radius r, the center point is that point. The second movement vector is constructed by connecting the end point of the movement track with the center point of the monitoring range of the second monitoring device, and its coordinates (e, f) are obtained. The movement vector coordinates are obtained by establishing a two-dimensional coordinate system with any point on the plane where the target moves as origin; for example, if the tracked target is a vehicle, the coordinate system is established with any point on the ground on which the vehicle travels as origin. The included angle θ between the first movement vector and the second movement vector is calculated according to the formula cos θ = (E·e + F·f) / (√(E² + F²) · √(e² + f²)), and is taken as the deflection angle between the historical target and the monitoring range of the second monitoring device, giving the deflection angle set α = {α1, α2, …, αi, …, αn} of n different deflection angles, where i denotes a random index and αi the i-th deflection angle in the set α. For each deflection angle it is checked whether the historical targets appeared in the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device: the number of historical targets whose deflection angle with the monitoring range of the second monitoring device is αi is counted as Q, of which the number that appeared in the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device is w; according to the formula Pi = w/Q, the probability Pi that a historical target with deflection angle αi moved into the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device is calculated, giving the probability set P = {P1, P2, …, Pi, …, Pn}. The deflection angles and probabilities form the training samples {(α1, P1), (α2, P2), …, (αn, Pn)}; the training samples are fitted and the movement probability judgment model y = a·x + b is established, wherein a and b represent fitting coefficients, x represents the variable for the deflection angle in the model and y the variable for the probability. The current target video data is called, the movement track of the current target before it disappears from the monitoring range of the first monitoring device is obtained, and the deflection angle α′ between the current target and the monitoring range of the second monitoring device is obtained by analysis, calculated in the same way as for the historical targets. α′ is substituted into the movement probability judgment model: letting x = α′, the probability that the current target moves into the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device is predicted as P′. A probability threshold q is set and P′ is compared with q: if P′ > q, the current target is predicted to move into the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device, the linkage of the first monitoring device with the second monitoring device is selected, and the video data monitored by the second monitoring device is selected to be transmitted directly to the first monitoring device; if P′ ≤ q, the current target is predicted not to move into the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device, and the video data monitored by the second monitoring device is selected to be transmitted to the monitoring terminal;
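The probability analysis and model fitting in S20 can be sketched as follows. This is an illustrative assumption: the exact fitted-model formula was lost in extraction, so a least-squares straight line is used here, and the function names are hypothetical:

```python
def empirical_probabilities(counts):
    """Apply the formula P_i = w/Q for each deflection angle.

    counts maps a deflection angle alpha_i to a pair (Q, w): Q historical
    targets had that deflection angle, and w of them later appeared in the
    second device's monitoring range.
    """
    return {angle: w / q for angle, (q, w) in counts.items()}

def fit_line(samples):
    """Least-squares straight line y = a*x + b through the
    (deflection angle, probability) training samples."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(y for _, y in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * y for x, y in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def predicted_to_move(angle, a, b, q_threshold):
    """Substitute x = angle into the model and compare P' with q."""
    return a * angle + b > q_threshold
```

For a target whose deflection angle is small (track pointing toward the second range), the fitted line assigns a high probability and the linkage is selected.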
For example: the deflection angle set is obtained as α = {0.35, 0.44, 0.17, 0.91, 0.79}, in radians; the counted numbers of historical targets whose deflection angles with the monitoring range of the second monitoring device are the respective angles in α are {15, 20, 7, 12, 9}, and of these, the numbers of historical targets that appeared in the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device are {12, 15, 6, 4, 5}. The probabilities that the historical targets corresponding to the different deflection angles moved into the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device are therefore P = {P1, P2, P3, P4, P5} = {0.80, 0.75, 0.86, 0.33, 0.56}. The deflection angles and probabilities form the training samples {(0.35, 0.80), (0.44, 0.75), (0.17, 0.86), (0.91, 0.33), (0.79, 0.56)}; the training samples are fitted and the movement probability judgment model is established with its fitting coefficients. The deflection angle α′ between the current target and the monitoring range of the second monitoring device is obtained; letting x = α′, the probability that the current target moves into the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device is predicted as P′ = 0.85. With the probability threshold set to q = 0.80, P′ and q are compared: 0.85 > 0.80, so the current target is predicted to move into the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device; the linkage of the first monitoring device with the second monitoring device is selected, and the video data monitored by the second monitoring device is selected to be transmitted directly to the first monitoring device.
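The probability values in this example can be reproduced directly from the counts with the formula Pi = w/Q, rounding to two decimals as in the text:

```python
Q = [15, 20, 7, 12, 9]   # historical targets counted per deflection angle
w = [12, 15, 6, 4, 5]    # of those, later seen in the second device's range
P = [round(wi / qi, 2) for wi, qi in zip(w, Q)]
print(P)  # [0.8, 0.75, 0.86, 0.33, 0.56]
```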
S30: after the linked monitoring device is selected, the video data monitored by the monitoring device representing the transmitting party is intercepted and the intercepted video data is transmitted to the monitoring device representing the receiving party: the first monitoring device is connected with the second monitoring device through a local area network, and the video data monitored by the second monitoring device is intercepted and then transmitted directly to the first monitoring device. The interception proceeds as follows: the number of historical targets that appeared in the monitoring range of the second monitoring device after disappearing from the monitoring range of the first monitoring device is called as k; the set of straight-line distances from the end point positions of the movement tracks of the k historical targets before disappearing from the monitoring range of the first monitoring device to the center point of the monitoring range of the second monitoring device is collected as d = {d1, d2, …, dk}; the set of movement speeds of the k historical targets monitored by the first monitoring device is collected as v = {v1, v2, …, vk}; the movement coefficient Lj of a random historical target is calculated according to the formula Lj = dj/vj, giving the movement coefficient set L = {L1, L2, …, Lk}; the set of interval durations between the time point at which each of the k historical targets appeared in the monitoring range of the second monitoring device and the time point at which it disappeared from the monitoring range of the first monitoring device is collected as t = {t1, t2, …, tk}; straight-line fitting is performed on the data points {(L1, t1), (L2, t2), …, (Lk, tk)} and the target appearance time pre-judgment model Y = a·X + b is established, wherein a represents the slope and b the intercept of the target appearance time pre-judgment model, X represents the independent variable of the model, namely the movement coefficient, and Y represents the dependent variable of the model, namely the interval duration. The movement coefficient of the current target is obtained as L′; letting X = L′, the interval duration between the time point at which the current target appears in the monitoring range of the second monitoring device and the time point at which it disappeared from the monitoring range of the first monitoring device is predicted as t′. The time point T at which the current target disappeared from the monitoring range of the first monitoring device is obtained, and the starting time point of video data transmission is selected as T + t′: the video data after T + t′ in the video data monitored by the second monitoring device is intercepted;
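The movement coefficient and the target appearance time pre-judgment model of S30 can be sketched as follows; a least-squares straight line is used, and the function names are hypothetical:

```python
def movement_coefficients(distances, speeds):
    """Apply the formula L_j = d_j / v_j for each historical target."""
    return [d / v for d, v in zip(distances, speeds)]

def fit_time_model(L, t):
    """Straight-line fit Y = a*X + b over the (L_j, t_j) data points."""
    n = len(L)
    sx, sy = sum(L), sum(t)
    sxx = sum(x * x for x in L)
    sxy = sum(x * y for x, y in zip(L, t))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def transmission_start(T_disappear, L_current, a, b):
    """Starting time point = T + predicted interval t' = a*L' + b."""
    return T_disappear + a * L_current + b
```

The movement coefficient is effectively an estimated travel time (distance over speed), so a roughly linear relation to the observed interval duration is a plausible modeling assumption.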
S40: the target is recognized and tracked by the monitoring device representing the receiving party: the first monitoring device is controlled to receive the intercepted video data and to analyze it, the object features appearing in the video data are extracted using a convolutional neural network and compared with the features of the target currently to be tracked, whether an object appearing in the video is the target currently to be tracked is judged, and the target is recognized and tracked.
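The feature comparison step can be sketched as follows. The method does not specify a similarity measure, so cosine similarity over extracted feature vectors and the 0.9 threshold are illustrative assumptions:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def matches_tracked_target(object_feature, target_feature, threshold=0.9):
    """Judge whether an extracted object feature matches the feature of
    the target currently to be tracked (threshold is an assumption)."""
    return cosine_similarity(object_feature, target_feature) >= threshold
```

In practice the feature vectors would come from a convolutional neural network's embedding layer; here they are plain lists of floats for illustration.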
Finally, it should be noted that the foregoing is merely a preferred embodiment of the present invention and is not intended to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described therein or make equivalent replacements of some of the technical features. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included within the protection scope of the present invention.