
CN111126153A - Safety monitoring method, system, server and storage medium based on deep learning - Google Patents

Safety monitoring method, system, server and storage medium based on deep learning

Info

Publication number
CN111126153A
CN111126153A (application number CN201911165549.5A)
Authority
CN
China
Prior art keywords
target
monitoring
monitored
information
video data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911165549.5A
Other languages
Chinese (zh)
Other versions
CN111126153B (en)
Inventor
马延旭
火一莽
万月亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Ruian Technology Co Ltd
Original Assignee
Beijing Ruian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Ruian Technology Co Ltd
Priority to CN201911165549.5A
Publication of CN111126153A
Application granted
Publication of CN111126153B
Expired - Fee Related
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G PHYSICS
    • G06 COMPUTING OR CALCULATING; COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/08 Detecting or categorising vehicles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Alarm Systems (AREA)

Abstract

The embodiment of the invention discloses a safety monitoring method, system, server and storage medium based on deep learning. The method comprises the following steps: acquiring video data of a target to be monitored in a monitoring area; analyzing the video data through a pre-trained first model to acquire monitoring feature information of the target to be monitored in the monitoring area; judging whether the target to be monitored is abnormal according to the monitoring feature information; and performing an alarm operation when the target to be monitored is abnormal. By training on the video data based on deep learning, the invention realizes semantic structured extraction of the video data and effective management of the video data.

Description

Safety monitoring method, system, server and storage medium based on deep learning
Technical Field
The embodiment of the invention relates to a video image analysis technology, in particular to a safety monitoring method, a safety monitoring system, a safety monitoring server and a storage medium based on deep learning.
Background
Currently, video surveillance systems have become an important tool in related fields. Through video monitoring, monitoring or security personnel can protect and inspect a monitored area more directly and effectively. However, video data is huge in volume and complex in format, so storage costs are high and the data is difficult to manage. Because of the massive amount of video information, its unstructured form and the ambiguity of its content, manual retrieval consumes much time and labor, and a large amount of video is discarded without being combed through, which seriously affects the effectiveness of the monitoring system.
Disclosure of Invention
The invention provides a safety monitoring method, a system, a server and a storage medium based on deep learning, which are used for realizing semantic structural extraction of video data and effective management of the video data.
In a first aspect, an embodiment of the present invention provides a safety monitoring method based on deep learning, including:
acquiring video data of a target to be monitored in a monitoring area;
analyzing the video data through a pre-trained first model to acquire monitoring characteristic information of a target to be monitored in a monitoring area;
judging whether the target to be monitored is abnormal or not according to the monitoring characteristic information;
and performing an alarm operation when the target to be monitored is abnormal.
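The four claimed steps can be sketched as a minimal pipeline. Everything here (the stub camera, the stub model, the confidence field, the threshold) is an illustrative assumption, not the patent's actual implementation:

```python
# Minimal sketch of the four claimed steps; the stub model, the
# "confidence" field and the threshold are illustrative assumptions.

def acquire_video_data(camera):
    """Step 1: pull frames for the monitored area (stubbed)."""
    return camera()

def analyze(video_data, model):
    """Step 2: run the pre-trained first model to get feature info."""
    return model(video_data)

def is_abnormal(feature_info, threshold=0.5):
    """Step 3: judge abnormality from the monitoring feature info."""
    return feature_info["confidence"] < threshold

def monitor_once(camera, model, alarm):
    """One pass of the method; returns True if an alarm fired."""
    video = acquire_video_data(camera)
    features = analyze(video, model)
    if is_abnormal(features):   # Step 4: alarm on abnormality
        alarm(features)
        return True
    return False

alarms = []
fired = monitor_once(
    camera=lambda: ["frame0", "frame1"],
    model=lambda frames: {"confidence": 0.2, "action": "fighting"},
    alarm=alarms.append,
)
```

In a real deployment the stub model would be the pre-trained first model described below, and the alarm callback would notify monitoring personnel.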
Further, the monitoring characteristic information includes one or more of monitoring target action information, monitoring target stay time information, smoke detection information and monitoring target clothing information.
Further, analyzing the video data through the pre-trained first model to obtain monitoring characteristic information of the target to be monitored in the monitoring area includes:
training a convolutional neural network on the video data to obtain a first model for analyzing the monitoring target action information, the monitoring target stay duration information, the smoke detection information and the monitoring target clothing information.
Further, analyzing the video data through the pre-trained first model to acquire monitoring characteristic information of the target to be monitored in the monitoring area includes:
identifying the video data based on the motion information of the monitoring target by using a first model to acquire the motion information of the monitoring target of the target to be monitored;
calculating a first confidence coefficient of the motion information of the monitoring target according to a first preset weight value;
and judging the action state of the target to be monitored according to the preset parameter threshold and the first confidence of the action information of the monitored target.
Further, analyzing the video data through the pre-trained first model to acquire monitoring characteristic information of the target to be monitored in the monitoring area further comprises:
identifying the video data based on the stay time information of the monitored target by using a first model to obtain the stay time information of the monitored target of the target to be monitored;
calculating a second confidence coefficient of the monitoring target staying time length information according to a second preset weight value and the monitoring target staying time length information;
and judging the staying state of the target to be monitored according to the preset parameter threshold and the second confidence of the staying time information of the monitored target.
Further, analyzing the video data through the pre-trained first model to acquire monitoring characteristic information of the target to be monitored in the monitoring area further comprises:
identifying the video data based on the smoke detection information by using a first model to acquire the smoke detection information of the target to be monitored;
calculating smoke concentration according to the smoke detection information and determining the area where the smoke is located and the smoking action of the target to be monitored;
calculating a third confidence coefficient of the smoke detection information according to the smoke concentration, the area where the smoke is located and the smoking action of the target to be monitored;
and judging the smoking state of the target to be monitored according to the preset parameter threshold and the third confidence of the smoke detection information.
Further, analyzing the video data through the pre-trained first model to acquire monitoring characteristic information of the target to be monitored in the monitoring area further comprises:
identifying the video data based on the clothing information of the monitoring target by using the first model to acquire the clothing information of the monitoring target of the target to be monitored;
calculating a fourth confidence coefficient of the monitored target clothing information according to the monitored target clothing information and a preset weight value of the monitored target clothing information;
and judging the dressing state of the target to be monitored according to the preset parameter threshold and the fourth confidence of the clothing information of the target to be monitored.
Further, the step of judging whether the target to be monitored is abnormal according to the state of the target to be monitored comprises:
if the action state of the target to be monitored is fighting, the target to be monitored is abnormal;
if the stay state of the target to be monitored is an overlong stay, the target to be monitored is abnormal;
if the smoking state of the target to be monitored is smoking, the target to be monitored is abnormal;
if the dressing state of the target to be monitored is abnormal dressing, the target to be monitored is abnormal.
In a second aspect, an embodiment of the present invention further provides a safety monitoring system based on deep learning, including:
the first acquisition module is used for acquiring video data of a target to be monitored in a monitoring area;
the second acquisition module is used for analyzing the video data through the pre-trained first model so as to acquire monitoring characteristic information of a target to be monitored in the monitoring area;
the judging module is used for judging whether the target to be monitored is abnormal or not according to the monitoring characteristic information;
and the alarm module is used for carrying out alarm operation when the target to be monitored is abnormal.
In a third aspect, an embodiment of the present invention further provides a server, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of any one of the deep learning based security monitoring methods in the foregoing embodiments.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the steps of any one of the above-mentioned embodiments of the deep learning based security monitoring method.
By training on the video data based on deep learning, the invention solves the prior-art problem that massive video information with unstructured forms and content cannot be analyzed and combed through in time, which leads to data loss, and achieves the technical effects of semantic structured extraction of the video data and effective management of the video data.
Drawings
Fig. 1 is a flowchart of a safety monitoring method based on deep learning according to an embodiment of the present invention;
fig. 2 is a flowchart of a safety monitoring method based on deep learning according to a second embodiment of the present invention;
fig. 3 is a flowchart of a deep learning based security monitoring method according to an alternative embodiment of the second embodiment of the present invention;
fig. 4 is a flowchart of a deep learning based security monitoring method according to an alternative embodiment of the second embodiment of the present invention;
fig. 5 is a flowchart of a deep learning based security monitoring method according to an alternative embodiment of the second embodiment of the present invention;
fig. 6 is a flowchart of another safety monitoring method based on deep learning according to a second embodiment of the present invention;
fig. 7 is a schematic structural diagram of a safety monitoring system based on deep learning according to a third embodiment of the present invention;
fig. 8 is a schematic structural diagram of a server according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the steps as a sequential process, many of the steps can be performed in parallel, concurrently or simultaneously. In addition, the order of the steps may be rearranged. A process may be terminated when its operations are completed, but may have additional steps not included in the figure. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc.
Furthermore, the terms "first," "second," and the like may be used herein to describe various orientations, actions, steps, elements, or the like, but the orientations, actions, steps, or elements are not limited by these terms. These terms are only used to distinguish one direction, action, step or element from another direction, action, step or element. For example, a first acquisition module may be referred to as a second acquisition module, and similarly, a second acquisition module may be referred to as a first acquisition module, without departing from the scope of the present application. The first acquisition module and the second acquisition module are both acquisition modules, but they are not the same acquisition module. The terms "first", "second", etc. are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means at least two, e.g., two, three, etc., unless specifically limited otherwise.
Example one
Fig. 1 is a flowchart of a safety monitoring method based on deep learning according to an embodiment of the present invention, which is applicable to video analysis of an object to be monitored in a monitored area, and the method can be executed by a processor. As shown in fig. 1, the safety monitoring method based on deep learning specifically includes the following steps:
step S110, acquiring video data of a target to be monitored in a monitoring area;
specifically, monitoring personnel can install monitoring equipment such as a monitoring camera in an area needing to be monitored, monitor the monitoring area in real time and obtain a corresponding monitoring video.
Step S120, analyzing the video data through a pre-trained first model to acquire monitoring characteristic information of a target to be monitored in a monitoring area;
specifically, the monitoring personnel can establish a training model for performing data analysis on the video data in advance, and after the video data are obtained, the monitoring personnel can analyze the video data through the pre-established training model, so that data (such as behavior data of a target to be monitored) contained in the video content, namely monitoring characteristic information is identified or extracted.
Step S130, judging whether the target to be monitored is abnormal or not according to the monitoring characteristic information;
specifically, after the detection feature information obtained by training the model in step S120, it is determined whether the target to be monitored is abnormal according to the monitoring feature information. In this embodiment, the monitoring characteristic information may include one or more of monitoring target action information, monitoring target dwell time information, smoke detection information, and monitoring target garment information.
And step S140, performing alarm operation when the target to be monitored is abnormal.
Specifically, whether the target to be monitored is abnormal can be judged according to a threshold or a confidence of the monitoring feature information. For example, when the monitoring feature information is the monitoring target action information: if the confidence of the action information is greater than or equal to a preset confidence threshold, the action information is not abnormal; if the confidence is less than the preset confidence threshold, the target to be monitored is abnormal. In that case, the safety monitoring system needs to perform an alarm operation to remind monitoring personnel that the monitored area may be abnormal, for example that suspicious or smoking persons are present.
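The per-feature decision rule described above can be sketched as a lookup of preset thresholds; the feature names and threshold values are assumptions for illustration, not values from the patent:

```python
# Hedged sketch of the decision rule: a target is flagged when the
# confidence for a feature falls below its preset threshold.
# Feature names and threshold values are illustrative assumptions.

PRESET_THRESHOLDS = {
    "action": 0.6,      # monitoring target action information
    "dwell_time": 0.5,  # stay-duration information
    "smoke": 0.7,       # smoke detection information
    "clothing": 0.5,    # clothing information
}

def check_target(confidences, thresholds=PRESET_THRESHOLDS):
    """Return the feature types whose confidence indicates an
    abnormality (confidence below the preset threshold)."""
    return [k for k, c in confidences.items()
            if c < thresholds.get(k, 0.5)]

# action confidence 0.3 < 0.6 -> abnormal; smoke 0.9 >= 0.7 -> normal
anomalies = check_target({"action": 0.3, "smoke": 0.9})
```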
Through deep-learning-based training on the video data, this embodiment solves the prior-art problem that massive video information with unstructured forms and content cannot be analyzed and combed through in time, which leads to data loss, and achieves the technical effects of semantic structured extraction of the video data and effective management of the video data.
Example two
The second embodiment of the invention is further optimized on the basis of the first embodiment. Fig. 2 is a flowchart of a safety monitoring method based on deep learning according to a second embodiment of the present invention. As shown in fig. 2, the safety monitoring method based on deep learning of this embodiment includes:
step S210, video data of a target to be monitored in a monitoring area is obtained;
specifically, monitoring personnel can install monitoring equipment such as a monitoring camera in an area needing to be monitored, monitor the monitoring area in real time and obtain a corresponding monitoring video.
In this embodiment, the monitoring characteristic information may include one or more of monitoring target action information, monitoring target dwell time information, smoke detection information, and monitoring target garment information.
Step S220, performing convolutional neural network training on the video data to obtain a first model for analyzing the monitoring target action information, monitoring target stay duration information, smoke detection information and monitoring target clothing information;
specifically, the monitoring personnel or the working personnel can preset a training model for analyzing the video data. For example, attention targets such as people, vehicles, non-motor vehicles and the like in the monitoring video image are continuously monitored and tracked, and the key frame image is preferentially selected to identify attributes of people and vehicles, so that monitoring characteristic information such as license plates, vehicle types, brands, people's gender and age clothes is obtained. In this embodiment, after obtaining the structured information such as the monitoring feature information of the motion information, the dwell time information, the smoke detection information, and the clothing information of the monitoring target, the monitoring personnel or the staff may implement storage, calculation, and application of the structured video data through a massively parallel processing database, a data mining, a distributed file system, a written test database, and an extensible storage system of a cloud computing platform, that is, after performing deep learning training on the video data, an unstructured data format (such as a human body and a vehicle) and video semantic content in the video may be obtained, and then the obtained data information is stored in a corresponding storage system according to the structured format, thereby implementing effective analysis, organization, and management of the video data.
Step S231, identifying the video data based on the motion information of the monitoring target by using the first model to acquire the motion information of the monitoring target of the target to be monitored;
specifically, in this embodiment, the human behavior recognition may adopt skeleton behavior detection, i.e., joint point Estimation (position Estimation) performed by Red Green Blue (RGB) images. Each time (frame) skeleton corresponds to coordinate position information of 18 joint points of a human body, one time sequence consists of a plurality of frames, and behavior identification is to judge the type of behavior action of the time-domain pre-segmented sequence, namely the 'reading behavior'. The method is characterized in that the skeleton of people in a specific area is identified, particularly whether people in a monitoring area have fighting or not is identified, and the whole process goes through a starting approaching stage, a climax stage of waving a fist foot and an ending stage. In contrast, the climax stage of swinging a fist contains more information, and is most helpful for distinguishing actions. According to the time domain attention model, the importance of different frames in the sequence is automatically learned and known through a sub-network, so that the important frames play a greater role in classification to optimize the recognition accuracy.
Step S232, calculating a first confidence coefficient of the motion information of the monitoring target according to a first preset weight value;
specifically, in this embodiment, the monitoring of the motion of the target to be monitored mainly depends on the body motion, the relative position and the moving speed of the human body to make a judgment. In calculating the first confidence, the relative position, motion vector, relative motion velocity, and limb contact velocity of the human body can be calculated by using an optimized optical flow method (optical flow method refers to a simple and practical expression of image motion, which is generally defined as apparent motion of an image brightness pattern in an image sequence, that is, an expression of motion velocity of a point on the surface of a spatial object on an imaging plane of a visual sensor). Confidence, also called reliability, or confidence level, confidence coefficient, i.e. when a sample estimates an overall parameter, its conclusion is always uncertain due to the randomness of the sample. Therefore, a probabilistic statement method, i.e. interval estimation in mathematical statistics, is used, i.e. how large the corresponding probability of the estimated value and the overall parameter are within a certain allowable error range, and this corresponding probability is called confidence or confidence.
And step S233, judging the action state of the target to be monitored according to the preset parameter threshold and the first confidence of the action information of the target to be monitored.
Specifically, parameter thresholds of the algorithm may be preset, and corresponding weight values set for different parameters. After the first confidence of the monitoring target action information is calculated from the weight values, parameter thresholds and monitoring target action information, the action state of the target to be monitored is judged according to the first confidence. For example, when fighting behavior needs to be recognized: if the first confidence is greater than or equal to a confidence threshold (which may be the preset parameter threshold of the monitoring target action information), no abnormality has occurred in the monitored area, i.e., the action state of the target to be monitored is not fighting; if the first confidence is less than the confidence threshold, there is an abnormality, i.e., the action state of the target to be monitored is a fighting state. In that case, the safety monitoring system sends an alarm signal, so that monitoring personnel or staff are notified to perform a safety check of the monitored area, ensuring safety in the monitored area. In this embodiment, facial expression recognition can additionally be adopted to judge the action state of the target to be monitored more accurately.
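Steps S232 and S233, as described, can be sketched as a weighted combination of motion cues compared against the preset threshold. The cue names, weight values and threshold are all assumptions for illustration:

```python
# Sketch of S232/S233: combine motion cues with a first preset
# weight vector into a first confidence, then compare it with the
# preset parameter threshold. Cue names and weights are assumed.

def first_confidence(cues, weights):
    """Normalised weighted sum of motion cues (each cue in [0, 1])."""
    total = sum(weights.values())
    return sum(weights[k] * cues[k] for k in weights) / total

def action_state(cues, weights, threshold=0.5):
    """Per the rule in the text, a below-threshold confidence marks
    the action state as abnormal (e.g. a possible fight)."""
    conf = first_confidence(cues, weights)
    return ("abnormal" if conf < threshold else "normal"), conf

state, conf = action_state(
    cues={"relative_position": 0.2, "motion_speed": 0.1,
          "limb_contact": 0.0},
    weights={"relative_position": 1.0, "motion_speed": 2.0,
             "limb_contact": 2.0},
)
```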
Fig. 3 is a flowchart of a safety monitoring method based on deep learning according to an alternative embodiment of the second embodiment of the present invention. Fig. 4 is a flowchart of a safety monitoring method based on deep learning according to an alternative embodiment of the second embodiment of the present invention. Fig. 5 is a flowchart of a safety monitoring method based on deep learning according to an alternative embodiment of the second embodiment of the present invention.
As shown in fig. 3, an alternative embodiment of steps S231 to S233 may be:
step S241, identifying the video data based on the stay time information of the monitoring target by using the first model to obtain the stay time information of the monitoring target of the target to be monitored;
step S242, calculating a second confidence coefficient of the monitoring target staying time length information according to a second preset weight value and the monitoring target staying time length information;
and S243, judging the staying state of the target to be monitored according to the preset parameter threshold and the second confidence of the staying time information of the monitored target.
Specifically, in this embodiment, the dwell time of the target to be monitored in the monitoring area may be monitored. In the first model for monitoring the target's stay duration, a second confidence is calculated according to a second preset weight value and the stay duration information, and the stay state of the target to be monitored is judged according to the second confidence and a preset parameter threshold. When the second confidence is greater than or equal to the preset confidence threshold (which may be the preset parameter threshold of the stay duration information), the stay state of the target to be monitored is a normal stay; when the second confidence is less than the preset confidence threshold, the stay state is an overlong stay. In that case, the safety monitoring system sends an alarm signal, so that monitoring personnel or staff are notified to check the lingering persons or vehicles in the monitored area, ensuring safety in the monitored area.
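The dwell-time check in steps S241 to S243 reduces to tracking when each target first appears and flagging overlong stays. Target IDs, timestamps and the stay limit below are assumptions:

```python
# Sketch of the dwell-time check: first-seen timestamps per target
# are compared against a stay limit. IDs, times and the limit are
# illustrative assumptions.

def dwell_states(first_seen, now, limit_s=300.0):
    """Map target id -> (dwell seconds, 'overlong' or 'normal')."""
    out = {}
    for tid, t0 in first_seen.items():
        dwell = now - t0
        out[tid] = (dwell, "overlong" if dwell > limit_s else "normal")
    return out

states = dwell_states({"person_1": 0.0, "person_2": 350.0}, now=400.0)
```

An 'overlong' entry would then feed the second-confidence calculation and, below threshold, trigger the alarm signal described above.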
As shown in fig. 4, an alternative embodiment of steps S231 to S233 may also be:
step S251, identifying the video data based on smoke detection information by using a first model to acquire smoke detection information of a target to be monitored;
step S252, calculating smoke concentration according to the smoke detection information and determining the area where the smoke is located and the smoking action of the target to be monitored;
step S253, calculating a third confidence coefficient of the smoke detection information according to the smoke concentration, the area where the smoke is located and the smoking action of the target to be monitored;
and step S254, judging the smoking state of the target to be monitored according to the preset parameter threshold value and the third confidence coefficient of the smoke detection information.
Specifically, in this embodiment, the smoking state of the target to be monitored can be detected. Smoking detection mainly depends on detecting smoke concentration, smoke position and smoking action, so three neural networks are needed to detect the smoke concentration, the smoke region and the smoking action of the target to be monitored respectively. Confidences for each of the three are obtained under the corresponding networks, and these are combined with different weights to calculate the third confidence of the smoke detection information. When the third confidence is greater than or equal to the confidence threshold (which may be the preset parameter threshold of the smoke detection information), the smoking state of the target to be monitored is non-smoking; when the third confidence is less than the confidence threshold, the smoking state is smoking. In that case, the safety monitoring system sends an alarm signal, so that monitoring personnel or staff are notified to check the persons in the monitored area, ensuring air quality and the safety of facilities in the monitored area.
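The three-detector fusion just described can be sketched as a weighted sum of the three per-network confidences; the weight vector and threshold are assumptions, not values from the patent:

```python
# Sketch of the smoke fusion: three per-network confidences
# (concentration, region, action) combined with different weights
# into the third confidence. Weights and threshold are assumed.

def third_confidence(conc, region, action, w=(0.3, 0.3, 0.4)):
    """Weighted fusion of the three detectors' confidences."""
    return w[0] * conc + w[1] * region + w[2] * action

def smoking_state(conc, region, action, threshold=0.6):
    c = third_confidence(conc, region, action)
    # Per the text, a below-threshold confidence means smoking.
    return ("smoking" if c < threshold else "no smoking"), c

state, c = smoking_state(conc=0.2, region=0.3, action=0.1)
```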
As shown in fig. 5, an alternative embodiment of steps S231 to S233 may also be:
step S261, recognizing the video data based on the clothing information of the monitoring target by using the first model to obtain the clothing information of the monitoring target of the target to be monitored;
step S262, calculating a fourth confidence coefficient of the monitored target clothing information according to the monitored target clothing information and the preset weight value of the monitored target clothing information;
step S263, judging the dressing state of the target to be monitored according to the preset parameter threshold and the fourth confidence of the monitoring target clothing information.
Specifically, in this embodiment, the dressing state of the target to be monitored can be detected. Clothing is identified mainly by its colors and by special marks on the clothes, and the detection of clothing color must account for errors caused by the environment and by color differences. Therefore, before the dressing state of the target to be monitored is detected, a neural network can be trained on data sets of different clothing colors and data sets of different clothing special marks, and the trained model is applied to the detection only when its error for the dressing state of the target to be monitored falls within the allowable error range. After the monitoring target clothing information of the target to be monitored is obtained, the fourth confidence can be calculated according to the different weight values. When the fourth confidence is greater than or equal to the confidence threshold (which may be the preset parameter threshold of the monitoring target clothing information), the dressing state of the target to be monitored is normal dressing; when the fourth confidence is less than the confidence threshold, the dressing state is abnormal dressing. In that case, the safety monitoring system sends an alarm signal to notify monitoring personnel or staff to investigate the personnel in the monitored area, preventing suspicious persons from endangering other people or public facilities.
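A minimal sketch of the fourth-confidence computation might combine a clothing-color score and a special-mark score. The score names, the weights, and the threshold below are assumptions made for illustration only.

```python
# Illustrative sketch of the fourth confidence for the dressing state:
# a weighted combination of the clothing-colour recognition score and
# the special-mark recognition score, compared with a preset
# parameter threshold. All numeric values are assumptions.

def clothing_confidence(color_score, marker_score,
                        color_weight=0.6, marker_weight=0.4):
    """Fourth confidence from the colour match and the special-mark
    match, weighted by preset values."""
    return color_weight * color_score + marker_weight * marker_score

def dressing_state(fourth_confidence, threshold=0.75):
    # At or above the threshold: normal dressing; below it: abnormal
    # dressing, which would trigger the alarm signal.
    return "normal" if fourth_confidence >= threshold else "abnormal"
```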
Fig. 7 is a schematic structural diagram of a safety monitoring system based on deep learning according to a third embodiment of the present invention. In this embodiment, the safety monitoring method based on deep learning further includes:
step S271, if the action state of the target to be monitored is fighting, the target to be monitored is abnormal;
step S272, if the stay state of the target to be monitored is an overlong stay, the target to be monitored is abnormal;
step S273, if the smoking state of the target to be monitored is smoking, the target to be monitored is abnormal;
step S274, if the dressing state of the target to be monitored is abnormal dressing, the target to be monitored is abnormal;
step S280, performing an alarm operation when the target to be monitored is abnormal.
Specifically, the action state, the stay state, the smoking state, and the dressing state of the target to be monitored may be determined according to the first, second, third, and fourth confidences, respectively. When the state of any target to be monitored is abnormal, the safety monitoring system performs an alarm operation, informing monitoring or security personnel of the potential safety hazard in the monitored area so that targeted precautions can be taken.
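The decision logic of steps S271 to S280 reduces to: any single abnormal state marks the target as abnormal and raises an alarm. The sketch below mirrors the state names of the embodiment, but the dictionary layout and function names are illustrative assumptions.

```python
# Minimal sketch of the judging logic of steps S271-S280: the target
# is abnormal as soon as any one monitored state matches its abnormal
# value, and the alarm operation is then performed.

ABNORMAL_STATES = {
    "action": "fighting",
    "stay": "overlong stay",
    "smoking": "smoking",
    "dressing": "abnormal dressing",
}

def is_abnormal(states):
    """states maps a category name to the detected state string."""
    return any(states.get(k) == v for k, v in ABNORMAL_STATES.items())

def monitor(states, alarm):
    if is_abnormal(states):
        alarm()  # notify monitoring or security personnel
        return True
    return False
```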
The second embodiment of the invention performs deep-learning-based training on the video data and analyzes the information of the target to be monitored in the corresponding neural networks. This solves the prior-art problem that, for a large amount of video information, the unstructured form and content of the video cannot be analyzed and organized in time, causing data loss, and achieves the technical effects of semantic structured extraction of the video data and effective management of the video data.
EXAMPLE III
Fig. 3 is a schematic structural diagram of a safety monitoring system based on deep learning according to a third embodiment of the present invention. As shown in fig. 3, the safety monitoring system 300 based on deep learning of the present embodiment includes:
a first obtaining module 310, configured to obtain video data of a target to be monitored in a monitoring area;
the second obtaining module 320 is configured to analyze the video data through the pre-trained first model to obtain monitoring feature information of the target to be monitored in the monitoring area;
the judging module 330 is configured to judge whether the target to be monitored is abnormal according to the monitoring feature information;
and the alarm module 340 is configured to perform an alarm operation when the target to be monitored is abnormal.
In this embodiment, the monitoring characteristic information includes one or more of monitoring target action information, monitoring target dwell time information, smoke detection information, and monitoring target clothing information.
In this embodiment, the deep learning based safety monitoring system 300 further includes:
the training module 350 is configured to perform convolutional neural network training on the video data to obtain a first model corresponding to analysis monitoring target motion information, monitoring target dwell time information, smoke detection information, and monitoring target clothing information.
In this embodiment, the second obtaining module 320 includes:
the first training unit is used for recognizing the video data based on the monitoring target action information by using the first model, so as to acquire the monitoring target action information of the target to be monitored;
the first calculation unit is used for calculating a first confidence coefficient of the monitoring target action information according to a first preset weight value;
and the first judgment unit is used for judging the action state of the target to be monitored according to the preset parameter threshold and the first confidence of the action information of the monitored target.
In this embodiment, the second obtaining module 320 further includes:
the second training unit is used for recognizing the video data based on the monitoring target stay duration information by using the first model, so as to acquire the monitoring target stay duration information of the target to be monitored;
the second calculation unit is used for calculating a second confidence of the monitoring target stay duration information according to a second preset weight value and the monitoring target stay duration information;
the second judging unit is used for judging the stay state of the target to be monitored according to the preset parameter threshold and the second confidence of the monitoring target stay duration information.
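The stay-state check performed by these units can be sketched as follows. How the second preset weight enters the calculation is not specified in the text, so the scaling scheme, the weight, and the threshold below are all assumptions for illustration.

```python
# Hypothetical sketch of the stay-state check: the second confidence
# is derived from the observed stay duration and a second preset
# weight, then compared with a preset parameter threshold. The weight
# (here normalising against a 10-minute reference) and the threshold
# are illustrative assumptions.

def stay_confidence(stay_seconds, weight=1.0 / 600.0):
    """Second confidence: the observed stay duration scaled by the
    second preset weight."""
    return stay_seconds * weight

def stay_state(second_confidence, threshold=1.0):
    # At or above the threshold, the stay is judged overlong (abnormal).
    return "overlong stay" if second_confidence >= threshold else "normal"
```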
In this embodiment, the second obtaining module 320 further includes:
the third training unit is used for identifying the video data based on the smoke detection information by using the first model so as to acquire the smoke detection information of the target to be monitored;
the third calculating unit is used for calculating the smoke concentration according to the smoke detection information and determining the area where the smoke is located and the smoking action of the target to be monitored;
the fourth calculation unit is used for calculating a third confidence coefficient of the smoke detection information according to the smoke concentration, the area where the smoke is located and the smoking action of the target to be monitored;
and the third judging unit is used for judging the smoking state of the target to be monitored according to the preset parameter threshold and the third confidence of the smoke detection information.
In this embodiment, the second obtaining module 320 further includes:
the fourth training unit is used for identifying the video data based on the clothing information of the monitoring target by using the first model so as to acquire the clothing information of the monitoring target of the target to be monitored;
the fifth calculation unit is used for calculating a fourth confidence coefficient of the monitored target clothing information according to the monitored target clothing information and the preset weight value of the monitored target clothing information;
and the fourth judging unit is used for judging the dressing state of the target to be monitored according to the preset parameter threshold value and the fourth confidence coefficient of the clothing information of the target to be monitored.
In this embodiment, the determining module 330 includes:
the fifth judging unit is used for judging that the target to be monitored is abnormal if the action state of the target to be monitored is fighting;
the sixth judging unit is used for judging that the target to be monitored is abnormal if the stay state of the target to be monitored is overlong stay;
a seventh judging unit, configured to determine that the target to be monitored is abnormal if the smoking status of the target to be monitored is smoking;
and the eighth judging unit is used for judging that the target to be monitored is abnormal if the dressing state of the target to be monitored is abnormal.
The safety monitoring system based on deep learning provided by the embodiment of the invention can execute the safety monitoring method based on deep learning provided by any embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method.
EXAMPLE IV
Fig. 4 is a schematic structural diagram of a server according to a fourth embodiment of the present invention. As shown in Fig. 4, the server includes a processor 410, a memory 420, an input device 430, and an output device 440. The number of processors 410 in the server may be one or more, and one processor 410 is taken as an example in Fig. 4. The processor 410, the memory 420, the input device 430, and the output device 440 in the server may be connected by a bus or in another manner; connection by a bus is taken as an example in Fig. 4.
The memory 420 is a computer-readable storage medium that can be used for storing software programs, computer-executable programs, and modules, such as the program instructions/modules corresponding to the deep learning based security monitoring system in the embodiment of the present invention (for example, the first obtaining module, the second obtaining module, the judging module, the alarm module, and the training module of the deep learning based security monitoring system). The processor 410 executes the various functional applications and data processing of the server by running the software programs, instructions, and modules stored in the memory 420, that is, implements the deep learning based security monitoring method described above.
Namely:
acquiring video data of a target to be monitored in a monitoring area;
analyzing the video data through a pre-trained first model to acquire monitoring characteristic information of a target to be monitored in a monitoring area;
judging whether the target to be monitored is abnormal or not according to the monitoring characteristic information;
and when the target to be monitored is abnormal, alarming operation is carried out.
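The four steps above can be sketched as a single pipeline function. The callables passed in stand for the video source, the trained first model, and the judging and alarm logic; their names and signatures are assumptions for illustration.

```python
# Minimal sketch of the four-step method executed by the server:
# acquire video data, analyze it with the pre-trained first model to
# obtain monitoring feature information, judge abnormality from that
# information, and alarm when abnormal. All callables are placeholders.

def run_safety_monitoring(acquire_video, first_model, judge, alarm,
                          monitoring_area):
    video_data = acquire_video(monitoring_area)  # step 1: video data
    features = first_model(video_data)           # step 2: feature info
    abnormal = judge(features)                   # step 3: judge
    if abnormal:                                 # step 4: alarm
        alarm(features)
    return abnormal
```

In practice each callable would wrap a real capture device, a trained network, and a notification channel; here stubs suffice to show the control flow.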
The memory 420 may mainly include a program storage area and a data storage area: the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to use of the terminal, and the like. Further, the memory 420 may include high-speed random access memory and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory 420 may further include memory located remotely from the processor 410, which may be connected to the server over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input device 430 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the server. The output device 440 may include a display device such as a display screen.
EXAMPLE V
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, where the computer-executable instructions are executed by a computer processor to perform a deep learning-based security monitoring method, and the method includes:
acquiring video data of a target to be monitored in a monitoring area;
analyzing the video data through a pre-trained first model to acquire monitoring characteristic information of a target to be monitored in a monitoring area;
judging whether the target to be monitored is abnormal or not according to the monitoring characteristic information;
and when the target to be monitored is abnormal, alarming operation is carried out.
Of course, the storage medium provided by the embodiment of the present invention contains computer-executable instructions, and the computer-executable instructions are not limited to the above method operations, and may also perform related operations in the deep learning based security monitoring method provided by any embodiment of the present invention.
From the above description of the embodiments, it will be apparent to those skilled in the art that the present invention can be implemented by software plus necessary general-purpose hardware, and certainly also by hardware alone, although the former is the better implementation in many cases. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product stored in a computer-readable storage medium, such as a floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk of a computer, and including several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiment of the safety monitoring system based on deep learning, the included units and modules are only divided according to functional logic, but are not limited to the above division, as long as the corresponding functions can be realized; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments illustrated herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (11)

1. A deep learning-based safety monitoring method, characterized by comprising:
acquiring video data of a target to be monitored in a monitoring area;
analyzing the video data through a pre-trained first model to acquire monitoring feature information of the target to be monitored in the monitoring area;
judging whether the target to be monitored is abnormal according to the monitoring feature information;
performing an alarm operation when the target to be monitored is abnormal.

2. The deep learning-based safety monitoring method according to claim 1, wherein the monitoring feature information comprises one or more of monitoring target action information, monitoring target stay duration information, smoke detection information, and monitoring target clothing information.

3. The deep learning-based safety monitoring method according to claim 2, wherein before the analyzing of the video data through the pre-trained first model to acquire the monitoring feature information of the target to be monitored in the monitoring area, the method comprises:
performing convolutional neural network training on the video data to obtain the first model for analyzing the monitoring target action information, the monitoring target stay duration information, the smoke detection information, and the monitoring target clothing information.

4. The deep learning-based safety monitoring method according to claim 3, wherein the analyzing of the video data through the pre-trained first model to acquire the monitoring feature information of the target to be monitored in the monitoring area comprises:
using the first model to recognize the video data based on the monitoring target action information, so as to acquire the monitoring target action information of the target to be monitored;
calculating a first confidence of the monitoring target action information according to a first preset weight value;
judging the action state of the target to be monitored according to a preset parameter threshold of the monitoring target action information and the first confidence.

5. The deep learning-based safety monitoring method according to claim 3, wherein the analyzing of the video data through the pre-trained first model to acquire the monitoring feature information of the target to be monitored in the monitoring area further comprises:
using the first model to recognize the video data based on the monitoring target stay duration information, so as to acquire the monitoring target stay duration information of the target to be monitored;
calculating a second confidence of the monitoring target stay duration information according to a second preset weight value and the monitoring target stay duration information;
judging the stay state of the target to be monitored according to a preset parameter threshold of the monitoring target stay duration information and the second confidence.

6. The deep learning-based safety monitoring method according to claim 3, wherein the analyzing of the video data through the pre-trained first model to acquire the monitoring feature information of the target to be monitored in the monitoring area further comprises:
using the first model to recognize the video data based on the smoke detection information, so as to acquire the smoke detection information of the target to be monitored;
calculating the smoke concentration according to the smoke detection information and determining the area where the smoke is located and the smoking action of the target to be monitored;
calculating a third confidence of the smoke detection information according to the smoke concentration, the area where the smoke is located, and the smoking action of the target to be monitored;
judging the smoking state of the target to be monitored according to a preset parameter threshold of the smoke detection information and the third confidence.

7. The deep learning-based safety monitoring method according to claim 3, wherein the analyzing of the video data through the pre-trained first model to acquire the monitoring feature information of the target to be monitored in the monitoring area further comprises:
using the first model to recognize the video data based on the monitoring target clothing information, so as to acquire the monitoring target clothing information of the target to be monitored;
calculating a fourth confidence of the monitoring target clothing information according to the monitoring target clothing information and a preset weight value of the monitoring target clothing information;
judging the dressing state of the target to be monitored according to a preset parameter threshold of the monitoring target clothing information and the fourth confidence.

8. The deep learning-based safety monitoring method according to any one of claims 3-7, wherein the judging whether the target to be monitored is abnormal according to the state of the target to be monitored comprises:
if the action state of the target to be monitored is fighting, the target to be monitored is abnormal;
if the stay state of the target to be monitored is an overlong stay, the target to be monitored is abnormal;
if the smoking state of the target to be monitored is smoking, the target to be monitored is abnormal;
if the dressing state of the target to be monitored is abnormal dressing, the target to be monitored is abnormal.

9. A deep learning-based safety monitoring system, characterized by comprising:
a first obtaining module, configured to acquire video data of a target to be monitored in a monitoring area;
a second obtaining module, configured to analyze the video data through a pre-trained first model to acquire monitoring feature information of the target to be monitored in the monitoring area;
a judging module, configured to judge whether the target to be monitored is abnormal according to the monitoring feature information;
an alarm module, configured to perform an alarm operation when the target to be monitored is abnormal.

10. A server, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the deep learning-based safety monitoring method according to any one of claims 1-8.

11. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the steps of the deep learning-based safety monitoring method according to any one of claims 1-8.
CN201911165549.5A 2019-11-25 2019-11-25 Security monitoring method, system, server and storage medium based on deep learning Expired - Fee Related CN111126153B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911165549.5A CN111126153B (en) 2019-11-25 2019-11-25 Security monitoring method, system, server and storage medium based on deep learning


Publications (2)

Publication Number Publication Date
CN111126153A true CN111126153A (en) 2020-05-08
CN111126153B CN111126153B (en) 2023-07-21

Family

ID=70496626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911165549.5A Expired - Fee Related CN111126153B (en) 2019-11-25 2019-11-25 Security monitoring method, system, server and storage medium based on deep learning

Country Status (1)

Country Link
CN (1) CN111126153B (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113079193A (en) * 2020-05-22 2021-07-06 江苏濠汉信息技术有限公司 Seal appearance monitored control system based on deep learning neural network model
CN113609937A (en) * 2021-07-24 2021-11-05 全图通位置网络有限公司 Emergency processing method, system and storage medium for urban rail transit
CN113889287A (en) * 2021-10-19 2022-01-04 成都万维科技有限责任公司 Data processing method, device, system and storage medium
CN114037139A (en) * 2021-11-04 2022-02-11 华东师范大学 Freight vehicle warehouse stay time length prediction method based on attention mechanism
CN114885119A (en) * 2022-03-29 2022-08-09 西北大学 Intelligent monitoring alarm system and method based on computer vision
CN115460433A (en) * 2021-06-08 2022-12-09 京东方科技集团股份有限公司 A video processing method, device, electronic equipment and storage medium
CN116343112A (en) * 2023-03-03 2023-06-27 苏州浪潮智能科技有限公司 Scene monitoring method, device, electronic device and storage medium
CN117253333A (en) * 2023-11-20 2023-12-19 深圳市美安科技有限公司 Fire camera shooting detection device, fire detection alarm method and system

Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101778260A (en) * 2009-12-29 2010-07-14 公安部第三研究所 Method and system for monitoring and managing videos on basis of structured description
US20100245532A1 (en) * 2009-03-26 2010-09-30 Kurtz Andrew F Automated videography based communications
JP2010246000A (en) * 2009-04-09 2010-10-28 Panasonic Corp Video search playback device
US20110302137A1 (en) * 2010-06-08 2011-12-08 Dell Products L.P. Systems and methods for improving storage efficiency in an information handling system
US20120105533A1 (en) * 2010-10-29 2012-05-03 Wozniak Terry A Method of controlling print density
CN102572215A (en) * 2011-12-14 2012-07-11 深圳市贝尔信智能系统有限公司 City-class visual video analysis method and server
CN102665071A (en) * 2012-05-14 2012-09-12 安徽三联交通应用技术股份有限公司 Intelligent processing and search method for social security video monitoring images
JP2013125469A (en) * 2011-12-15 2013-06-24 Sogo Keibi Hosho Co Ltd Security device and security action switching method
US20150098609A1 (en) * 2013-10-09 2015-04-09 Honda Motor Co., Ltd. Real-Time Multiclass Driver Action Recognition Using Random Forests
CN105578126A (en) * 2014-11-11 2016-05-11 杜向阳 Monitoring camera automatic alarm system
US20160203367A1 (en) * 2013-08-23 2016-07-14 Nec Corporation Video processing apparatus, video processing method, and video processing program
CN105788364A (en) * 2014-12-25 2016-07-20 中国移动通信集团公司 Early warning information publishing method and early warning information publishing device
CN106530331A (en) * 2016-11-23 2017-03-22 北京锐安科技有限公司 Video monitoring system and method
CN108021891A (en) * 2017-12-05 2018-05-11 广州大学 The vehicle environmental recognition methods combined based on deep learning with traditional algorithm and system
CN108288032A (en) * 2018-01-08 2018-07-17 深圳市腾讯计算机系统有限公司 Motion characteristic acquisition methods, device and storage medium
US20180285767A1 (en) * 2017-03-30 2018-10-04 Intel Corporation Cloud assisted machine learning
CN108734055A (en) * 2017-04-17 2018-11-02 杭州海康威视数字技术股份有限公司 A kind of exception personnel detection method, apparatus and system
CN108764148A (en) * 2018-05-30 2018-11-06 东北大学 Multizone real-time action detection method based on monitor video
CN108960065A (en) * 2018-06-01 2018-12-07 浙江零跑科技有限公司 A kind of driving behavior detection method of view-based access control model
CN109241946A (en) * 2018-10-11 2019-01-18 平安科技(深圳)有限公司 Abnormal behaviour monitoring method, device, computer equipment and storage medium
CN109614948A (en) * 2018-12-19 2019-04-12 北京锐安科技有限公司 Detection method, device, device and storage medium for abnormal behavior
CN110033007A (en) * 2019-04-19 2019-07-19 福州大学 Attribute recognition approach is worn clothes based on the pedestrian of depth attitude prediction and multiple features fusion
CN110163171A (en) * 2019-05-27 2019-08-23 北京字节跳动网络技术有限公司 The method and apparatus of face character for identification
JP2019149039A (en) * 2018-02-27 2019-09-05 パナソニックIpマネジメント株式会社 Monitoring system and monitoring method
CN110414313A (en) * 2019-06-06 2019-11-05 平安科技(深圳)有限公司 Abnormal behaviour alarm method, device, server and storage medium
CN110472492A (en) * 2019-07-05 2019-11-19 平安国际智慧城市科技股份有限公司 Target organism detection method, device, computer equipment and storage medium

Patent Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100245532A1 (en) * 2009-03-26 2010-09-30 Kurtz Andrew F Automated videography based communications
JP2010246000A (en) * 2009-04-09 2010-10-28 Panasonic Corp Video search playback device
CN101778260A (en) * 2009-12-29 2010-07-14 公安部第三研究所 Method and system for monitoring and managing videos on basis of structured description
US20110302137A1 (en) * 2010-06-08 2011-12-08 Dell Products L.P. Systems and methods for improving storage efficiency in an information handling system
US20120105533A1 (en) * 2010-10-29 2012-05-03 Wozniak Terry A Method of controlling print density
CN102572215A (en) * 2011-12-14 2012-07-11 深圳市贝尔信智能系统有限公司 City-class visual video analysis method and server
JP2013125469A (en) * 2011-12-15 2013-06-24 Sogo Keibi Hosho Co Ltd Security device and security action switching method
CN102665071A (en) * 2012-05-14 2012-09-12 安徽三联交通应用技术股份有限公司 Intelligent processing and search method for social security video monitoring images
US20160203367A1 (en) * 2013-08-23 2016-07-14 Nec Corporation Video processing apparatus, video processing method, and video processing program
US20150098609A1 (en) * 2013-10-09 2015-04-09 Honda Motor Co., Ltd. Real-Time Multiclass Driver Action Recognition Using Random Forests
CN105578126A (en) * 2014-11-11 2016-05-11 杜向阳 Monitoring camera automatic alarm system
CN105788364A (en) * 2014-12-25 2016-07-20 中国移动通信集团公司 Early warning information publishing method and early warning information publishing device
CN106530331A (en) * 2016-11-23 2017-03-22 北京锐安科技有限公司 Video monitoring system and method
US20180285767A1 (en) * 2017-03-30 2018-10-04 Intel Corporation Cloud assisted machine learning
CN108734055A (en) * 2017-04-17 2018-11-02 杭州海康威视数字技术股份有限公司 A kind of exception personnel detection method, apparatus and system
CN108021891A (en) * 2017-12-05 2018-05-11 广州大学 The vehicle environmental recognition methods combined based on deep learning with traditional algorithm and system
CN108288032A (en) * 2018-01-08 2018-07-17 深圳市腾讯计算机系统有限公司 Motion characteristic acquisition methods, device and storage medium
JP2019149039A (en) * 2018-02-27 2019-09-05 パナソニックIpマネジメント株式会社 Monitoring system and monitoring method
CN108764148A (en) * 2018-05-30 2018-11-06 东北大学 Multi-region real-time action detection method based on surveillance video
CN108960065A (en) * 2018-06-01 2018-12-07 浙江零跑科技有限公司 Vision-based driving behavior detection method
CN109241946A (en) * 2018-10-11 2019-01-18 平安科技(深圳)有限公司 Abnormal behaviour monitoring method, device, computer equipment and storage medium
CN109614948A (en) * 2018-12-19 2019-04-12 北京锐安科技有限公司 Abnormal behavior detection method, apparatus, device and storage medium
CN110033007A (en) * 2019-04-19 2019-07-19 福州大学 Pedestrian clothing attribute recognition method based on deep pose prediction and multi-feature fusion
CN110163171A (en) * 2019-05-27 2019-08-23 北京字节跳动网络技术有限公司 Method and apparatus for recognizing facial attributes
CN110414313A (en) * 2019-06-06 2019-11-05 平安科技(深圳)有限公司 Abnormal behaviour alarm method, device, server and storage medium
CN110472492A (en) * 2019-07-05 2019-11-19 平安国际智慧城市科技股份有限公司 Target organism detection method, device, computer equipment and storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
KANOKSAK WATTANACHOTE: "Preliminary Investigation on Stationarity of Dynamic Smoke Texture and Dynamic Fire Texture Based on Motion Coherent Metric" *
WENHUA MA et al.: "Crowd Estimation using Multi-scale Local Texture Analysis and Confidence-based Soft Classification" *
佟瑞鹏; 陈策; 刘思路; 卢恒; 马建华: "Research on pan-scenario data theory and application oriented to behavioral safety", no. 02 *
刘勇: "Video surveillance system based on optical flow field analysis and deep learning", no. 02 *
赵梦: "Network security situation awareness based on big data environment", no. 09 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113079194A (en) * 2020-05-22 2021-07-06 江苏濠汉信息技术有限公司 Seal appearance monitoring system based on vehicle state analysis
CN113079193A (en) * 2020-05-22 2021-07-06 江苏濠汉信息技术有限公司 Seal appearance monitoring system based on deep learning neural network model
CN113079193B (en) * 2020-05-22 2022-08-05 江苏濠汉信息技术有限公司 Seal appearance monitoring system based on deep learning neural network model
CN113079194B (en) * 2020-05-22 2022-08-05 江苏濠汉信息技术有限公司 Seal appearance monitoring system based on vehicle state analysis
CN115460433A (en) * 2021-06-08 2022-12-09 京东方科技集团股份有限公司 Video processing method, device, electronic device and storage medium
CN115460433B (en) * 2021-06-08 2024-05-28 京东方科技集团股份有限公司 Video processing method, device, electronic device and storage medium
CN113609937A (en) * 2021-07-24 2021-11-05 全图通位置网络有限公司 Emergency processing method, system and storage medium for urban rail transit
CN113609937B (en) * 2021-07-24 2023-12-22 全图通位置网络有限公司 Emergency processing method, system and storage medium for urban rail transit
CN113889287A (en) * 2021-10-19 2022-01-04 成都万维科技有限责任公司 Data processing method, device, system and storage medium
CN114037139A (en) * 2021-11-04 2022-02-11 华东师范大学 Freight vehicle warehouse stay time length prediction method based on attention mechanism
CN114885119A (en) * 2022-03-29 2022-08-09 西北大学 Intelligent monitoring alarm system and method based on computer vision
CN116343112A (en) * 2023-03-03 2023-06-27 苏州浪潮智能科技有限公司 Scene monitoring method, device, electronic device and storage medium
CN117253333A (en) * 2023-11-20 2023-12-19 深圳市美安科技有限公司 Fire camera shooting detection device, fire detection alarm method and system

Also Published As

Publication number Publication date
CN111126153B (en) 2023-07-21

Similar Documents

Publication Publication Date Title
CN111126153B (en) Security monitoring method, system, server and storage medium based on deep learning
CN110210302B (en) Multi-target tracking method, device, computer equipment and storage medium
CN108898104A (en) Item identification method, device, system and computer storage medium
CN114218992A (en) Abnormal object detection method and related device
CN119720064B (en) Security method and device based on full-range feature recognition and four-dimensional track tracking
CN112017323A (en) Patrol alarm method and device, readable storage medium and terminal equipment
CN113792595A (en) Target behavior detection method, device, computer equipment and storage medium
KR101454644B1 (en) Loitering Detection Using a Pedestrian Tracker
CN115482502A (en) Abnormal behavior identification method, system and medium based on characteristic object and human body key point
Umoh et al. Support vector machine-based fire outbreak detection system
CN116129350A (en) Intelligent monitoring method, device, equipment and medium for safety operation of data center
CN115965913A (en) Security monitoring method, device and system and computer readable storage medium
CN118777538A (en) Smoke monitoring method and related equipment based on MEMS multi-channel intelligent gas sensor
CN115661735A (en) Target detection method and device and computer readable storage medium
CN120375251A (en) Intelligent campus security monitoring method based on image processing
JP2018142137A (en) Information processing device, information processing method and program
WO2024012607A2 (en) Personnel detection method and apparatus, device, and storage medium
CN104077571A (en) Method for detecting abnormal crowd behavior using a single-class serialization model
CN120808275A (en) Security camera abnormal behavior identification method and system based on multi-mode fusion
CN114119531A (en) Fire detection method, device and computer equipment applied to campus smart platform
WO2021068589A1 (en) Method and apparatus for determining object and key points thereof in image
CN118521872A (en) Personnel state intelligent recognition method and system based on independent visual unit
CN113177452B (en) A sample sealing method and device based on image processing and radio frequency technology
CN117392611A (en) Construction site safety monitoring methods, systems, equipment and storage media
CN109815921A (en) Method and device for predicting activity category in hydrogen refueling station

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20230721