CN110659624A - Group personnel behavior identification method and device and computer storage medium - Google Patents
- Publication number
- CN110659624A CN110659624A CN201910935090.6A CN201910935090A CN110659624A CN 110659624 A CN110659624 A CN 110659624A CN 201910935090 A CN201910935090 A CN 201910935090A CN 110659624 A CN110659624 A CN 110659624A
- Authority
- CN
- China
- Prior art keywords
- human body
- video
- video frames
- determining
- body images
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The application discloses a group personnel behavior identification method and device and a computer storage medium. All video frames of a video stream in a certain time period are obtained, and a predetermined number of consecutive video frames is determined in each of which the number of human body images is greater than or equal to a first threshold. Crowd behavior classification is performed on the human body images included in each of these consecutive video frames. It is then determined whether, in each of the consecutive video frames, the number of human body images in the most populous category is greater than or equal to a second threshold. If it is greater than or equal to the second threshold, it is determined that group personnel behavior occurs; if it is less than the second threshold, it is determined that group personnel behavior does not occur. The method and device can identify unconventional group behavior with high sensitivity and high accuracy, without needing to consider background stability.
Description
Technical Field
The present application relates to the field of Artificial Intelligence (AI), and in particular, to a group personnel behavior identification method, apparatus, and computer storage medium.
Background
With the development and application of computer science and artificial intelligence, video analysis technology has rapidly emerged and gained wide attention. The core of video analysis is human behavior recognition, which has wide applications in science, technology, and daily life, such as video monitoring, human-computer interaction, intelligent robots, virtual reality, and video retrieval; intelligent human recognition technology therefore has high research value and broad application prospects.
Human body identification technology mainly collects the key points of a human body, including the left and right elbows, wrists, shoulders, ankles, knees, and hips, the top of the head, the facial features, and the neck. From these key points, postures such as standing, sitting, and moving can be estimated in a variety of scenes, enabling the detection and recognition of action postures. The accuracy and speed of behavior recognition directly affect the results of the subsequent work of a video analysis system.
Therefore, improving the accuracy and speed of human behavior recognition in videos has become a key issue in video analysis system research.
At present, typical video human behavior identification methods mainly include: spatio-temporal interest points, dense trajectories, etc.
The spatio-temporal interest point method identifies human behavior by detecting corner points in a video and extracting their features; however, some corner points are generated by background noise, which affects the final result and reduces recognition speed.
The dense trajectory method performs dense sampling at multiple scales on each video frame, tracks the sampled points to obtain trajectories, and then extracts trajectory features for behavior recognition. However, this method has high computational complexity, generates high-dimensional features, occupies a large amount of memory, and is difficult to run in real time.
The prior art is mainly based on traditional image processing and probabilistic machine learning, and defines behavior whose probability density does not match known normal behavior as unconventional behavior. The pipeline is as follows: extract spatio-temporal feature points from the video by Gabor wavelet transform; compute the Haar features of those feature points to generate an input vector; train an SVM classifier with input vectors of both conventional and unconventional behavior; and use the trained classifier to judge whether the video corresponding to an input vector is unconventional.
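The shape of this prior-art pipeline can be sketched as follows. Random feature clusters stand in for the Haar features of Gabor-wavelet interest points, and a nearest-centroid rule stands in for the SVM, so the sketch stays dependency-light; none of this reflects the actual feature dimensions or classifier parameters of any real system.

```python
import numpy as np

# Stand-in training data: in the prior art these would be Haar-feature vectors
# of Gabor-wavelet spatio-temporal interest points. Two well-separated random
# clusters are used here purely to exercise the classification step.
rng = np.random.default_rng(0)
X_regular = rng.normal(0.0, 1.0, size=(50, 16))    # conventional behavior
X_irregular = rng.normal(4.0, 1.0, size=(50, 16))  # unconventional behavior

# "Training": one centroid per class (a stand-in for fitting the SVM).
centroid_regular = X_regular.mean(axis=0)
centroid_irregular = X_irregular.mean(axis=0)

def is_unconventional(feature_vector):
    # Judge a new input vector by whichever class centroid is nearer.
    d_reg = np.linalg.norm(feature_vector - centroid_regular)
    d_irr = np.linalg.norm(feature_vector - centroid_irregular)
    return bool(d_irr < d_reg)
```

As the patent notes next, the weakness of this approach is that the extracted interest points depend on the whole image, so an unstable background outside the crowd corrupts the input vectors.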
However, the prior art requires high stability of the background outside the crowd.
Disclosure of Invention
In view of the foregoing problems in the prior art, aspects of the present disclosure provide a group personnel behavior identification method, device, and computer storage medium that can identify unconventional group behavior with high sensitivity and high accuracy, without needing to consider background stability.
A first aspect of the present application provides a group personnel behavior identification method, including:
acquiring all video frames of a video stream in a certain time period, and determining a predetermined number of consecutive video frames in each of which the number of included human body images is greater than or equal to a first threshold;
carrying out crowd behavior classification on the human body images included in each video frame of the predetermined number of consecutive video frames;
determining whether the number of human body images in the category containing the most human body images in each video frame of the predetermined number of consecutive video frames is greater than or equal to a second threshold;
if the number of human body images in the category containing the most human body images in each video frame of the predetermined number of consecutive video frames is greater than or equal to the second threshold, determining that group personnel behavior occurs;
and if the number of human body images in the category containing the most human body images in each video frame of the predetermined number of consecutive video frames is less than the second threshold, determining that no group personnel behavior occurs.
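The decision logic of the first aspect can be sketched as follows. The frame encoding (one list of behavior labels per frame), the function name, and the label strings are illustrative assumptions, not the patent's data format.

```python
from collections import Counter

def detect_group_behavior(frames, first_threshold, second_threshold, window):
    # `frames`: one list of per-person behavior labels per video frame,
    # e.g. [["running", "running", "walking"], ...].
    for start in range(len(frames) - window + 1):
        segment = frames[start:start + window]
        # Step 1: `window` consecutive frames must each contain at least
        # `first_threshold` human body images (a crowd is present).
        if all(len(f) >= first_threshold for f in segment):
            # Steps 2-4: in every frame of the segment, the most populous
            # behavior category must reach the second threshold.
            if all(max(Counter(f).values()) >= second_threshold for f in segment):
                return True  # group personnel behavior occurs
    return False  # no group personnel behavior occurs
```

Note that both conditions are required frame by frame across the whole window, which is what makes the method insensitive to a single noisy frame.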
In an embodiment, after acquiring all video frames of the video stream, the method further comprises: and identifying the human body image in each video frame through a neural network server for human body identification.
In an embodiment, after acquiring all video frames of the video stream, the method further comprises: and determining the geographical position information of a camera for acquiring the video stream, and determining a scene and the first threshold corresponding to the scene according to the geographical position information.
In an embodiment, determining the first threshold according to the geographic location information specifically includes: determining, according to a stored correspondence between geographic locations and thresholds, the threshold corresponding to the geographic location information as the first threshold.
A second aspect of the present application provides a group person behavior recognition apparatus, including:
the acquisition module is used for acquiring all video frames of the video stream in a certain time period and determining a predetermined number of consecutive video frames in each of which the number of included human body images is greater than or equal to a first threshold;
the classification module is used for performing crowd behavior classification on the human body images included in each video frame of the predetermined number of consecutive video frames;
the determining module is used for determining whether the number of human body images in the category containing the most human body images in each video frame of the predetermined number of consecutive video frames is greater than or equal to a second threshold;
the processing module is used for determining that group personnel behavior occurs if the number of human body images in the category containing the most human body images in each video frame of the predetermined number of consecutive video frames is greater than or equal to the second threshold, and for determining that no group personnel behavior occurs if that number is less than the second threshold.
In one embodiment, the group personnel behavior identification apparatus further comprises: and the identification module is used for identifying the human body image in each video frame.
In an embodiment, the determining module is further configured to determine geographic position information of a camera that acquires the video stream, and determine a scene and the first threshold corresponding to the scene according to the geographic position information.
In an embodiment, to determine the first threshold according to the geographic location information, the determining module is configured to determine, according to a stored correspondence between geographic locations and thresholds, the threshold corresponding to the geographic location information as the first threshold.
A third aspect of the present application provides a computer device comprising: the system comprises a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface complete mutual communication through the communication bus;
the memory is used for storing at least one executable instruction, and the executable instruction causes the processor to execute the steps of the group personnel behavior identification method.
A fourth aspect of the present application provides a computer storage medium having at least one executable instruction stored therein, the executable instruction causing a processor to perform the steps of the group human behavior identification method.
Compared with the prior art, the present application has the following beneficial effects: crowd behavior classification is performed on the human body images included in each video frame of a predetermined number of consecutive video frames; it is determined whether the number of human body images in the most populous category in each of these video frames is greater than or equal to a second threshold, and group personnel behavior is determined to occur when it is. In this way, unconventional group behavior can be identified with high sensitivity and high accuracy, without needing to consider background stability.
Drawings
The above features and advantages of the present disclosure can be better understood upon reading the detailed description of embodiments of the disclosure in conjunction with the following drawings. In the drawings, components are not necessarily drawn to scale, and components having similar relative characteristics or features may have the same or similar reference numerals.
Fig. 1 is a schematic flow chart of a group personnel behavior identification method according to an embodiment of the present application;
FIG. 2 is a schematic structural diagram of a group personnel behavior identification system according to another embodiment of the present application;
fig. 3 is a schematic structural diagram of a computer device according to another embodiment of the present application.
Detailed description of the preferred embodiments
The present application is described in detail below with reference to the attached drawings and specific embodiments so that the objects, features, and advantages of the present application can be more clearly understood. It should be understood that the aspects described below in connection with the figures and the specific embodiments are exemplary only and should not be construed as limiting the scope of the application in any way. The singular forms "a", "an", and "the" include plural referents unless the context clearly dictates otherwise. As used herein, the terms "first" and "second" are used to distinguish one element or class of elements from another and are not intended to denote the position or importance of the individual elements.
Fig. 1 is a schematic flow chart of a group personnel behavior identification method according to an embodiment of the present application, where the group personnel behavior identification method may be executed by a neural network server connected to an alarm system server.
To monitor the behavior of group members at certain locations, cameras may be provided at those locations; for example, multiple cameras may be deployed indoors, at a stadium, on a street, in a plaza, on a road, at a concert venue, and so on.
The video stream obtained by the camera can be stored in a large-capacity video memory, and the neural network server is connected with the video memory.
Step 101: the neural network server reads, from the video memory, a video stream of a certain time period acquired by a certain camera, for example according to an input camera identification code and time, and acquires all video frames of the video stream.
Video relies on the persistence of human vision, creating the perception of motion by playing a series of pictures in rapid succession. Transmitting raw video pictures produces an amount of data that is unacceptably large for existing networks and storage. Because video contains a large amount of repeated information, removing that redundancy at the sending end and recovering it at the receiving end greatly reduces the size of video data; therefore, video streams are transmitted as encoded video frames.
Thus, the neural network server acquires all video frames of a video stream for a certain period of time.
The human body images in each video frame are identified by the neural network server for human body identification.
When the video stream is acquired, the geographic position of the camera acquiring the video stream can be determined from the camera identifier of the video stream source and a stored table relating camera identifiers to geographic positions. A scene is then determined from the geographic position, for example a square, stadium, concert venue, street, or mall, and the threshold corresponding to that geographic position is determined as the first threshold according to a stored correspondence between geographic locations and thresholds. For example, within one video frame, the first threshold for a gym may be 20 people, for a street 10 people, and for an indoor scene 5 people.
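The camera-to-threshold resolution described above is a chain of table lookups and can be sketched as follows. All identifiers and table contents here are hypothetical examples; only the gym, street, and indoor threshold values come from the text.

```python
# Hypothetical lookup tables standing in for the stored correspondences:
# camera identifier -> geographic position, geographic position -> scene,
# and scene -> first threshold.
CAMERA_TO_LOCATION = {"cam-001": "loc-gym", "cam-002": "loc-street"}
LOCATION_TO_SCENE = {"loc-gym": "gym", "loc-street": "street"}
SCENE_TO_FIRST_THRESHOLD = {"gym": 20, "street": 10, "indoor": 5}

def first_threshold_for(camera_id):
    # Resolve camera identifier -> geographic position -> scene -> threshold.
    location = CAMERA_TO_LOCATION[camera_id]
    scene = LOCATION_TO_SCENE[location]
    return SCENE_TO_FIRST_THRESHOLD[scene]
```

Keeping the scene mapping separate from the threshold table means a new camera only needs one table entry, and a scene's threshold can be retuned in one place.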
When the number of human body images in a video frame of a certain scene exceeds the first threshold, it can be determined that a crowd may exist.
When the number of human body images in a predetermined number of consecutive video frames is determined to exceed the threshold corresponding to the scene, a crowd is determined to exist, and those video frames are extracted and stored.
Step 102: crowd behavior classification is performed on the human body images included in each video frame of the predetermined number of consecutive video frames.
The neural network server establishes different crowd behavior classifications for different scenes, and different crowd behavior classifications correspond to different training images. For example, in the human behavior training images corresponding to the running classification, one leg is bent to a certain angle (for example, 60-90 degrees) while the other is straight or nearly straight (for example, 150-180 degrees); in the training images corresponding to another crowd behavior classification, one or both hands are raised above the head; and so on. Different human behavior training images are established for the different crowd behavior classifications.
And for each video frame of the continuous preset number of video frames, carrying out crowd behavior classification on the human body image of each video frame.
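A per-person classification rule along the lines described above might look like the following. The angle ranges follow the text; the category names, the hands-above-head label, and the fallback category are assumptions, and a real system would learn such rules from the training images rather than hard-code them.

```python
def classify_behavior(knee_angles, hands_above_head):
    # Toy per-person classifier following the joint-angle rules sketched in
    # the description. `knee_angles` holds the two leg angles in degrees.
    low, high = sorted(knee_angles)
    if 60 <= low <= 90 and 150 <= high <= 180:
        return "running"  # one leg bent 60-90 deg, the other nearly straight
    if hands_above_head:
        return "hands-raised"
    return "other"
```

Applying this to every detected person in a frame yields the per-frame label lists that the subsequent counting step consumes.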
Step 103: after the human body images of each video frame are classified by crowd behavior, the number of human body images included in each category is determined, and the categories are ranked from high to low. It is then determined whether the number of human body images in the most populous category in each video frame of the predetermined number of consecutive video frames is greater than or equal to a second threshold, where the second threshold is preconfigured and may be, for example, 10, 20, or 30.
Step 104: if the number of human body images in the most populous category in each video frame of the predetermined number of consecutive video frames is greater than or equal to the second threshold, it is determined that group personnel behavior occurs.
In another embodiment of the present application, when it is determined that the group personnel behavior occurs, alarm information may be further sent to the alarm system server, where the alarm information is used to indicate that the group personnel behavior occurs.
For example, an alarm trigger may be set for certain group personnel behaviors: if the most populous category in each video frame of the predetermined number of consecutive video frames is a designated behavior category, an alarm mechanism is triggered and alarm information is sent to the alarm system server, the alarm information indicating that the group personnel behavior has occurred.
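The alarm trigger can be sketched as a simple membership check. The message format, the category names, and the `send` callback standing in for the network interface to the alarm system server are all hypothetical.

```python
def maybe_send_alarm(dominant_category, alert_categories, send):
    # Trigger the alarm mechanism only when the dominant crowd-behavior
    # category of the window is configured as alert-worthy. `send` is a
    # hypothetical callback standing in for the transport to the alarm
    # system server.
    if dominant_category in alert_categories:
        send({"event": "group-personnel-behavior", "category": dominant_category})
        return True
    return False
```

Separating the trigger policy (`alert_categories`) from detection lets operators enable or disable alarms per behavior category without retraining anything.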
Step 105: if the number of human body images in the most populous category in each video frame of the predetermined number of consecutive video frames is less than the second threshold, it is determined that group personnel behavior does not occur.
In another embodiment of the present application, when it is determined that no group personnel behavior has occurred, no alarm information is sent to the alarm system server.
That is, if the number of human body images in the most populous category in each video frame of the predetermined number of consecutive video frames is less than the second threshold, it is determined that no group personnel behavior occurs; instead of sending alarm information to the alarm system server, the device waits to perform group behavior identification on the next video stream.
Therefore, the group personnel behavior identification method described in the above embodiment can identify the group unconventional behavior with high sensitivity, does not need to consider the background stability, and has high accuracy.
Corresponding to the group personnel behavior identification method described above, another embodiment of the present application further provides a group personnel behavior identification device, as shown in fig. 2, which is a schematic structural diagram of a group personnel behavior identification system according to another embodiment of the present application.
The group personnel behavior recognition system comprises a group personnel behavior recognition device 21, a video memory 22 and an alarm system server 23 which are connected through a network 20. The network 20 may be a mobile communication network or the internet.
The group personnel behavior identification device 21 comprises an acquisition module 211, a classification module 212, a determination module 213, a processing module 214 and an identification module 215 which are connected with each other through a bus, wherein the acquisition module 211, the classification module 212, the determination module 213, the processing module 214 and the identification module 215 can be realized through a chip, a processor or a circuit.
The obtaining module 211 is configured to obtain all video frames of the video stream in a certain time period and determine a predetermined number of consecutive video frames in each of which the number of included human body images is greater than or equal to a first threshold. The specific function of the obtaining module 211 may refer to step 101 in the foregoing group personnel behavior identification method embodiment, and details are not described here.
The classification module 212 is configured to classify the human body image included in each video frame of the predetermined number of consecutive video frames into a crowd behavior. The specific function of the classification module 212 may refer to step 102 of the aforementioned group person behavior identification method embodiment, which is not described herein again.
The determining module 213 is configured to determine whether the number of human body images in the category containing the most human body images in each of the predetermined number of consecutive video frames is greater than or equal to a second threshold. The specific function of the determining module 213 may refer to step 103 of the foregoing group personnel behavior identification method embodiment, which is not described herein again.
The determining module 213 is further configured to determine, after the obtaining module 211 obtains all video frames of the video stream in a certain time period, the geographic position information of the camera that acquired the video stream, and to determine a scene and the first threshold corresponding to the scene according to the geographic position information. For example, the determining module 213 is configured to determine, according to a stored correspondence between geographic locations and thresholds, the threshold corresponding to the geographic position information as the first threshold.
The processing module 214 is configured to determine that group personnel behavior occurs if the number of human body images in the category containing the most human body images in each video frame of the predetermined number of consecutive video frames is greater than or equal to the second threshold, and to determine that no group personnel behavior occurs if that number is less than the second threshold. The specific functions of the processing module 214 may refer to steps 104 and 105 of the foregoing group personnel behavior identification method embodiment, and are not described herein again.
In another embodiment of the present application, when the processing module 214 determines that the group personnel behavior occurs, the processing module 214 is further configured to send alarm information to an alarm system server, where the alarm information is used to indicate that the group personnel behavior occurs.
The identifying module 215 is configured to identify a human body image in each video frame when the obtaining module 211 obtains all video frames of the video stream for a certain time period.
Therefore, the group personnel behavior identification device described in the above embodiment can identify unconventional group behavior with high sensitivity and high accuracy, without needing to consider background stability.
To solve the foregoing technical problem, an embodiment of the present application further provides a computer device, where the computer device may be a neural network server, and specifically refer to fig. 3, and fig. 3 is a block diagram of a basic structure of the computer device according to the embodiment.
The computer device 3 comprises a memory 31, a processor 32, and a network interface 33 communicatively connected to each other via a system bus. It is noted that only the computer device 3 having the components 31-33 is shown in the figure, but it should be understood that not all of the shown components need be implemented, and that more or fewer components may be implemented instead. As will be understood by those skilled in the art, the computer device 3 is a device capable of automatically performing numerical calculation and/or information processing according to instructions set or stored in advance; its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer device 3 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The computer device 3 can perform man-machine interaction with a user through a keyboard, a mouse, a remote controller, a touch pad, a voice control device or the like.
The memory 31 includes at least one type of readable storage medium, including non-volatile memory or volatile memory, for example, flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc.; the RAM may include static RAM or dynamic RAM. In some embodiments, the memory 31 may be an internal storage unit of the computer device 3, for example, a hard disk or internal memory of the computer device 3. In other embodiments, the memory 31 may also be an external storage device of the computer device 3, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card provided on the computer device 3. Of course, the memory 31 may also comprise both an internal storage unit of the computer device 3 and an external storage device thereof. In this embodiment, the memory 31 is generally used for storing the operating system and various types of application software installed on the computer device 3, such as program code for executing the group personnel behavior identification method. Further, the memory 31 may also be used to temporarily store various types of data that have been output or are to be output.
In the embodiment of the present application, the processor 32 may be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or other data Processing chip. The processor 32 may also be other general purpose processors, Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components, etc. The general purpose processor may be a microprocessor or the processor may be any conventional processor such as a single chip or the like.
The processor 32 is typically used to control the overall operation of the computer device 3. In this embodiment, the memory 31 is used for storing program codes or instructions, the program codes include computer operation instructions, and the processor 32 is used for executing the program codes or instructions stored in the memory 31 or processing data, such as program codes for executing a group personnel behavior identification method.
The bus described herein may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended ISA (EISA) bus, or the like. The bus system may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
Another embodiment of the present application also provides a computer readable medium, which may be a computer readable signal medium or a computer readable storage medium. A processor in the computer reads the computer readable program code stored in the computer readable medium, so that the processor can execute the functional actions specified in each step, or combination of steps, of the group personnel behavior identification method corresponding to the flowchart of Fig. 1, and implement the functional operations specified in each block, or combination of blocks, of the block diagram.
A computer readable medium includes, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing, the memory storing program code or instructions, the program code including computer-executable instructions, and the processor executing the program code or instructions stored by the memory.
The definitions of the memory and the processor may refer to the description of the foregoing embodiments of the computer device, and are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Moreover, those skilled in the art will appreciate that, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the application and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
It should be noted that the above-mentioned embodiments illustrate rather than limit the application, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The application may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The use of the words first, second, third, and so on does not indicate any ordering; these words may be interpreted as names. The steps in the above embodiments should not be construed as limiting the order of execution unless otherwise specified.
The above-described embodiments are provided to enable persons skilled in the art to make or use the present application. Persons skilled in the art may make modifications or variations to the above-described embodiments without departing from the inventive concept of the present application; therefore, the scope of protection of the present application is not limited by the above-described embodiments but should be accorded the widest scope consistent with the innovative features set forth in the claims.
Claims (10)
1. A group personnel behavior identification method is characterized by comprising the following steps:
acquiring all video frames of a video stream within a certain time period, and determining a predetermined number of consecutive video frames in each of which the number of human body images included is greater than or equal to a first threshold;
carrying out crowd behavior classification on the human body images included in each video frame of the predetermined number of consecutive video frames;
determining whether the number of human body images in the category containing the most human body images in each video frame of the predetermined number of consecutive video frames is greater than or equal to a second threshold;
if the number of human body images in the category containing the most human body images in each video frame of the predetermined number of consecutive video frames is greater than or equal to the second threshold, determining that group personnel behavior occurs;
and if the number of human body images in the category containing the most human body images in each video frame of the predetermined number of consecutive video frames is less than the second threshold, determining that no group personnel behavior occurs.
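For illustration only (not part of the claims), the decision logic of claim 1 can be sketched in Python. The frame representation and the `classify` callable below are assumptions standing in for the claimed human body detection and crowd behavior classification, not the application's actual implementation:

```python
from collections import Counter

def detect_group_behavior(frames, classify, first_threshold,
                          second_threshold, window):
    """Illustrative sketch of the claimed decision logic.

    frames:   sequence of video frames, each a list of human body images
              (the representation is an assumption for this sketch).
    classify: maps one human body image to a crowd-behavior category
              (stands in for the claimed crowd behavior classification).
    window:   the predetermined number of consecutive video frames.
    """
    run = []  # current run of frames whose person count meets the first threshold
    for frame in frames:
        if len(frame) >= first_threshold:
            run.append(frame)
        else:
            run = []  # the run of qualifying consecutive frames is broken
        if len(run) < window:
            continue
        # For each frame of the window, count images per behavior category
        # and take the largest category; every frame must reach the second
        # threshold for group personnel behavior to be reported.
        if all(
            max(Counter(classify(img) for img in f).values()) >= second_threshold
            for f in run[-window:]
        ):
            return True
    return False
```

Under this reading, a frame that fails the first threshold resets the run of consecutive frames, and the second threshold is tested per frame against the largest behavior category of that frame.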
2. The method of claim 1, wherein after obtaining all video frames of the video stream, the method further comprises:
identifying the human body images in each video frame through a neural network server for human body identification.
3. The method of claim 1, wherein after obtaining all video frames of the video stream, the method further comprises:
determining geographical position information of a camera that acquires the video stream, and determining a scene and the first threshold corresponding to the scene according to the geographical position information.
4. The method according to claim 3, wherein determining the first threshold according to the geographical position information specifically comprises:
determining, according to a stored correspondence between geographical positions and thresholds, the threshold corresponding to the geographical position information as the first threshold.
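For illustration only, the scene-dependent threshold lookup of claims 3 and 4 amounts to reading a stored correspondence table keyed by geographical position. The position names, scene labels and threshold values below are invented placeholders, not values from the application:

```python
# Stored correspondence between geographical position and (scene, first
# threshold). All entries here are illustrative placeholders.
SCENE_THRESHOLDS = {
    "station_plaza": ("transport hub", 30),
    "office_lobby": ("workplace", 8),
}

# Fallback for camera positions not present in the stored correspondence
# (how unlisted positions are handled is an assumption of this sketch).
DEFAULT_SCENE = ("unknown scene", 15)

def scene_and_first_threshold(position):
    """Return the (scene, first threshold) stored for a camera position."""
    return SCENE_THRESHOLDS.get(position, DEFAULT_SCENE)
```

A crowded transport hub would thus use a higher first threshold than a workplace lobby, so the same person count triggers further analysis only where it is unusual for that scene.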
5. A group personnel behavior recognition device, comprising:
an acquisition module, configured to acquire all video frames of a video stream within a certain time period, and to determine a predetermined number of consecutive video frames in each of which the number of human body images included is greater than or equal to a first threshold;
a classification module, configured to carry out crowd behavior classification on the human body images included in each video frame of the predetermined number of consecutive video frames;
a determining module, configured to determine whether the number of human body images in the category containing the most human body images in each video frame of the predetermined number of consecutive video frames is greater than or equal to a second threshold; and
a processing module, configured to determine that group personnel behavior occurs if the number of human body images in the category containing the most human body images in each video frame of the predetermined number of consecutive video frames is greater than or equal to the second threshold, and to determine that no group personnel behavior occurs if that number is less than the second threshold.
6. The apparatus of claim 5, wherein the group personnel behavior identification means further comprises:
an identification module, configured to identify the human body images in each video frame.
7. The apparatus according to claim 5, wherein the determining module is further configured to determine geographical position information of a camera that acquires the video stream, and to determine a scene and the first threshold corresponding to the scene according to the geographical position information.
8. The apparatus according to claim 7, wherein, to determine the first threshold according to the geographical position information, the determining module is configured to determine, according to a stored correspondence between geographical positions and thresholds, the threshold corresponding to the geographical position information as the first threshold.
9. A computer device, comprising: a processor, a memory, a communication interface and a communication bus, wherein the processor, the memory and the communication interface communicate with one another through the communication bus;
the memory is configured to store at least one executable instruction that causes the processor to perform the steps of the group personnel behavior identification method according to any one of claims 1-4.
10. A computer storage medium having stored therein at least one executable instruction for causing a processor to perform the steps of the group personnel behavior identification method according to any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910935090.6A CN110659624A (en) | 2019-09-29 | 2019-09-29 | Group personnel behavior identification method and device and computer storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110659624A true CN110659624A (en) | 2020-01-07 |
Family
ID=69038428
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910935090.6A Pending CN110659624A (en) | 2019-09-29 | 2019-09-29 | Group personnel behavior identification method and device and computer storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110659624A (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101325690A (en) * | 2007-06-12 | 2008-12-17 | 上海正电科技发展有限公司 | Method and system for detecting human flow analysis and crowd accumulation process of monitoring video flow |
WO2009090584A2 (en) * | 2008-01-18 | 2009-07-23 | Koninklijke Philips Electronics N.V. | Method and system for activity recognition and its application in fall detection |
CN102236796A (en) * | 2011-07-13 | 2011-11-09 | Tcl集团股份有限公司 | Method and system for sorting defective contents of digital video |
CN104966052A (en) * | 2015-06-09 | 2015-10-07 | 南京邮电大学 | Attributive characteristic representation-based group behavior identification method |
CN105447458A (en) * | 2015-11-17 | 2016-03-30 | 深圳市商汤科技有限公司 | Large scale crowd video analysis system and method thereof |
CN105561492A (en) * | 2014-11-07 | 2016-05-11 | 开利公司 | Dynamic acquisition terminal for behavior statistical information of humans as well as evacuation system and method |
US9361705B2 (en) * | 2013-03-15 | 2016-06-07 | Disney Enterprises, Inc. | Methods and systems for measuring group behavior |
KR101695127B1 (en) * | 2016-03-10 | 2017-01-10 | (주)디지탈라인 | Group action analysis method by image |
CN106331657A (en) * | 2016-11-02 | 2017-01-11 | 北京弘恒科技有限公司 | Video analysis and detection method and system for crowd gathering and moving |
CN108229280A (en) * | 2017-04-20 | 2018-06-29 | 北京市商汤科技开发有限公司 | Time domain motion detection method and system, electronic equipment, computer storage media |
CN109559008A (en) * | 2018-09-19 | 2019-04-02 | 中建科技有限公司深圳分公司 | Construction monitoring method, apparatus and system |
CN109697438A (en) * | 2018-03-08 | 2019-04-30 | 中国科学院大学 | A kind of specific group's Assembling Behavior early detection and aggregation ground prediction technique and system |
Non-Patent Citations (2)
Title |
---|
XIAOQING ZHANG 等: "Group Action Recognition Using Space-Time Interest Points", 《RESEARCHGATE》 * |
叶程: "群体异常行为识别方法", 《信息与电脑》 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111178323A (en) * | 2020-01-10 | 2020-05-19 | 北京百度网讯科技有限公司 | Video-based group behavior identification method, device, equipment and storage medium |
CN111178323B (en) * | 2020-01-10 | 2023-08-29 | 北京百度网讯科技有限公司 | Group behavior recognition method, device, equipment and storage medium based on video |
CN116824456A (en) * | 2023-07-19 | 2023-09-29 | 北京升哲科技有限公司 | Video stream-based abnormal behavior detection method, device, equipment and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Feng et al. | Spatio-temporal fall event detection in complex scenes using attention guided LSTM | |
US10009579B2 (en) | Method and system for counting people using depth sensor | |
Yang et al. | Effective 3d action recognition using eigenjoints | |
CN111383421A (en) | Privacy-preserving fall detection method and system | |
CN111770317B (en) | Video monitoring method, device, equipment and medium for intelligent community | |
CN108875708A (en) | Video-based behavior analysis method, device, equipment, system and storage medium | |
TW202121233A (en) | Image processing method, processor, electronic device, and storage medium | |
Yi et al. | Finding objects for assisting blind people | |
Kwon et al. | Toward an online continual learning architecture for intrusion detection of video surveillance | |
CN103679189A (en) | Method and device for recognizing scene | |
WO2022156317A1 (en) | Video frame processing method and apparatus, electronic device, and storage medium | |
CN113139415A (en) | Video key frame extraction method, computer device and storage medium | |
CN109902550A (en) | The recognition methods of pedestrian's attribute and device | |
Iazzi et al. | Fall detection based on posture analysis and support vector machine | |
CN114360182B (en) | Intelligent alarm method, device, equipment and storage medium | |
CN110717432B (en) | Article detection method, apparatus and computer storage medium | |
CN110659624A (en) | Group personnel behavior identification method and device and computer storage medium | |
Wang et al. | Action recognition using edge trajectories and motion acceleration descriptor | |
CN111753601A (en) | An image processing method, device and storage medium | |
CN113837066A (en) | Behavior recognition method and device, electronic equipment and computer storage medium | |
Fabbri et al. | Inter-homines: Distance-based risk estimation for human safety | |
Kushwaha et al. | Multiview human activity recognition system based on spatiotemporal template for video surveillance system | |
WO2018210039A1 (en) | Data processing method, data processing device, and storage medium | |
Wang et al. | Detecting action-relevant regions for action recognition using a three-stage saliency detection technique | |
Pramerdorfer et al. | Effective deep-learning-based depth data analysis on low-power hardware for supporting elderly care |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
RJ01 | Rejection of invention patent application after publication | | |
Application publication date: 2020-01-07