CN111832526A - Behavior detection method and device - Google Patents
Behavior detection method and device
- Publication number
- CN111832526A CN111832526A CN202010717764.8A CN202010717764A CN111832526A CN 111832526 A CN111832526 A CN 111832526A CN 202010717764 A CN202010717764 A CN 202010717764A CN 111832526 A CN111832526 A CN 111832526A
- Authority
- CN
- China
- Prior art keywords
- key points
- monitored person
- feature
- image
- belonging
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Evolutionary Computation (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Biomedical Technology (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Biophysics (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Life Sciences & Earth Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
Abstract
The application provides a behavior detection method and a behavior detection device. An image of a monitoring area captured by a camera device is acquired; key point feature extraction is performed on the image to obtain first feature data and second feature data corresponding to the image, the first feature data representing key points in the image and the second feature data representing the body parts to which the key points belong; a connection graph of key points belonging to the same monitored person is obtained from the first feature data and the second feature data; and smoking behavior of the monitored person to whom the connection graph belongs is detected from the connection graph. By analyzing the feature data corresponding to images of the monitoring area, smoking behavior of monitored personnel is detected automatically through image analysis, which reduces the probability of missed detections and enables all-weather detection.
Description
Technical Field
The present application belongs to the technical field of data processing, and in particular, to a behavior detection method and apparatus.
Background
At present, smoking during production work is common. When smoking, a person usually operates the machine with one hand while holding the cigarette in the other, so the body leans to one side, the center of gravity shifts, force is applied unevenly, and irregular or deformed movements easily result. Smoking also tends to cause a sticky mouth, an itchy throat, or coughing, and in severe cases the smoker lowers the head and bends over. Smoking during production therefore inevitably affects the accuracy of personnel operation and endangers production safety.
Therefore, the smoking behavior of personnel needs to be monitored during production. At present this is done by security personnel, either through direct visual observation or through manual patrol inspection via cameras. Both are forms of manual monitoring of smoking behavior, so omissions occur easily and all-weather detection cannot be achieved.
Disclosure of Invention
In view of this, an object of the present application is to provide a behavior detection method and apparatus that detect the smoking behavior of monitored personnel in a monitoring area by analyzing the feature data corresponding to images of the area, thereby monitoring smoking behavior automatically, reducing the probability of omission, and enabling all-weather detection.
In one aspect, the present application provides a behavior detection method, including:
acquiring an image of a monitoring area acquired by a camera device;
performing key point feature extraction on the image to obtain first feature data and second feature data corresponding to the image, wherein the first feature data are used for representing key points in the image, and the second feature data are used for representing body parts to which the key points belong;
obtaining a connection diagram of key points belonging to the same monitored person according to the first characteristic data and the second characteristic data;
and detecting the smoking behavior of the monitored person to which the connection diagram belongs according to the connection diagram.
Optionally, the obtaining a connection diagram of key points belonging to the same monitored person according to the first feature data and the second feature data includes:
determining key points of body parts belonging to the same monitored person according to the first characteristic data and the second characteristic data;
and connecting the key points belonging to the body part of the same monitored person according to the body part to obtain a connection diagram of the key points belonging to the same monitored person.
Optionally, the determining key points of body parts belonging to the same monitored person according to the first feature data and the second feature data includes:
determining all key points belonging to the same body part according to the first characteristic data and the second characteristic data;
calculating the vector average value of all key points belonging to the same monitored person;
calculating the correlation among the key points according to the vector average value;
and determining key points of the body part belonging to the same monitored person from all the key points belonging to the same body part according to the correlation among the key points.
Optionally, the extracting the key point features of the image to obtain first feature data and second feature data corresponding to the image includes:
acquiring a characteristic map of the image; obtaining first feature data and second feature data corresponding to the image through a preset neural network model and the feature map, wherein the preset neural network model comprises a plurality of feature extraction layers, each feature extraction layer comprises a first feature extraction branch and a second feature extraction branch, the first layer of the plurality of feature extraction layers takes the feature map as input, other layers except the first layer of the plurality of feature extraction layers take the feature map and feature data output by the previous layer as input, the output of the first feature extraction branch of the last layer of the plurality of feature extraction layers is taken as the first feature data, and the output of the second feature extraction branch of the last layer is taken as the second feature data.
Optionally, the detecting, according to the connection diagram, the smoking behavior of the monitored person to which the connection diagram belongs includes:
acquiring the deflection angle of the face of the monitored person according to the key points belonging to the face in the connection graph;
calculating the distance between two specific key points of the monitored person according to the deflection angle, wherein the two specific key points are the left ear of the face and the left hand head, or the two specific key points are the right ear of the face and the right hand head;
and detecting the smoking behavior of the monitored person to which the connection diagram belongs according to the distance between the two specific key points.
In another aspect, the present application provides a behavior detection apparatus, the apparatus comprising:
the acquisition unit is used for acquiring the image of the monitoring area acquired by the camera device;
the extraction unit is used for extracting key point features of the image to obtain first feature data and second feature data corresponding to the image, wherein the first feature data are used for representing key points in the image, and the second feature data are used for representing body parts to which the key points belong;
the obtaining unit is used for obtaining a connection diagram of key points belonging to the same monitored person according to the first characteristic data and the second characteristic data;
and the detection unit is used for detecting the smoking behavior of the monitored person to which the connection diagram belongs according to the connection diagram.
Optionally, the obtaining unit includes:
the determining subunit is used for determining key points of body parts belonging to the same monitored person according to the first characteristic data and the second characteristic data;
and the connecting subunit is used for connecting the key points of the body parts belonging to the same monitored person according to the body parts to obtain a connection diagram of the key points belonging to the same monitored person.
Optionally, the determining subunit is configured to determine, according to the first feature data and the second feature data, all key points belonging to the same body part; calculating the vector average value of all key points belonging to the same monitored person; calculating the correlation among the key points according to the vector average value; and determining key points of the body part belonging to the same monitored person from all the key points belonging to the same body part according to the correlation among the key points.
Optionally, the extracting unit is configured to obtain a feature map of the image; obtaining first feature data and second feature data corresponding to the image through a preset neural network model and the feature map, wherein the preset neural network model comprises a plurality of feature extraction layers, each feature extraction layer comprises a first feature extraction branch and a second feature extraction branch, the first layer of the plurality of feature extraction layers takes the feature map as input, other layers except the first layer of the plurality of feature extraction layers take the feature map and feature data output by the previous layer as input, the output of the first feature extraction branch of the last layer of the plurality of feature extraction layers is taken as the first feature data, and the output of the second feature extraction branch of the last layer is taken as the second feature data.
Optionally, the detection unit is configured to obtain a deflection angle of the face of the monitored person according to key points belonging to the face in the connection graph; calculate the distance between two specific key points of the monitored person according to the deflection angle, wherein the two specific key points are the left ear of the face and the left hand head, or the right ear of the face and the right hand head; and detect the smoking behavior of the monitored person to which the connection diagram belongs according to the distance between the two specific key points.
The behavior detection method and device acquire an image of the monitoring area captured by the camera device; perform key point feature extraction on the image to obtain first feature data and second feature data corresponding to the image, the first feature data representing key points in the image and the second feature data representing the body parts to which the key points belong; obtain a connection graph of key points belonging to the same monitored person from the first and second feature data; and detect, from the connection graph, the smoking behavior of the monitored person to whom it belongs. Smoking behavior of monitored personnel is thus detected by analyzing feature data corresponding to images of the monitoring area, achieving automatic monitoring through image analysis, reducing the probability of omission, and enabling all-weather detection.
Drawings
To illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a flowchart of a behavior detection method provided in an embodiment of the present application;
FIG. 2 is a schematic diagram of a neural network model according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a connection diagram provided by an embodiment of the present application;
fig. 4 is a schematic structural diagram of a behavior detection apparatus according to an embodiment of the present application.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort fall within the protection scope of the present application.
Referring to fig. 1, a flowchart of a behavior detection method provided in an embodiment of the present application is shown, which may include the following steps:
101: acquire an image of the monitoring area captured by the camera device. The monitoring area is the region where the monitored personnel are located; by acquiring images of this area, the smoking behavior of the personnel within it is monitored. The camera device may be a camera whose shooting range covers the monitoring area; the camera may capture still images of the area, or capture video of the area from which images are then extracted. This embodiment places no limitation on this.
102: and performing key point feature extraction on the image to obtain first feature data and second feature data corresponding to the image, wherein the first feature data is used for representing key points in the image, and the second feature data is used for representing body parts to which the key points belong.
This embodiment monitors smoking behavior through image analysis. When a monitored person smokes, the person's face deflects, and the distance between a certain point on the face and the hand head (i.e. the foremost end of the hand, such as the tip of the middle finger) changes. Therefore, during key point feature extraction, all candidate key points that may belong to body parts are obtained from the image, and the first feature data and the second feature data are derived from these candidate key points.
For example, one mode is that, by using an image recognition technology, the monitored person in the image is determined, feature extraction is performed on the monitored person to obtain key points on the body part of the monitored person, and the correspondence between the key points and the body part is recorded, so that the key points on the body parts of all the monitored persons in the image are used as the first feature data, and the correspondence between the key points and the body part is used as the second feature data.
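As a minimal illustration of this first mode, the sketch below (all names and values are hypothetical, not from the patent) represents the first feature data as detected key points and the second feature data as the key point to body part correspondence:

```python
# Hypothetical sketch of the two kinds of feature data described above.
# first_feature_data: key points detected in the image (id -> (x, y) coordinates).
# second_feature_data: body part to which each key point belongs.
first_feature_data = {
    0: (120, 80),   # a key point on a face
    1: (115, 140),  # a key point on a left shoulder
    2: (130, 170),  # a key point on a left elbow
}
second_feature_data = {0: "face", 1: "left_shoulder", 2: "left_elbow"}

def keypoints_of_part(part):
    """Return the coordinates of all key points recorded for one body part."""
    return [first_feature_data[i]
            for i, p in second_feature_data.items() if p == part]
```

For example, `keypoints_of_part("face")` collects every key point whose recorded body part is the face.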
The other mode is that a feature map of the image is obtained, and first feature data and second feature data corresponding to the image are obtained through a preset neural network model and the feature map, wherein the preset neural network model comprises a plurality of feature extraction layers, each feature extraction layer comprises a first feature extraction branch and a second feature extraction branch, the first layer of the plurality of feature extraction layers takes the feature map as input, the other layers except the first layer of the plurality of feature extraction layers take the feature map and feature data output by the previous layer as input, the output of the first feature extraction branch of the last layer of the plurality of feature extraction layers is taken as the first feature data, and the output of the second feature extraction branch of the last layer is taken as the second feature data.
As shown in fig. 2, F denotes the feature map; Stage 1 denotes the first of the feature extraction layers; Stage t (2 ≤ t ≤ T) denotes the layers other than the first, where T is the total number of feature extraction layers of the preset neural network model; Branch 1 denotes the first feature extraction branch and Branch 2 the second. Each branch may process its input by at least one of convolution, pooling, and full connection; C in fig. 2 denotes convolution. The preset neural network model shown in fig. 2 operates as follows:

The first of the feature extraction layers takes the feature map as input. The first feature extraction branch of the first layer applies convolution, pooling, and full connection to the feature map to produce its output S_1; the second feature extraction branch of the first layer processes the feature map in the same way to produce its output L_1.

The outputs S_1 and L_1 of the first layer, together with the feature map, serve as the input of the second layer; the first and second feature extraction branches of the second layer apply convolution, pooling, and full connection to this input to produce S_2 and L_2. Proceeding in the same way, the outputs S_{T-1} and L_{T-1} of layer T−1, together with the feature map, serve as the input of layer T (the last layer), whose two branches produce S_T and L_T. The output S_T of the first feature extraction branch of layer T is taken as the first feature data, and the output L_T of the second feature extraction branch is taken as the second feature data.

From this working principle it can be seen that for any two adjacent layers t and t+1, the outputs S_t and L_t of layer t together with the feature map form the input of layer t+1. S_T predicts the key points in the image; L_T predicts the body part to which each key point belongs, and can even predict the trend from one key point to the next within a body part. For example, from a left-shoulder key point the left elbow can be predicted, and the left shoulder and left elbow are two points of the same body part, the left arm. Thus both the body part a key point belongs to and which point within that part it is are predicted.
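The layer-to-layer data flow just described can be sketched in plain Python. The branch functions below are stand-in placeholders, not real convolution/pooling/full-connection operations; the point is only that stage 1 consumes the feature map F alone, while every later stage consumes F together with the previous stage's outputs S and L:

```python
def run_stages(F, stages):
    """Chain T feature-extraction stages.

    Each stage is a pair (branch1, branch2) of callables.
    Stage 1 sees only the feature map F; stage t > 1 sees (F, S_prev, L_prev).
    Returns (S_T, L_T): the first and second feature data.
    """
    branch1, branch2 = stages[0]
    S, L = branch1(F), branch2(F)          # first layer: feature map only
    for branch1, branch2 in stages[1:]:    # later layers: F plus previous outputs
        S, L = branch1((F, S, L)), branch2((F, S, L))
    return S, L
```

A toy run with arithmetic placeholder branches shows the chaining: `run_stages(10, [s1, s2])` where `s1 = (lambda F: F + 1, lambda F: F + 2)` and `s2 = (lambda t: t[0] + t[1], lambda t: t[0] + t[2])`.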
The preset neural network model may comprise 10 feature extraction layers; the first and second feature data obtained through 10 layers are sufficient for smoking-behavior detection. Compared with using more feature extraction layers, a model with fewer layers can extract the feature data while still guaranteeing detection accuracy, which reduces running time and increases detection speed. The network architecture of the preset neural network model may be, but is not limited to, a convolutional neural network, e.g. a VGG (Visual Geometry Group) network may be used to construct it.
In this embodiment, the preset neural network model is trained as follows: the feature map of an image and the corresponding real image with manually annotated key points are used as the input, e.g. the feature map of a color image of size w × h (width × height, not limited) together with a real image annotated with 19 key points, and the model parameters of the preset neural network model with its multiple feature extraction layers are adjusted.
During parameter adjustment, the L2 norm between the key points predicted by the preset neural network model and the manually annotated key points, computed by the loss function (Loss), serves as the criterion for ending the adjustment: when the L2 norm is minimized, adjustment ends. Other stopping conditions may of course also be used; they are not enumerated one by one in this embodiment.
The L2 norm of each feature extraction layer t is calculated by the following formulas:

f_S^t = Σ_j Σ_p W(p) · ||S_j^t(p) − S_j*(p)||_2^2

f_L^t = Σ_c Σ_p W(p) · ||L_c^t(p) − L_c*(p)||_2^2

W(p) represents the weight of pixel point p in the image (the weight is trained). j indexes the j-th key point. c is the sequence number of a key-point connection vector: if the trend runs from the j-th key point to the (j+1)-th key point, the two can be connected, denoted j→j+1 and regarded as a vector from j to j+1; c identifies the corresponding vector. S_j^t(p) represents the predicted key-point information (e.g. the coordinates of key points), S_j*(p) the manually annotated key-point information, L_c^t(p) the predicted trend information, and L_c*(p) the manually annotated trend information.
The L2 norms of all feature extraction layers are summed to obtain the L2 norm of the preset neural network model, e.g. f = Σ_{t=1}^{T} (f_S^t + f_L^t). In the training process, x_{j,k} denotes the annotated coordinate of the j-th key point of the k-th person; if the coordinate of a pixel point extracted by the preset neural network model is close to x_{j,k} and reaches the peak of the normal curve, i.e. the maximum of the curve, the extracted pixel point is taken as a key point predicted by the model. During training, if a predicted key point or a manually annotated key point has no match (i.e., no matching key point is found), that key point can be ignored in the calculation of the L2 norm.
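As a simplified illustration of this loss (operating on plain Python lists rather than confidence maps, with missing annotations marked `None`), the per-stage weighted L2 term and its sum over stages can be sketched as:

```python
def stage_l2(pred, gt, weight):
    """Weighted squared-L2 distance between predicted and annotated values
    at each position, skipping positions whose annotation or prediction
    is missing (None), as described for unmatched key points."""
    return sum(w * (p - g) ** 2
               for p, g, w in zip(pred, gt, weight)
               if p is not None and g is not None)

def total_loss(stage_preds, gt, weight):
    """Sum the per-stage L2 norms over all T feature-extraction layers."""
    return sum(stage_l2(pred, gt, weight) for pred in stage_preds)
```

The real model would apply this per pixel to the S and L maps of every stage; here a stage's prediction is just a list of scalars.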
103: obtain a connection graph of key points belonging to the same monitored person from the first feature data and the second feature data. Because there may be several monitored persons in the monitoring area, the first and second feature data correspond to several persons; the key points belonging to one monitored person must therefore be determined from the first and second feature data, and that person's key points are then used to detect his or her smoking behavior. Since smoking detection operates on the posture indicated by the connection graph of a monitored person's key points, that connection graph must first be constructed. The process is as follows:
determining key points of body parts belonging to the same monitored person according to the first characteristic data and the second characteristic data; and connecting the key points belonging to the body part of the same monitored person according to the body part to obtain a connection diagram of the key points belonging to the same monitored person.
That is, key points of body parts belonging to the same monitored person, such as key points of the face and limbs, are first selected from the key points; these key points are then connected according to the body parts they belong to, yielding the connection graph of that monitored person's key points, as shown in fig. 3.
Wherein determining key points of body parts belonging to the same monitored person comprises: determining all key points belonging to the same body part according to the first characteristic data and the second characteristic data; calculating the vector average value of all key points belonging to the same monitored person; calculating the correlation between the key points according to the vector average value; and determining key points belonging to the body part of the same monitored person from all the key points belonging to the same body part according to the correlation among the key points.
The second feature data represent the body part to which each key point belongs, but an image may contain several monitored persons, and which person a key point belongs to is unknown. In this embodiment, therefore, all key points belonging to the same body part are first selected from the first feature data according to the body parts represented by the second feature data; for example, all left-elbow key points and all left-shoulder key points belonging to the left upper arm are selected from the first feature data.
Two key points are then selected from all key points belonging to the same body part, and the vector average of the selected key points belonging to the same monitored person, that is, the vector average over one body part of one monitored person, is calculated. For example, a unit vector is first calculated by the following formula:
v = (x_{j2,k} − x_{j1,k}) / ||x_{j2,k} − x_{j1,k}||_2
Here k denotes the k-th monitored person, j1 and j2 denote two connectable key points (for example, the elbow and the wrist are connected by the arm), and c denotes the c-th limb, such as the arm. The formula above computes the unit vector v pointing from x_{j1,k}, the coordinate of key point j1 of the k-th monitored person, to x_{j2,k}, the coordinate of key point j2. For a pixel point p to fall on the limb, two conditions must be satisfied:

0 ≤ v · (p − x_{j1,k}) ≤ l_{c,k} and |v⊥ · (p − x_{j1,k})| ≤ σ_l

where l_{c,k} denotes the limb length of the c-th limb of person k, σ_l denotes the limb width of the c-th limb, and v⊥ denotes a vector perpendicular to v.
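The two membership conditions above can be checked directly. The sketch below (pure Python, 2-D points as tuples; function names are illustrative) computes the unit vector v and tests whether a pixel p lies within the rectangle of length l_{c,k} and half-width σ_l around the limb segment:

```python
import math

def unit_vector(a, b):
    """Unit vector pointing from key point a to key point b."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    n = math.hypot(dx, dy)
    return (dx / n, dy / n)

def on_limb(p, x_j1, x_j2, sigma_l):
    """True if pixel p satisfies 0 <= v . (p - x_j1) <= l_ck
    and |v_perp . (p - x_j1)| <= sigma_l."""
    v = unit_vector(x_j1, x_j2)
    l_ck = math.hypot(x_j2[0] - x_j1[0], x_j2[1] - x_j1[1])  # limb length
    d = (p[0] - x_j1[0], p[1] - x_j1[1])
    along = v[0] * d[0] + v[1] * d[1]      # v . (p - x_j1)
    across = -v[1] * d[0] + v[0] * d[1]    # v_perp . (p - x_j1), v_perp = (-vy, vx)
    return 0 <= along <= l_ck and abs(across) <= sigma_l
```

For a horizontal limb from (0, 0) to (10, 0) with σ_l = 2, the point (5, 0) lies on the limb while (5, 3) lies too far to the side.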
The vector average value at pixel point p for the c-th limb in the image is calculated by the following formula:

L_c(p) = (1 / n_c(p)) · Σ_{k=1}^{K} L_{c,k}(p)

where n_c(p) is the number of non-zero unit vectors contributed at pixel point p by the c-th limb over all monitored personnel, L_{c,k}(p) is the unit vector contributed by the k-th person (zero if p does not fall on that person's limb), and K is the total number of monitored personnel in the monitoring area. This is only one example of computing the vector average; other manners may also be adopted and are not enumerated further in this embodiment.
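The averaging over people reduces to dividing the summed vectors at p by the count n_c(p) of non-zero contributions; a minimal sketch:

```python
def average_paf(vectors):
    """Average the non-zero unit vectors contributed at one pixel by the
    c-th limb of all K monitored persons; n_c(p) is the number of
    non-zero contributions. Returns (0.0, 0.0) if nobody contributes."""
    nonzero = [v for v in vectors if v != (0.0, 0.0)]
    if not nonzero:
        return (0.0, 0.0)
    n = len(nonzero)
    return (sum(v[0] for v in nonzero) / n,
            sum(v[1] for v in nonzero) / n)
```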
The correlation between two key points is obtained by integrating the vector average value along the segment between them; the correlation indicates the possibility that the two key points belong to the body part of the same monitored person, and is calculated according to the following formula:

E = ∫_0^1 L_c(p(u)) · (x_{j2} − x_{j1}) / ||x_{j2} − x_{j1}||_2 du

where p(u) = (1 − u) · x_{j1} + u · x_{j2} interpolates between the positions of the two key points.
if all the left elbow key points and all the left shoulder key points belonging to the left upper arm are selected, vector average values of all the left shoulder key points and the left elbow key points are calculated by taking one left elbow key point as a reference, the correlation between one left elbow key point and all the left shoulder key points is obtained through integration of each vector average value, one left shoulder key point is selected according to the correlation, and if the left shoulder key point with the largest correlation value is selected, the selected left shoulder key point and the selected left elbow key point belong to the left upper arm of the same monitored person.
By this method, all key points belonging to the same monitored person can be selected from the first feature data, and the body part of each key point is known; connecting the key points according to their body parts therefore yields the connection graph of that monitored person's key points.
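The pairing step described above, choosing for each candidate the counterpart with the largest correlation, can be sketched as follows. The correlation function here is a stand-in that approximates the line integral of the averaged vector field by sampling points along the segment (names and the sampling scheme are illustrative):

```python
import math

def correlation(paf, x_j1, x_j2, samples=10):
    """Approximate the line integral of the vector field `paf`
    (a callable p -> (vx, vy)) along the segment x_j1 -> x_j2,
    projected on the segment direction."""
    dx, dy = x_j2[0] - x_j1[0], x_j2[1] - x_j1[1]
    n = math.hypot(dx, dy)
    if n == 0:
        return 0.0
    ux, uy = dx / n, dy / n            # unit direction of the candidate limb
    total = 0.0
    for i in range(samples):
        u = (i + 0.5) / samples        # midpoint sampling of p(u)
        p = (x_j1[0] + u * dx, x_j1[1] + u * dy)
        vx, vy = paf(p)
        total += vx * ux + vy * uy     # alignment of field with the limb
    return total / samples

def best_match(elbow, shoulders, paf):
    """Pick the shoulder key point with the largest correlation to this elbow."""
    return max(shoulders, key=lambda s: correlation(paf, s, elbow))
```

With a field pointing uniformly in the +x direction, an elbow at (10, 0) matches the shoulder at (0, 0) rather than the one at (0, 10), since the segment from (0, 0) aligns perfectly with the field.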
104: and detecting the smoking behavior of the monitored person to which the connection diagram belongs according to the connection diagram.
The connection graph represents the posture of the monitored person to whom it belongs. When a monitored person smokes, the posture changes relative to the non-smoking posture: the face deflects, and the distance between a certain point on the face and the hand head (i.e. the foremost end of the hand, such as the tip of the middle finger) changes. Based on whether the face deflects and whether that distance changes, it is determined whether the posture indicated by the connection graph is a smoking posture; if so, the monitored person is exhibiting smoking behavior. When smoking behavior is confirmed, this embodiment may also output the image containing the monitored person, for the convenience of monitoring personnel.
In this embodiment, one possible way to detect the smoking behavior of the monitored person to which the connection diagram belongs is as follows:
acquiring the deflection angle of the face of the monitored person according to the key points belonging to the face in the connection diagram; calculating the distance between two specific key points of the monitored person according to the deflection angle, where the two specific key points are the left ear and the left hand head, or the right ear and the right hand head; and detecting the smoking behavior of the monitored person according to the distance between the two specific key points: if the distance is smaller than a preset distance, it is determined that the monitored person has smoking behavior.
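The final threshold test reduces to a single comparison. A minimal sketch, with an assumed (uncalibrated) preset distance:

```python
# Hedged sketch of the distance-threshold decision described above.
# The preset distance and the key-point coordinates are illustrative
# assumptions; a deployed system would calibrate them per camera.
import math

PRESET_DISTANCE = 40.0  # pixels, assumed

def detect_smoking(ear, hand_head, preset=PRESET_DISTANCE):
    """True when the ear-to-hand-head distance falls below the threshold."""
    return math.dist(ear, hand_head) < preset

print(detect_smoking((100.0, 80.0), (110.0, 95.0)))   # hand near ear -> True
print(detect_smoking((100.0, 80.0), (300.0, 240.0)))  # hand far away -> False
```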
In this embodiment, a calibrated-ellipse approach may be adopted (but is not required) to obtain the deflection angle of the face of the monitored person. The calibrated ellipse is used to determine whether the face pitches up or down. In the connection diagram shown in Fig. 3, the side view of the face is treated as an ellipse, the y axis is the bisector of the ellipse, and the x axis is the perpendicular bisector of the line connecting the eye and the mouth. When there is no up-down pitch, a1 = a2. After pitch (depth rotation), the x axis is no longer the perpendicular bisector of the eye-mouth line, and by the property of an isosceles triangle the first depth-rotation angle is a0 = 1/2 (a1 - a2).
When the face deflects left and right (side depth rotation), the included-angle difference between the under-nose point and the left and right outer-eye points is B_eyeout, the included-angle difference between the under-nose point and the left and right inner-eye points is B_eyein, and the included-angle difference between the under-nose point and the left and right mouth-corner points is B_mouth. To reduce error, the second depth-rotation angle of the face is taken as the average of these three angles, i.e. B0 = 1/3 (B_eyeout + B_mouth + B_eyein).
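The two pose angles above are simple closed-form expressions. A sketch with illustrative (assumed) angle inputs:

```python
# Hedged sketch of the two face-pose angles defined above: the pitch
# angle a0 from the ellipse half-angles a1, a2, and the yaw angle b0 as
# the mean of three landmark angle differences. Inputs are assumptions.
def pitch_angle(a1, a2):
    """a0 = 1/2 (a1 - a2); zero when the face neither pitches up nor down."""
    return 0.5 * (a1 - a2)

def yaw_angle(b_eyeout, b_mouth, b_eyein):
    """B0 = 1/3 (B_eyeout + B_mouth + B_eyein), averaged to reduce error."""
    return (b_eyeout + b_mouth + b_eyein) / 3.0

print(pitch_angle(30.0, 30.0))      # no pitch -> 0.0
print(yaw_angle(12.0, 9.0, 15.0))   # -> 12.0
```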
After a0 and B0 are obtained, the face pose is solved by a quasi-Newton method to obtain the deflection angle of the face, denoted a_face. From the deflection angle, the distance between the two specific key points of the monitored person is then calculated by the following formula:
(x1, y1) and (x2, y2) are the coordinates of the two specific key points used to calculate the distance, for example the coordinates of the right ear and the right hand head, as described above.
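The patent's distance formula itself is not reproduced in this text (it appears only as an image in the original), so the sketch below makes a stated assumption: it uses the plain Euclidean distance between (x1, y1) and (x2, y2), omitting whatever compensation for the deflection angle a_face the actual formula applies.

```python
# Assumption: plain Euclidean distance between the two key points.
# The patent's actual formula (not reproduced here) also uses the face
# deflection angle a_face, which this sketch omits.
import math

def keypoint_distance(p1, p2):
    """Euclidean distance between (x1, y1) and (x2, y2)."""
    x1, y1 = p1
    x2, y2 = p2
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

print(keypoint_distance((0.0, 0.0), (3.0, 4.0)))  # -> 5.0
```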
The behavior detection method acquires an image of a monitoring area captured by a camera device; performs key-point feature extraction on the image to obtain first feature data and second feature data corresponding to the image, where the first feature data represents key points in the image and the second feature data represents the body parts to which the key points belong; obtains a connection diagram of key points belonging to the same monitored person from the first and second feature data; and detects, according to the connection diagram, the smoking behavior of the monitored person to whom it belongs. Because smoking behavior in the monitoring area is detected by analyzing feature data derived from the image, smoking is monitored automatically through image analysis, the probability of missed detections is reduced, and all-weather detection is possible.
Corresponding to the above method embodiment, an embodiment of the present application further provides a behavior detection apparatus, an optional structure of which is shown in fig. 4, and may include: an acquisition unit 10, an extraction unit 20, an acquisition unit 30 and a detection unit 40.
The acquiring unit 10 is configured to acquire an image of the monitoring area captured by the camera device. The monitoring area is the area where monitored persons are located; smoking behavior of monitored persons in the monitoring area is monitored by acquiring images of that area. The camera device may be a camera whose shooting range covers the monitoring area. The camera may capture still images of the monitoring area directly, or capture video of the monitoring area from which images are then extracted; this embodiment is not limited in this respect.
The extraction unit 20 is configured to perform key point feature extraction on the image to obtain first feature data and second feature data corresponding to the image, where the first feature data is used to represent key points in the image, and the second feature data is used to represent body parts to which the key points belong.
This embodiment monitors smoking behavior through image analysis. When a monitored person smokes, the face deflects and the distance between a certain point on the face and the hand head (that is, the foremost end of the hand, such as the tip of the middle finger) changes. Therefore, in the key-point feature extraction process, all key points that may belong to body parts are obtained from the image, and the first feature data and second feature data are obtained from these key points.
One way for the extraction unit 20 to obtain the first feature data and the second feature data is as follows: the extraction unit 20 acquires a feature map of the image, and obtains the first and second feature data from the feature map via a preset neural network model. The preset neural network model comprises a plurality of feature extraction layers, each containing a first feature extraction branch and a second feature extraction branch. The first of these layers takes the feature map as input; each subsequent layer takes as input the feature map together with the feature data output by the previous layer. The output of the first feature extraction branch of the last layer is taken as the first feature data, and the output of the second feature extraction branch of the last layer is taken as the second feature data. For the training and use of the preset neural network model, refer to the method embodiment; details are not repeated here. The extraction unit 20 may also obtain the first and second feature data in other ways; refer to the above embodiments.
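The dataflow of this multi-stage, two-branch extractor can be sketched structurally. This is a hedged illustration of the wiring only: the "branches" below are placeholder functions standing in for the patent's trained convolutional layers, and all names are assumptions.

```python
# Hedged structural sketch: stage 1 consumes the feature map; every later
# stage consumes the feature map plus the previous stage's two outputs;
# the final stage's branches yield the first and second feature data.
def make_stage(tag):
    def branch1(inputs):
        return [f"{tag}-kpt"] + inputs    # stands in for key-point maps
    def branch2(inputs):
        return [f"{tag}-part"] + inputs   # stands in for body-part maps
    return branch1, branch2

def run_stages(feature_map, n_stages=3):
    out1, out2 = [], []
    for i in range(1, n_stages + 1):
        b1, b2 = make_stage(f"stage{i}")
        # Stage 1 sees only the feature map; later stages also see the
        # previous stage's outputs, mirroring the dataflow in the text.
        inputs = feature_map if i == 1 else feature_map + out1 + out2
        out1, out2 = b1(inputs), b2(inputs)
    return out1, out2   # (first feature data, second feature data)

first, second = run_stages(["fmap"])
print(first[0], second[0])  # -> stage3-kpt stage3-part
```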
And the obtaining unit 30 is configured to obtain a connection diagram of key points belonging to the same monitored person from the first and second feature data. Because there may be multiple monitored persons in the monitoring area, the first and second feature data may correspond to several persons; the key points belonging to a single monitored person must therefore be determined from the feature data, and that person's key points are used to detect his or her smoking behavior. During detection, the posture indicated by the connection diagram of the person's key points is used, so the connection diagram is constructed accordingly. An optional structure of the obtaining unit 30 is as follows:
the obtaining unit 30 includes: a determining subunit and a connecting subunit. The determining subunit is used for determining key points of body parts belonging to the same monitored person according to the first characteristic data and the second characteristic data; and the connecting subunit is used for connecting the key points of the body parts belonging to the same monitored person according to the body parts to obtain a connection diagram of the key points belonging to the same monitored person.
Wherein the determining subunit determines the key points of the body parts belonging to the same monitored person comprising: determining all key points belonging to the same body part according to the first characteristic data and the second characteristic data; calculating the vector average value of all key points belonging to the same monitored person; calculating the correlation between the key points according to the vector average value; according to the correlation between the key points, the key points belonging to the same body part of the monitored person are determined from all the key points belonging to the same body part.
And the detection unit 40 is configured to detect, according to the connection diagram, the smoking behavior of the monitored person to whom the connection diagram belongs. The connection diagram represents the posture of the monitored person. When a monitored person smokes, the face deflects and the distance between a certain point on the face and the hand head (that is, the foremost end of the hand, such as the tip of the middle finger) changes; based on whether the face deflects and whether this distance changes, it is determined whether the posture represented by the connection diagram is a smoking posture, and if so, the monitored person has smoking behavior. When smoking behavior is confirmed, this embodiment may also output the image in which the monitored person appears, so that monitoring personnel can review it.
In this embodiment, one possible way for the detection unit 40 to detect the smoking behavior of the monitored person to which the connection diagram belongs is as follows:
acquiring the deflection angle of the face of the monitored person according to the key points belonging to the face in the connection diagram; calculating the distance between two specific key points of the monitored person according to the deflection angle, where the two specific key points are the left ear and the left hand head, or the right ear and the right hand head; and detecting the smoking behavior of the monitored person according to the distance between the two specific key points: if the distance is smaller than a preset distance, it is determined that the monitored person has smoking behavior.
The behavior detection device acquires an image of a monitoring area captured by the camera device; performs key-point feature extraction on the image to obtain first feature data and second feature data corresponding to the image, where the first feature data represents key points in the image and the second feature data represents the body parts to which the key points belong; obtains a connection diagram of key points belonging to the same monitored person from the first and second feature data; and detects, according to the connection diagram, the smoking behavior of the monitored person to whom it belongs. Because smoking behavior in the monitoring area is detected by analyzing feature data derived from the image, smoking is monitored automatically through image analysis, the probability of missed detections is reduced, and all-weather detection is possible.
In addition, an embodiment of the present application further provides a storage medium in which computer program code is stored; when the computer program code is executed, the above behavior detection method is implemented.
An embodiment of the present application also provides a monitoring device comprising a processor, a communication interface and a display screen. The processor acquires, through the communication interface, an image of the monitoring area captured by the camera device; performs key-point feature extraction on the image to obtain first feature data and second feature data corresponding to the image, where the first feature data represents key points in the image and the second feature data represents the body parts to which the key points belong; obtains a connection diagram of key points belonging to the same monitored person from the first and second feature data; and detects, according to the connection diagram, the smoking behavior of the monitored person to whom it belongs. The display screen is used to display the image of the monitoring area. For the working process of the processor, refer to the above method embodiment; it is not described again here.
It should be noted that, various embodiments in this specification may be described in a progressive manner, and features described in various embodiments in this specification may be replaced with or combined with each other, each embodiment focuses on differences from other embodiments, and similar parts between various embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing is only a preferred embodiment of the present application and it should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be considered as the protection scope of the present application.
Claims (10)
1. A method of behavior detection, the method comprising:
acquiring an image of a monitoring area acquired by a camera device;
performing key point feature extraction on the image to obtain first feature data and second feature data corresponding to the image, wherein the first feature data are used for representing key points in the image, and the second feature data are used for representing body parts to which the key points belong;
obtaining a connection diagram of key points belonging to the same monitored person according to the first characteristic data and the second characteristic data;
and detecting the smoking behavior of the monitored person to which the connection diagram belongs according to the connection diagram.
2. The method of claim 1, wherein obtaining a connection graph of keypoints belonging to the same monitored person from the first feature data and the second feature data comprises:
determining key points of body parts belonging to the same monitored person according to the first characteristic data and the second characteristic data;
and connecting the key points belonging to the body part of the same monitored person according to the body part to obtain a connection diagram of the key points belonging to the same monitored person.
3. The method of claim 2, wherein determining keypoints belonging to the same monitored person's body part from the first and second feature data comprises:
determining all key points belonging to the same body part according to the first characteristic data and the second characteristic data;
calculating the vector average value of all key points belonging to the same monitored person;
calculating the correlation among the key points according to the vector average value;
and determining key points of the body part belonging to the same monitored person from all the key points belonging to the same body part according to the correlation among the key points.
4. The method according to claim 1, wherein the extracting the key point features of the image to obtain first feature data and second feature data corresponding to the image comprises:
acquiring a characteristic map of the image; obtaining first feature data and second feature data corresponding to the image through a preset neural network model and the feature map, wherein the preset neural network model comprises a plurality of feature extraction layers, each feature extraction layer comprises a first feature extraction branch and a second feature extraction branch, the first layer of the plurality of feature extraction layers takes the feature map as input, other layers except the first layer of the plurality of feature extraction layers take the feature map and feature data output by the previous layer as input, the output of the first feature extraction branch of the last layer of the plurality of feature extraction layers is taken as the first feature data, and the output of the second feature extraction branch of the last layer is taken as the second feature data.
5. The method according to any one of claims 1 to 4, wherein the detecting smoking behavior of the monitored person to which the connection diagram belongs according to the connection diagram comprises:
acquiring the deflection angle of the face of the monitored person according to the key points belonging to the face in the connection graph;
calculating the distance between two specific key points of the monitored person according to the deflection angle, wherein the two specific key points are the left ear and the left hand head of the face or the two specific key points are the right ear and the right hand head of the face;
and detecting the smoking behavior of the monitored person to which the connection diagram belongs according to the distance between the two specific key points.
6. A behavior detection device, characterized in that the device comprises:
the acquisition unit is used for acquiring the image of the monitoring area acquired by the camera device;
the extraction unit is used for extracting key point features of the image to obtain first feature data and second feature data corresponding to the image, wherein the first feature data are used for representing key points in the image, and the second feature data are used for representing body parts to which the key points belong;
the obtaining unit is used for obtaining a connection diagram of key points belonging to the same monitored person according to the first characteristic data and the second characteristic data;
and the detection unit is used for detecting the smoking behavior of the monitored person to which the connection diagram belongs according to the connection diagram.
7. The apparatus of claim 6, wherein the obtaining unit comprises:
the determining subunit is used for determining key points of body parts belonging to the same monitored person according to the first characteristic data and the second characteristic data;
and the connecting subunit is used for connecting the key points of the body parts belonging to the same monitored person according to the body parts to obtain a connection diagram of the key points belonging to the same monitored person.
8. The apparatus according to claim 7, wherein the determining subunit is configured to determine all keypoints belonging to the same body part from the first feature data and the second feature data; calculating the vector average value of all key points belonging to the same monitored person; calculating the correlation among the key points according to the vector average value; and determining key points of the body part belonging to the same monitored person from all the key points belonging to the same body part according to the correlation among the key points.
9. The apparatus according to claim 6, wherein the extracting unit is configured to obtain a feature map of the image; obtaining first feature data and second feature data corresponding to the image through a preset neural network model and the feature map, wherein the preset neural network model comprises a plurality of feature extraction layers, each feature extraction layer comprises a first feature extraction branch and a second feature extraction branch, the first layer of the plurality of feature extraction layers takes the feature map as input, other layers except the first layer of the plurality of feature extraction layers take the feature map and feature data output by the previous layer as input, the output of the first feature extraction branch of the last layer of the plurality of feature extraction layers is taken as the first feature data, and the output of the second feature extraction branch of the last layer is taken as the second feature data.
10. The device according to any one of claims 6 to 9, wherein the detection unit is configured to obtain a deflection angle of the face of the monitored person according to a key point belonging to the face in the connection map; calculating the distance between two specific key points of the monitored person according to the deflection angle, wherein the two specific key points are the left ear and the left hand head of the face or the two specific key points are the right ear and the right hand head of the face; and detecting the smoking behavior of the monitored person to which the connection diagram belongs according to the distance between the two specific key points.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010717764.8A CN111832526B (en) | 2020-07-23 | 2020-07-23 | A behavior detection method and device |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010717764.8A CN111832526B (en) | 2020-07-23 | 2020-07-23 | A behavior detection method and device |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111832526A true CN111832526A (en) | 2020-10-27 |
| CN111832526B CN111832526B (en) | 2024-06-11 |
Family
ID=72925232
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202010717764.8A Active CN111832526B (en) | 2020-07-23 | 2020-07-23 | A behavior detection method and device |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111832526B (en) |
Citations (27)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5444791A (en) * | 1991-09-17 | 1995-08-22 | Fujitsu Limited | Moving body recognition apparatus |
| CN105260703A (en) * | 2015-09-15 | 2016-01-20 | 西安邦威电子科技有限公司 | Detection method suitable for smoking behavior of driver under multiple postures |
| CN108038469A (en) * | 2017-12-27 | 2018-05-15 | 百度在线网络技术(北京)有限公司 | Method and apparatus for detecting human body |
| CN108629282A (en) * | 2018-03-29 | 2018-10-09 | 福州海景科技开发有限公司 | A kind of smoking detection method, storage medium and computer |
| CN109325412A (en) * | 2018-08-17 | 2019-02-12 | 平安科技(深圳)有限公司 | Pedestrian recognition method, device, computer equipment and storage medium |
| CN109492581A (en) * | 2018-11-09 | 2019-03-19 | 中国石油大学(华东) | A kind of human motion recognition method based on TP-STG frame |
| CN109784140A (en) * | 2018-11-19 | 2019-05-21 | 深圳市华尊科技股份有限公司 | Driver attributes' recognition methods and Related product |
| CN109902562A (en) * | 2019-01-16 | 2019-06-18 | 重庆邮电大学 | A driver abnormal posture monitoring method based on reinforcement learning |
| CN109918975A (en) * | 2017-12-13 | 2019-06-21 | 腾讯科技(深圳)有限公司 | An augmented reality processing method, object recognition method and terminal |
| CN110298257A (en) * | 2019-06-04 | 2019-10-01 | 东南大学 | A kind of driving behavior recognition methods based on human body multiple location feature |
| CN110298332A (en) * | 2019-07-05 | 2019-10-01 | 海南大学 | Method, system, computer equipment and the storage medium of Activity recognition |
| US20190303677A1 (en) * | 2018-03-30 | 2019-10-03 | Naver Corporation | System and method for training a convolutional neural network and classifying an action performed by a subject in a video using the trained convolutional neural network |
| CN110309723A (en) * | 2019-06-04 | 2019-10-08 | 东南大学 | A Driver Behavior Recognition Method Based on Human Feature Segmentation |
| CN110399767A (en) * | 2017-08-10 | 2019-11-01 | 北京市商汤科技开发有限公司 | Occupant's dangerous play recognition methods and device, electronic equipment, storage medium |
| CN110425005A (en) * | 2019-06-21 | 2019-11-08 | 中国矿业大学 | The monitoring of transportation of belt below mine personnel's human-computer interaction behavior safety and method for early warning |
| US20190347826A1 (en) * | 2018-05-11 | 2019-11-14 | Samsung Electronics Co., Ltd. | Method and apparatus for pose processing |
| WO2019232894A1 (en) * | 2018-06-05 | 2019-12-12 | 中国石油大学(华东) | Complex scene-based human body key point detection system and method |
| US20190392587A1 (en) * | 2018-06-22 | 2019-12-26 | Microsoft Technology Licensing, Llc | System for predicting articulated object feature location |
| CN110688921A (en) * | 2019-09-17 | 2020-01-14 | 东南大学 | Method for detecting smoking behavior of driver based on human body action recognition technology |
| CN110781771A (en) * | 2019-10-08 | 2020-02-11 | 北京邮电大学 | A real-time monitoring method for abnormal behavior based on deep learning |
| CN110837815A (en) * | 2019-11-15 | 2020-02-25 | 济宁学院 | Driver state monitoring method based on convolutional neural network |
| CN111144263A (en) * | 2019-12-20 | 2020-05-12 | 山东大学 | Method and device for early warning of high fall accident for construction workers |
| WO2020093837A1 (en) * | 2018-11-07 | 2020-05-14 | 北京达佳互联信息技术有限公司 | Method for detecting key points in human skeleton, apparatus, electronic device, and storage medium |
| CN111178323A (en) * | 2020-01-10 | 2020-05-19 | 北京百度网讯科技有限公司 | Video-based group behavior identification method, device, equipment and storage medium |
| CN111310542A (en) * | 2019-12-02 | 2020-06-19 | 湖南中烟工业有限责任公司 | Smoking behavior detection method and system, terminal and storage medium |
| CN111414813A (en) * | 2020-03-03 | 2020-07-14 | 南京领行科技股份有限公司 | Dangerous driving behavior identification method, device, equipment and storage medium |
| US10713948B1 (en) * | 2019-01-31 | 2020-07-14 | StradVision, Inc. | Method and device for alerting abnormal driver situation detected by using humans' status recognition via V2V connection |
- 2020-07-23: CN application CN202010717764.8A filed; patent CN111832526B/en, status Active
Patent Citations (27)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US5444791A (en) * | 1991-09-17 | 1995-08-22 | Fujitsu Limited | Moving body recognition apparatus |
| CN105260703A (en) * | 2015-09-15 | 2016-01-20 | 西安邦威电子科技有限公司 | Detection method suitable for smoking behavior of driver under multiple postures |
| CN110399767A (en) * | 2017-08-10 | 2019-11-01 | 北京市商汤科技开发有限公司 | Occupant's dangerous play recognition methods and device, electronic equipment, storage medium |
| CN109918975A (en) * | 2017-12-13 | 2019-06-21 | 腾讯科技(深圳)有限公司 | An augmented reality processing method, object recognition method and terminal |
| CN108038469A (en) * | 2017-12-27 | 2018-05-15 | 百度在线网络技术(北京)有限公司 | Method and apparatus for detecting human body |
| CN108629282A (en) * | 2018-03-29 | 2018-10-09 | 福州海景科技开发有限公司 | A kind of smoking detection method, storage medium and computer |
| US20190303677A1 (en) * | 2018-03-30 | 2019-10-03 | Naver Corporation | System and method for training a convolutional neural network and classifying an action performed by a subject in a video using the trained convolutional neural network |
| US20190347826A1 (en) * | 2018-05-11 | 2019-11-14 | Samsung Electronics Co., Ltd. | Method and apparatus for pose processing |
| WO2019232894A1 (en) * | 2018-06-05 | 2019-12-12 | 中国石油大学(华东) | Complex scene-based human body key point detection system and method |
| US20190392587A1 (en) * | 2018-06-22 | 2019-12-26 | Microsoft Technology Licensing, Llc | System for predicting articulated object feature location |
| CN109325412A (en) * | 2018-08-17 | 2019-02-12 | 平安科技(深圳)有限公司 | Pedestrian recognition method, device, computer equipment and storage medium |
| WO2020093837A1 (en) * | 2018-11-07 | 2020-05-14 | 北京达佳互联信息技术有限公司 | Method for detecting key points in human skeleton, apparatus, electronic device, and storage medium |
| CN109492581A (en) * | 2018-11-09 | 2019-03-19 | 中国石油大学(华东) | A kind of human motion recognition method based on TP-STG frame |
| CN109784140A (en) * | 2018-11-19 | 2019-05-21 | 深圳市华尊科技股份有限公司 | Driver attributes' recognition methods and Related product |
| CN109902562A (en) * | 2019-01-16 | 2019-06-18 | 重庆邮电大学 | A driver abnormal posture monitoring method based on reinforcement learning |
| US10713948B1 (en) * | 2019-01-31 | 2020-07-14 | StradVision, Inc. | Method and device for alerting abnormal driver situation detected by using humans' status recognition via V2V connection |
| CN110309723A (en) * | 2019-06-04 | 2019-10-08 | 东南大学 | A Driver Behavior Recognition Method Based on Human Feature Segmentation |
| CN110298257A (en) * | 2019-06-04 | 2019-10-01 | 东南大学 | A kind of driving behavior recognition methods based on human body multiple location feature |
| CN110425005A (en) * | 2019-06-21 | 2019-11-08 | 中国矿业大学 | The monitoring of transportation of belt below mine personnel's human-computer interaction behavior safety and method for early warning |
| CN110298332A (en) * | 2019-07-05 | 2019-10-01 | 海南大学 | Method, system, computer equipment and the storage medium of Activity recognition |
| CN110688921A (en) * | 2019-09-17 | 2020-01-14 | 东南大学 | Method for detecting smoking behavior of driver based on human body action recognition technology |
| CN110781771A (en) * | 2019-10-08 | 2020-02-11 | 北京邮电大学 | A real-time monitoring method for abnormal behavior based on deep learning |
| CN110837815A (en) * | 2019-11-15 | 2020-02-25 | 济宁学院 | Driver state monitoring method based on convolutional neural network |
| CN111310542A (en) * | 2019-12-02 | 2020-06-19 | 湖南中烟工业有限责任公司 | Smoking behavior detection method and system, terminal and storage medium |
| CN111144263A (en) * | 2019-12-20 | 山东大学 | Method and device for early warning of fall-from-height accidents among construction workers |
| CN111178323A (en) * | 2020-01-10 | 2020-05-19 | 北京百度网讯科技有限公司 | Video-based group behavior identification method, device, equipment and storage medium |
| CN111414813A (en) * | 2020-03-03 | 2020-07-14 | 南京领行科技股份有限公司 | Dangerous driving behavior identification method, device, equipment and storage medium |
Non-Patent Citations (1)
| Title |
|---|
| 汪旭; 陈仁文; 黄斌: "Implementation of a driver driving safety monitoring system based on the Android system", 电子测量技术 (Electronic Measurement Technology), no. 08, pages 56-60 * |
Also Published As
| Publication number | Publication date |
|---|---|
| CN111832526B (en) | 2024-06-11 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN110837784B (en) | | Examination room peeping and cheating detection system based on human head characteristics |
| CN110796051A (en) | | Real-time access behavior detection method and system based on container scene |
| CN114091511B (en) | | Body-building action scoring method, system and device based on spatio-temporal information |
| US11663845B2 (en) | | Method and apparatus for privacy protected assessment of movement disorder video recordings |
| CN110532850B (en) | | A fall detection method based on video joint points and hybrid classifiers |
| CN110991268B (en) | | Depth-image-based method and system for quantitative analysis of Parkinsonian hand motion |
| JP2000251078A (en) | | Method and device for estimating three-dimensional posture of a person, and method and device for estimating the position of a person's elbow |
| JP2021071769A (en) | | Object tracking device and object tracking method |
| JP2011248664A (en) | | Motion analysis device and motion analysis method |
| CN114463850A (en) | | Human body action recognition system suitable for multiple application scenarios |
| CN112800905A (en) | | Pull-up counting method based on RGBD camera pose estimation |
| CN114639168B (en) | | Method and system for recognizing running posture |
| CN116189301A (en) | | A normative evaluation method for the standing long jump based on pose estimation |
| CN111611928B (en) | | A height and body size measurement method based on monocular vision and key point recognition |
| CN113408435A (en) | | Safety monitoring method, device, equipment and storage medium |
| CN106327441A (en) | | Automatic image radial distortion correction method and system |
| JP2004303014A (en) | | Gesture recognition device, gesture recognition method, and gesture recognition program |
| CN113221815A (en) | | Gait recognition method based on automatic detection of skeletal key points |
| CN115641646B (en) | | CPR automatic detection quality control method and system |
| JP7211495B2 (en) | | Training data generation device |
| CN114870385A (en) | | Standing long jump testing method based on an optimized OpenPose model |
| CN111832526A (en) | | Behavior detection method and device |
| CN110321869A (en) | | Personnel detection and extraction method based on a multi-scale fusion network |
| JP6810442B2 (en) | | Camera assembly, finger shape detection system using the camera assembly, finger shape detection method using the camera assembly, program for implementing the detection method, and storage medium for the program |
| CN118506443B (en) | | Examinee abnormal behavior recognition method based on human body posture assessment |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | | |
| SE01 | Entry into force of request for substantive examination | | |
| GR01 | Patent grant | | |