MY203832A - A system and method for classifying level of aggressiveness - Google Patents
A system and method for classifying level of aggressiveness
- Publication number
- MY203832A (application MYPI2020003251A)
- Authority
- MY
- Malaysia
- Prior art keywords
- learning model
- rectangular prisms
- aggressiveness
- video stream
- level
- Prior art date
- 2020-06-23
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/809—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of classification results, e.g. where the classifiers operate on the same input data
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B13/00—Burglar, theft or intruder alarms
- G08B13/18—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
- G08B13/189—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
- G08B13/194—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
- G08B13/196—Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
- G08B13/19602—Image analysis to detect motion of the intruder, e.g. by frame subtraction
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B29/00—Checking or monitoring of signalling or alarm systems; Prevention or correction of operating errors, e.g. preventing unauthorised operation
- G08B29/18—Prevention or correction of operating errors
- G08B29/185—Signal analysis techniques for reducing or preventing false alarms or for enhancing the reliability of the system
- G08B29/186—Fuzzy logic; neural networks
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Medical Informatics (AREA)
- Mathematical Physics (AREA)
- Multimedia (AREA)
- Computer Security & Cryptography (AREA)
- Automation & Control Theory (AREA)
- Data Mining & Analysis (AREA)
- Fuzzy Systems (AREA)
- General Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Image Analysis (AREA)
Abstract
The present invention relates to a system (1000) and method for classifying a level of aggressiveness. The system is configured to classify aggressive behaviour into a level of aggressiveness based on a video stream. The system comprises: a video acquisition unit (10) configured to acquire at least one video stream from at least one video source; an image processing unit (20) configured to convert the at least one video stream into a sequence of image frames and to perform data formatting on the sequence of image frames to generate a plurality of volumetric rectangular prisms and an image representation for each of the volumetric rectangular prisms; a training unit (30) configured to perform data training on the plurality of volumetric rectangular prisms and the image representation of each of the volumetric rectangular prisms using a machine learning model and a deep learning model; and an online inferencing unit (40) configured to perform an online fusion of the machine learning model and the deep learning model.
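The abstract outlines a four-stage pipeline: frames are grouped into volumetric rectangular prisms, each prism is reduced to an image representation, a machine learning model and a deep learning model are trained on these inputs, and their outputs are fused at inference time. The sketch below shows one plausible reading of the data-handling steps; the prism depth of 16 frames, the temporal-average image representation, the weighted score-level fusion, and all function names are illustrative assumptions, not details taken from the patent.

```python
# Minimal sketch of the pipeline described in the abstract, assuming a fixed
# prism depth, a temporal-average image representation, and score-level (late)
# fusion of the two classifiers. None of these choices are specified by the patent.
import numpy as np

def frames_to_prisms(frames, prism_depth=16):
    """Group consecutive frames into volumetric rectangular prisms
    (depth x height x width x channels blocks) over non-overlapping windows."""
    return [
        np.stack(frames[i:i + prism_depth], axis=0)
        for i in range(0, len(frames) - prism_depth + 1, prism_depth)
    ]

def image_representation(prism):
    """Collapse a prism into a single 2-D image, here by averaging its frames."""
    return prism.mean(axis=0)

def fuse(ml_scores, dl_scores, ml_weight=0.5):
    """Fuse per-level scores from the machine learning and deep learning
    models with a weighted average and return the predicted level index."""
    fused = ml_weight * np.asarray(ml_scores) + (1.0 - ml_weight) * np.asarray(dl_scores)
    return int(np.argmax(fused))

# Example: 64 dummy frames -> 4 prisms, then fuse two hypothetical score vectors.
frames = [np.random.rand(120, 160, 3) for _ in range(64)]
prisms = frames_to_prisms(frames)
images = [image_representation(p) for p in prisms]
level = fuse(ml_scores=[0.2, 0.5, 0.3], dl_scores=[0.1, 0.3, 0.6])  # predicts level 2 here
```

In this reading the prisms feed the deep learning model while their image representations feed the machine learning model, and only their output scores are combined online; the patent family documents would have to be consulted for the actual fusion scheme.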
Priority Applications (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| MYPI2020003251A | 2020-06-23 | 2020-06-23 | A system and method for classifying level of aggressiveness |
| PCT/MY2020/050159 | 2020-06-23 | 2020-11-18 | A system and method for classifying level of aggressiveness |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| MYPI2020003251A | 2020-06-23 | 2020-06-23 | A system and method for classifying level of aggressiveness |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| MY203832A (en) | 2024-07-19 |
Family
ID=79281536
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| MYPI2020003251A | A system and method for classifying level of aggressiveness | 2020-06-23 | 2020-06-23 |
Country Status (2)
| Country | Link |
|---|---|
| MY (1) | MY203832A (en) |
| WO (1) | WO2021261985A1 (en) |
Family Cites Families (1)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR101260847B1 (en) * | 2007-02-08 | 2013-05-06 | Behavioral Recognition Systems, Inc. | Behavioral recognition system |
- 2020
- 2020-06-23 MY MYPI2020003251A patent/MY203832A/en unknown
- 2020-11-18 WO PCT/MY2020/050159 patent/WO2021261985A1/en not_active Ceased
Also Published As
| Publication number | Publication date |
|---|---|
| WO2021261985A1 (en) | 2021-12-30 |
Similar Documents
| Publication | Title | Publication Date |
|---|---|---|
| CN114972929B (en) | A pre-training method and device for a medical multimodal model | |
| Iashin et al. | Synchformer: Efficient synchronization from sparse cues | |
| CN108229321B (en) | Face recognition model, and training method, device, apparatus, program, and medium therefor | |
| CN110135386B (en) | Human body action recognition method and system based on deep learning | |
| US20240354505A1 (en) | Perceptual associative memory for a neuro-linguistic behavior recognition system | |
| EP3805700A3 (en) | Method, apparatus, and system for predicting a pose error for a sensor system | |
| CN111370020A (en) | Method, system, device and storage medium for converting voice into lip shape | |
| EP3819820A3 (en) | Method and apparatus for recognizing key identifier in video, device and storage medium | |
| TWI707296B (en) | Smart teaching consultant generation method, system, equipment and storage medium | |
| Sinha et al. | Identity-preserving realistic talking face generation | |
| EP3998584A3 (en) | Method and apparatus for training adversarial network model, method and apparatus for building character library, and device | |
| Stillittano et al. | Lip contour segmentation and tracking compliant with lip-reading application constraints | |
| CN116052276A (en) | A Behavioral Analysis Method for Human Pose Estimation | |
| MY203832A (en) | A system and method for classifying level of aggressiveness | |
| CN112417974A (en) | Public health monitoring method | |
| Jaiswal | Facial expression classification using convolutional neural networking and its applications | |
| Kalbande et al. | Lip reading using neural networks | |
| Panagiotakis et al. | Shape-motion based athlete tracking for multilevel action recognition | |
| CN114708629B (en) | A method and system for recognizing students' facial expressions | |
| CN111340329B (en) | Actor evaluation method and device and electronic equipment | |
| CN115543075A (en) | VR teaching system with long-range interactive teaching function | |
| CN116189304B (en) | Intelligent teaching system based on AI vision technology | |
| CN114463804A (en) | Micro expression recognition method and device and computer readable storage medium | |
| KR20180104997A (en) | Crowd sourcing system | |
| Zahedi et al. | Robust sign language recognition system using ToF depth cameras |