
WO2017186017A1 - Target detection method and device - Google Patents

Target detection method and device

Info

Publication number
WO2017186017A1
WO2017186017A1 (application PCT/CN2017/080833, CN2017080833W)
Authority
WO
WIPO (PCT)
Prior art keywords
target user
target
photo
user
hidden Markov
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2017/080833
Other languages
English (en)
Chinese (zh)
Inventor
许永昌
盛阁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Ibotn Technology Co Ltd
Original Assignee
Shenzhen Ibotn Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Ibotn Technology Co Ltd filed Critical Shenzhen Ibotn Technology Co Ltd
Publication of WO2017186017A1 publication Critical patent/WO2017186017A1/fr
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161Detection; Localisation; Normalisation
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/29Graphical models, e.g. Bayesian networks
    • G06F18/295Markov models or related models, e.g. semi-Markov models; Markov random fields; Networks embedding Markov models

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to a target detection method and apparatus.
  • Some terminal products, such as robots, have recognition functions and can, for example, perform recognition on faces.
  • In the face recognition process, the terminal must capture the face information of the user to be identified, obtain a face photo of that user, and then recognize the user according to the obtained face photo. If the user is walking around or facing away from the terminal, the terminal cannot capture the face information of the user to be identified, and identification of that user cannot be completed.
  • the target recognition of existing terminals is relatively limited.
  • The main object of the present invention is to provide a target detection method and apparatus, which aim to solve the technical problem that target recognition in existing terminals is relatively limited.
  • the target detection method provided by the present invention includes the following steps:
  • Optionally, before the step of acquiring the photo to be identified and extracting the sequence of vectors to be observed according to the RGB pixel values of the photo to be identified, the method further includes:
  • a hidden Markov model corresponding to the target user is established according to the sequence of observation vectors.
  • Optionally, the method further includes: updating the hidden Markov model corresponding to the detected target user according to the sequence of vectors to be observed.
  • Optionally, the method further includes: calculating a model parameter according to the sequence of vectors to be observed and the hidden Markov model corresponding to the detected target user; determining a moving direction and distance of the terminal according to the model parameter; and controlling the terminal to move in accordance with the determined direction and distance, to track the detected target user.
  • the step of acquiring a reference photo of the target user includes:
  • the object detection apparatus provided by the present invention includes:
  • an acquiring module configured to acquire a photo to be identified;
  • an extraction module configured to extract a sequence of vectors to be observed according to the RGB pixel values of the photo to be identified;
  • a calculation module configured to calculate a similarity between the sequence of vectors to be observed and a hidden Markov model corresponding to a preset target user;
  • a determining module configured to determine, when the similarity reaches a preset condition, that the target user is detected, and to determine the user to be identified in the photo as the target user corresponding to the hidden Markov model whose similarity reaches the preset condition.
  • the obtaining module is further configured to acquire a reference photo of the target user
  • the extracting module is further configured to extract an observation vector sequence corresponding to the target user according to the RGB pixel value of the reference photo;
  • Optionally, the target detecting device further includes an establishing module, configured to establish a hidden Markov model corresponding to the target user according to the sequence of observation vectors, so as to perform target detection according to the hidden Markov model.
  • Optionally, the target detecting device further includes an updating module, configured to update the hidden Markov model corresponding to the detected target user according to the sequence of vectors to be observed.
  • Optionally, the calculating module is further configured to calculate a model parameter according to the sequence of vectors to be observed and the hidden Markov model corresponding to the detected target user;
  • the determining module is further configured to determine a moving direction and a distance of the terminal according to the model parameter;
  • the target detecting device further includes a tracking module configured to control the terminal movement according to the determined moving direction and distance to track the detected target user.
  • Optionally, the acquiring module is further configured to collect a plurality of reference photos of the target user during rotation; wherein, while the target user rotates, one reference photo is taken at every preset time interval.
  • In the object detection method and apparatus, a reference photo of a target user is acquired, an observation vector sequence corresponding to the target user is extracted according to the RGB pixel values of the reference photo, and a hidden Markov model corresponding to the target user is established according to the observation vector sequence, so that target detection can be performed according to the hidden Markov model. Since RGB pixel values are used in the modeling process and there is no need to rely on face information, the target user can be identified directly from the RGB pixel values. During identification and tracking, the tracked user is not required to always face the camera and can walk around at will, so the terminal is more intelligent and convenient in the target recognition process.
  • FIG. 1 is a schematic flow chart of a first embodiment of an object detection method according to the present invention.
  • FIG. 2 is a schematic diagram of functional modules of a first embodiment of an object detecting apparatus according to the present invention.
  • FIG. 3 is a schematic diagram of functional modules of a second embodiment of the object detecting device of the present invention.
  • FIG. 4 is a schematic diagram of functional modules of a third embodiment of the object detecting device of the present invention.
  • The present invention provides a target detection method, which can be implemented based on a terminal and, optionally, based on a robot. In this embodiment and the following embodiments, application to a robot is described as an example.
  • FIG. 1 is a schematic flowchart of a first embodiment of an object detection method according to the present invention.
  • the target detection method provided by the present invention includes the following steps:
  • Step S10: acquire a photo to be recognized, and extract a sequence of vectors to be observed according to the RGB pixel values of the photo to be recognized;
  • A plurality of sub-image blocks of height L may be extracted sequentially from top to bottom to generate the sequence of vectors to be observed. That is, a W × L sampling window is defined and sampled sequentially from top to bottom, moving down by a distance of L each time to obtain several sub-image blocks.
  • The RGB pixel values of a sub-image block can be taken directly as an observation. Alternatively, a K-L (Karhunen-Loève) transform may be performed on the RGB pixel values of the sub-image block, and the transformed coefficients taken as the observation. Performing the K-L transform on each sub-image block sampled from a photo to be identified yields the sequence of vectors to be observed corresponding to that photo.
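  • As an illustrative sketch of this sampling step (Python is assumed; the block height, the number of retained coefficients, and the use of scikit-learn's PCA as the K-L transform are illustrative choices, not values from the patent):

```python
import numpy as np
from sklearn.decomposition import PCA

def extract_observation_sequence(photo_rgb, block_height=8, n_coeffs=16):
    """Slide a W x L window down the photo; one observation vector per block."""
    h, _, _ = photo_rgb.shape
    # Sample W x L sub-image blocks from top to bottom, stepping by L each time.
    blocks = [photo_rgb[y:y + block_height].reshape(-1).astype(np.float64)
              for y in range(0, h - block_height + 1, block_height)]
    X = np.stack(blocks)                      # shape: (n_blocks, W * L * 3)
    # K-L transform of the block vectors; in a real system the basis would be
    # learned once from the reference photos and reused for photos to identify.
    pca = PCA(n_components=n_coeffs)
    return pca.fit_transform(X)               # the sequence of vectors to observe
```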
  • Step S20: calculate a similarity between the sequence of vectors to be observed and a hidden Markov model corresponding to a preset target user;
  • the hidden Markov model corresponding to the preset target user may be established in the following manner, that is, before step S10, the method further includes:
  • a hidden Markov model corresponding to the target user is established according to the sequence of observation vectors.
  • The reference photo may be a full-body photo of the target user, or may be a face photo.
  • There may be one, two or more target users.
  • There may likewise be one, two or more reference photos corresponding to each target user.
  • Optionally, each target user has multiple reference photos. For example, suppose two target users are preset in the robot, namely target user A and target user B, and 50 reference photos are acquired for each target user.
  • Optionally, the step of acquiring a reference photo of the target user includes: collecting a plurality of reference photos of the target user during rotation; wherein, while the target user rotates, one reference photo for learning is taken at every preset time interval.
  • Optionally, a reference photo collection control may be preset, and when the user triggers the reference photo collection control, the robot starts to collect reference photos.
  • Optionally, prompt information is first output, in the form of voice, text or video, to prompt the user to rotate slowly in front of the robot's camera.
  • the robot can take a reference photo at preset intervals during the user's rotation. For example, a reference photo can be taken every 0.1 seconds.
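  • A minimal capture-loop sketch of this collection step (the camera index, the 0.1 s interval and the photo count are illustrative assumptions; the patent only specifies "every preset time interval"):

```python
import time
import cv2

def collect_reference_photos(n_photos=50, interval_s=0.1, camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    photos = []
    try:
        while len(photos) < n_photos:
            ok, frame = cap.read()      # grab one frame while the user rotates
            if ok:
                photos.append(frame)
            time.sleep(interval_s)      # the preset time interval between photos
    finally:
        cap.release()
    return photos
```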
  • For each reference photo, a plurality of sub-image blocks of height L may be extracted sequentially from top to bottom to generate the observation vector sequence. That is, a W × L sampling window is defined and sampled sequentially from top to bottom, moving down by a distance of L each time to obtain several sub-image blocks.
  • The RGB pixel values of a sub-image block can be taken directly as an observation. Alternatively, a K-L transform may be performed on the RGB pixel values of the sub-image block, and the transformed coefficients taken as the observation. Performing the K-L transform on each sub-image block sampled from a reference photo yields the observation vector sequence corresponding to that reference photo.
  • Each hidden Markov model can be trained with one or more images of the same target user. Training proceeds as follows:
  • The parameters of an initial hidden Markov model λ are re-estimated using the Baum-Welch re-estimation method to obtain a new model λ̄.
  • The forward-backward algorithm or the Viterbi algorithm is then used to calculate the likelihood P(O|λ̄) of the observation sequence O under the new model.
  • Re-estimation is repeated until the change in likelihood falls below a convergence threshold C; once the training converges to C, the trained hidden Markov model is obtained, that is, the hidden Markov model corresponding to the target user.
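  • A hedged sketch of this training loop using hmmlearn, whose fit() runs Baum-Welch (EM) until the log-likelihood gain drops below tol, playing the role of the convergence threshold C above (the number of hidden states and all hyperparameter values are assumptions, not taken from the patent):

```python
import numpy as np
from hmmlearn import hmm

def train_target_model(observation_sequences, n_states=5, threshold_C=1e-2):
    """observation_sequences: list of (T_i, D) arrays, one per reference photo."""
    X = np.concatenate(observation_sequences)
    lengths = [len(seq) for seq in observation_sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=200, tol=threshold_C)
    model.fit(X, lengths)    # Baum-Welch re-estimation until convergence
    return model
```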
  • After training, the hidden Markov model can be used to identify and track the target user. For example, for target user A and target user B described above, hidden Markov models corresponding to target user A and target user B are respectively established and stored in the robot in advance.
  • Optionally, a target detection control can also be set on the robot; when the user triggers the target detection control, the robot enters the target detection mode.
  • the user can also specify whether the user to be identified is the target user A or the target user B.
  • the robot may display information corresponding to the target user A and the target user B for the user to select the current user to be identified.
  • the user may trigger the control corresponding to the target user A, and then set the user to be identified as the target user A. Therefore, the robot will detect whether the current user to be identified is the target user A in the subsequent identification and/or tracking process.
  • After entering the target detection mode, the robot starts collecting photos to be recognized through the camera, for example 1 photo per second or 5 photos per second, which can be set according to actual needs.
  • the similarity between the to-be-observed vector sequence and the hidden Markov model may be calculated by a forward-backward algorithm or a Viterbi algorithm.
  • The similarity reflects the degree of similarity between the sequence of vectors to be observed and a hidden Markov model stored in the robot.
  • In the above example, the similarity reflects the degree of similarity between the sequence of vectors to be observed and the hidden Markov model corresponding to target user A pre-stored in the robot.
  • Step S30: when the similarity reaches a preset condition, determine that the target user is detected, and determine the user to be identified in the photo to be identified as the target user corresponding to the hidden Markov model whose similarity reaches the preset condition.
  • When the similarity is sufficiently high, for example higher than a preset value, the similarity is considered to reach the preset condition, and target user A is considered detected; that is, target user A is considered to appear in the photo to be recognized.
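  • A sketch of this decision step; model.score() computes the log-likelihood of the observation sequence via the forward algorithm and stands in for the "similarity" above, and the threshold value is a placeholder assumption:

```python
def detect_target(obs_sequence, models, log_likelihood_threshold=-5000.0):
    """models: dict mapping user name -> trained GaussianHMM (see sketch above)."""
    best_user, best_score = None, float("-inf")
    for user, model in models.items():
        score = model.score(obs_sequence)   # forward-algorithm log-likelihood
        if score > best_score:
            best_user, best_score = user, score
    # The similarity reaches the preset condition -> the target user is detected.
    return best_user if best_score >= log_likelihood_threshold else None
```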
  • In this embodiment, the target user can be identified directly according to the similarity between the RGB pixel values of the photo to be recognized and the preset hidden Markov model, without relying on face information.
  • During identification and tracking, the tracked user is not required to always face the camera and can move around at will, so the terminal is more intelligent and convenient in the target recognition process.
  • the present invention further provides a second embodiment of the object detection method.
  • Based on the first embodiment, the target detection method further includes: updating the hidden Markov model corresponding to the detected target user according to the sequence of vectors to be observed.
  • For the update, reference may be made to the method for training the hidden Markov model in the first embodiment of the target detection method, establishing a hidden Markov model according to the sequence of vectors to be observed; details are not described again here.
  • After each successful detection, the hidden Markov model of the target user is updated, so that the hidden Markov model of the target user becomes more accurate and the accuracy of target detection is further improved.
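  • A hedged sketch of this update step: the newly matched observation sequence is appended to the stored training set and the model is re-fitted, warm-starting from its previous parameters (setting init_params to the empty string keeps the current values as the starting point); this is one reasonable reading of "updating", not a formula prescribed by the patent:

```python
import numpy as np

def update_target_model(model, stored_sequences, new_sequence):
    """model: the GaussianHMM from the training sketch above."""
    stored_sequences.append(new_sequence)
    X = np.concatenate(stored_sequences)
    lengths = [len(seq) for seq in stored_sequences]
    model.init_params = ""     # warm start: keep current parameters as the init
    model.fit(X, lengths)      # a few more Baum-Welch iterations on all data
    return model
```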
  • the present invention further provides a third embodiment of the object detection method.
  • Based on the first embodiment, the target detection method further includes: calculating a model parameter according to the sequence of vectors to be observed and the hidden Markov model corresponding to the detected target user; and determining a moving direction and distance of the terminal according to the model parameter.
  • The terminal movement is then controlled in accordance with the determined moving direction and distance, to track the detected target user.
  • The model parameter may be the quantity described above and may be calculated by the formula given there; details are not repeated here. For example, it can be assumed that the larger the parameter value, the closer the detected target user is to the robot, and the smaller the value, the larger the distance between the detected target user and the robot. When the value is less than a first preset threshold, the distance between the target user and the robot is considered too far, so the robot is controlled to move toward the target user, for example, controlled to move forward.
  • When the target user is too far away, the moving direction of the robot may be controlled, for example toward the front, the left front, or the right front, so that the robot moves toward the actual position of the target user, thereby approaching the target user.
  • Conversely, when the value is greater than a second preset threshold, the target user is considered too close, and the moving direction of the robot may be controlled, for example toward the rear, the left rear, or the right rear, so that the robot moves away from the actual position of the target user, keeping a proper distance so that the target user can be tracked more accurately.
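  • A sketch of this two-threshold tracking rule; the threshold values and the mapping from parameter value to motion commands are illustrative assumptions:

```python
def tracking_command(model_parameter, first_threshold=-6000.0,
                     second_threshold=-3000.0):
    if model_parameter < first_threshold:
        return "move_forward"     # user too far: move toward the target user
    if model_parameter > second_threshold:
        return "move_backward"    # user too close: move away from the user
    return "hold_position"        # within the allowable range: stay put
```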
  • The first preset threshold and the second preset threshold may be determined according to the extent of the target object framed by the user; the size of the framed region affects the corresponding threshold interval.
  • the embodiment does not need to rely on the face information of the target user for tracking, so that the target user can be more conveniently and accurately tracked.
  • The invention further provides an object detecting device. It can be implemented based on a terminal and, optionally, based on a robot. In this embodiment and the following embodiments, application to a robot is described as an example.
  • FIG. 2 is a schematic diagram of a functional module of a first embodiment of an object detecting apparatus according to the present invention.
  • the object detecting apparatus provided by the present invention includes:
  • The obtaining module 10 is configured to obtain a photo to be identified;
  • the extracting module 20 is configured to extract a sequence of vectors to be observed according to the RGB pixel values of the photo to be identified;
  • A plurality of sub-image blocks of height L may be extracted sequentially from top to bottom to generate the sequence of vectors to be observed. That is, a W × L sampling window is defined and sampled sequentially from top to bottom, moving down by a distance of L each time to obtain several sub-image blocks.
  • The RGB pixel values of a sub-image block can be taken directly as an observation. Alternatively, a K-L transform may be performed on the RGB pixel values of the sub-image block, and the transformed coefficients taken as the observation. Performing the K-L transform on each sub-image block sampled from a photo to be identified yields the sequence of vectors to be observed corresponding to that photo.
  • The calculating module 30 is configured to calculate a similarity between the sequence of vectors to be observed and a hidden Markov model corresponding to a preset target user;
  • the hidden Markov model corresponding to the preset target user may be established in the following manner, namely:
  • the obtaining module 10 is further configured to acquire a reference photo of the target user.
  • the extraction module 20 is further configured to extract an observation vector sequence corresponding to the target user according to the RGB pixel value of the reference photo;
  • the target detecting device further includes an establishing module, and the establishing module is configured to establish a hidden Markov model corresponding to the target user according to the sequence of observation vectors.
  • The reference photo may be a full-body photo of the target user, or may be a face photo.
  • There may be one, two or more target users.
  • There may likewise be one, two or more reference photos corresponding to each target user.
  • Optionally, each target user has multiple reference photos. For example, suppose two target users are preset in the robot, namely target user A and target user B, and 50 reference photos are acquired for each target user.
  • Optionally, the step of acquiring a reference photo of the target user includes: collecting a plurality of reference photos of the target user during rotation; wherein, while the target user rotates, one reference photo for learning is taken at every preset time interval.
  • Optionally, a reference photo collection control may be preset, and when the user triggers the reference photo collection control, the robot starts to collect reference photos.
  • Optionally, prompt information is first output, in the form of voice, text or video, to prompt the user to rotate slowly in front of the robot's camera.
  • the robot can take a reference photo at preset intervals during the user's rotation. For example, a reference photo can be taken every 0.1 seconds.
  • For each reference photo, a plurality of sub-image blocks of height L may be extracted sequentially from top to bottom to generate the observation vector sequence. That is, a W × L sampling window is defined and sampled sequentially from top to bottom, moving down by a distance of L each time to obtain several sub-image blocks.
  • The RGB pixel values of a sub-image block can be taken directly as an observation. Alternatively, a K-L transform may be performed on the RGB pixel values of the sub-image block, and the transformed coefficients taken as the observation. Performing the K-L transform on each sub-image block sampled from a reference photo yields the observation vector sequence corresponding to that reference photo.
  • Each hidden Markov model can be trained with one or more images of the same target user. Training proceeds as follows:
  • The parameters of an initial hidden Markov model λ are re-estimated using the Baum-Welch re-estimation method to obtain a new model λ̄.
  • The forward-backward algorithm or the Viterbi algorithm is then used to calculate the likelihood P(O|λ̄) of the observation sequence O under the new model.
  • Re-estimation is repeated until the change in likelihood falls below a convergence threshold C; once the training converges to C, the trained hidden Markov model is obtained, that is, the hidden Markov model corresponding to the target user.
  • After training, the hidden Markov model can be used to identify and track the target user. For example, for target user A and target user B described above, hidden Markov models corresponding to target user A and target user B are respectively established and stored in the robot in advance.
  • Optionally, a target detection control can also be set on the robot; when the user triggers the target detection control, the robot enters the target detection mode.
  • the user can also specify whether the user to be identified is the target user A or the target user B.
  • the robot may display information corresponding to the target user A and the target user B for the user to select the current user to be identified.
  • the user may trigger the control corresponding to the target user A, and then set the user to be identified as the target user A. Therefore, the robot will detect whether the current user to be identified is the target user A in the subsequent identification and/or tracking process.
  • After entering the target detection mode, the robot starts collecting photos to be recognized through the camera, for example 1 photo per second or 5 photos per second, which can be set according to actual needs.
  • the similarity between the to-be-observed vector sequence and the hidden Markov model may be calculated by a forward-backward algorithm or a Viterbi algorithm.
  • The similarity reflects the degree of similarity between the sequence of vectors to be observed and a hidden Markov model stored in the robot.
  • In the above example, the similarity reflects the degree of similarity between the sequence of vectors to be observed and the hidden Markov model corresponding to target user A pre-stored in the robot.
  • A determining module 40, configured to: when the similarity reaches a preset condition, determine that the target user is detected, and determine the user to be identified in the photo to be identified as the target user corresponding to the hidden Markov model whose similarity reaches the preset condition.
  • When the similarity is sufficiently high, for example higher than a preset value, the similarity is considered to reach the preset condition, and target user A is considered detected; that is, target user A is considered to appear in the photo to be recognized.
  • In this embodiment, the target user can be identified directly according to the similarity between the RGB pixel values of the photo to be recognized and the preset hidden Markov model, without relying on face information.
  • During identification and tracking, the tracked user is not required to always face the camera and can move around at will, so the terminal is more intelligent and convenient in the target recognition process.
  • the present invention further provides a second embodiment of the object detecting device.
  • FIG. 3 is a schematic diagram of a functional module of the second embodiment of the object detecting device according to the present invention.
  • the target detecting device further includes an updating module 50, configured to update a hidden Markov model corresponding to the detected target user according to the to-be-observed vector sequence.
  • For the update, reference may be made to the method for training the hidden Markov model in the first embodiment of the object detection apparatus, establishing a hidden Markov model according to the sequence of vectors to be observed; details are not described again here.
  • After each successful detection, the hidden Markov model of the target user is updated, so that the hidden Markov model of the target user becomes more accurate and the accuracy of target detection is further improved.
  • the present invention further provides a third embodiment of the object detecting device.
  • FIG. 4 is a schematic diagram of a functional module of the third embodiment of the object detecting device according to the present invention.
  • The calculation module 30 is further configured to calculate a model parameter according to the sequence of vectors to be observed and the hidden Markov model corresponding to the detected target user;
  • the determining module 40 is further configured to determine a moving direction and a distance of the terminal according to the model parameter;
  • the target detecting device further includes a tracking module 60 for controlling the terminal movement according to the determined moving direction and distance to track the detected target user.
  • The model parameter may be the quantity described above and may be calculated by the formula given there; details are not repeated here. For example, it can be assumed that the larger the parameter value, the closer the detected target user is to the robot, and the smaller the value, the larger the distance between the detected target user and the robot. When the value is less than a first preset threshold, the distance between the target user and the robot is considered too far, so the robot is controlled to move toward the target user, for example, controlled to move forward.
  • When the target user is too far away, the moving direction of the robot may be controlled, for example toward the front, the left front, or the right front, so that the robot moves toward the actual position of the target user, thereby approaching the target user.
  • Conversely, when the value is greater than a second preset threshold, the target user is considered too close, and the moving direction of the robot may be controlled, for example toward the rear, the left rear, or the right rear, so that the robot moves away from the actual position of the target user, keeping a proper distance so that the target user can be tracked more accurately.
  • The first preset threshold and the second preset threshold may be determined according to the extent of the target object framed by the user; the size of the framed region affects the corresponding threshold interval.
  • the embodiment does not need to rely on the face information of the target user for tracking, so that the target user can be more conveniently and accurately tracked.
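  • A minimal sketch of how the modules above could be wired together; the class and method names are illustrative assumptions, and it reuses the extract_observation_sequence, detect_target, update_target_model and tracking_command sketches from earlier in this description:

```python
class TargetDetectionDevice:
    """Acquiring/extracting/calculating/determining/updating/tracking modules."""

    def __init__(self, models, stored_sequences):
        self.models = models                      # user name -> trained HMM
        self.stored_sequences = stored_sequences  # user name -> list of sequences

    def process_photo(self, photo_rgb):
        # Extraction module: photo -> sequence of vectors to be observed.
        obs = extract_observation_sequence(photo_rgb)
        # Calculation + determining modules: score each model, apply threshold.
        user = detect_target(obs, self.models)
        if user is not None:
            # Updating module: refine the matched user's model with new data.
            update_target_model(self.models[user], self.stored_sequences[user], obs)
            # Tracking module: turn the model parameter into a motion command.
            return user, tracking_command(self.models[user].score(obs))
        return None, "hold_position"
```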
  • The methods of the foregoing embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, or by hardware; in many cases the former is the better implementation.
  • Based on such understanding, the technical solution of the present invention, in essence or the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), including a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the various embodiments of the present invention.
  • The terms "first", "second", and the like in the invention are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated.
  • A feature defined with "first" or "second" may explicitly or implicitly include at least one such feature.
  • The technical solutions of the various embodiments may be combined with one another, provided the combination can be realized by those skilled in the art; when a combination of technical solutions is contradictory or impossible to implement, it should be considered that the combination does not exist and is not within the scope of protection claimed by the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a target detection method, comprising: obtaining a photo to be identified, and extracting a sequence of vectors to be observed according to the RGB pixel values of the photo to be identified; calculating the similarity between the sequence of vectors to be observed and a hidden Markov model corresponding to a preset target user; and, when the similarity meets a preset condition, determining that the target user is detected, and determining a user to be identified in the photo as the target user corresponding to the hidden Markov model whose similarity meets the preset condition. The invention also relates to a target detection device. Since the detection process uses RGB pixel values and does not depend on human face information, the present invention can identify the target user directly from the RGB pixel values. During identification and tracking, the tracked user is not required to keep facing a camera of a terminal and is free to walk around. A terminal can therefore be more intelligent and more convenient during the target identification process.
PCT/CN2017/080833 2016-04-28 2017-04-18 Procédé et dispositif de détection de cible Ceased WO2017186017A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610280638.4 2016-04-28
CN201610280638.4A CN105956551B (zh) 2016-04-28 2016-04-28 目标检测方法及装置

Publications (1)

Publication Number Publication Date
WO2017186017A1 true WO2017186017A1 (fr) 2017-11-02

Family

ID=56916909

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/080833 Ceased WO2017186017A1 (fr) 2016-04-28 2017-04-18 Procédé et dispositif de détection de cible

Country Status (2)

Country Link
CN (1) CN105956551B (fr)
WO (1) WO2017186017A1 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107718014A (zh) * 2017-11-09 2018-02-23 深圳市小村机器人智能科技有限公司 高仿真机器人头部结构及其动作控制方法
CN114093022A (zh) * 2020-07-07 2022-02-25 株式会社日立制作所 活动检测装置、活动检测系统及活动检测方法
CN120299091A (zh) * 2025-06-11 2025-07-11 山东财经大学 基于人工智能的肢体运动动作设备作识别方法及系统

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105956551B (zh) * 2016-04-28 2018-01-30 深圳市鼎盛智能科技有限公司 目标检测方法及装置
CN109839614B (zh) * 2018-12-29 2020-11-06 深圳市天彦通信股份有限公司 固定式采集设备的定位系统及方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102592112A (zh) * 2011-12-20 2012-07-18 四川长虹电器股份有限公司 基于隐马尔科夫模型判断手势运动方向的方法
US20130136303A1 (en) * 2011-11-30 2013-05-30 Canon Kabushiki Kaisha Object detection apparatus, method for controlling the object detection apparatus, and storage medium
CN103489001A (zh) * 2013-09-25 2014-01-01 北京智诺英特科技有限公司 图像目标追踪方法和装置
CN103593680A (zh) * 2013-11-19 2014-02-19 南京大学 一种基于隐马尔科夫模型自增量学习的动态手势识别方法
CN104112122A (zh) * 2014-07-07 2014-10-22 叶茂 基于交通视频的车标自动识别方法
CN105956551A (zh) * 2016-04-28 2016-09-21 深圳市鼎盛智能科技有限公司 目标检测方法及装置

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008080341A1 (fr) * 2007-01-01 2008-07-10 Huawei Technologies Co., Ltd. Procédé, système et dispositif d'identification d'un terminal d'utilisateur
CN103761748B (zh) * 2013-12-31 2016-12-07 北京邮电大学 异常行为检测方法和装置

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130136303A1 (en) * 2011-11-30 2013-05-30 Canon Kabushiki Kaisha Object detection apparatus, method for controlling the object detection apparatus, and storage medium
CN102592112A (zh) * 2011-12-20 2012-07-18 四川长虹电器股份有限公司 基于隐马尔科夫模型判断手势运动方向的方法
CN103489001A (zh) * 2013-09-25 2014-01-01 北京智诺英特科技有限公司 图像目标追踪方法和装置
CN103593680A (zh) * 2013-11-19 2014-02-19 南京大学 一种基于隐马尔科夫模型自增量学习的动态手势识别方法
CN104112122A (zh) * 2014-07-07 2014-10-22 叶茂 基于交通视频的车标自动识别方法
CN105956551A (zh) * 2016-04-28 2016-09-21 深圳市鼎盛智能科技有限公司 目标检测方法及装置

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107718014A (zh) * 2017-11-09 2018-02-23 深圳市小村机器人智能科技有限公司 高仿真机器人头部结构及其动作控制方法
CN114093022A (zh) * 2020-07-07 2022-02-25 株式会社日立制作所 活动检测装置、活动检测系统及活动检测方法
CN120299091A (zh) * 2025-06-11 2025-07-11 山东财经大学 基于人工智能的肢体运动动作设备作识别方法及系统

Also Published As

Publication number Publication date
CN105956551A (zh) 2016-09-21
CN105956551B (zh) 2018-01-30


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 17788669; Country of ref document: EP; Kind code of ref document: A1)
122 EP: PCT application non-entry in European phase (Ref document number: 17788669; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)