TW201941099A - A method and its equipment of locking interaction target for intelligent device - Google Patents
A method and its equipment of locking interaction target for intelligent device
- Publication number
- TW201941099A TW108109739A
- Authority
- TW
- Taiwan
- Prior art keywords
- smart device
- target
- candidate
- candidate target
- interaction
- Prior art date
Classifications
- G—PHYSICS › G06—COMPUTING OR CALCULATING; COUNTING › G06F—ELECTRIC DIGITAL DATA PROCESSING › G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements › G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer › G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G—PHYSICS › G06 › G06F › G06F3/00 › G06F3/01 › G06F3/012—Head tracking input arrangements
- G—PHYSICS › G06 › G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING › G06V20/00—Scenes; Scene-specific elements › G06V20/10—Terrestrial scenes
- G—PHYSICS › G06 › G06V › G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data › G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands › G06V40/16—Human faces, e.g. facial parts, sketches or expressions › G06V40/161—Detection; Localisation; Normalisation › G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Image Analysis (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Description
The invention relates to the technical field of smart devices, and in particular to a method and a device for determining an interaction target of a smart device.
With the development of smart-device technology, smart devices can now initiate interaction with people. Specifically, the smart device detects objects within a certain range; when a human face is detected, the person is taken as the interaction target, and the device starts up and actively interacts with that person.
However, with this method of determining the interaction target, a person detected by the smart device may have no wish to interact with it. For example, someone may simply walk past the device: a person is detected, but there is no intention to interact. If the smart device starts up whenever a person is detected, false starts result. The above method therefore determines the interaction target with low accuracy and easily causes false starts.
The present invention aims to solve, at least to some extent, one of the technical problems in the related art.
The present invention proposes a method for determining an interaction target of a smart device. The method screens out, from the candidate targets, those without interaction intent, and then selects the interaction target from the targets that do have interaction intent. This avoids choosing a target without interaction intent as the interaction target, improves the accuracy with which the interaction target is determined, and reduces false starts of the smart device.
An embodiment of one aspect of the present invention provides a method for determining an interaction target of a smart device, including: acquiring an environment image within the monitoring range of the smart device and performing target recognition on the environment image; taking the targets recognized in the environment image as candidate targets and acquiring status information of the candidate targets; for each candidate target, judging from the corresponding status information whether it has an intention to interact with the smart device; and selecting the interaction target of the smart device from the candidate targets that have interaction intent.
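As an illustrative sketch only (the patent does not prescribe any data structures or threshold values; the `Candidate` fields, the intent test, and the numbers below are assumptions), the steps above could be arranged as:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A target recognized in the environment image, with its status information."""
    name: str
    distance_m: float     # distance between the candidate and the smart device
    dwell_time_s: float   # how long it has stayed within the distance threshold

def has_interaction_intent(c, max_distance_m=2.0, min_dwell_s=3.0):
    """Judge intent from the candidate's status information (assumed thresholds)."""
    return c.distance_m <= max_distance_m and c.dwell_time_s >= min_dwell_s

def select_interaction_target(candidates):
    """Screen out candidates without intent, then pick the closest remaining one."""
    with_intent = [c for c in candidates if has_interaction_intent(c)]
    if not with_intent:
        return None  # nobody intends to interact: the device is not activated
    return min(with_intent, key=lambda c: c.distance_m)

people = [
    Candidate("passer-by", distance_m=1.5, dwell_time_s=0.5),  # just walking past
    Candidate("visitor",   distance_m=1.2, dwell_time_s=5.0),  # lingering nearby
]
target = select_interaction_target(people)
```

Note how the passer-by is detected but filtered out, which is exactly the false-start case the method is meant to avoid.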
As a possible implementation of the embodiment of this aspect, acquiring the status information of the candidate target includes: acquiring the distance between the candidate target and the smart device. Judging, for each candidate target and from the corresponding status information, whether it has an intention to interact with the smart device includes: for each candidate target, judging whether the distance between the candidate target and the smart device is less than or equal to a preset distance threshold, and whether its dwell time within the distance threshold exceeds a preset time threshold; if the distance between the candidate target and the smart device is less than or equal to the distance threshold and the dwell time exceeds the time threshold, determining that the candidate target has an intention to interact with the smart device.
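One hypothetical way to accumulate the dwell time within the distance threshold from per-frame (timestamp, distance) observations; the threshold values and target identifiers are assumed, since the text only calls them preset:

```python
DISTANCE_THRESHOLD_M = 2.0  # preset distance threshold (assumed value)
TIME_THRESHOLD_S = 3.0      # preset dwell-time threshold (assumed value)

class DwellTracker:
    """Tracks, per target, how long it has stayed inside the distance threshold."""
    def __init__(self):
        self.entered_at = {}  # target id -> time it crossed into the threshold zone

    def update(self, target_id, timestamp_s, distance_m):
        """Record one observation; return True once the target has intent
        (close enough for long enough)."""
        if distance_m > DISTANCE_THRESHOLD_M:
            self.entered_at.pop(target_id, None)  # left the zone: reset the clock
            return False
        start = self.entered_at.setdefault(target_id, timestamp_s)
        return timestamp_s - start >= TIME_THRESHOLD_S

tracker = DwellTracker()
tracker.update("person-1", 0.0, 1.5)               # enters the zone
tracker.update("person-1", 2.0, 1.4)               # only 2 s inside so far
has_intent = tracker.update("person-1", 3.5, 1.6)  # 3.5 s inside: intent
```

Stepping outside the threshold resets the clock, so a person who merely brushes past the zone never accumulates enough dwell time.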
As a possible implementation of the embodiment of this aspect, acquiring the status information of the candidate target includes: acquiring the distance between the candidate target and the smart device, and the face angle of the candidate target. Judging, for each candidate target and from the corresponding status information, whether it has an intention to interact with the smart device includes: for each candidate target, judging whether the distance between the candidate target and the smart device is less than or equal to a preset distance threshold, and whether the face angle of the candidate target is within a preset angle range; if the distance between the candidate target and the smart device is less than or equal to the preset distance threshold and the face angle of the candidate target is within the preset angle range, determining that the candidate target has an intention to interact with the smart device.
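A minimal sketch of this distance-plus-face-angle test; the distance threshold and the angle range are assumed values, since the text only says they are preset:

```python
def has_interaction_intent(distance_m, face_yaw_deg,
                           distance_threshold_m=2.0,
                           angle_range=(-30.0, 30.0)):
    """Intent = within the preset distance threshold AND the face angle
    (here, yaw in degrees; 0 means looking straight at the device)
    lies in the preset angle range."""
    lo, hi = angle_range
    return distance_m <= distance_threshold_m and lo <= face_yaw_deg <= hi

facing = has_interaction_intent(1.5, 10.0)   # close and roughly facing: intent
averted = has_interaction_intent(1.5, 80.0)  # close but looking away: no intent
```

The face-angle condition is what distinguishes this variant from the dwell-time one: a person standing close but facing away is not treated as wanting to interact.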
As a possible implementation of the embodiment of this aspect, selecting the interaction target of the smart device from the candidate targets that have interaction intent includes: when a plurality of candidate targets are detected and a plurality of them have interaction intent, determining, among the candidate targets with interaction intent, the candidate target(s) closest to the smart device; and selecting the interaction target of the smart device from the candidate target(s) closest to the smart device.
As a possible implementation of the embodiment of this aspect, selecting the interaction target of the smart device from the candidate targets closest to the smart device includes: when a plurality of candidate targets are equally close to the smart device, querying the smart device's registered-user face image library for the face images of these closest candidate targets; if the face image library contains the face image of exactly one of the closest candidate targets, taking that candidate target as the interaction target; if the face image library contains no face image of any of the closest candidate targets, randomly selecting one of them as the interaction target; and if the face image library contains the face images of several of the closest candidate targets, taking the first one found by the query as the interaction target.
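The selection rule among equally close candidates can be sketched as follows; the target identifiers and the contents of the registered-user library are hypothetical:

```python
import random

REGISTERED_FACES = {"alice", "bob"}  # ids in the registered-user face library (assumed)

def select_among_closest(closest_targets, registered_faces):
    """closest_targets: ids of the candidates tied for the smallest distance,
    in query order. Mirrors the cases in the text: one registered match wins;
    no match falls back to a random pick; several matches take the first found."""
    if len(closest_targets) == 1:
        return closest_targets[0]
    known = [t for t in closest_targets if t in registered_faces]
    if len(known) == 1:
        return known[0]                        # exactly one registered face
    if not known:
        return random.choice(closest_targets)  # none registered: pick at random
    return known[0]                            # several registered: first found

target = select_among_closest(["carol", "alice", "bob"], REGISTERED_FACES)
```

Preferring registered users breaks the distance tie deterministically whenever possible, leaving randomness only for the case where no face is on file.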
As a possible implementation of the embodiment of this aspect, acquiring the distance between the candidate target and the smart device includes: acquiring a depth map through a depth camera of the smart device and obtaining the distance between the target and the smart device from the depth map; or photographing the candidate target with a binocular vision camera of the smart device, computing the disparity between the images captured by the binocular camera, and computing the distance between the candidate target and the smart device from the disparity; or emitting laser light into the monitoring range with a lidar of the smart device; generating, from the laser light returned by each obstacle within the monitoring range, a binary map of that obstacle; fusing each binary map with the environment image to identify, among all the binary maps, the one corresponding to the candidate target; and determining the distance between the candidate target and the smart device from the laser return time of the binary map corresponding to the candidate target.
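For the binocular-vision option, depth follows from the standard pinhole stereo relation Z = f·B/d (focal length f in pixels, baseline B, disparity d in pixels). The patent does not state this formula explicitly, and the camera parameters below are assumed example values:

```python
def stereo_distance_m(disparity_px, focal_length_px, baseline_m):
    """Classic rectified-stereo depth: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# e.g. a 700 px focal length and a 10 cm baseline: a 35 px disparity
# places the candidate 2 m from the device.
d = stereo_distance_m(35.0, focal_length_px=700.0, baseline_m=0.10)
```

The relation also shows why stereo range degrades with distance: a farther target produces a smaller disparity, so a one-pixel matching error costs more depth accuracy.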
As a possible implementation of the embodiment of this aspect, acquiring the face angle of the candidate target includes: cropping the face image of the candidate target from the environment image, and inputting the face image into a pre-trained machine learning model to obtain the face angle of the face in the image. The method further includes training the machine learning model as follows: collecting sample face images, each carrying annotation data that represents the face angle of the sample face; and inputting the sample face images into an initially constructed machine learning model for training, the trained machine learning model being obtained once its error falls within a preset error range.
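A toy stand-in for this training loop, assuming the "model" is a one-parameter linear regressor from a single face feature (e.g. a normalized nose offset, a hypothetical choice) to the annotated yaw angle. A real implementation would use a neural network on the cropped face image, but the stopping rule is the one described above: train until the error is within a preset error range:

```python
def train_face_angle_model(samples, lr=0.05, max_error_deg=2.0, max_iters=10000):
    """samples: (feature, angle_deg) pairs from annotated sample faces.
    Fits angle ~ w * feature + b by gradient descent on squared error and
    stops once the mean absolute error is inside the preset error range."""
    w, b = 0.0, 0.0
    n = len(samples)
    for _ in range(max_iters):
        err = sum(abs(w * f + b - a) for f, a in samples) / n
        if err <= max_error_deg:
            break  # error within the preset range: the model counts as trained
        grad_w = sum(2 * (w * f + b - a) * f for f, a in samples) / n
        grad_b = sum(2 * (w * f + b - a) for f, a in samples) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Annotated samples: feature sweeping -1..1 as yaw sweeps -30..+30 degrees.
SAMPLES = [(-1.0, -30.0), (0.0, 0.0), (1.0, 30.0), (0.5, 15.0)]
w, b = train_face_angle_model(SAMPLES)
```

Since the toy data is exactly linear, the loop converges quickly; the preset error range simply decides how early training is allowed to stop.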
As a possible implementation of the embodiment of this aspect, after selecting the interaction target of the smart device from the candidate targets with interaction intent, the method further includes: controlling the smart device to interact with the interaction target; during the interaction, identifying the center point of the interaction target's face image; detecting whether the center point of the face image is within a preset image region; if it is not, obtaining a path from the center point of the face image to the center point of the image region; and, according to the path, controlling the smart device so that the center point of the face image is within the image region.
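One hypothetical realization of this recentering step, reading the "path" as a pan/tilt adjustment proportional to the pixel offset of the face center from the image region's center; the degrees-per-pixel ratio is an assumed camera constant:

```python
def recenter_step(face_center, region_center, region_half_size,
                  degrees_per_px=0.1):
    """If the face center lies outside the preset image region, return the
    (pan, tilt) adjustment in degrees that moves it back toward the region
    center; return None if it is already inside."""
    dx = face_center[0] - region_center[0]
    dy = face_center[1] - region_center[1]
    half_w, half_h = region_half_size
    if abs(dx) <= half_w and abs(dy) <= half_h:
        return None  # face already inside the preset region: no motion needed
    return (-dx * degrees_per_px, -dy * degrees_per_px)

# Face drifted 80 px right of a 40x40 px central region: pan ~8 degrees left.
step = recenter_step((400, 240), (320, 240), (40, 40))
```

Checking "inside the region" before moving gives the controller a dead zone, so the device does not twitch at every small head movement during the interaction.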
In the method for determining an interaction target of a smart device according to an embodiment of the present invention, an environment image within the monitoring range of the smart device is acquired, target recognition is performed on the environment image, the targets recognized in the environment image are taken as candidate targets, status information of the candidate targets is acquired, and for each candidate target it is judged from the corresponding status information whether it has an intention to interact with the smart device; the interaction target of the smart device is then selected from the candidate targets that have interaction intent. In this embodiment, candidate targets with interaction intent are screened out from all candidate targets according to their status information, and the interaction target of the smart device is further selected from among them, so that the selected interaction target is the one most likely to intend to interact with the smart device. This avoids taking a target without interaction intent as the interaction target, improves the accuracy of determining the interaction target, and reduces false starts of the smart device.
An embodiment of another aspect of the present invention provides a device for determining an interaction target of a smart device, including: a first acquisition module, configured to acquire an environment image within the monitoring range of the smart device and perform target recognition on the environment image; a second acquisition module, configured to take the targets recognized in the environment image as candidate targets and acquire status information of the candidate targets; a judging module, configured to judge, for each candidate target and from the corresponding status information, whether it has an intention to interact with the smart device; and a selecting module, configured to select the interaction target of the smart device from the candidate targets that have interaction intent.
As a possible implementation of the embodiment of this other aspect, the second acquisition module is specifically configured to acquire the distance between the candidate target and the smart device, and the judging module is specifically configured to: for each candidate target, judge whether the distance between the candidate target and the smart device is less than or equal to a preset distance threshold, and whether its dwell time within the distance threshold exceeds a preset time threshold; and if the distance between the candidate target and the smart device is less than or equal to the distance threshold and the dwell time exceeds the time threshold, determine that the candidate target has an intention to interact with the smart device.
As a possible implementation of the embodiment of this other aspect, the second acquisition module is specifically configured to acquire the distance between the candidate target and the smart device and the face angle of the candidate target, and the judging module is specifically configured to: for each candidate target, judge whether the distance between the candidate target and the smart device is less than or equal to a preset distance threshold, and whether the face angle of the candidate target is within a preset angle range; and if the distance between the candidate target and the smart device is less than or equal to the preset distance threshold and the face angle of the candidate target is within the preset angle range, determine that the candidate target has an intention to interact with the smart device.
As a possible implementation of the embodiment of this other aspect, the selecting module includes: a determining unit, configured to determine, when a plurality of candidate targets are detected and a plurality of them have interaction intent, the candidate target(s) closest to the smart device among the candidate targets with interaction intent; and a selecting unit, configured to select the interaction target of the smart device from the candidate target(s) closest to the smart device.
As a possible implementation of the embodiment of this other aspect, the selecting unit is specifically configured to: when a plurality of candidate targets are equally close to the smart device, query the smart device's registered-user face image library for the face images of these closest candidate targets; if the face image library contains the face image of exactly one of the closest candidate targets, take that candidate target as the interaction target; if the face image library contains no face image of any of the closest candidate targets, randomly select one of them as the interaction target; and if the face image library contains the face images of several of the closest candidate targets, take the first one found by the query as the interaction target.
As a possible implementation of the embodiment of this other aspect, the second acquisition module is specifically configured to: acquire a depth map through a depth camera of the smart device and obtain the distance between the target and the smart device from the depth map; or photograph the candidate target with a binocular vision camera of the smart device, compute the disparity between the images captured by the binocular camera, and compute the distance between the candidate target and the smart device from the disparity; or emit laser light into the monitoring range with a lidar of the smart device, generate, from the laser light returned by each obstacle within the monitoring range, a binary map of that obstacle, fuse each binary map with the environment image to identify, among all the binary maps, the one corresponding to the candidate target, and determine the distance between the candidate target and the smart device from the laser return time of the binary map corresponding to the candidate target.
As a possible implementation of the embodiment of this other aspect, the second acquisition module is specifically configured to: crop the face image of the candidate target from the environment image, and input the face image into a pre-trained machine learning model to obtain the face angle of the face in the image. The device further includes: a collection module, configured to collect sample face images, each carrying annotation data that represents the face angle of the sample face; and a training module, configured to input the sample face images into an initially constructed machine learning model for training, the trained machine learning model being obtained once its error falls within a preset error range.
As a possible implementation of the embodiment of this other aspect, the device further includes: a first control module, configured to control the smart device to interact with the interaction target; an identification module, configured to identify, during the interaction, the center point of the interaction target's face image; a detection module, configured to detect whether the center point of the face image is within a preset image region; a third acquisition module, configured to obtain, when the center point is not within the image region, a path from the center point of the face image to the center point of the image region; and a second control module, configured to control the smart device according to the path so that the center point of the face image is within the image region.
In the device for determining an interaction target of a smart device according to an embodiment of the present invention, an environment image within the monitoring range of the smart device is acquired, target recognition is performed on the environment image, the targets recognized in the environment image are taken as candidate targets, status information of the candidate targets is acquired, and for each candidate target it is judged from the corresponding status information whether it has an intention to interact with the smart device; the interaction target of the smart device is then selected from the candidate targets that have interaction intent. In this embodiment, candidate targets with interaction intent are screened out from all candidate targets according to their status information, and the interaction target of the smart device is further selected from among them, so that the selected interaction target is the one most likely to intend to interact with the smart device. This avoids taking a target without interaction intent as the interaction target, improves the accuracy of determining the interaction target, and reduces false starts of the smart device.
A further embodiment of the present invention provides a smart device, including a housing, a processor, a memory, a circuit board, and a power supply circuit, wherein the circuit board is arranged inside the space enclosed by the housing, and the processor and the memory are arranged on the circuit board; the power supply circuit is used to supply power to the circuits or components of the smart device; the memory is used to store executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to implement the method for determining an interaction target of a smart device described in the embodiment of the above aspect.
A further embodiment of the present invention provides a computer program product, including a computer program stored on a computer-readable storage medium. The computer program includes program instructions which, when executed by a processor, implement the method for determining an interaction target of a smart device described in the embodiment of the above aspect.
A further embodiment of the present invention provides a non-transitory computer-readable storage medium on which a computer program is stored. When executed by a processor, the program implements the method for determining an interaction target of a smart device described in the embodiment of the above aspect.
Additional aspects and advantages of the present invention will be given in part in the following description, will in part become apparent from the following description, or will be learned through practice of the present invention.
Embodiments of the present invention are described in detail below, and examples of the embodiments are shown in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary and are intended to explain the present invention; they should not be construed as limiting it.
The method and device for determining an interaction target of a smart device according to embodiments of the present invention are described below with reference to the accompanying drawings.
The embodiments of the present invention address the problem that a smart device which takes a person as the interaction target whenever it detects a face may choose a target with no wish to interact with it, thereby causing false starts of the smart device; to this end, a method for determining an interaction target of a smart device is proposed.
In the method for determining an interaction target of a smart device according to an embodiment of the present invention, candidate targets with interaction intent are screened out from all candidate targets according to their status information, and the interaction target of the smart device is further selected from among them, so that the selected interaction target is the one most likely to intend to interact with the smart device. This avoids taking a target without interaction intent as the interaction target, improves the accuracy of determining the interaction target, and reduces false starts of the smart device.
FIG. 1 is a schematic flowchart of a method for determining an interaction target of a smart device according to an embodiment of the present invention.
As shown in FIG. 1, the method for determining an interaction target of a smart device includes: Step 101: acquire an environment image within the monitoring range of the smart device and perform target recognition on the environment image.
In this embodiment, the smart device may be a robot, a smart home appliance, or the like.
The smart device is equipped with an imaging device, such as a camera, through which it can capture environment images within its monitoring range in real time. After an environment image is acquired, it can be analyzed to recognize targets that have entered the monitoring range, where a target can be understood to be a person.
Taking the recognition of people in the environment image as an example, the smart device can recognize people through face detection or human body detection. Specifically, object contours are extracted from the environment image and compared with a pre-stored face contour or body contour. When the similarity between an extracted contour and the preset contour exceeds a preset threshold, a person can be considered recognized in the environment image. In this way, all the people in the environment image can be recognized.
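As an illustrative stand-in for the contour comparison (the text does not specify a similarity measure), contours rasterized to pixel sets could be compared by their Jaccard overlap; a real system would more likely use a shape descriptor or a trained face/body detector:

```python
def contour_similarity(contour_a, contour_b):
    """Jaccard similarity between two contours given as iterables of (x, y)
    pixels: |intersection| / |union|, in [0, 1]."""
    a, b = set(contour_a), set(contour_b)
    return len(a & b) / len(a | b) if a | b else 1.0

SIMILARITY_THRESHOLD = 0.7  # assumed preset similarity threshold

# A stored reference contour and an extracted one that mostly matches it.
stored = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
seen = [(0, 0), (1, 0), (2, 0), (2, 1)]
is_person = contour_similarity(seen, stored) >= SIMILARITY_THRESHOLD
```

The threshold trades false positives against misses: raising it rejects more passers-by whose outline only loosely matches a stored contour, at the cost of missing partially occluded people.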
步驟102，將從環境圖像中識別出的目標作為候選目標，獲取候選目標的狀態資訊。Step 102: Take the targets identified from the environment image as candidate targets, and obtain status information of the candidate targets.
本實施例中,當從環境圖像中識別出目標時,將識別出的目標作為候選目標。例如,當有人進入機器人的監控範圍內時,機器人可從採集的環境圖像中識別出進入監控範圍內的人,這些人均作為候選目標。In this embodiment, when a target is identified from an environment image, the identified target is used as a candidate target. For example, when someone enters the surveillance range of the robot, the robot can identify people who enter the surveillance range from the collected environmental images, and these people are all candidate targets.
在識別出環境圖像中的目標後,獲取候選目標的狀態資訊,如目標的位置、目標在距離臨界值範圍內的停留時間、在預設時長內目標被識別到的次數等,以根據候選目標的狀態資訊,確定候選目標是否存在與智慧型裝置交互的交互意圖。After identifying the target in the environment image, obtain the status information of the candidate target, such as the position of the target, the target's staying time within the distance threshold, the number of times the target is recognized within a preset time period, etc. The status information of the candidate target determines whether the candidate target has an interaction intention to interact with the smart device.
步驟103,針對每個候選目標,根據對應的狀態資訊,判斷是否存在與智慧型裝置交互的交互意圖。Step 103: For each candidate target, determine whether there is an interaction intention to interact with the smart device according to the corresponding state information.
相關技術中,智慧型裝置在識別出人臉之後,直接將人作為交互目標,與人交互。但是,智慧型裝置識別出的人可能與智慧型裝置沒有交互的意願,由此可能會造成誤啟動。In related technologies, after a smart device recognizes a human face, it directly uses the human as an interaction target to interact with the human. However, the person identified by the smart device may not have the willingness to interact with the smart device, which may cause a false activation.
本實施例中,針對每個候選目標,根據候選目標的狀態資訊,判斷候選目標是否存在交互意圖。In this embodiment, for each candidate target, it is determined whether the candidate target has an interaction intention according to the status information of the candidate target.
作為一種可能的實現方式,獲取預設時長內候選目標被識別到的次數,並將該次數與預設的次數進行比較。如果預設時長內目標被識別到的次數,大於預設的次數,可以認為目標經常出現,與智慧型裝置之間存在交互意圖。As a possible implementation manner, the number of times a candidate target is recognized within a preset time period is obtained, and the number of times is compared with the preset number of times. If the number of times the target is recognized within a preset time period is greater than the preset number of times, it can be considered that the target often appears and there is an interaction intention with the smart device.
例如,在過去一個月內,公司前臺的機器人識別到某人的次數為4次,大於預設的次數2次,說明這個人是公司的常客,可以確定該人與機器人之間存在交互意圖。For example, in the past month, the number of times a robot at the front desk of a company recognized a person was 4 times greater than a preset number of times, indicating that the person is a frequent visitor to the company, and it can be determined that there is an interaction intention between the person and the robot.
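A minimal sketch of this recognition-count test, mirroring the example above (4 recognitions against a preset count of 2); the function name and default value are hypothetical:

```python
def has_interaction_intent_by_count(times_recognized, preset_count=2):
    """A candidate recognized more than preset_count times within the
    preset time window (e.g. one month) is treated as a frequent
    visitor with interaction intent."""
    return times_recognized > preset_count
```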
本實施例中,根據候選目標的狀態資訊,從候選目標中篩除沒有交互意圖的候選目標,從而可以避免將沒有交互意圖的目標,確定為交互目標。In this embodiment, according to the state information of candidate targets, candidate targets without interaction intention are filtered out from the candidate targets, so that it is possible to avoid determining targets without interaction intention as interaction targets.
步驟104,從存在交互意圖的候選目標中選取智慧型裝置的交互目標。Step 104: Select an interactive target of the smart device from the candidate targets having an interaction intention.
本實施例中，為了進一步提高確定交互目標的準確性，降低智慧型裝置誤啟動的概率，可繼續從存在交互意圖的候選目標中選取智慧型裝置的交互目標，從而使交互目標為最可能存在交互意圖的候選目標。In this embodiment, in order to further improve the accuracy of determining the interaction target and reduce the probability of false activation of the smart device, the interaction target of the smart device may be further selected from the candidate targets with interaction intent, so that the interaction target is the candidate target most likely to have interaction intent.
如果存在交互意圖的候選目標僅有一個，則將該候選目標作為交互目標。如果存在交互意圖的目標有複數時，可以根據候選目標與智慧型裝置之間的距離，確定交互目標。具體過程後續實施例，將進行詳細說明，在此不再贅述。If there is only one candidate target with interaction intent, that candidate target is taken as the interaction target. If there are plural targets with interaction intent, the interaction target may be determined according to the distance between each candidate target and the smart device. The specific process will be described in detail in subsequent embodiments and is not repeated here.
在上述實施例的基礎上，對於步驟103，根據對應的狀態資訊，判斷是否存在與該智慧型裝置交互的交互意圖，作為一種可能的實現方式，可根據候選目標與智慧型裝置之間的距離，和候選目標在預設的距離臨界值範圍內的停留時間，判斷候選目標是否存在交互意圖。第2圖為本發明實施例提供的另一種智慧型裝置的交互目標確定方法的流程示意圖。On the basis of the above embodiment, for step 103 of judging, according to the corresponding status information, whether there is an intent to interact with the smart device, as a possible implementation, whether a candidate target has interaction intent may be judged according to the distance between the candidate target and the smart device and the dwell time of the candidate target within a preset distance threshold. FIG. 2 is a schematic flowchart of another method for determining an interaction target of a smart device according to an embodiment of the present invention.
如第2圖所示,該智慧型裝置的交互目標確定方法包括: 步驟201,獲取在智慧型裝置的監控範圍內的環境圖像,對環境圖像進行目標識別。As shown in FIG. 2, the method for determining an interactive target of a smart device includes: Step 201: Obtain an environmental image within a monitoring range of the smart device, and perform target recognition on the environmental image.
本實施例中,智慧型裝置獲取監控範圍內的環境圖像,以及環境圖像進行目標識別的方法,可參見上述實施例中記載的相關內容,在此不再贅述。In this embodiment, for a smart device to acquire an environment image within a monitoring range and a method for performing target recognition by the environment image, refer to related content recorded in the foregoing embodiment, and details are not described herein again.
步驟202,將從環境圖像中識別出的目標作為候選目標,獲取候選目標與智慧型裝置之間的距離。In step 202, the target identified from the environment image is used as a candidate target, and the distance between the candidate target and the smart device is obtained.
可以理解的是，候選目標與智慧型裝置之間的距離越近，說明候選目標與智慧型裝置之間存在交互意圖的可能性越大，因此本實施例中，將候選目標與智慧型裝置之間的距離，作為判斷候選目標是否存在與智慧型裝置交互的交互意圖的依據之一。It can be understood that the closer the candidate target is to the smart device, the more likely it is that the candidate target intends to interact with the smart device. Therefore, in this embodiment, the distance between the candidate target and the smart device is used as one basis for judging whether the candidate target has an intent to interact with the smart device.
本實施例中,可通過深度攝像頭或者雙目視覺攝像頭或者雷射雷達,獲取候選目標與智慧型裝置之間的距離。In this embodiment, the distance between the candidate target and the smart device may be obtained through a depth camera, a binocular vision camera, or a laser radar.
作為一種可能的實現方式，智慧型裝置中配置有深度攝像頭，通過深度攝像頭，獲取候選目標的深度圖。在具體實現時，可通過結構光投射器向候選目標表面投射可控制的光點、光條或光面結構，並由深度攝像頭中的圖像感測器獲得圖像，通過幾何關係，利用三角原理計算得到候選目標的三維座標，從而可以得到候選目標與智慧型裝置之間的距離。As a possible implementation, the smart device is configured with a depth camera, through which a depth map of the candidate target is obtained. In a specific implementation, a structured-light projector may project controllable light spots, light stripes, or light-plane patterns onto the surface of the candidate target, and an image sensor in the depth camera captures the image; using the geometric relationship and the triangulation principle, the three-dimensional coordinates of the candidate target are computed, from which the distance between the candidate target and the smart device can be obtained.
作為另一種可能的實現方式,在智慧型裝置中配置雙目視覺攝像頭,通過雙目視覺攝像頭,對候選目標進行拍攝。然後,計算雙目視覺攝像頭所拍攝圖像的視差,根據視差計算候選目標與智慧型裝置之間的距離。As another possible implementation manner, a binocular vision camera is configured in the smart device, and the candidate target is photographed through the binocular vision camera. Then, the parallax of the image captured by the binocular vision camera is calculated, and the distance between the candidate target and the smart device is calculated according to the parallax.
第3圖為本發明實施例提供的雙目視覺計算距離的原理示意圖。第3圖中，在實際空間中，畫出了兩個攝像頭所在位置 O_L 和 O_R、左右攝像頭的光軸線，以及兩個攝像頭的焦平面，焦平面距離兩個攝像頭所在平面的距離為 f。FIG. 3 is a schematic diagram of the principle of distance calculation by binocular vision according to an embodiment of the present invention. In FIG. 3, the positions O_L and O_R of the two cameras, the optical axes of the left and right cameras, and the focal planes of the two cameras are drawn in real space; the distance from the focal planes to the plane of the two cameras is f.
如第3圖所示，P_L 和 P_R 分別是同一候選目標 P 在左右兩幅拍攝圖像中的位置。其中，P_L 點距離所在拍攝圖像左側邊界的距離為 X_L，P_R 點距離所在拍攝圖像左側邊界的距離為 X_R。O_L 和 O_R 分別為兩個攝像頭，這兩個攝像頭在同一平面，兩個攝像頭之間的距離為 B。As shown in FIG. 3, P_L and P_R are the positions of the same candidate target P in the left and right captured images. The distance from point P_L to the left border of its image is X_L, and the distance from point P_R to the left border of its image is X_R. O_L and O_R are the two cameras, which lie in the same plane at a distance B from each other.
基於三角測距原理，第3圖中的 P 與兩個攝像頭所在平面之間的距離 Z 具有如下關係：(B − (X_L − X_R)) / B = (Z − f) / Z。Based on the triangulation principle, the distance Z between P in FIG. 3 and the plane of the two cameras satisfies: (B − (X_L − X_R)) / B = (Z − f) / Z.
基於此，可以推得 Z = f·B / d，其中 d = X_L − X_R 為同一候選目標在雙目攝像頭所拍攝兩幅圖像中的視差。由於 f、B 為定值，因此根據視差 d 即可確定候選目標與攝像頭所在平面之間的距離 Z，即候選目標與智慧型裝置之間的距離。From this it can be derived that Z = f·B / d, where d = X_L − X_R is the disparity of the same candidate target between the two images captured by the binocular cameras. Since f and B are fixed, the distance Z between the candidate target and the camera plane, i.e., the distance between the candidate target and the smart device, can be determined from the disparity d.
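The triangulation result above, Z = f·B / d with disparity d = X_L − X_R, can be sketched as follows; the focal length and baseline values in the usage are illustrative assumptions, not parameters from this disclosure:

```python
def binocular_distance(x_left, x_right, focal_px, baseline_m):
    """Distance from the target to the camera plane via disparity:
    Z = f * B / d, where d = X_L - X_R (pixels), f is the focal
    length in pixels, and B is the baseline between cameras in meters."""
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity

# e.g. binocular_distance(340.0, 300.0, 1000.0, 0.12) gives 3.0 meters
```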
作為再一種可能的實現方式，在智慧型裝置中配置雷射雷達，通過雷射雷達向監控範圍內發射鐳射，發射的鐳射遇到監控範圍內的障礙物將被反射。智慧型裝置接收監控範圍內的每個障礙物返回的鐳射，根據返回的鐳射生成每個障礙物的二值圖。然後，將每個二值圖與環境圖像進行融合，從所有二值圖中識別出與候選目標對應的二值圖。具體地，可以根據每個障礙物的二值圖識別出每個障礙物的輪廓或者大小，然後與環境圖像中每個目標的輪廓或者大小進行匹配，從而可以得到候選目標對應的二值圖。之後，將候選目標對應的二值圖的鐳射返回時間乘以光速，並除以2，得到候選目標與智慧型裝置之間的距離。As yet another possible implementation, a lidar is configured in the smart device and emits laser pulses into the monitoring range; an emitted pulse is reflected when it meets an obstacle in the monitoring range. The smart device receives the laser returned by each obstacle in the monitoring range and generates a binary map of each obstacle from the returned laser. Each binary map is then fused with the environment image, and the binary map corresponding to the candidate target is identified among all the binary maps. Specifically, the contour or size of each obstacle can be identified from its binary map and matched against the contour or size of each target in the environment image, so that the binary map corresponding to the candidate target is obtained. The laser return time of that binary map is then multiplied by the speed of light and divided by 2 to obtain the distance between the candidate target and the smart device.
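A sketch of the time-of-flight computation in this step (round-trip time multiplied by the speed of light, divided by 2); purely illustrative:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_distance(return_time_s):
    """Distance to the obstacle that reflected the laser pulse:
    round-trip time * speed of light / 2."""
    return return_time_s * SPEED_OF_LIGHT / 2.0
```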
需要說明的是,其他用於計算候選目標與智慧型裝置之間的距離的方法,也包含在本發明實施例的保護範圍內。It should be noted that other methods for calculating the distance between the candidate target and the smart device are also included in the protection scope of the embodiment of the present invention.
步驟203,針對每個候選目標,判斷候選目標與智慧型裝置之間的距離是否小於或者等於預設的距離臨界值,且在距離臨界值範圍內的停留時長是否超出預設的時間臨界值。Step 203: For each candidate target, determine whether the distance between the candidate target and the smart device is less than or equal to a preset distance threshold, and whether the dwell time within the distance threshold exceeds the preset time threshold. .
由於當候選目標與智慧型裝置之間的距離較遠時，候選目標可能不存在與智慧型裝置交互的交互意圖，或者距離較近，但候選目標的停留時間較短也可能不存在與智慧型裝置交互的交互意圖。When the candidate target is far from the smart device, the candidate target may have no intent to interact with the smart device; even when the distance is short, a candidate target whose dwell time is short may likewise have no intent to interact with the smart device.
由此，可針對每個候選目標，將候選目標與智慧型裝置之間的距離，與預設的距離臨界值進行比較，以判斷候選目標與智慧型裝置之間的距離是否小於或者等於預設的距離臨界值。如果距離在距離臨界值範圍內，判斷候選目標在距離臨界值範圍內停留的時間是否超過預設的時間臨界值。Therefore, for each candidate target, the distance between the candidate target and the smart device can be compared with the preset distance threshold to judge whether that distance is less than or equal to the preset distance threshold. If the distance is within the distance threshold, it is further judged whether the time the candidate target stays within the distance threshold exceeds the preset time threshold.
步驟204,如果距離小於或者等於距離臨界值且停留時長超出時間臨界值,則確定該候選目標存在與智慧型裝置交互的交互意圖。In step 204, if the distance is less than or equal to the distance critical value and the staying time exceeds the time critical value, it is determined that the candidate target has an interaction intention to interact with the smart device.
當候選目標與智慧型裝置之間的距離小於預設的距離臨界值，且候選目標在距離臨界值範圍內的停留時長超過預設的時間臨界值，可以認為候選目標存在與智慧型裝置交互的交互意圖。When the distance between the candidate target and the smart device is less than the preset distance threshold, and the dwell time of the candidate target within the distance threshold exceeds the preset time threshold, the candidate target can be considered to have an intent to interact with the smart device.
以機器人為例,若人與機器人之間的距離小於3米,且人在3米內停留的時間超過2秒,可以認為人存在與機器人交互的交互意圖。Taking a robot as an example, if the distance between the human and the robot is less than 3 meters, and the human stays within 3 meters for more than 2 seconds, it can be considered that the human has an interaction intention to interact with the robot.
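The decision of steps 203-204 can be sketched with the example thresholds above (3 meters, 2 seconds); both values are configurable presets and the function name is hypothetical:

```python
def intent_by_distance_and_dwell(distance_m, dwell_s,
                                 dist_threshold=3.0, time_threshold=2.0):
    """Interaction intent holds only when the candidate is within the
    distance threshold AND has stayed there longer than the time
    threshold."""
    return distance_m <= dist_threshold and dwell_s > time_threshold
```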
步驟205,從存在交互意圖的候選目標中選取智慧型裝置的交互目標。 本實施例中,步驟205與上述實施例中的步驟104類似,故在此不再贅述。Step 205: Select an interactive target of the smart device from the candidate targets having an interaction intention. In this embodiment, step 205 is similar to step 104 in the foregoing embodiment, so it will not be repeated here.
本發明實施例的智慧型裝置的交互目標確定方法，通過候選目標與智慧型裝置之間的距離，以及候選目標在預設的距離臨界值範圍內的停留時間，從所有候選目標中，篩選出存在與智慧型裝置交互的交互意圖的候選目標，相比在檢測到人臉時，直接將人作為交互目標，可以降低智慧型裝置的誤啟動。According to the method for determining an interaction target of a smart device in this embodiment of the present invention, candidate targets with an intent to interact with the smart device are screened out from all candidate targets based on the distance between each candidate target and the smart device and the candidate target's dwell time within a preset distance threshold. Compared with directly taking a person as the interaction target whenever a face is detected, this reduces false activations of the smart device.
對於步驟103,作為另一種可能的實現方式,也可根據候選目標與智慧型裝置之間的距離,以及候選目標的人臉角度,判斷候選目標是否存在與智慧型裝置交互的交互意圖。第4圖為本發明實施例提供的另一種智慧型裝置的交互目標確定方法的流程示意圖。For step 103, as another possible implementation manner, whether the candidate target has an interaction intention to interact with the smart device may also be determined according to the distance between the candidate target and the smart device and the face angle of the candidate target. FIG. 4 is a schematic flowchart of another method for determining an interactive target of a smart device according to an embodiment of the present invention.
如第4圖所示,該智慧型裝置的交互目標確定方法包括: 步驟301,獲取在智慧型裝置的監控範圍內的環境圖像,對環境圖像進行目標識別。As shown in FIG. 4, the method for determining an interactive target of a smart device includes: Step 301: Obtain an environmental image within a monitoring range of the smart device, and perform target recognition on the environmental image.
本實施例中,步驟301與上述實施例中的步驟101類似,故在此不再贅述。In this embodiment, step 301 is similar to step 101 in the foregoing embodiment, so details are not described herein again.
步驟302,將從環境圖像中識別出的目標作為候選目標,獲取候選目標與智慧型裝置之間的距離,以及候選目標的人臉角度。Step 302: Obtain a target identified from the environment image as a candidate target, and obtain a distance between the candidate target and the smart device, and a face angle of the candidate target.
在實際中,當人路過機器人時,如果人轉頭看向機器人,或者當人臉正對機器人時,說明人對機器人的關注度較高,人存在與機器人交互的交互意圖。由此,可將人臉圖像中人臉的人臉角度,作為判斷候選目標是否存與智慧型裝置交互的交互意圖的依據之一。In practice, when a person passes by the robot, if the person turns his head to look at the robot, or when the human face is facing the robot, it means that the person pays more attention to the robot, and the person has an interaction intention to interact with the robot. Therefore, the face angle of the face in the face image can be used as one of the basis for judging whether the candidate target has an interaction intention to interact with the smart device.
本實施例中,可通過候選目標與智慧型裝置之間的距離,以及候選目標的人臉角度,來判斷候選目標是否存在與智慧型裝置交互的交互意願。其中,在獲取候選目標與智慧型裝置之間的距離時,可通過上述實施例中的記載的方法獲取。In this embodiment, the distance between the candidate target and the smart device and the face angle of the candidate target can be used to determine whether the candidate target has an interaction willingness to interact with the smart device. When obtaining the distance between the candidate target and the smart device, the distance between the candidate target and the smart device may be obtained by using the method described in the foregoing embodiment.
在獲取人臉角度時,可通過預先訓練好的機器學習模型,獲取人臉角度。具體地,可按照人臉輪廓從環境圖像中截取候選目標的人臉圖像,之後將人臉圖像輸入到機器學習模型中。機器學習模型根據人臉圖像,輸出人臉圖像中人臉角度。When obtaining the face angle, the face angle can be obtained through a pre-trained machine learning model. Specifically, the face image of the candidate target may be intercepted from the environment image according to the face contour, and then the face image is input into a machine learning model. The machine learning model outputs the face angle in the face image based on the face image.
其中，人臉角度可以是人臉中軸線偏離人臉圖像中軸線的角度，人臉中軸線包括水準方向的中軸線和垂直方向的中軸線，相應的人臉圖像中軸線也包括水準方向的中軸線和垂直方向的中軸線。從人臉圖像中可以識別出人臉水準方向中軸線和垂直方向的中軸線，分別偏離與人臉圖像的水準方向的中軸線和人臉圖像的垂直方向的中軸線的角度，獲取到的角度就是人臉角度。The face angle may be the angle by which the central axes of the face deviate from the central axes of the face image. The central axes of the face include a horizontal central axis and a vertical central axis, and the face image correspondingly also has a horizontal central axis and a vertical central axis. From the face image, the angles by which the face's horizontal and vertical central axes deviate from the image's horizontal and vertical central axes, respectively, can be identified; the obtained angle is the face angle.
本實施例中,可採用如下方式訓練機器學習模型。首先,採集人臉圖像,並對人臉圖像進行人臉角度標注,從而使樣本人臉圖像,攜帶表示樣本人臉圖像的人臉角度的標注資料。之後,將樣本人臉圖像輸入到初始構建的機器學習模型中進行訓練。當機器學習模型輸出的人臉角度,與標注的人臉角度之間的差值,在預設的誤差範圍內時,可以認為機器學習模型已經訓練完畢。In this embodiment, a machine learning model may be trained in the following manner. First, a face image is collected, and the face angle is labeled on the face image, so that the sample face image carries label data indicating the face angle of the sample face image. Then, the sample face images are input into the initially constructed machine learning model for training. When the difference between the face angle output by the machine learning model and the labeled face angle is within a preset error range, it can be considered that the machine learning model has been trained.
本實施例中,通過訓練好的機器學習模型獲取人臉角度,可以提高獲取的人臉角度的精度,從而能夠提高後續判斷的準確性。In this embodiment, obtaining the face angle by using a trained machine learning model can improve the accuracy of the obtained face angle, thereby improving the accuracy of subsequent judgments.
步驟303,針對每個候選目標,判斷候選目標與智慧型裝置之間的距離是否小於或者等於預設的距離臨界值,且候選目標的人臉角度是否處於預設的角度範圍內。Step 303: For each candidate target, determine whether the distance between the candidate target and the smart device is less than or equal to a preset distance threshold, and whether the face angle of the candidate target is within a preset angle range.
本實施例中，針對每個候選目標，將候選目標與智慧型裝置之間的距離，與預設的距離臨界值進行比較，將候選目標的人臉角度與預設的角度範圍的上限值進行比較。In this embodiment, for each candidate target, the distance between the candidate target and the smart device is compared with the preset distance threshold, and the face angle of the candidate target is compared with the upper limit of the preset angle range.
假設距離臨界值為3米，角度範圍為[0°,45°]，判斷候選目標與智慧型裝置之間的距離是否小於3米，將人臉角度與45°進行比較，以判斷人臉角度是否處於預設的角度範圍內。Assuming the distance threshold is 3 meters and the angle range is [0°, 45°], it is judged whether the distance between the candidate target and the smart device is less than 3 meters, and the face angle is compared with 45° to judge whether the face angle falls within the preset angle range.
步驟304，如果候選目標與智慧型裝置之間的距離小於或者等於預設的距離臨界值，且候選目標的人臉角度處於預設的角度範圍內，則確定候選目標存在與智慧型裝置交互的交互意圖。Step 304: If the distance between the candidate target and the smart device is less than or equal to the preset distance threshold, and the face angle of the candidate target is within the preset angle range, determine that the candidate target has an intent to interact with the smart device.
本實施例中，當候選目標與智慧型裝置之間的距離小於或等於預設的距離臨界值，並且候選目標的人臉角度處於預設的角度範圍內，說明候選目標在距離臨界值範圍內，對智慧型裝置進行關注，可以確定候選目標存在與智慧型裝置交互的交互意圖。相比直接將檢測到的人作為交互目標而言，提高了交互目標確認的準確度。In this embodiment, when the distance between the candidate target and the smart device is less than or equal to the preset distance threshold and the face angle of the candidate target is within the preset angle range, the candidate target is paying attention to the smart device while within the distance threshold, so it can be determined that the candidate target has an intent to interact with the smart device. Compared with directly taking a detected person as the interaction target, this improves the accuracy of interaction-target determination.
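A sketch of the combined test of steps 303-304, using the example values above (3-meter distance threshold, [0°, 45°] angle range); the function name and defaults are hypothetical:

```python
def intent_by_distance_and_angle(distance_m, face_angle_deg,
                                 dist_threshold=3.0,
                                 angle_range=(0.0, 45.0)):
    """Intent requires being within the distance threshold AND having a
    face angle inside the preset range (i.e. facing the device)."""
    low, high = angle_range
    return distance_m <= dist_threshold and low <= face_angle_deg <= high
```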
步驟305,從存在交互意圖的候選目標中選取智慧型裝置的交互目標。Step 305: Select an interactive target of the smart device from the candidate targets having an interaction intention.
本實施例中,步驟305與上述實施例中步驟104類似,在此不再贅述。In this embodiment, step 305 is similar to step 104 in the foregoing embodiment, and details are not described herein again.
本發明實施例的智慧型裝置的交互目標確定方法，通過候選目標與智慧型裝置之間的距離，以及候選目標人臉角度，從所有候選目標中，篩選出存在與智慧型裝置交互的交互意圖的候選目標，相比在檢測到人臉時，直接將人作為交互目標，可以降低智慧型裝置的誤啟動。According to the method for determining an interaction target of a smart device in this embodiment of the present invention, candidate targets with an intent to interact with the smart device are screened out from all candidate targets based on the distance between each candidate target and the smart device and the candidate target's face angle. Compared with directly taking a person as the interaction target whenever a face is detected, this reduces false activations of the smart device.
需要說明的是，在判斷候選目標是否存在交互意圖時，也可在候選目標與智慧型裝置之間的距離在距離臨界值範圍內，且候選目標在距離臨界值範圍內停留的時間超過時間臨界值，以及候選目標的人臉角度在預設的角度範圍內時，確定候選目標存在與智慧型裝置交互的交互意圖。否則，可以認為候選目標不存在與智慧型裝置交互的交互意圖。It should be noted that, when judging whether a candidate target has interaction intent, it may alternatively be determined that the candidate target has an intent to interact with the smart device only when the distance between the candidate target and the smart device is within the distance threshold, the time the candidate target stays within the distance threshold exceeds the time threshold, and the candidate target's face angle is within the preset angle range. Otherwise, the candidate target can be considered to have no intent to interact with the smart device.
上述實施例中,對於從存在交互意圖的候選目標中選取智慧型裝置的交互目標,當存在交互意圖的候選目標只有一個時,可將存在交互意圖的候選目標作為交互目標。當存在交互意圖的候選目標有複數時,可根據候選目標與智慧型裝置之間的距離,確定從候選目標中選取交互目標。第5圖為本發明實施例提供的另一種智慧型裝置的交互目標確定方法的流程示意圖。In the above embodiment, for the interactive target of the smart device selected from the candidate targets with interactive intent, when there is only one candidate target with interactive intent, the candidate target with interactive intent may be used as the interactive target. When there are plural candidate targets with interaction intent, the interactive target may be selected from the candidate targets according to the distance between the candidate target and the smart device. FIG. 5 is a schematic flowchart of another method for determining an interactive target of a smart device according to an embodiment of the present invention.
如第5圖所示，對於步驟104，該智慧型裝置的交互目標確定方法可包括： 步驟401，當檢測到複數候選目標時，且存在交互意圖的候選目標為複數時，從複數存在交互意圖的候選目標中，確定出與智慧型裝置距離最近的候選目標。As shown in FIG. 5, for step 104, the method for determining an interaction target of the smart device may include: Step 401: When plural candidate targets are detected and plural candidate targets have interaction intent, determine, from among the plural candidate targets with interaction intent, the candidate target closest to the smart device.
由於候選目標與智慧型裝置之間的距離越近,說明候選目標與智慧型裝置之間的交互意圖越強。The closer the distance between the candidate target and the smart device, the stronger the intention of interaction between the candidate target and the smart device.
本實施例中，當智慧型裝置從環境圖像中檢測到複數候選目標，且判斷出存在交互意圖的候選目標也為複數時，可將複數存在交互意圖的候選目標與智慧型裝置之間的距離進行比較，以從複數存在交互意圖的候選目標中，查找出與智慧型裝置距離最近的候選目標，從而篩選出交互意圖較強的候選目標。In this embodiment, when the smart device detects plural candidate targets from the environment image and judges that plural candidate targets have interaction intent, the distances between these candidate targets and the smart device may be compared to find, among the plural candidates with interaction intent, the one closest to the smart device, thereby screening out the candidates with the strongest interaction intent.
步驟402,從與智慧型裝置距離最近的候選目標中,選取智慧型裝置的交互目標。Step 402: Select an interactive target of the smart device from the candidate targets closest to the smart device.
本實施例中,為了進一步確定智慧型裝置的交互目標,需要從與智慧型裝置距離最近的目標中,選取智慧型裝置的交互目標。In this embodiment, in order to further determine the interaction target of the smart device, it is necessary to select the interaction target of the smart device from among the targets closest to the smart device.
可以理解的是,當與智慧型裝置距離最近的候選目標僅有一個時,可將該候選目標作為智慧型裝置的交互目標。It can be understood that when there is only one candidate target closest to the smart device, the candidate target can be used as the interactive target of the smart device.
當與智慧型裝置距離最近的候選目標有複數時,需要從複數與智慧型裝置距離最近的候選目標中,選取智慧型裝置的交互目標。When there are a plurality of candidate targets closest to the smart device, it is necessary to select an interactive target of the smart device from the plurality of candidate targets closest to the smart device.
以機器人為例，某公司前臺放置一個機器人，當使用者需要進入公司時，可以在機器人中進行資訊登錄，即在機器人中進行註冊。或者可以從公司網站中下載註冊使用者的人臉圖像，儲存到機器人中，在公司網站中註冊過的用戶，同步地在機器人中進行了註冊。一般在該機器人中註冊過的用戶，比未註冊過的用戶與機器人交互的交互意圖更強。由此，可根據候選目標是否已註冊，確定智慧型裝置的交互目標。Taking a robot as an example, a robot is placed at the front desk of a company. When a user needs to enter the company, the user can register information in the robot, that is, register with the robot. Alternatively, face images of registered users can be downloaded from the company website and stored in the robot, so that users registered on the company website are synchronously registered with the robot. Generally, a user registered with the robot has a stronger intent to interact with it than an unregistered user. Therefore, the interaction target of the smart device can be determined according to whether the candidate target is registered.
機器人在日常接待工作時，可以採集訪客或者公司員工的人臉圖像，利用採集的訪客或者公司員工的人臉圖像，構建一個已註冊使用者人臉圖像庫，也可以利用網站註冊使用者的人臉圖像，構建該人臉圖像庫。During daily reception work, the robot can collect face images of visitors or company employees and use them to build a face image library of registered users; the face image library may also be built from the face images of users registered on the website.
作為一種可能的實現方式，智慧型裝置可在本地查詢與智慧型裝置距離最近的候選目標，是否已經註冊智慧型裝置。具體地，智慧型裝置可預先儲存已註冊使用者人臉圖像庫，人臉圖像庫中儲存有已註冊智慧型裝置的使用者人臉圖像。當與智慧型裝置距離最近的候選目標為複數時，可將與智慧型裝置距離最近的候選目標的人臉圖像，與人臉圖像庫中的人臉圖像進行比較。 如果人臉圖像庫中存有一個與智慧型裝置距離最近的候選目標的人臉圖像，說明該候選目標已註冊，那麼將該候選目標作為智慧型裝置的交互目標。As a possible implementation, the smart device may locally query whether the candidate target closest to the smart device has registered with the smart device. Specifically, the smart device may pre-store a face image library of registered users, which holds the face images of users registered with the smart device. When plural candidate targets are closest to the smart device, their face images may be compared with the face images in the library. If the library contains the face image of one of the candidate targets closest to the smart device, that candidate target is registered and is taken as the interaction target of the smart device.
如果人臉圖像庫中不存在與智慧型裝置距離最近的候選目標的人臉圖像,說明與智慧型裝置距離最近的候選目標均未註冊,可從與智慧型裝置距離最近的候選目標中,隨機選取一個候選目標作為交互目標。If there is no face image of the candidate target closest to the smart device in the face image library, it means that the candidate target closest to the smart device is not registered, and the candidate target closest to the smart device can be selected. , Randomly select a candidate target as the interactive target.
如果人臉圖像庫中存在複數與智慧型裝置距離最近的候選目標的人臉圖像，說明有複數與智慧型裝置距離最近的候選目標已註冊，那麼可將最先查詢出的與智慧型裝置距離最近的候選目標作為交互目標，也可從已註冊且與智慧型裝置距離最近的候選目標中，隨機選取一個候選目標作為交互目標。If the face image library contains the face images of plural candidate targets closest to the smart device, plural nearest candidates are registered; in that case, the nearest candidate target found first in the query may be taken as the interaction target, or one candidate target may be randomly selected as the interaction target from among the registered candidates closest to the smart device.
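The local selection logic described above (prefer a registered candidate among those tied for nearest, otherwise pick at random) might be sketched as follows; the identifiers and data shapes are hypothetical:

```python
import random

def select_interaction_target(nearest_candidates, registered_ids):
    """nearest_candidates: ids of the candidates closest to the device.
    registered_ids: ids found in the registered-user face library.
    Returns a registered nearest candidate if one exists (the first
    found), otherwise a random nearest candidate."""
    registered = [c for c in nearest_candidates if c in registered_ids]
    if registered:
        return registered[0]
    return random.choice(nearest_candidates)
```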
作為另一種可能的實現方式，當與智慧型裝置距離最近的候選目標為複數時，可將所有與智慧型裝置距離最近的候選目標的人臉圖像，發送給伺服器，由伺服器返回查詢結果至智慧型裝置，智慧型裝置根據比較結果確定交互目標。As another possible implementation, when plural candidate targets are closest to the smart device, the face images of all the candidate targets closest to the smart device can be sent to a server, the server returns the query result to the smart device, and the smart device determines the interaction target according to the result.
具體地，伺服器儲存有已註冊使用者的人臉圖像庫，當與智慧型裝置距離最近的候選目標為複數時，智慧型裝置將複數與智慧型裝置距離最近的候選目標的人臉圖像，發送至伺服器。伺服器接收到人臉圖像，並在已註冊使用者人臉圖像庫中，查詢是否存在與智慧型裝置距離最近的候選目標的人臉圖像。然後，伺服器將查詢結果發送給智慧型裝置。智慧型裝置根據查詢結果，確定智慧型裝置的交互目標，具體的確定方法可參見上述方法，在此不再贅述。Specifically, the server stores a face image library of registered users. When plural candidate targets are closest to the smart device, the smart device sends the face images of those candidates to the server. The server receives the face images and queries the registered-user face image library for the face images of the candidates closest to the smart device. The server then sends the query result to the smart device, which determines the interaction target accordingly; for the specific determination method, refer to the method above, which is not repeated here.
舉例來說，A從機器人面前路過，並沒有交互意圖，而B是公司的常客，之前已經完成了註冊。當A和B與機器人的距離小於距離臨界值3米，且與機器人的距離相同時，機器人可選取已經完成註冊的B作為交互目標，向B打招呼。For example, A passes in front of the robot with no intent to interact, while B is a frequent visitor to the company and has already registered. When A and B are both within the 3-meter distance threshold and at the same distance from the robot, the robot can select the registered user B as the interaction target and greet B.
本發明實施例的智慧型裝置的交互目標確定方法，在存在交互意圖的候選目標有複數時，篩選出與智慧型裝置距離最近的候選目標，在與智慧型裝置距離最近的目標有複數時，通過查詢已註冊使用者人臉圖像庫，根據查詢結果，選取智慧型裝置的交互目標，而相關技術中智慧型裝置從同時出現的多人中，選取的交互目標可能並不是最可能與智慧型裝置有交互意圖的人，從而提高了交互目標的確定準確度，避免智慧型裝置的誤啟動。According to the method for determining an interaction target of a smart device in this embodiment of the present invention, when plural candidate targets have interaction intent, the candidates closest to the smart device are screened out; when plural targets are closest to the smart device, the registered-user face image library is queried and the interaction target is selected according to the query result. In the related art, when multiple people appear at the same time, the interaction target selected by the smart device may not be the person most likely to intend to interact with it; the present method therefore improves the accuracy of interaction-target determination and avoids false activations of the smart device.
In practice, after the smart device determines the interaction target, the target may move during the interaction; for example, a person may walk about while the robot is greeting them. To keep the smart device facing and following the person during interaction, this embodiment of the present invention further proposes keeping the center point of the face image within a preset image region throughout the interaction. FIG. 6 is a flowchart of another interaction target determination method for a smart device according to an embodiment of the present invention.
After the interaction target of the smart device is selected from the candidate targets having interaction intent, as shown in FIG. 6, the interaction target determination method further includes: Step 105: controlling the smart device to interact with the interaction target.
In this embodiment, after the interaction target is determined, the smart device is activated and interacts with the interaction target. Taking a robot as an example, after determining whom to greet, the robot activates and greets the interaction target, e.g., "Welcome."
Step 106: during the interaction, identifying the center point of the face image of the interaction target.
The face image of a target may be the smallest region of the environment image that contains the target's face.
In this embodiment, the smart device identifies the center point of the interaction target's face image in real time during the interaction. The center point of the face image is the intersection of the vertical center line and the horizontal center line of the face image.
Step 107: detecting whether the center point of the face image is within a preset image region. In this embodiment, the preset image region may be a circular region obtained by drawing a circle of a preset size centered at the center point of the environment image. The preset size may be half the horizontal size of a face image captured when a person stands at the distance threshold; of course, it may also be set as needed.
The smart device may check at a preset interval, e.g., every 0.5 seconds, whether the center point of the face image lies within the preset image region, so as to determine whether the face image remains within that region.
Step 108: if the center point is not within the image region, obtaining a path from the center point of the face image to the center point of the image region.
In this embodiment, if the center point of the face image is not within the image region, the face picture the smart device can capture is incomplete, so a path from the center point of the face image to the center point of the image region is obtained.
Step 109: controlling the smart device according to the path so that the center point of the face image falls within the image region.
After the smart device obtains the path from the center point of the face image to the center point of the image region, the smart device is controlled according to the path so that the center point of the face image falls within the image region.
As one possible implementation, a rectangular coordinate system may be established with the center point of the image region as the origin. The coordinates of the face image's center point are obtained, the distance between the two center points is computed together with the angle of the face image's center point relative to the horizontal direction, and the smart device is then controlled to rotate by the corresponding angle and move the corresponding distance.
Taking a robot as an example, if the robot detects that the center point of the face image lies to the right of the image region's center point, the person is gradually moving to the right, so the robot's gimbal and chassis are turned to the right to follow the person and keep gazing at them.
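The region check of Step 107 and the distance/angle computation of the implementation above can be sketched together. This is an illustrative sketch: the pixel coordinates and the region radius are assumptions, and mapping the returned angle and distance onto actual gimbal/chassis commands is device-specific.

```python
# Check whether the face center lies inside the circular region around the
# image center; if not, return the distance and the angle (relative to the
# horizontal direction) the device should turn toward.
import math

def follow_step(face_center, image_center, region_radius):
    """Return None if the face center is inside the region, otherwise a
    (distance_px, angle_degrees) tuple pointing from region center to face."""
    dx = face_center[0] - image_center[0]
    dy = face_center[1] - image_center[1]
    dist = math.hypot(dx, dy)
    if dist <= region_radius:
        return None  # face already within the region: no motion needed
    angle = math.degrees(math.atan2(dy, dx))  # angle vs. horizontal axis
    return dist, angle
```

A face center at (160, 100) with the region centered at (100, 100) yields an offset purely along the horizontal axis, i.e., the person drifted right and the device should turn right.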
By detecting whether the center point of the face image is within the preset image region, the interaction target determination method of this embodiment of the present invention enables the smart device to follow the interaction target, making the device's interaction with people more vivid and flexible.
To implement the foregoing embodiments, an embodiment of the present invention further provides an interaction target determination apparatus for a smart device. FIG. 7 is a schematic structural diagram of an interaction target determination apparatus for a smart device according to an embodiment of the present invention.
As shown in FIG. 7, the interaction target determination apparatus of the smart device includes a first acquisition module 510, a second acquisition module 520, a judgment module 530, and a selection module 540.
The first acquisition module 510 is configured to acquire an environment image within the monitoring range of the smart device and perform target recognition on the environment image.
The second acquisition module 520 is configured to take the targets recognized from the environment image as candidate targets and acquire state information of the candidate targets.
The judgment module 530 is configured to judge, for each candidate target according to its corresponding state information, whether the candidate target has an intention to interact with the smart device.
The selection module 540 is configured to select the interaction target of the smart device from the candidate targets having interaction intent.
In one possible implementation of this embodiment, the second acquisition module 520 is specifically configured to acquire the distance between each candidate target and the smart device. The judgment module 530 is specifically configured to judge, for each candidate target, whether the distance between the candidate target and the smart device is less than or equal to a preset distance threshold and whether the candidate's dwell time within the threshold range exceeds a preset time threshold; if the distance is less than or equal to the distance threshold and the dwell time exceeds the time threshold, the candidate target is determined to have an intention to interact with the smart device.

In another possible implementation of this embodiment, the second acquisition module 520 is specifically configured to acquire the distance between each candidate target and the smart device as well as the candidate target's face angle. The judgment module 530 is specifically configured to judge, for each candidate target, whether the distance between the candidate target and the smart device is less than or equal to the preset distance threshold and whether the candidate target's face angle is within a preset angle range; if the distance is less than or equal to the preset distance threshold and the face angle is within the preset angle range, the candidate target is determined to have an intention to interact with the smart device.
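The two intent checks performed by modules 520 and 530 can be sketched as simple predicates. The threshold values (3 m, 2 s, ±30°) are illustrative assumptions; the patent leaves them configurable.

```python
# Hedged sketch of the two interaction-intent checks described above.

def has_intent_by_dwell(distance_m, dwell_s,
                        dist_thresh=3.0, time_thresh=2.0):
    """Intent if the candidate is close enough and has lingered long enough."""
    return distance_m <= dist_thresh and dwell_s > time_thresh

def has_intent_by_face_angle(distance_m, face_angle_deg,
                             dist_thresh=3.0, angle_range=(-30.0, 30.0)):
    """Intent if the candidate is close enough and roughly faces the device."""
    lo, hi = angle_range
    return distance_m <= dist_thresh and lo <= face_angle_deg <= hi
```

A passer-by at 2.5 m who lingers for 3 s satisfies the first check; someone equally close but facing 45° away fails the second.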
In one possible implementation of this embodiment, the selection module 540 includes: a determination unit configured to, when multiple candidate targets are detected and multiple of them have interaction intent, determine the candidate target(s) closest to the smart device from among the candidates having interaction intent; and a selection unit configured to select the interaction target of the smart device from the closest candidate target(s).
In one possible implementation of this embodiment, the selection unit is specifically configured to: when multiple candidate targets are equally closest to the smart device, query whether the smart device's registered-user face image database contains a face image of any of those closest candidate targets; if the database contains a face image of exactly one of the closest candidates, take that candidate as the interaction target; if the database contains no face image of any of the closest candidates, randomly select one of them as the interaction target; and if the database contains face images of several of the closest candidates, take the first such candidate returned by the query as the interaction target.
In one possible implementation of this embodiment, the second acquisition module 520 is specifically configured to: obtain a depth map through a depth camera of the smart device and obtain the distance between the candidate target and the smart device from the depth map; or photograph the candidate target through a binocular vision camera of the smart device, compute the disparity of the images captured by the binocular vision camera, and compute the distance between the candidate target and the smart device from the disparity; or emit laser light into the monitoring range through a lidar of the smart device, generate a binary map for each obstacle within the monitoring range from the laser light it returns, fuse each binary map with the environment image to identify, among all the binary maps, the one corresponding to the candidate target, and determine the distance between the candidate target and the smart device from the laser return time of that binary map.
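The binocular-vision option above relies on the standard stereo relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity in pixels. A minimal sketch, with calibration values as illustrative assumptions:

```python
# Distance from stereo disparity: Z = f * B / d.
def stereo_distance(focal_px, baseline_m, disparity_px):
    """Distance to the target from the disparity between the two camera views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible target")
    return focal_px * baseline_m / disparity_px
```

For example, with a 700-pixel focal length and a 10 cm baseline, a 35-pixel disparity places the target 2 m away; larger disparities correspond to closer targets.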
In one possible implementation of this embodiment, the second acquisition module 520 is specifically configured to crop the candidate target's face image from the environment image and input the face image into a pre-trained machine learning model to obtain the face angle of the face in the image. In one possible implementation of this embodiment, the apparatus further includes: a collection module configured to collect sample face images, each carrying annotation data that indicates the face angle of the sample face; and a training module configured to input the sample face images into an initially constructed machine learning model for training, a trained machine learning model being obtained when the error of the trained model falls within a preset error range.
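The training module's stopping rule, training until the model error falls within the preset error range, can be sketched generically. The model itself is a stand-in here: any real face-angle regressor would supply the `model_step` callable, and the iteration cap is an assumption the patent does not specify.

```python
# Loose sketch of the train-until-error-threshold loop described above.
def train_until_converged(model_step, error_threshold, max_iters=1000):
    """model_step() runs one training pass and returns the current error;
    training stops once the error enters the preset error range."""
    err = float("inf")
    for _ in range(max_iters):
        err = model_step()
        if err <= error_threshold:
            return err  # model error within the preset range: trained
    return err          # cap reached without convergence
```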
In one possible implementation of this embodiment, the apparatus further includes: a first control module configured to control the smart device to interact with the interaction target after the interaction target is selected from the candidate targets having interaction intent; a recognition module configured to identify the center point of the interaction target's face image during the interaction; a detection module configured to detect whether the center point of the face image is within the preset image region; a third acquisition module configured to obtain, when the center point is not within the image region, a path from the center point of the face image to the center point of the image region; and a second control module configured to control the smart device according to the path so that the center point of the face image falls within the image region.
It should be noted that the foregoing explanations of the method embodiments for determining the interaction target of a smart device also apply to the interaction target determination apparatus of this embodiment and are therefore not repeated here.
The interaction target determination apparatus of this embodiment of the present invention acquires an environment image within the monitoring range of the smart device, performs target recognition on the environment image, takes the recognized targets as candidate targets, acquires their state information, judges for each candidate target according to its state information whether it has an intention to interact with the smart device, and selects the interaction target of the smart device from the candidates having interaction intent. In this embodiment, candidates with interaction intent are first filtered out of all candidate targets according to their state information, and the interaction target is then selected from among them, so that the selected target is the one most likely intending to interact with the smart device. Targets without interaction intent are thereby not chosen as interaction targets, which improves the accuracy of interaction target determination and reduces false activation of the smart device.
To implement the foregoing embodiments, an embodiment of the present invention further provides a smart device.
FIG. 8 is a schematic structural diagram of an embodiment of a smart device according to the present invention. As shown in FIG. 8, the smart device may include a housing 610, a processor 620, a memory 630, a circuit board 640, and a power circuit 650. The circuit board 640 is disposed in the space enclosed by the housing 610, and the processor 620 and the memory 630 are disposed on the circuit board 640. The power circuit 650 supplies power to the circuits and components of the smart device. The memory 630 stores executable program code; the processor 620 runs the program corresponding to the executable program code by reading it from the memory 630, so as to execute the interaction target determination method for a smart device described in the foregoing embodiments. To implement the foregoing embodiments, an embodiment of the present invention further provides a computer program product comprising a computer program stored on a computer-readable storage medium; the computer program comprises program instructions which, when executed by a processor, implement the interaction target determination method for a smart device described in the foregoing embodiments.
To implement the foregoing embodiments, an embodiment of the present invention further provides a non-transitory computer-readable storage medium storing a computer program which, when executed by a processor, implements the interaction target determination method for a smart device described in the foregoing embodiments.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in conjunction with the embodiment or example is included in at least one embodiment or example of the present invention. In this specification, such schematic expressions do not necessarily refer to the same embodiment or example. Furthermore, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict one another, those skilled in the art may combine the different embodiments or examples described in this specification and the features thereof.
In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "plurality" means at least two, for example two or three, unless specifically and explicitly defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing the steps of a custom logic function or process. The scope of the preferred embodiments of the present invention includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention pertain.
The logic and/or steps represented in the flowcharts or otherwise described herein, for example an ordered list of executable instructions for implementing logical functions, may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from an instruction execution system, apparatus, or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it as necessary, and then stored in computer memory.
It should be understood that the parts of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented by software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented by any one of, or a combination of, the following techniques known in the art: a discrete logic circuit having logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit having suitable combinational logic gate circuits, a programmable gate array (PGA), a field-programmable gate array (FPGA), and the like.
Those of ordinary skill in the art will understand that all or part of the steps carried by the methods of the above embodiments may be completed by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present invention may be integrated in one processing module, or each unit may exist physically on its own, or two or more units may be integrated in one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The above-mentioned storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and are not to be construed as limiting the present invention; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present invention.
610‧‧‧housing
620‧‧‧processor
630‧‧‧memory
640‧‧‧circuit board
650‧‧‧power circuit
The above and/or additional aspects and advantages of the present invention will become apparent and readily understood from the following description of the embodiments in conjunction with the accompanying drawings, in which: FIG. 1 is a flowchart of an interaction target determination method for a smart device according to an embodiment of the present invention; FIG. 2 is a flowchart of another interaction target determination method for a smart device according to an embodiment of the present invention; FIG. 3 is a schematic diagram of the principle of computing distance by binocular vision according to an embodiment of the present invention; FIG. 4 is a flowchart of another interaction target determination method for a smart device according to an embodiment of the present invention; FIG. 5 is a flowchart of another interaction target determination method for a smart device according to an embodiment of the present invention; FIG. 6 is a flowchart of another interaction target determination method for a smart device according to an embodiment of the present invention; FIG. 7 is a schematic structural diagram of an interaction target determination apparatus for a smart device according to an embodiment of the present invention; FIG. 8 is a schematic structural diagram of an embodiment of a smart device according to the present invention.
Claims (12)
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201810236768.7A CN108733208A (en) | 2018-03-21 | 2018-03-21 | The I-goal of smart machine determines method and apparatus |
| ??201810236768.7 | 2018-03-21 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| TW201941099A true TW201941099A (en) | 2019-10-16 |
Family
ID=63940975
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW108109739A TW201941099A (en) | 2018-03-21 | 2019-03-21 | A method and its equipment of locking interaction target for intelligent device |
Country Status (3)
| Country | Link |
|---|---|
| CN (1) | CN108733208A (en) |
| TW (1) | TW201941099A (en) |
| WO (1) | WO2019179442A1 (en) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI742644B (en) * | 2020-05-06 | 2021-10-11 | 東元電機股份有限公司 | Following mobile platform and method thereof |
| TWI756963B (en) * | 2020-12-03 | 2022-03-01 | 禾聯碩股份有限公司 | Region definition and identification system of target object and method |
Families Citing this family (22)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN108733208A (en) * | 2018-03-21 | 2018-11-02 | 北京猎户星空科技有限公司 | The I-goal of smart machine determines method and apparatus |
| CN109508687A (en) * | 2018-11-26 | 2019-03-22 | 北京猎户星空科技有限公司 | Man-machine interaction control method, device, storage medium and smart machine |
| CN109815813B (en) * | 2018-12-21 | 2021-03-05 | 深圳云天励飞技术有限公司 | Image processing method and related product |
| CN111724772B (en) * | 2019-03-20 | 2024-12-24 | 阿里巴巴集团控股有限公司 | Interaction method and device of intelligent device and intelligent device |
| CN110070016A (en) * | 2019-04-12 | 2019-07-30 | 北京猎户星空科技有限公司 | A kind of robot control method, device and storage medium |
| CN110286771B (en) * | 2019-06-28 | 2024-06-07 | 北京金山安全软件有限公司 | Interaction method, device, intelligent robot, electronic equipment and storage medium |
| CN110647797B (en) * | 2019-08-05 | 2022-11-11 | 深圳市海雀科技有限公司 | Visitor detection method and device |
| CN112666572A (en) * | 2019-09-30 | 2021-04-16 | 北京声智科技有限公司 | Wake-up method based on radar, wake-up device, electronic device and storage medium |
| CN112784644A (en) * | 2019-11-08 | 2021-05-11 | 佛山市云米电器科技有限公司 | Multi-device synchronous display method, device, equipment and computer readable storage medium |
| CN111240217B (en) * | 2020-01-08 | 2024-02-23 | 深圳绿米联创科技有限公司 | Status detection method, device, electronic equipment and storage medium |
| CN111341350A (en) * | 2020-01-18 | 2020-06-26 | 南京奥拓电子科技有限公司 | Human-computer interaction control method, system, intelligent robot and storage medium |
| CN115086095A (en) * | 2021-03-10 | 2022-09-20 | Oppo广东移动通信有限公司 | Equipment control method and related device |
| CN113010594B (en) * | 2021-04-06 | 2023-06-06 | 深圳市思麦云科技有限公司 | XR-based intelligent learning platform |
| CN113284404B (en) * | 2021-04-26 | 2022-04-08 | 广州九舞数字科技有限公司 | Electronic sand table display method and device based on user actions |
| CN113299416A (en) * | 2021-04-29 | 2021-08-24 | 中核核电运行管理有限公司 | Intelligent identification system and method for operation intention of nuclear power plant operator |
| CN113658251A (en) * | 2021-08-25 | 2021-11-16 | 北京市商汤科技开发有限公司 | Distance measuring method, device, electronic equipment, storage medium and system |
| CN113850165B (en) * | 2021-09-13 | 2024-07-19 | 支付宝(杭州)信息技术有限公司 | Face recognition method and device |
| CN113835352B (en) * | 2021-09-29 | 2023-09-08 | 歌尔科技有限公司 | Intelligent device control method, system, electronic device and storage medium |
| CN116109970A (en) * | 2022-12-28 | 2023-05-12 | 浙江大华技术股份有限公司 | A video display method, device and computer-readable storage medium |
| CN117389416B (en) * | 2023-10-18 | 2024-08-20 | 七腾机器人有限公司 | Interactive control method and device of intelligent robot and robot |
| CN117170418B (en) * | 2023-11-02 | 2024-02-20 | 杭州华橙软件技术有限公司 | PTZ control method, device, equipment and storage medium |
| CN118767333B (en) * | 2024-07-02 | 2025-02-11 | 宝鸡市人民医院(市急救中心) | Distance-adaptive wireless charging system based on in vivo microwave response |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101140620A (en) * | 2007-10-16 | 2008-03-12 | 上海博航信息科技有限公司 | Human face recognition system |
| CN106584451B (en) * | 2015-10-14 | 2019-12-10 | 国网智能科技股份有限公司 | Automatic transformer substation composition robot and method based on visual navigation |
| CN105718896A (en) * | 2016-01-22 | 2016-06-29 | 张健敏 | Intelligent robot with target recognition function |
| CN107102540A (en) * | 2016-02-23 | 2017-08-29 | 芋头科技(杭州)有限公司 | A kind of method and intelligent robot for waking up intelligent robot |
| CN105843118B (en) * | 2016-03-25 | 2018-07-27 | 北京光年无限科技有限公司 | A kind of robot interactive method and robot system |
| CN106225764A (en) * | 2016-07-01 | 2016-12-14 | 北京小米移动软件有限公司 | Based on the distance-finding method of binocular camera in terminal and terminal |
| CN108733208A (en) * | 2018-03-21 | 2018-11-02 | 北京猎户星空科技有限公司 | The I-goal of smart machine determines method and apparatus |
- 2018
  - 2018-03-21 CN CN201810236768.7A patent/CN108733208A/en active Pending
- 2019
  - 2019-03-19 WO PCT/CN2019/078748 patent/WO2019179442A1/en not_active Ceased
  - 2019-03-21 TW TW108109739A patent/TW201941099A/en unknown
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| TWI742644B (en) * | 2020-05-06 | 2021-10-11 | 東元電機股份有限公司 | Following mobile platform and method thereof |
| TWI756963B (en) * | 2020-12-03 | 2022-03-01 | 禾聯碩股份有限公司 | Region definition and identification system of target object and method |
Also Published As
| Publication number | Publication date |
|---|---|
| CN108733208A (en) | 2018-11-02 |
| WO2019179442A1 (en) | 2019-09-26 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| TW201941099A (en) | | A method and its equipment of locking interaction target for intelligent device |
| WO2019179441A1 (en) | | Focus tracking method and device of smart apparatus, smart apparatus, and storage medium |
| CN111989537B (en) | | Systems and methods for detecting human gaze and gestures in an unconstrained environment |
| US11257223B2 (en) | | Systems and methods for user detection, identification, and localization within a defined space |
| CN108985225B (en) | | Focus following method, device, electronic equipment and storage medium |
| TW201941643A (en) | | A method including its equipment and storage medium to keep intelligent device continuously awake |
| CN113116224B (en) | | Robot and control method thereof |
| CN111598065B (en) | | Depth image acquisition method and living body recognition method, device, circuit and medium |
| CN114612786B (en) | | Obstacle detection method, mobile robot and machine-readable storage medium |
| CN108733417A (en) | | The work pattern selection method and device of smart machine |
| CN119376549B (en) | | Human-computer interaction method, device and embodied intelligent agent based on embodied intelligent agent |
| CN111872928A (en) | | Obstacle attribute distinguishing method and system and intelligent robot |
| US11435745B2 (en) | | Robot and map update method using the same |
| CN112655021A (en) | | Image processing method, image processing device, electronic equipment and storage medium |
| CN112257617B (en) | | Multi-modal target recognition method and system |
| CN113158912B (en) | | Gesture recognition method and device, storage medium and electronic equipment |
| CN108833766A (en) | | Control method, device, smart machine and the storage medium of smart machine |
| CN119380252A (en) | | Object recognition method in dynamic scenes and its applicable intelligent robot |
| CN114071005B (en) | | Object detection method, electronic device and computer-readable storage medium |
| CN116039613A (en) | | A vehicle autonomous parking method and system based on human body gesture recognition |
| CN211827195U (en) | | An interactive device |
| WO2021245749A1 (en) | | Tracking device, tracking method, and recording medium |
| US20250173352A1 (en) | | Information processing apparatus, method, and storage medium |
| CN118803212A (en) | | Control method and device of projection equipment, projection equipment and storage medium |
| Hadi et al. | | Improved occlusion handling for human detection from mobile robot |