
CN106815264B - Information processing method and system - Google Patents


Info

Publication number
CN106815264B
Authority
CN
China
Prior art keywords
information
mobile terminal
unit
feature data
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510869366.7A
Other languages
Chinese (zh)
Other versions
CN106815264A (en)
Inventor
Hu Jiulin (胡久林)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
Original Assignee
China Mobile Communications Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Corp filed Critical China Mobile Communications Corp
Priority to CN201510869366.7A
Publication of CN106815264A
Application granted
Publication of CN106815264B
Legal status: Active
Anticipated expiration

Classifications

    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 - Details of database functions independent of the retrieved data types
    • G06F 16/95 - Retrieval from the web
    • G06F 16/958 - Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
    • G - PHYSICS
    • G06 - COMPUTING OR CALCULATING; COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 - Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the invention discloses an information processing method and system. The method includes the following steps: obtaining operation information and image data, wherein the operation information is obtained by detecting a trigger operation on a display interface of a first application while the first application in a mobile terminal is in an activated state, and the image data is acquired by an image acquisition unit of the mobile terminal, the image acquisition unit being on the same plane as the display unit; analyzing the image data, identifying facial feature data in the image data, and obtaining facial expression information based on the facial feature data; and generating first feedback information based on the facial expression information and the operation information.

Description

Information processing method and system

Technical Field

The present invention relates to information processing technology, and in particular to an information processing method and system.

Background Art

With the rapid development of Internet technology, a large number of mobile terminal applications (APPs) have emerged. These applications place great emphasis on user experience, paying close attention to whether the interactive experience and the application's functions genuinely meet user needs. User feedback on and evaluation of an application is an important basis on which data operations staff, product managers, designers, and software engineers optimize the program.

In the prior art, feedback on applications is collected mainly in the following ways: 1. users actively rate or comment through an application store; 2. users actively comment or rate through a website or forum; 3. specified parameter information is collected. Methods 1 and 2 account for the large majority of cases. While the user is using an application, the application occasionally prompts the user to go to the application store to rate or comment on it; this way of collecting feedback is neither convenient nor flexible, and the user experience is poor. The parameter information specified in method 3 does not correspond intuitively to the user's specific feedback experience and cannot be tied to the user's specific usage process.

Summary of the Invention

To solve the existing technical problems, embodiments of the present invention provide an information processing method and system capable of directly obtaining the user's usage feedback.

To achieve the above objective, the technical solutions of the embodiments of the present invention are implemented as follows:

An embodiment of the present invention provides an information processing method, the method including:

obtaining operation information and image data, wherein the operation information is obtained by detecting a trigger operation on a display interface of a first application while the first application in a mobile terminal is in an activated state, and the image data is acquired by an image acquisition unit of the mobile terminal, the image acquisition unit being on the same plane as the display unit;

identifying facial feature data in the image data, and obtaining facial expression information based on the facial feature data; and

generating first feedback information based on the facial expression information and the operation information.

In the above solution, generating the first feedback information based on the facial expression information and the operation information includes:

obtaining a corresponding first user experience parameter based on the facial expression information and/or the operation information, and generating, based on the first user experience parameter, first feedback information for the operation position corresponding to the operation information.

In the above solution, after analyzing the image data and identifying the facial feature data in the image data, the method further includes:

obtaining relative position information and orientation information of the mobile terminal itself; and

associating the eye feature data in the facial feature data with the relative position information and the orientation information to obtain focus point information, the focus point information representing the position on the mobile terminal on which the eyes are focused.
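As an illustrative sketch (not the patent's method; the geometry and all names here are assumptions), the focus point can be computed by intersecting a gaze ray, derived from the eye feature data together with the terminal's position and orientation information, with the plane of the display:

```python
def focus_point(eye_pos, gaze_dir, screen_origin, screen_normal):
    """Intersect a gaze ray with the (planar) display to get the focus
    point. All arguments are 3-tuples in a shared device coordinate
    frame; this geometry is an illustrative assumption."""
    # Ray: p(t) = eye_pos + t * gaze_dir
    # Plane: (p - screen_origin) . screen_normal = 0
    denom = sum(g * n for g, n in zip(gaze_dir, screen_normal))
    if abs(denom) < 1e-9:
        return None  # gaze is parallel to the screen plane
    t = sum((o - e) * n for o, e, n in
            zip(screen_origin, eye_pos, screen_normal)) / denom
    if t < 0:
        return None  # the screen lies behind the eye
    return tuple(e + t * g for e, g in zip(eye_pos, gaze_dir))
```

A real implementation would first estimate `eye_pos` and `gaze_dir` from the eye feature data and the terminal's sensors; the intersection step itself is standard ray-plane geometry.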

In the above solution, the method further includes: generating second feedback information based on the focus point information and the operation information.

In the above solution, generating the second feedback information based on the focus point information and the operation information includes:

obtaining a corresponding second user experience parameter based on focus point information within a preset time period and operation information within the preset time period, and generating, based on the second user experience parameter, second feedback information for the operation position corresponding to the operation information.

In the above solution, generating the first feedback information based on the facial expression information and the operation information includes:

generating the first feedback information based on the facial expression information, the focus point information, and the operation information.

In the above solution, generating the first feedback information based on the facial expression information, the focus point information, and the operation information includes:

obtaining a corresponding third user experience parameter based on the facial expression information and the focus point information within a preset time period, combined with the operation information within the preset time period, and generating, based on the third user experience parameter, first feedback information for the operation position corresponding to the operation information.

An embodiment of the present invention further provides an information processing system, the system including an acquisition unit, an image processing unit, and an information generation unit, wherein:

the acquisition unit is configured to obtain operation information and image data, wherein the operation information is obtained by detecting a trigger operation on a display interface of a first application while the first application in a mobile terminal is in an activated state, and the image data is acquired by an image acquisition unit of the mobile terminal, the image acquisition unit being on the same plane as the display unit;

the image processing unit is configured to identify facial feature data in the image data and obtain facial expression information based on the facial feature data; and

the information generation unit is configured to generate first feedback information based on the facial expression information obtained by the image processing unit and the operation information obtained by the acquisition unit.

In the above solution, the information generation unit is configured to obtain a corresponding first user experience parameter based on the facial expression information and/or the operation information, and to generate, based on the first user experience parameter, first feedback information for the operation position corresponding to the operation information.

In the above solution, the acquisition unit is further configured to obtain relative position information and orientation information of the mobile terminal itself; and

the image processing unit is further configured to associate the eye feature data in the facial feature data with the relative position information and the orientation information of the mobile terminal obtained by the acquisition unit to obtain focus point information, the focus point information representing the position on the mobile terminal on which the eyes are focused.

In the above solution, the information generation unit is further configured to generate second feedback information based on the focus point information and the operation information.

In the above solution, the information generation unit is configured to obtain a corresponding second user experience parameter based on focus point information within a preset time period and operation information within the preset time period, and to generate, based on the second user experience parameter, second feedback information for the operation position corresponding to the operation information.

In the above solution, the information generation unit is configured to generate the first feedback information based on the facial expression information, the focus point information, and the operation information.

In the above solution, the information generation unit is configured to obtain a corresponding third user experience parameter based on the facial expression information and the focus point information within a preset time period, combined with the operation information within the preset time period, and to generate, based on the third user experience parameter, first feedback information for the operation position corresponding to the operation information.

According to the information processing method and system of the embodiments of the present invention, the method includes: obtaining operation information and image data, wherein the operation information is obtained by detecting a trigger operation on a display interface of a first application while the first application in the mobile terminal is in an activated state, and the image data is acquired by an image acquisition unit of the mobile terminal, the image acquisition unit being on the same plane as the display unit; identifying facial feature data in the image data and obtaining facial expression information based on the facial feature data; and generating first feedback information based on the facial expression information and the operation information. With the technical solutions of the embodiments of the present invention, while the user is using the first application, facial expression information characterizing the user is identified, operation information is obtained, and first feedback information is generated based on the facial expression information and the operation information, which then serves as a basis for further optimizing and modifying the first application. On the one hand, this achieves direct and proactive acquisition of user feedback without requiring the user to log in to an application store or website to comment or rate, greatly improving the user's operating experience. On the other hand, the technical solutions of the embodiments of the present invention collect and identify information during the user's actual use, making it possible to learn the specific locations in the first application where the user experience is poor, so that specific problems can be refined or optimized during subsequent operation and maintenance, providing a detailed basis for the operation and maintenance of the application.

Brief Description of the Drawings

FIG. 1 is a schematic diagram of an application scenario of an information processing method according to an embodiment of the present invention;

FIG. 2 is a schematic flowchart of an information processing method according to Embodiment 1 of the present invention;

FIG. 3 is a schematic flowchart of an information processing method according to Embodiment 2 of the present invention;

FIG. 4 is a schematic flowchart of an information processing method according to Embodiment 3 of the present invention;

FIG. 5 is a schematic diagram of the composition and structure of an information processing system according to an embodiment of the present invention.

Detailed Description of the Embodiments

The present invention is described in further detail below with reference to the accompanying drawings and specific embodiments.

FIG. 1 is a schematic diagram of an application scenario of an information processing method according to an embodiment of the present invention. As shown in FIG. 1, the scenario includes a server 11 and a mobile terminal 12; the mobile terminal 12 and the server 11 may be connected through a network (such as a wired network and/or a wireless network). At least one application (APP) is pre-installed in the mobile terminal 12. The server 11 may be the server or server cluster of the at least one application; the server 11 may also be a server or server cluster belonging to a third-party user experience statistics platform.

The technical solutions of the embodiments of the present invention are applied in the above server 11 and mobile terminal 12. When the mobile terminal 12 runs an application, it obtains the user's operation information on the application's display interface and, at the same time, obtains the user's facial feature data, and sends the operation information and the facial feature data to the server 11 for analysis and processing, so that the server 11 recognizes the user's facial expression information from the facial feature data and then obtains first feedback information based on the facial expression information and the operation information. The first feedback information indicates whether, at the position contained in the operation information, the user's facial expression is pleased, calm, dissatisfied, or the like, from which it can be determined whether that position causes a poor operating experience for the user, that is, whether it needs to be optimized and improved.

The example in FIG. 1 above is merely one instance of an application architecture for implementing the embodiments of the present invention; the embodiments of the present invention are not limited to the application structure described in FIG. 1. The various embodiments of the present invention are proposed based on this application architecture.

Embodiment 1

An embodiment of the present invention provides an information processing method. FIG. 2 is a schematic flowchart of an information processing method according to Embodiment 1 of the present invention. As shown in FIG. 2, the information processing method includes:

Step 201: Obtain operation information and image data, wherein the operation information is obtained by detecting a trigger operation on a display interface of a first application while the first application in the mobile terminal is in an activated state, and the image data is acquired by an image acquisition unit of the mobile terminal, the image acquisition unit being on the same plane as the display unit.

The information processing method described in this embodiment is applied in an information processing system. In this embodiment the information processing system may be implemented by a server; in other embodiments it may, of course, be implemented jointly by a mobile terminal and a server. In this step, obtaining the operation information and the image data includes: the server obtaining the operation information and the image data of the mobile terminal.

Specifically, when the mobile terminal activates the first application, it outputs a display interface representing the first application, detects a trigger operation on the display interface, and obtains operation information. The operation information includes operation gesture information and operation position information, wherein the operation gesture information includes a tap gesture, a double-tap gesture, a swipe gesture, a drag gesture, a pinch (zoom) gesture, a rotation gesture, a parameter adjustment gesture (for example, for a volume or brightness parameter), and the like; the operation position information is the position information of the operation gesture and may correspond to a function button. Further, the operation information may also be continuous operation information over a period of time.
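As a hedged illustration of the operation information described above (all type and field names here are assumptions, not drawn from the patent), the gesture type, operation position, and a time-windowed sequence of operations could be modeled as:

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List

class Gesture(Enum):
    TAP = auto()
    DOUBLE_TAP = auto()
    SWIPE = auto()
    DRAG = auto()
    PINCH = auto()
    ROTATE = auto()
    ADJUST = auto()  # e.g. a volume or brightness adjustment gesture

@dataclass
class OperationEvent:
    gesture: Gesture
    x: float          # operation position on the display, in pixels
    y: float
    timestamp: float  # seconds since the application was activated

@dataclass
class OperationInfo:
    events: List[OperationEvent] = field(default_factory=list)

    def count_in_window(self, start: float, end: float) -> int:
        """Number of operations inside a preset time window."""
        return sum(1 for e in self.events if start <= e.timestamp <= end)
```

The `count_in_window` helper reflects the later embodiments, which evaluate operation information collected over a preset time period.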

When a trigger operation on the display interface is detected, the mobile terminal generates a first instruction and, based on the first instruction, enables the image acquisition unit of the mobile terminal. In other embodiments, the mobile terminal may instead enable its image acquisition unit when the first application is activated, or based on a detected trigger instruction; this embodiment does not impose a specific limitation. In this embodiment, the image acquisition unit and the display unit of the mobile terminal are on the same plane; it can be understood that the image acquisition unit may be implemented by the front camera of the mobile terminal.

In this embodiment, the mobile terminal sends the obtained operation information and the corresponding image data to the information processing system.

Step 202: Identify facial feature data in the image data, and obtain facial expression information based on the facial feature data.

In this step, the information processing system analyzes the image data: it first preprocesses the image data (for example, denoising and normalization of pixel positions or illumination variables) and performs face segmentation, localization, or tracking. It then extracts facial feature data from the image data, which includes converting the pixel data into representations of the shape, motion, color, musculature, and spatial structure of the face and its components; the extracted facial feature data is used for subsequent expression classification. Further, an expression classifier is preset in the information processing system; the expression classifier contains correspondences between multiple sets of facial feature data and expression information, or contains an expression classification model. The facial feature data is input into the expression classifier, which outputs the expression information corresponding to the facial feature data; that is, facial expression information is obtained based on the facial feature data. The facial analysis and expression recognition described in this step may use any analysis and recognition method in the prior art and are not described in detail in this embodiment.
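The patent leaves the classifier to any prior-art method. As a minimal stand-in sketch (the feature vectors, centroids, and class names are illustrative assumptions, not the patent's model), a nearest-centroid classifier over normalized facial feature vectors could look like:

```python
import math

# Hypothetical centroids of 2-D facial feature vectors per expression
# class; a real system would learn these from labeled face data.
CENTROIDS = {
    "pleased":      [0.9, 0.8],
    "calm":         [0.5, 0.5],
    "dissatisfied": [0.1, 0.2],
}

def preprocess(values):
    """Normalize raw pixel-derived features to [0, 1] (a stand-in for
    the denoising / illumination normalization described above)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return [(v - lo) / span for v in values]

def classify_expression(feature_vector):
    """Return the expression class whose centroid is nearest
    (Euclidean distance) to the given feature vector."""
    return min(CENTROIDS, key=lambda c: math.dist(feature_vector, CENTROIDS[c]))
```

In practice the feature extraction (shape, motion, muscle structure) would produce much higher-dimensional vectors, and the classifier would typically be a trained model rather than fixed centroids.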

Of course, in other embodiments this step may also be performed by the mobile terminal; that is, the mobile terminal analyzes and recognizes the acquired image data to obtain the facial expression information, and then sends the facial expression information to the information processing system. For the specific implementation, reference may be made to the above description, which is not repeated here.

Step 203: Generate first feedback information based on the facial expression information and the operation information.

Here, generating the first feedback information based on the facial expression information and the operation information includes: obtaining a corresponding first user experience parameter based on the facial expression information and/or the operation information, and generating, based on the first user experience parameter, first feedback information for the operation position corresponding to the operation information.

Specifically, correspondences between multiple sets of facial expression information and first user experience parameters are pre-stored in the information processing system. For example, when the facial expression information is "pleased", the corresponding first user experience parameter is 5; when it is "calm", the corresponding parameter is 3; when it is "dissatisfied", the corresponding parameter is 0. Of course, in other embodiments the facial expression information and the first user experience parameter may also be preset in other corresponding ways, which are not described in detail in this embodiment. That is to say, the facial expression information can characterize the user's degree of experience. When the degree of experience characterized by the facial expression information reaches a first preset threshold, indicating a good user experience, first feedback information corresponding to the operation information (including the operation position information) is generated based on the operation information corresponding to the facial expression information; in this case the first feedback information indicates that the first application gives the user a good experience at the operation position, and that the operation function or content provided at that position is worth recommending. Correspondingly, when the degree of experience characterized by the facial expression information does not reach a second preset threshold (the second preset threshold being smaller than the first preset threshold), indicating a poor user experience, first feedback information corresponding to the operation information (including the operation position information) is generated based on the operation information corresponding to the facial expression information; in this case the first feedback information indicates that the first application gives the user a poor experience at the operation position, and that the operation function or content provided at that position needs further optimization or improvement.

As another embodiment, the information processing system obtains the corresponding first user experience parameter based on a combination of the facial expression information and the operation information. For example, when the facial expression information is "pleased" and the number of operations contained in the operation information is less than a first threshold, the corresponding first user experience parameter is 5; when the facial expression information is "calm" and the number of operations is greater than the first threshold and less than a second threshold, the corresponding parameter is 3; when the facial expression information is "dissatisfied" and the number of operations is greater than the second threshold, the corresponding parameter is 0. Of course, in other embodiments the facial expression information and the first user experience parameter may also be preset in other corresponding ways, which are not described in detail in this embodiment. That is to say, the combination of the facial expression information and the operation information can characterize the user's degree of experience. For example, in one scenario the user wants to find a function entry through an input operation but only finds it after multiple swipe operations, and the user shows a displeased expression. In this scenario, the information processing system obtains facial expression information characterizing the displeased expression and operation information containing multiple swipe operations, and combines the two to obtain a corresponding first user experience parameter of 0, indicating that the user's experience at the operation position corresponding to the operation information is poor and needs to be optimized or improved.
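The combined expression-plus-operation-count mapping in this alternative embodiment can be sketched as follows; the default threshold values and the function name are illustrative assumptions:

```python
def combined_experience_param(expression, op_count,
                              first_threshold=3, second_threshold=8):
    """Combine facial expression and operation count as in the example:
    pleased + few operations -> 5; calm + moderate count -> 3;
    dissatisfied + many operations -> 0. Thresholds are assumptions."""
    if expression == "pleased" and op_count < first_threshold:
        return 5
    if expression == "calm" and first_threshold < op_count < second_threshold:
        return 3
    if expression == "dissatisfied" and op_count > second_threshold:
        return 0
    return None  # combination not covered by the example mapping
```

In the swipe-search scenario described above, a displeased expression together with, say, ten swipe operations yields a parameter of 0, flagging that operation position for optimization.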

采用本发明实施例的技术方案，通过在用户使用第一应用的过程中，识别用户的面部表情信息、获得操作信息，基于所述面部表情信息和所述操作信息生成第一反馈信息，从而作为进一步对所述第一应用进行优化修改的依据，如此，一方面实现了用户反馈信息的直接、主动的获取，无需用户登录应用商店或网站进行评论或评分，大大提升了用户的操作体验；另一方面，本发明实施例的技术方案在用户的使用过程中进行信息的采集和识别，以便于获知所述第一应用中用户体验不佳的具体位置，便于后续运行维护过程中能够对具体问题进行完善或优化，为应用的运行维护提供了详实的依据。By adopting the technical solution of this embodiment of the present invention, in the process of the user using the first application, the user's facial expression information is identified and operation information is obtained, and first feedback information is generated based on the facial expression information and the operation information, serving as a basis for further optimizing and modifying the first application. In this way, on the one hand, direct and active acquisition of user feedback is achieved without requiring the user to log in to an application store or website to comment or score, which greatly improves the user's operating experience; on the other hand, the technical solution of this embodiment collects and identifies information while the user is using the application, so that the specific locations in the first application where the user experience is poor can be learned and specific problems can be improved or optimized during subsequent operation and maintenance, providing a detailed basis for the operation and maintenance of the application.

实施例二Embodiment 2

本发明实施例提供了一种信息处理方法。图3为本发明实施例二的信息处理方法的流程示意图;如图3所示,所述信息处理方法包括:Embodiments of the present invention provide an information processing method. FIG. 3 is a schematic flowchart of an information processing method according to Embodiment 2 of the present invention; as shown in FIG. 3 , the information processing method includes:

步骤301：获得操作信息和图像数据；其中，所述操作信息为移动终端中的第一应用处于激活状态下、检测到针对所述第一应用的显示界面的触发操作获得的操作信息；所述图像数据为所述移动终端的图像采集单元采集获得的图像数据；所述图像采集单元与显示单元在同一平面上。Step 301: Obtain operation information and image data, where the operation information is obtained by detecting a trigger operation on the display interface of a first application while the first application in the mobile terminal is in an activated state; the image data is acquired by the image acquisition unit of the mobile terminal; and the image acquisition unit and the display unit are on the same plane.

本实施例所述的信息处理方法应用于信息处理系统中；所述信息处理系统在本实施方式中可通过服务器实现，当然在其他实施方式中，可通过移动终端和服务器共同实现。则本步骤中，所述获得操作信息和图像数据，包括：信息处理系统获得移动终端的操作信息和图像数据。The information processing method described in this embodiment is applied in an information processing system; in this implementation the information processing system may be implemented by a server, and of course, in other implementations, it may be implemented jointly by a mobile terminal and a server. In this step, obtaining the operation information and image data includes: the information processing system obtains the operation information and image data of the mobile terminal.

具体的，当移动终端激活第一应用时，输出表征所述第一应用的显示界面，检测针对所述显示界面的触发操作，获得操作信息；所述操作信息包括操作手势信息和操作位置信息；其中，所述操作手势信息包括：单击手势、双击手势、滑动手势、拖动手势、缩放手势、旋转手势、参数（例如音量参数、亮度参数等）调节手势等等；所述操作位置信息为所述操作手势的操作位置信息；所述操作位置信息可针对一功能按键。进一步地，所述操作信息还可为在一段时间内的连续的操作信息。Specifically, when the mobile terminal activates the first application, it outputs a display interface representing the first application, detects trigger operations on the display interface, and obtains operation information. The operation information includes operation gesture information and operation position information, where the operation gesture information includes a tap gesture, a double-tap gesture, a sliding gesture, a drag gesture, a zoom gesture, a rotation gesture, a parameter adjustment gesture (e.g. for a volume or brightness parameter), and so on; the operation position information is the position at which the operation gesture is performed and may correspond to a function button. Further, the operation information may also be continuous operation information over a period of time.
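The operation-information record described above can be sketched as a small data structure. The field names, gesture vocabulary and observation-window field here are assumptions for illustration, not defined by this disclosure:

```python
# Hypothetical sketch of the operation information: gesture type, position, and
# optionally the function button targeted, collected over a time window.
from dataclasses import dataclass, field
from typing import List, Tuple

GESTURES = {"tap", "double_tap", "swipe", "drag", "pinch", "rotate", "adjust"}

@dataclass
class Operation:
    gesture: str               # one of GESTURES
    position: Tuple[int, int]  # screen coordinates of the gesture
    target: str = ""           # e.g. the function button hit, if any

@dataclass
class OperationInfo:
    window_s: float            # length of the continuous observation window
    operations: List[Operation] = field(default_factory=list)

    @property
    def op_count(self) -> int:
        """Number of operations, compared against the thresholds in the text."""
        return len(self.operations)

info = OperationInfo(window_s=10.0, operations=[
    Operation("swipe", (120, 640)),
    Operation("tap", (60, 1100), target="settings_button"),
])
print(info.op_count)  # → 2
```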

当检测到针对所述显示界面的触发操作时，所述移动终端生成第一指令，基于所述第一指令使能所述移动终端的图像采集单元；在其他实施方式中，所述移动终端也可在激活所述第一应用时使能所述移动终端的图像采集单元，或者，所述移动终端也可基于检测到的触发指令使能所述移动终端的图像采集单元，本实施例中不做具体限定。本实施例中，所述图像采集单元与所述移动终端的显示单元在同一平面上，可以理解为，所述图像采集单元可通过所述移动终端的前置摄像头实现。When a trigger operation on the display interface is detected, the mobile terminal generates a first instruction and enables the image acquisition unit of the mobile terminal based on the first instruction. In other implementations, the mobile terminal may instead enable its image acquisition unit when the first application is activated, or enable it based on a detected trigger instruction; this is not specifically limited in this embodiment. In this embodiment, the image acquisition unit and the display unit of the mobile terminal are on the same plane, which can be understood as the image acquisition unit being implemented by the front camera of the mobile terminal.

本实施例中,所述移动终端将获得的操作信息和对应的图像数据发送至信息处理系统。In this embodiment, the mobile terminal sends the obtained operation information and corresponding image data to the information processing system.

步骤302:识别所述图像数据中的面部特征数据;以及获得移动终端自身的相对位置信息和方向信息。Step 302: Identify facial feature data in the image data; and obtain relative position information and orientation information of the mobile terminal itself.

本步骤中，所述信息处理系统分析所述图像数据，首先对所述图像数据进行预处理（例如去噪、像素位置或光照变量的标准化），以及面部的分割、定位或追踪等等。进一步地，对所述图像数据进行面部特征数据的提取，包括将像素数据转化为面部及其组成部分的外形、运动、颜色、肌肉和空间结构的表示；具体的面部特征数据的提取方式可参照现有技术中的任何面部识别方式，本实施例中不作具体描述。In this step, the information processing system analyzes the image data: it first preprocesses the image data (e.g. denoising, normalization of pixel positions or illumination variables) and performs segmentation, localization or tracking of the face. It then extracts facial feature data from the image data, including converting pixel data into representations of the shape, motion, color, musculature and spatial structure of the face and its components. The specific extraction of facial feature data may follow any facial recognition method in the prior art and is not described in detail in this embodiment.

本实施例中，所述信息处理系统获得所述移动终端的相对位置信息和方向信息；所述相对位置信息为所述移动终端与持有者之间的相对位置关系。具体的，所述移动终端中设置有以下传感单元的至少之一：重力感应单元、加速度传感单元、距离传感单元、虹膜识别单元等等，具体的，所述移动终端可通过所述重力感应单元或所述加速度传感单元获得方向信息，所述方向信息可以为所述移动终端的重心方向相对于所述移动终端的长边方向或短边方向的夹角，所述方向信息也即所述移动终端的姿态变化信息。所述移动终端还可以通过所述距离传感单元或所述虹膜识别单元获得所述移动终端与持有者之间的相对位置信息，其中，所述距离传感单元和所述虹膜识别单元通常设置于所述移动终端显示单元的同一面上，当用户持握所述移动终端时，可通过所述距离传感单元检测到与用户之间的距离，或者可通过所述虹膜识别单元识别出所述移动终端与所述用户的眼睛之间的相对方位。In this embodiment, the information processing system obtains the relative position information and direction information of the mobile terminal, where the relative position information is the relative positional relationship between the mobile terminal and its holder. Specifically, the mobile terminal is provided with at least one of the following sensing units: a gravity sensing unit, an acceleration sensing unit, a distance sensing unit, an iris recognition unit, and so on. The mobile terminal may obtain the direction information through the gravity sensing unit or the acceleration sensing unit; the direction information may be the angle between the direction of gravity and the long-edge or short-edge direction of the mobile terminal, i.e. the posture change information of the mobile terminal. The mobile terminal may also obtain the relative position information between itself and the holder through the distance sensing unit or the iris recognition unit, which are usually arranged on the same face as the display unit of the mobile terminal; when the user holds the mobile terminal, the distance to the user can be detected through the distance sensing unit, or the relative orientation between the mobile terminal and the user's eyes can be identified through the iris recognition unit.
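Deriving the direction information from accelerometer readings can be sketched as follows. The axis convention (y along the terminal's long edge, x along the short edge) is an assumption for illustration:

```python
# Hypothetical sketch: compute the angle between the gravity direction projected
# into the screen plane and the terminal's long (y) edge, from accelerometer
# readings in the device frame. Axis conventions are assumptions.
import math

def direction_angle_deg(ax: float, ay: float) -> float:
    """Angle (degrees) between in-plane gravity and the long edge of the device."""
    return math.degrees(math.atan2(ax, ay))

# Device held upright: gravity lies along the long edge, so the angle is ~0.
print(round(direction_angle_deg(0.0, 9.8), 1))  # → 0.0
# Device rotated to landscape: gravity lies along the short edge, angle ~90.
print(round(direction_angle_deg(9.8, 0.0), 1))  # → 90.0
```

Tracking this angle over time gives the posture change information the text refers to.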

步骤303：将所述面部特征数据中的眼部特征数据、所述相对位置信息和所述方向信息进行关联，获得关注点信息；所述关注点信息表征眼睛在移动终端上的聚焦位置信息。Step 303: Associate the eye feature data in the facial feature data with the relative position information and the direction information to obtain attention point information, where the attention point information represents the focus position of the eyes on the mobile terminal.

本实施例中，所述信息处理系统基于所述面部特征数据中包含的眼部特征数据以及所述移动终端自身的相对位置信息和所述方向信息进行关联，获得关注点信息，所述关注点信息表征所述移动终端的持有用户的眼睛在所述移动终端的聚焦位置信息，也可以理解为所述持有用户的眼睛浏览的内容所在的位置信息。具体的，所述信息处理系统可基于所述眼部特征数据获得所述持有用户的眼睛的视线方向信息，进一步的，基于所述移动终端自身的相对位置信息和所述方向信息确定所述移动终端和所述持有用户的相对位置关系，基于所述视线方向信息和所述相对位置关系获得所述持有用户的眼睛聚焦在移动终端上的聚焦范围，基于所述聚焦范围生成所述关注点信息。In this embodiment, the information processing system associates the eye feature data contained in the facial feature data with the relative position information and direction information of the mobile terminal itself to obtain attention point information. The attention point information represents the position on the mobile terminal at which the eyes of the user holding the mobile terminal are focused, and can also be understood as the position of the content the holding user's eyes are browsing. Specifically, the information processing system may obtain the sight direction information of the holding user's eyes based on the eye feature data, determine the relative positional relationship between the mobile terminal and the holding user based on the relative position information and direction information of the mobile terminal itself, obtain the focusing range of the holding user's eyes on the mobile terminal based on the sight direction information and the relative positional relationship, and generate the attention point information based on the focusing range.
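The geometric core of this step can be sketched as a ray-plane intersection: given the eye position relative to the terminal and the sight direction, find where the gaze meets the screen. The planar-screen model and coordinate convention (screen in the z = 0 plane) are assumptions for illustration:

```python
# Hypothetical sketch: intersect the gaze ray (eye position + sight direction)
# with the screen plane (z = 0) to obtain the focus point on the terminal.
from typing import Optional, Tuple

def focus_point(eye_pos: Tuple[float, float, float],
                gaze_dir: Tuple[float, float, float]) -> Optional[Tuple[float, float]]:
    """Return (x, y) where the gaze ray hits the screen plane, or None."""
    ex, ey, ez = eye_pos
    dx, dy, dz = gaze_dir
    if dz == 0:
        return None  # gaze parallel to the screen: no focus point
    t = -ez / dz
    if t < 0:
        return None  # screen lies behind the gaze direction
    return (ex + t * dx, ey + t * dy)

# Eye 30 cm in front of the screen, looking straight at it:
print(focus_point((5.0, 8.0, 30.0), (0.0, 0.0, -1.0)))  # → (5.0, 8.0)
```

A real implementation would estimate `eye_pos` and `gaze_dir` from the eye feature data and the terminal's posture, and widen the point into the "focusing range" the text describes.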

步骤304:基于所述关注点信息和所述操作信息生成第二反馈信息。Step 304: Generate second feedback information based on the attention point information and the operation information.

这里，所述基于所述关注点信息和所述操作信息生成第二反馈信息，包括：基于预设时间段内的关注点信息和所述预设时间段内的操作信息获得对应的第二用户体验参数，基于所述第二用户体验参数生成所述操作信息对应操作位置的第二反馈信息。Here, generating the second feedback information based on the attention point information and the operation information includes: obtaining a corresponding second user experience parameter based on the attention point information within a preset time period and the operation information within the preset time period, and generating, based on the second user experience parameter, second feedback information for the operation position corresponding to the operation information.

具体的，所述信息处理系统基于预设时间段内的关注点信息和操作信息的组合获得对应的第二用户体验参数。例如，在预设时间段t内，当所述关注点信息（即持有用户的眼睛在移动终端上的聚焦位置）变化的范围比例大于第一阈值，且所述操作信息包含的操作次数大于第二阈值时，对应的第二用户体验参数为0；当所述关注点信息（即持有用户的眼睛在移动终端上的聚焦位置）变化的范围比例大于第三阈值小于第一阈值（所述第三阈值小于所述第一阈值），且所述操作信息包含的操作次数小于第二阈值大于第四阈值（所述第四阈值小于所述第二阈值）时，对应的第二用户体验参数为3；当所述关注点信息（即持有用户的眼睛在移动终端上的聚焦位置）未出现变化，或者变化的范围比例小于第三阈值，且所述操作信息包含的操作次数小于第四阈值时，对应的第二用户体验参数为5。当然，在其他实施方式中，所述信息处理系统基于眼部特征数据、所述相对位置信息和所述方向信息进行关联得到关注点信息的方式也可参照其他实施方式，即所述信息处理系统根据眼部特征数据、所述相对位置信息和所述方向信息得到持有用户的眼睛在所述移动终端的聚焦位置信息，可参照现有技术中的任何图像识别结合建模的技术，本实施例中不做详细描述。如在一场景中，用户通过输入操作想找到一功能入口，则用户会四处寻找所述功能入口的位置；则所述信息处理系统获得的用户的关注点信息变化的范围比例会大于第一阈值；相应的，在用户寻找所述功能入口的位置过程中，伴随着多次的触发操作，也即所述信息处理系统获得的操作信息包含的操作次数大于第二阈值；在这种场景下，用户需要很长时间才能找到所述功能入口，表明所述应用在当前操作位置给用户带来较差的操作体验，需要进行优化或改进。Specifically, the information processing system obtains the corresponding second user experience parameter based on the combination of the attention point information and the operation information within a preset time period. For example, within a preset time period t, when the range ratio over which the attention point information (i.e. the focus position of the holding user's eyes on the mobile terminal) changes is greater than a first threshold and the number of operations contained in the operation information is greater than a second threshold, the corresponding second user experience parameter is 0; when the range ratio of the change is greater than a third threshold and less than the first threshold (the third threshold being less than the first threshold) and the number of operations is less than the second threshold and greater than a fourth threshold (the fourth threshold being less than the second threshold), the corresponding second user experience parameter is 3; and when the attention point information does not change, or the range ratio of the change is less than the third threshold, and the number of operations is less than the fourth threshold, the corresponding second user experience parameter is 5. Of course, in other implementations, the manner in which the information processing system associates the eye feature data, the relative position information and the direction information to obtain the attention point information may follow other approaches; that is, the focus position of the holding user's eyes on the mobile terminal may be obtained from the eye feature data, the relative position information and the direction information with reference to any image recognition and modeling technique in the prior art, which is not described in detail in this embodiment. For example, in one scenario a user tries to find a function entry through input operations and looks around for its location, so the range ratio of the change in the user's attention point information obtained by the information processing system will be greater than the first threshold; correspondingly, while the user searches for the location of the function entry, multiple trigger operations occur, i.e. the number of operations contained in the operation information obtained by the information processing system is greater than the second threshold. In this scenario the user takes a long time to find the function entry, indicating that the application gives the user a poor operating experience at the current operation position and needs to be optimized or improved.
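The second-parameter mapping above can be sketched as follows. All concrete threshold values are invented for illustration; the text only requires that the third range-ratio threshold be below the first and the fourth operation-count threshold be below the second:

```python
# Hypothetical sketch of mapping (focus-change range ratio, operation count)
# within a preset time window to the second user experience parameter.
T1_RANGE, T3_RANGE = 0.6, 0.2  # first / third thresholds on the range ratio (T3 < T1)
T2_OPS, T4_OPS = 10, 3         # second / fourth thresholds on operation count (T4 < T2)

def second_ux_parameter(range_ratio: float, op_count: int) -> int:
    """Return the second user experience parameter for one observation window."""
    if range_ratio > T1_RANGE and op_count > T2_OPS:
        return 0
    if T3_RANGE < range_ratio < T1_RANGE and T4_OPS < op_count < T2_OPS:
        return 3
    if range_ratio < T3_RANGE and op_count < T4_OPS:
        return 5
    return 3  # assumed default for in-between combinations

print(second_ux_parameter(0.8, 15))  # → 0
```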

采用本发明实施例的技术方案，通过在用户使用第一应用的过程中，识别用户的关注点信息、以及获得操作信息，基于所述关注点信息和所述操作信息生成第二反馈信息，从而作为进一步对所述第一应用进行优化修改的依据，如此，一方面实现了用户反馈信息的直接、主动的获取，无需用户登录应用商店或网站进行评论或评分，大大提升了用户的操作体验；另一方面，本发明实施例的技术方案在用户的使用过程中进行信息的采集和识别，以便于获知所述第一应用中用户体验不佳的具体位置，便于后续运行维护过程中能够对具体问题进行完善或优化，为应用的运行维护提供了详实的依据。By adopting the technical solution of this embodiment of the present invention, in the process of the user using the first application, the user's attention point information is identified and operation information is obtained, and second feedback information is generated based on the attention point information and the operation information, serving as a basis for further optimizing and modifying the first application. In this way, on the one hand, direct and active acquisition of user feedback is achieved without requiring the user to log in to an application store or website to comment or score, which greatly improves the user's operating experience; on the other hand, the technical solution of this embodiment collects and identifies information while the user is using the application, so that the specific locations in the first application where the user experience is poor can be learned and specific problems can be improved or optimized during subsequent operation and maintenance, providing a detailed basis for the operation and maintenance of the application.

实施例三Embodiment 3

本发明实施例提供了一种信息处理方法。图4为本发明实施例三的信息处理方法的流程示意图;如图4所示,所述信息处理方法包括:Embodiments of the present invention provide an information processing method. FIG. 4 is a schematic flowchart of an information processing method according to Embodiment 3 of the present invention; as shown in FIG. 4 , the information processing method includes:

步骤401：获得操作信息和图像数据；其中，所述操作信息为移动终端中的第一应用处于激活状态下、检测到针对所述第一应用的显示界面的触发操作获得的操作信息；所述图像数据为所述移动终端的图像采集单元采集获得的图像数据；所述图像采集单元与显示单元在同一平面上。Step 401: Obtain operation information and image data, where the operation information is obtained by detecting a trigger operation on the display interface of a first application while the first application in the mobile terminal is in an activated state; the image data is acquired by the image acquisition unit of the mobile terminal; and the image acquisition unit and the display unit are on the same plane.

本实施例所述的信息处理方法应用于信息处理系统中；所述信息处理系统在本实施方式中可通过服务器实现，当然在其他实施方式中，可通过移动终端和服务器共同实现。则本步骤中，所述获得操作信息和图像数据，包括：信息处理系统获得移动终端的操作信息和图像数据。The information processing method described in this embodiment is applied in an information processing system; in this implementation the information processing system may be implemented by a server, and of course, in other implementations, it may be implemented jointly by a mobile terminal and a server. In this step, obtaining the operation information and image data includes: the information processing system obtains the operation information and image data of the mobile terminal.

具体的，当移动终端激活第一应用时，输出表征所述第一应用的显示界面，检测针对所述显示界面的触发操作，获得操作信息；所述操作信息包括操作手势信息和操作位置信息；其中，所述操作手势信息包括：单击手势、双击手势、滑动手势、拖动手势、缩放手势、旋转手势、参数（例如音量参数、亮度参数等）调节手势等等；所述操作位置信息为所述操作手势的操作位置信息；所述操作位置信息可针对一功能按键。进一步地，所述操作信息还可为在一段时间内的连续的操作信息。Specifically, when the mobile terminal activates the first application, it outputs a display interface representing the first application, detects trigger operations on the display interface, and obtains operation information. The operation information includes operation gesture information and operation position information, where the operation gesture information includes a tap gesture, a double-tap gesture, a sliding gesture, a drag gesture, a zoom gesture, a rotation gesture, a parameter adjustment gesture (e.g. for a volume or brightness parameter), and so on; the operation position information is the position at which the operation gesture is performed and may correspond to a function button. Further, the operation information may also be continuous operation information over a period of time.

当检测到针对所述显示界面的触发操作时，所述移动终端生成第一指令，基于所述第一指令使能所述移动终端的图像采集单元；在其他实施方式中，所述移动终端也可在激活所述第一应用时使能所述移动终端的图像采集单元，或者，所述移动终端也可基于检测到的触发指令使能所述移动终端的图像采集单元，本实施例中不做具体限定。本实施例中，所述图像采集单元与所述移动终端的显示单元在同一平面上，可以理解为，所述图像采集单元可通过所述移动终端的前置摄像头实现。When a trigger operation on the display interface is detected, the mobile terminal generates a first instruction and enables the image acquisition unit of the mobile terminal based on the first instruction. In other implementations, the mobile terminal may instead enable its image acquisition unit when the first application is activated, or enable it based on a detected trigger instruction; this is not specifically limited in this embodiment. In this embodiment, the image acquisition unit and the display unit of the mobile terminal are on the same plane, which can be understood as the image acquisition unit being implemented by the front camera of the mobile terminal.

本实施例中,所述移动终端将获得的操作信息和对应的图像数据发送至信息处理系统。In this embodiment, the mobile terminal sends the obtained operation information and corresponding image data to the information processing system.

步骤402:识别所述图像数据中的面部特征数据,基于所述面部特征数据获得面部表情信息。Step 402: Identify facial feature data in the image data, and obtain facial expression information based on the facial feature data.

本步骤中，所述信息处理系统分析所述图像数据，首先对所述图像数据进行预处理（例如去噪、像素位置或光照变量的标准化），以及面部的分割、定位或追踪等等。进一步地，对所述图像数据进行面部特征数据的提取，包括将像素数据转化为面部及其组成部分的外形、运动、颜色、肌肉和空间结构的表示，提取出的面部特征数据用于进行后续的表情分类。进一步地，所述信息处理系统中预先设置有表情分类器，所述表情分类器中包括多组面部特征数据与表情信息的对应关系，或者所述表情分类器中包括一表情分类模型，将所述面部特征数据输入所述表情分类器，输出所述面部特征数据对应的表情信息，也即基于所述面部特征数据获得面部表情信息。本步骤中所述的面部分析方法以及表情信息的识别可参照现有技术中的任何分析识别方法，本实施例中不作具体描述。In this step, the information processing system analyzes the image data: it first preprocesses the image data (e.g. denoising, normalization of pixel positions or illumination variables) and performs segmentation, localization or tracking of the face. It then extracts facial feature data from the image data, including converting pixel data into representations of the shape, motion, color, musculature and spatial structure of the face and its components; the extracted facial feature data is used for subsequent expression classification. Further, an expression classifier is preset in the information processing system; the expression classifier either includes correspondences between multiple sets of facial feature data and expression information, or includes an expression classification model. The facial feature data is input into the expression classifier, which outputs the corresponding expression information, i.e. facial expression information is obtained based on the facial feature data. The facial analysis and expression recognition in this step may follow any analysis and recognition method in the prior art and are not described in detail in this embodiment.
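The first variant of the classifier (a stored correspondence between feature data and expression information) can be sketched as a nearest-neighbour lookup. The feature vectors and labels below are invented placeholders, not data from this disclosure:

```python
# Hypothetical sketch of an expression classifier built from stored
# (facial feature vector → expression) correspondences: classify a new
# feature vector by its nearest stored neighbour.
import math

TRAINED_PAIRS = [  # invented placeholder feature vectors
    ((0.9, 0.8, 0.1), "pleasant"),
    ((0.5, 0.5, 0.5), "calm"),
    ((0.1, 0.2, 0.9), "dissatisfied"),
]

def classify_expression(features):
    """Return the expression of the stored pair closest to the input features."""
    return min(TRAINED_PAIRS, key=lambda p: math.dist(p[0], features))[1]

print(classify_expression((0.85, 0.75, 0.2)))  # → pleasant
```

The second variant the text mentions would replace this lookup with a trained classification model; either way the interface is the same: feature data in, expression label out.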

当然,在其他实施方式中,本步骤也可由移动终端实现,即移动终端根据获得的图像数据进行分析识别,获得所述面部表情信息,再将所述面部表情信息发送至信息处理系统。具体的实现方式可参照上述描述,这里不再赘述。Of course, in other embodiments, this step can also be implemented by a mobile terminal, that is, the mobile terminal analyzes and recognizes the obtained image data, obtains the facial expression information, and then sends the facial expression information to the information processing system. For a specific implementation manner, reference may be made to the above description, which will not be repeated here.

步骤403：获得移动终端自身的相对位置信息和方向信息；将所述面部特征数据中的眼部特征数据、所述相对位置信息和方向信息进行关联，获得关注点信息；所述关注点信息表征眼睛在移动终端上的聚焦位置信息。Step 403: Obtain the relative position information and direction information of the mobile terminal itself; associate the eye feature data in the facial feature data with the relative position information and direction information to obtain attention point information, where the attention point information represents the focus position of the eyes on the mobile terminal.

本实施例中，所述信息处理系统获得所述移动终端的相对位置信息和方向信息；所述相对位置信息为所述移动终端与持有者之间的相对位置关系。具体的，所述移动终端中设置有以下传感单元的至少之一：重力感应单元、加速度传感单元、距离传感单元、虹膜识别单元等等，具体的，所述移动终端可通过所述重力感应单元或所述加速度传感单元获得方向信息，所述方向信息可以为所述移动终端的重心方向相对于所述移动终端的长边方向或短边方向的夹角，所述方向信息也即所述移动终端的姿态变化信息。所述移动终端还可以通过所述距离传感单元或所述虹膜识别单元获得所述移动终端与持有者之间的相对位置信息，其中，所述距离传感单元和所述虹膜识别单元通常设置于所述移动终端显示单元的同一面上，当用户持握所述移动终端时，可通过所述距离传感单元检测到与用户之间的距离，或者可通过所述虹膜识别单元识别出所述移动终端与所述用户的眼睛之间的相对方位。In this embodiment, the information processing system obtains the relative position information and direction information of the mobile terminal, where the relative position information is the relative positional relationship between the mobile terminal and its holder. Specifically, the mobile terminal is provided with at least one of the following sensing units: a gravity sensing unit, an acceleration sensing unit, a distance sensing unit, an iris recognition unit, and so on. The mobile terminal may obtain the direction information through the gravity sensing unit or the acceleration sensing unit; the direction information may be the angle between the direction of gravity and the long-edge or short-edge direction of the mobile terminal, i.e. the posture change information of the mobile terminal. The mobile terminal may also obtain the relative position information between itself and the holder through the distance sensing unit or the iris recognition unit, which are usually arranged on the same face as the display unit of the mobile terminal; when the user holds the mobile terminal, the distance to the user can be detected through the distance sensing unit, or the relative orientation between the mobile terminal and the user's eyes can be identified through the iris recognition unit.

进一步地，所述信息处理系统基于所述面部特征数据中包含的眼部特征数据、所述移动终端自身的相对位置信息和所述方向信息进行关联，获得关注点信息，所述关注点信息表征所述移动终端的持有用户的眼睛在所述移动终端的聚焦位置信息，也可以理解为所述持有用户的眼睛浏览的内容所在的位置信息。具体的，所述信息处理系统可基于所述眼部特征数据获得所述持有用户的眼睛的视线方向信息，进一步的，基于所述移动终端自身的相对位置信息和所述方向信息确定所述移动终端和所述持有用户的相对位置关系，基于所述视线方向信息和所述相对位置关系获得所述持有用户的眼睛聚焦在移动终端上的聚焦范围，基于所述聚焦范围生成所述关注点信息。Further, the information processing system associates the eye feature data contained in the facial feature data with the relative position information and direction information of the mobile terminal itself to obtain attention point information. The attention point information represents the position on the mobile terminal at which the eyes of the user holding the mobile terminal are focused, and can also be understood as the position of the content the holding user's eyes are browsing. Specifically, the information processing system may obtain the sight direction information of the holding user's eyes based on the eye feature data, determine the relative positional relationship between the mobile terminal and the holding user based on the relative position information and direction information of the mobile terminal itself, obtain the focusing range of the holding user's eyes on the mobile terminal based on the sight direction information and the relative positional relationship, and generate the attention point information based on the focusing range.

步骤404:基于所述面部表情信息、所述关注点信息和所述操作信息生成第一反馈信息。Step 404: Generate first feedback information based on the facial expression information, the attention point information, and the operation information.

这里,所述基于所述面部表情信息、所述关注点信息和所述操作信息生成第一反馈信息,包括:Here, the generating the first feedback information based on the facial expression information, the attention point information and the operation information includes:

基于预设时间段内的所述面部表情信息和所述关注点信息，结合所述预设时间段内的操作信息获得对应的第三用户体验参数，基于所述第三用户体验参数生成所述操作信息对应操作位置的第一反馈信息。A corresponding third user experience parameter is obtained based on the facial expression information and the attention point information within a preset time period, in combination with the operation information within the preset time period, and first feedback information for the operation position corresponding to the operation information is generated based on the third user experience parameter.

具体的，所述信息处理系统基于预设时间段内的面部表情信息，以及所述预设时间段内的关注点信息，结合操作信息获得对应的第三用户体验参数。例如，在预设时间段t内，获得的面部表情信息为愉悦，所述关注点信息（即持有用户的眼睛在移动终端上的聚焦位置）未出现变化，或者变化的范围比例小于第三阈值，且所述操作信息包含的操作次数小于第四阈值时，对应的第三用户体验参数为5；或者，获得的面部表情信息为平静，所述关注点信息（即持有用户的眼睛在移动终端上的聚焦位置）变化的范围比例大于第三阈值小于第一阈值（所述第三阈值小于所述第一阈值），且所述操作信息包含的操作次数小于第二阈值大于第四阈值（所述第四阈值小于所述第二阈值）时，对应的第三用户体验参数为3；或者，获得的面部表情信息为不满，所述关注点信息（即持有用户的眼睛在移动终端上的聚焦位置）变化的范围比例大于第一阈值，且所述操作信息包含的操作次数大于第二阈值时，对应的第三用户体验参数为0。当然，所述信息处理系统获得第三用户体验参数的方式也可参照其他方式，本实施例中不做赘述。如在一场景中，用户通过输入操作想找到一功能入口，则用户会四处寻找所述功能入口的位置；则所述信息处理系统获得的用户的关注点信息变化的范围比例会大于第一阈值；相应的，在用户寻找所述功能入口的位置过程中，伴随着多次的触发操作，也即所述信息处理系统获得的操作信息包含的操作次数大于第二阈值；相应的，在用户花费了较长时间寻找所述功能入口的位置的场景下，用户会面露不悦的表情；在这种场景下，所述信息处理系统基于对获得的所述面部表情信息、所述关注点信息以及所述操作信息的分析，确定当前用户体验较差，也即用户需要很长时间才能找到所述功能入口，表明所述应用在当前操作位置给用户带来较差的操作体验，需要进行优化或改进。Specifically, the information processing system obtains the corresponding third user experience parameter based on the facial expression information within a preset time period and the attention point information within the preset time period, in combination with the operation information. For example, within a preset time period t, when the obtained facial expression information indicates pleasure, the attention point information (i.e. the focus position of the holding user's eyes on the mobile terminal) does not change or the range ratio of the change is less than a third threshold, and the number of operations contained in the operation information is less than a fourth threshold, the corresponding third user experience parameter is 5; when the obtained facial expression information indicates calm, the range ratio of the change in the attention point information is greater than the third threshold and less than a first threshold (the third threshold being less than the first threshold), and the number of operations is less than a second threshold and greater than the fourth threshold (the fourth threshold being less than the second threshold), the corresponding third user experience parameter is 3; and when the obtained facial expression information indicates dissatisfaction, the range ratio of the change in the attention point information is greater than the first threshold, and the number of operations is greater than the second threshold, the corresponding third user experience parameter is 0. Of course, the information processing system may also obtain the third user experience parameter in other ways, which are not repeated in this embodiment. For example, in one scenario a user tries to find a function entry through input operations and looks around for its location, so the range ratio of the change in the user's attention point information obtained by the information processing system will be greater than the first threshold; correspondingly, while the user searches for the location of the function entry, multiple trigger operations occur, i.e. the number of operations contained in the operation information obtained by the information processing system is greater than the second threshold; and in a scenario where the user spends a long time finding the location of the function entry, the user will show an unhappy expression. In this scenario, based on an analysis of the obtained facial expression information, attention point information and operation information, the information processing system determines that the current user experience is poor, i.e. the user takes a long time to find the function entry, indicating that the application gives the user a poor operating experience at the current operation position and needs to be optimized or improved.
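The combined mapping in this step can be sketched as follows. All concrete threshold values are invented for illustration; the text only constrains their ordering (third range-ratio threshold below the first, fourth operation-count threshold below the second):

```python
# Hypothetical sketch of step 404: combine facial expression, focus-change
# range ratio, and operation count within one time window into the third
# user experience parameter. Thresholds and the default score are assumptions.
T1_RANGE, T3_RANGE = 0.6, 0.2  # first / third range-ratio thresholds (T3 < T1)
T2_OPS, T4_OPS = 10, 3         # second / fourth operation-count thresholds (T4 < T2)

def third_ux_parameter(expression: str, range_ratio: float, op_count: int) -> int:
    """Return the third user experience parameter for one observation window."""
    if expression == "pleasant" and range_ratio < T3_RANGE and op_count < T4_OPS:
        return 5
    if (expression == "calm" and T3_RANGE < range_ratio < T1_RANGE
            and T4_OPS < op_count < T2_OPS):
        return 3
    if expression == "dissatisfied" and range_ratio > T1_RANGE and op_count > T2_OPS:
        return 0
    return 3  # assumed default for combinations the text leaves unspecified

print(third_ux_parameter("dissatisfied", 0.9, 14))  # → 0
```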

采用本发明实施例的技术方案，通过在用户使用第一应用的过程中，识别用户的面部表情信息、用户的关注点信息、以及获得操作信息，基于所述面部表情信息、所述关注点信息和所述操作信息生成第一反馈信息，从而作为进一步对所述第一应用进行优化修改的依据，如此，一方面实现了用户反馈信息的直接、主动的获取，无需用户登录应用商店或网站进行评论或评分，大大提升了用户的操作体验；另一方面，本发明实施例的技术方案在用户的使用过程中进行信息的采集和识别，以便于获知所述第一应用中用户体验不佳的具体位置，便于后续运行维护过程中能够对具体问题进行完善或优化，为应用的运行维护提供了详实的依据。By adopting the technical solution of this embodiment of the present invention, in the process of the user using the first application, the user's facial expression information and attention point information are identified and operation information is obtained, and first feedback information is generated based on the facial expression information, the attention point information and the operation information, serving as a basis for further optimizing and modifying the first application. In this way, on the one hand, direct and active acquisition of user feedback is achieved without requiring the user to log in to an application store or website to comment or score, which greatly improves the user's operating experience; on the other hand, the technical solution of this embodiment collects and identifies information while the user is using the application, so that the specific locations in the first application where the user experience is poor can be learned and specific problems can be improved or optimized during subsequent operation and maintenance, providing a detailed basis for the operation and maintenance of the application.

实施例四Embodiment 4

本发明实施例还提供了一种信息处理系统。图5为本发明实施例的信息处理系统的组成结构示意图;如图5所示,所述系统包括:获取单元51、图像处理单元52和信息生成单元53;其中,The embodiment of the present invention also provides an information processing system. FIG. 5 is a schematic diagram of the composition and structure of an information processing system according to an embodiment of the present invention; as shown in FIG. 5 , the system includes: an acquisition unit 51, an image processing unit 52 and an information generation unit 53; wherein,

The acquisition unit 51 is configured to obtain operation information and image data, wherein the operation information is obtained by detecting a trigger operation on a display interface of a first application while the first application in a mobile terminal is in an activated state; the image data is image data captured by an image acquisition unit of the mobile terminal; and the image acquisition unit is on the same plane as the display unit.

The image processing unit 52 is configured to recognize facial feature data in the image data obtained by the acquisition unit 51, and to obtain facial expression information based on the facial feature data.

The information generation unit 53 is configured to generate first feedback information based on the facial expression information obtained by the image processing unit 52 and the operation information obtained by the acquisition unit 51.

Specifically, the information generation unit 53 is configured to obtain a corresponding first user experience parameter based on the facial expression information, and to generate, based on the first user experience parameter, first feedback information for the operation position corresponding to the operation information.

In this embodiment, when the mobile terminal activates the first application, it outputs a display interface representing the first application, detects a trigger operation on the display interface, and obtains operation information. The operation information includes operation gesture information and operation position information. The operation gesture information includes a tap gesture, a double-tap gesture, a swipe gesture, a drag gesture, a zoom gesture, a rotation gesture, a parameter adjustment gesture (for example, for a volume or brightness parameter), and so on. The operation position information is the position at which the operation gesture is performed, and may correspond to a function button. Further, the operation information may also be continuous operation information collected over a period of time.
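The operation information described above can be sketched as a simple record plus a time-ordered trace. This is a minimal illustrative sketch; the field names, gesture labels and layout are assumptions, not prescribed by the patent.

```python
from dataclasses import dataclass, field
import time

# Hypothetical record for one trigger operation on the display interface.
@dataclass
class OperationInfo:
    gesture: str     # e.g. "tap", "double_tap", "swipe", "drag", "zoom", "rotate"
    position: tuple  # (x, y) on the display, or a function-button identifier
    timestamp: float = field(default_factory=time.time)

# "Continuous operation information over a period of time" is then a list:
trace = [
    OperationInfo("swipe", (120, 640)),
    OperationInfo("swipe", (120, 320)),
    OperationInfo("tap", (80, 200)),
]
op_count = len(trace)  # the "number of operations" used by the later scoring rules
```

The operation count derived from such a trace is what the later embodiments compare against their thresholds.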

When a trigger operation on the display interface is detected, the mobile terminal generates a first instruction and enables the image acquisition unit of the mobile terminal based on the first instruction. In other implementations, the mobile terminal may instead enable its image acquisition unit when the first application is activated, or based on a detected trigger instruction; this embodiment imposes no specific limitation. In this embodiment, the image acquisition unit is on the same plane as the display unit of the mobile terminal; it can be understood that the image acquisition unit may be implemented by the front camera of the mobile terminal. Further, the mobile terminal sends the obtained operation information and the corresponding image data to the information processing system.

In this embodiment, the image processing unit 52 analyzes the image data: it first preprocesses the image data (for example, denoising and normalization of pixel positions or illumination variables) and performs face segmentation, localization or tracking. It then extracts facial feature data from the image data, which includes converting the pixel data into representations of the shape, motion, color, musculature and spatial structure of the face and its components; the extracted facial feature data is used for subsequent expression classification. Further, an expression classifier is preset in the information processing system; the expression classifier either contains correspondences between multiple groups of facial feature data and expression information, or contains an expression classification model. The facial feature data is input into the expression classifier, which outputs the expression information corresponding to the facial feature data; that is, facial expression information is obtained based on the facial feature data. The face analysis and expression recognition described in this embodiment may use any analysis and recognition method in the prior art, and are not described in detail here.
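The lookup-table form of the classifier (stored groups of facial feature data mapped to expression labels) can be sketched as a nearest-neighbor match. The reference feature vectors and labels below are purely illustrative assumptions; a real system would use learned features and a trained model.

```python
import math

# Hypothetical stored correspondences: feature-vector groups -> expression labels.
REFERENCE_FEATURES = {
    "pleased":      [0.9, 0.8, 0.1],
    "calm":         [0.5, 0.5, 0.5],
    "dissatisfied": [0.1, 0.2, 0.9],
}

def classify_expression(features):
    """Return the expression label whose stored feature group is nearest."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(REFERENCE_FEATURES, key=lambda label: dist(features, REFERENCE_FEATURES[label]))

print(classify_expression([0.85, 0.75, 0.15]))  # nearest to the "pleased" group
```

The alternative the text mentions, a full expression classification model, would replace this lookup with an inference call, but the input/output contract (feature data in, expression label out) stays the same.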

In this embodiment, the information generation unit 53 pre-stores multiple groups of correspondences between facial expression information and first user experience parameters. For example, when the facial expression information is "pleased", the corresponding first user experience parameter is 5; when it is "calm", the parameter is 3; and when it is "dissatisfied", the parameter is 0. Of course, in other implementations the facial expression information and the first user experience parameter may be preset according to other correspondences, which are not detailed in this embodiment. That is, the facial expression information can characterize the user's degree of experience. When the degree of experience characterized by the facial expression information reaches a first preset threshold, the user experience is good; first feedback information corresponding to the operation information (including the operation position information) associated with that facial expression information is generated, and in this case the first feedback information indicates that the first application gives the user a good experience at that operation position, so the operation function or content provided there is worth recommending. Correspondingly, when the degree of experience characterized by the facial expression information does not reach a second preset threshold (the second preset threshold being smaller than the first preset threshold), the user experience is poor; first feedback information corresponding to the operation information (including the operation position information) is generated, and in this case the first feedback information indicates that the first application gives the user a poor experience at that operation position, so the operation function or content provided there needs further optimization or improvement.
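The preset correspondence table and the two-threshold rule can be sketched as follows. The parameter values 5/3/0 come from the example in the text; the threshold values and all names are illustrative assumptions (the text only requires the second threshold to be smaller than the first).

```python
# Hypothetical preset correspondences: expression -> first user experience parameter.
EXPERIENCE_PARAM = {"pleased": 5, "calm": 3, "dissatisfied": 0}
FIRST_THRESHOLD = 4    # at or above: experience is good
SECOND_THRESHOLD = 2   # below: experience is poor (must be < FIRST_THRESHOLD)

def first_feedback(expression, operation_position):
    """Generate first feedback information for the given operation position."""
    param = EXPERIENCE_PARAM[expression]
    if param >= FIRST_THRESHOLD:
        return (operation_position, param, "worth recommending")
    if param < SECOND_THRESHOLD:
        return (operation_position, param, "needs optimization or improvement")
    return (operation_position, param, "neutral")

print(first_feedback("dissatisfied", "settings_button"))
```

The tuple returned here stands in for the "first feedback information": the operation position, the experience parameter, and the maintenance conclusion tied to that position.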

As another implementation, the information generation unit 53 obtains the corresponding first user experience parameter based on the combination of the facial expression information and the operation information. For example, when the facial expression information is "pleased" and the number of operations contained in the operation information is less than a first threshold, the corresponding first user experience parameter is 5; when the facial expression information is "calm" and the number of operations is greater than the first threshold but less than a second threshold, the parameter is 3; and when the facial expression information is "dissatisfied" and the number of operations is greater than the second threshold, the parameter is 0. Of course, in other implementations the facial expression information and the first user experience parameter may be preset according to other correspondences, which are not detailed in this embodiment. That is, the combination of the facial expression information and the operation information can characterize the user's degree of experience. For example, in one scenario a user wants to find a function entry through an input operation but only finds it after many swipe operations, and shows a displeased expression. In this scenario, the information processing system obtains facial expression information representing a displeased expression together with operation information containing multiple swipe operations, combines these two pieces of information, and obtains a corresponding first user experience parameter of 0, indicating that at the operation position corresponding to the operation information the user's experience is poor and optimization or improvement is needed.

Those skilled in the art should understand that the functions of the processing modules in the information processing system of the embodiments of the present invention can be understood with reference to the foregoing description of the information processing method. The processing modules in the information processing system of the embodiments of the present invention may be implemented by analog circuits that realize the functions described in the embodiments of the present invention, or by running, on an intelligent terminal, software that performs those functions.

Embodiment 5

An embodiment of the present invention further provides an information processing system. Referring to FIG. 5, the system includes: an acquisition unit 51, an image processing unit 52 and an information generation unit 53, wherein:

The acquisition unit 51 is configured to obtain operation information and image data, wherein the operation information is obtained by detecting a trigger operation on a display interface of a first application while the first application in a mobile terminal is in an activated state; the image data is image data captured by an image acquisition unit of the mobile terminal; the image acquisition unit is on the same plane as the display unit; and the acquisition unit is further configured to obtain position information and orientation information of the mobile terminal itself.

The image processing unit 52 is configured to recognize facial feature data in the image data obtained by the acquisition unit 51, and to associate eye feature data in the facial feature data with the position information and orientation information of the mobile terminal obtained by the acquisition unit 51 to obtain point-of-interest information; the point-of-interest information characterizes the position on the mobile terminal at which the eyes are focused.

The information generation unit 53 is configured to generate second feedback information based on the point-of-interest information obtained by the image processing unit 52 and the operation information obtained by the acquisition unit 51.

Specifically, the information generation unit 53 is configured to obtain a corresponding second user experience parameter based on the point-of-interest information within a preset time period and the operation information within the same preset time period, and to generate, based on the second user experience parameter, second feedback information for the operation position corresponding to the operation information.

In this embodiment, when the mobile terminal activates the first application, it outputs a display interface representing the first application, detects a trigger operation on the display interface, and obtains operation information. The operation information includes operation gesture information and operation position information. The operation gesture information includes a tap gesture, a double-tap gesture, a swipe gesture, a drag gesture, a zoom gesture, a rotation gesture, a parameter adjustment gesture (for example, for a volume or brightness parameter), and so on. The operation position information is the position at which the operation gesture is performed, and may correspond to a function button. Further, the operation information may also be continuous operation information collected over a period of time. When a trigger operation on the display interface is detected, the mobile terminal generates a first instruction and enables the image acquisition unit of the mobile terminal based on the first instruction. In other implementations, the mobile terminal may instead enable its image acquisition unit when the first application is activated, or based on a detected trigger instruction; this embodiment imposes no specific limitation. In this embodiment, the image acquisition unit is on the same plane as the display unit of the mobile terminal; it can be understood that the image acquisition unit may be implemented by the front camera of the mobile terminal.

The image processing unit 52 analyzes the image data: it first preprocesses the image data (for example, denoising and normalization of pixel positions or illumination variables) and performs face segmentation, localization or tracking. It then extracts facial feature data from the image data, which includes converting the pixel data into representations of the shape, motion, color, musculature and spatial structure of the face and its components. The specific extraction of facial feature data may use any facial recognition method in the prior art and is not described in detail in this embodiment.

In this embodiment, the acquisition unit 51 obtains relative position information and orientation information of the mobile terminal; the relative position information is the relative positional relationship between the mobile terminal and its holder. Specifically, the mobile terminal is provided with at least one of the following sensing units: a gravity sensing unit, an acceleration sensing unit, a distance sensing unit, an iris recognition unit, and so on. The mobile terminal may obtain the orientation information through the gravity sensing unit or the acceleration sensing unit; the orientation information may be the angle between the direction of gravity and the long-side or short-side direction of the mobile terminal, i.e. the attitude change information of the mobile terminal. The mobile terminal may also obtain the relative position information between the mobile terminal and the holder through the distance sensing unit or the iris recognition unit. The distance sensing unit and the iris recognition unit are usually arranged on the same face of the mobile terminal as the display unit, so that when the user holds the mobile terminal, the distance to the user can be detected by the distance sensing unit, or the relative orientation between the mobile terminal and the user's eyes can be recognized by the iris recognition unit.

In this embodiment, the image processing unit 52 associates the eye feature data contained in the facial feature data with the relative position information and the orientation information of the mobile terminal to obtain point-of-interest information. The point-of-interest information characterizes the position on the mobile terminal at which the eyes of the user holding the mobile terminal are focused, which can also be understood as the position of the content the holding user's eyes are browsing. Specifically, the information processing system may obtain gaze direction information of the holding user's eyes based on the eye feature data; further, it determines the relative positional relationship between the mobile terminal and the holding user based on the relative position information and the orientation information of the mobile terminal, obtains the focus range within which the holding user's eyes are focused on the mobile terminal based on the gaze direction information and the relative positional relationship, and generates the point-of-interest information based on that focus range.
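The geometric core of associating a gaze direction with the terminal's relative position can be sketched as a ray-plane intersection. This is a deliberately simplified model, assuming the screen lies in the plane z = 0 and the eye position and gaze direction are already expressed in the terminal's coordinate frame; real gaze estimation involves calibration and head-pose modeling the text leaves to the prior art.

```python
def focus_point(eye_pos, gaze_dir):
    """Intersect the gaze ray eye_pos + t * gaze_dir with the screen plane z = 0."""
    ex, ey, ez = eye_pos
    dx, dy, dz = gaze_dir
    if dz == 0:
        return None  # gaze parallel to the screen plane: no focus point
    t = -ez / dz
    if t < 0:
        return None  # looking away from the screen
    return (ex + t * dx, ey + t * dy)

# Eye 0.3 m in front of the screen, looking slightly right and down:
print(focus_point((0.0, 0.0, 0.3), (0.1, -0.2, -1.0)))
```

Accumulating such focus points over the preset time period yields the focus range from which the point-of-interest information is generated.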

In this embodiment, the information generation unit 53 obtains the corresponding second user experience parameter based on the combination of the point-of-interest information and the operation information within a preset time period. For example, within a preset time period t: when the proportion of the range over which the point-of-interest information (i.e. the focus position of the holding user's eyes on the mobile terminal) changes is greater than a first threshold, and the number of operations contained in the operation information is greater than a second threshold, the corresponding second user experience parameter is 0; when the proportion of the range of change of the point-of-interest information is greater than a third threshold but less than the first threshold (the third threshold being smaller than the first threshold), and the number of operations is less than the second threshold but greater than a fourth threshold (the fourth threshold being smaller than the second threshold), the corresponding second user experience parameter is 3; and when the point-of-interest information does not change, or the proportion of its range of change is less than the third threshold, and the number of operations is less than the fourth threshold, the corresponding second user experience parameter is 5. Of course, in other implementations, the manner in which the information processing system associates the eye feature data with the relative position information and orientation information of the mobile terminal to obtain the point-of-interest information may follow other approaches; that is, the information processing system may derive the focus position of the holding user's eyes on the mobile terminal from the eye feature data, the relative position information of the mobile terminal and the orientation information using any image recognition combined with modeling technique in the prior art, which is not described in detail in this embodiment. For example, in one scenario a user wants to find a function entry through an input operation and looks around for the location of that entry; the proportion of the range of change of the user's point-of-interest information obtained by the information processing system will then be greater than the first threshold. Correspondingly, while the user searches for the location of the function entry, multiple trigger operations occur, i.e. the number of operations contained in the operation information obtained by the information processing system is greater than the second threshold. In this scenario the user takes a long time to find the function entry, which indicates that the application gives the user a poor operating experience at the current operation position and needs optimization or improvement.
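The threshold rules for the second user experience parameter can be sketched directly. The concrete threshold values below are illustrative assumptions; the text only fixes their ordering (third < first for the gaze-range proportion, fourth < second for the operation count).

```python
FIRST, THIRD = 0.6, 0.2    # gaze-range-change proportion thresholds (THIRD < FIRST)
SECOND, FOURTH = 10, 3     # operation-count thresholds (FOURTH < SECOND)

def second_ux_param(gaze_range_ratio, op_count):
    """Map (gaze range change, operation count) in a preset period to a score."""
    if gaze_range_ratio > FIRST and op_count > SECOND:
        return 0   # eyes searching widely plus many operations: poor experience
    if THIRD < gaze_range_ratio < FIRST and FOURTH < op_count < SECOND:
        return 3   # moderate searching: middling experience
    if gaze_range_ratio < THIRD and op_count < FOURTH:
        return 5   # stable gaze, few operations: good experience
    return None    # combination not covered by the example rules

print(second_ux_param(0.8, 15))  # the "long search for a function entry" scenario
```

A real implementation would also need a policy for the combinations the example rules leave uncovered (here signaled by `None`).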

Those skilled in the art should understand that the functions of the processing modules in the information processing system of the embodiments of the present invention can be understood with reference to the foregoing description of the information processing method. The processing modules in the information processing system of the embodiments of the present invention may be implemented by analog circuits that realize the functions described in the embodiments of the present invention, or by running, on an intelligent terminal, software that performs those functions.

Embodiment 6

An embodiment of the present invention further provides an information processing system. Referring to FIG. 5, the system includes: an acquisition unit 51, an image processing unit 52 and an information generation unit 53, wherein:

The acquisition unit 51 is configured to obtain operation information and image data, wherein the operation information is obtained by detecting a trigger operation on a display interface of a first application while the first application in a mobile terminal is in an activated state; the image data is image data captured by an image acquisition unit of the mobile terminal; the image acquisition unit is on the same plane as the display unit; and the acquisition unit is further configured to obtain position information and orientation information of the mobile terminal itself.

The image processing unit 52 is configured to recognize facial feature data in the image data obtained by the acquisition unit 51 and to obtain facial expression information based on the facial feature data; it is further configured to associate eye feature data in the facial feature data with the position information and orientation information of the mobile terminal obtained by the acquisition unit 51 to obtain point-of-interest information; the point-of-interest information characterizes the position on the mobile terminal at which the eyes are focused.

The information generation unit 53 is configured to generate first feedback information based on the facial expression information obtained by the image processing unit 52, the point-of-interest information, and the operation information obtained by the acquisition unit 51.

Specifically, the information generation unit 53 is configured to obtain a corresponding third user experience parameter based on the facial expression information and the point-of-interest information within a preset time period, combined with the operation information within the same preset time period, and to generate, based on the third user experience parameter, first feedback information for the operation position corresponding to the operation information.

In this embodiment, when the mobile terminal activates the first application, it outputs a display interface representing the first application, detects a trigger operation on the display interface, and obtains operation information. The operation information includes operation gesture information and operation position information. The operation gesture information includes a tap gesture, a double-tap gesture, a swipe gesture, a drag gesture, a zoom gesture, a rotation gesture, a parameter adjustment gesture (for example, for a volume or brightness parameter), and so on. The operation position information is the position at which the operation gesture is performed, and may correspond to a function button. Further, the operation information may also be continuous operation information collected over a period of time. When a trigger operation on the display interface is detected, the mobile terminal generates a first instruction and enables the image acquisition unit of the mobile terminal based on the first instruction. In other implementations, the mobile terminal may instead enable its image acquisition unit when the first application is activated, or based on a detected trigger instruction; this embodiment imposes no specific limitation. In this embodiment, the image acquisition unit is on the same plane as the display unit of the mobile terminal; it can be understood that the image acquisition unit may be implemented by the front camera of the mobile terminal.

In this embodiment, the image processing unit 52 analyzes the image data: it first preprocesses the image data (for example, denoising and normalization of pixel positions or illumination variables) and performs face segmentation, localization or tracking. It then extracts facial feature data from the image data, which includes converting the pixel data into representations of the shape, motion, color, musculature and spatial structure of the face and its components; the extracted facial feature data is used for subsequent expression classification. Further, an expression classifier is preset in the information processing system; the expression classifier either contains correspondences between multiple groups of facial feature data and expression information, or contains an expression classification model. The facial feature data is input into the expression classifier, which outputs the expression information corresponding to the facial feature data; that is, facial expression information is obtained based on the facial feature data. The face analysis and expression recognition described in this embodiment may use any analysis and recognition method in the prior art, and are not described in detail here.

In this embodiment, the acquisition unit 51 obtains relative position information and orientation information of the mobile terminal; the relative position information is the relative positional relationship between the mobile terminal and its holder. Specifically, the mobile terminal is provided with at least one of the following sensing units: a gravity sensing unit, an acceleration sensing unit, a distance sensing unit, an iris recognition unit, and so on. The mobile terminal may obtain the orientation information through the gravity sensing unit or the acceleration sensing unit; the orientation information may be the angle between the direction of gravity and the long-side or short-side direction of the mobile terminal, i.e. the attitude change information of the mobile terminal. The mobile terminal may also obtain the relative position information between the mobile terminal and the holder through the distance sensing unit or the iris recognition unit. The distance sensing unit and the iris recognition unit are usually arranged on the same face of the mobile terminal as the display unit, so that when the user holds the mobile terminal, the distance to the user can be detected by the distance sensing unit, or the relative orientation between the mobile terminal and the user's eyes can be recognized by the iris recognition unit.

Further, the image processing unit 52 associates the eye feature data contained in the facial feature data with the relative position information and the direction information of the mobile terminal to obtain point-of-interest information. The point-of-interest information represents the position on the mobile terminal on which the holding user's eyes are focused, which can also be understood as the position of the content the holding user is viewing. Specifically, the information processing system may obtain the gaze direction of the holding user's eyes based on the eye feature data; determine the relative positional relationship between the mobile terminal and the holding user based on the relative position information and the direction information; obtain, from the gaze direction and that relative positional relationship, the focus range in which the holding user's eyes are focused on the mobile terminal; and generate the point-of-interest information based on the focus range.
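The chain from gaze direction plus relative position to a focus position on the screen can be sketched as a ray-plane intersection. This is a simplified geometric model chosen for illustration; the patent does not specify the actual computation:

```python
def focus_point(eye_pos, gaze_dir):
    """Intersect a gaze ray with the screen plane z = 0.

    eye_pos:  (x, y, z) of the eye in screen coordinates, z > 0 in front
              of the screen; this is what the relative position and
              direction information let the system reconstruct.
    gaze_dir: (dx, dy, dz) gaze direction, dz < 0 toward the screen.
    Returns the (x, y) screen point the eye is focused on, or None if the
    gaze does not meet the screen plane.
    """
    ex, ey, ez = eye_pos
    dx, dy, dz = gaze_dir
    if dz >= 0:          # looking parallel to or away from the screen
        return None
    t = -ez / dz         # ray parameter at which z reaches 0
    return (ex + t * dx, ey + t * dy)

# Eye 30 cm in front of the screen origin, looking straight at it:
print(focus_point((0.0, 0.0, 30.0), (0.0, 0.0, -1.0)))  # (0.0, 0.0)
```

Repeating this per frame and taking the spread of the resulting points yields the focus range from which the point-of-interest information is generated.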

In this embodiment, the information generating unit 53 obtains a corresponding third user experience parameter based on the facial expression information and the point-of-interest information within a preset time period, combined with the operation information. For example, within a preset time period t: when the obtained facial expression information is pleasure, the point-of-interest information (i.e., the focus position of the holding user's eyes on the mobile terminal) does not change, or its range of change is smaller than a third threshold, and the number of operations contained in the operation information is smaller than a fourth threshold, the corresponding third user experience parameter is 5. When the obtained facial expression information is calm, the range of change of the point-of-interest information is greater than the third threshold and smaller than a first threshold (the third threshold being smaller than the first threshold), and the number of operations contained in the operation information is smaller than a second threshold and greater than the fourth threshold (the fourth threshold being smaller than the second threshold), the corresponding third user experience parameter is 3. When the obtained facial expression information is dissatisfaction, the range of change of the point-of-interest information is greater than the first threshold, and the number of operations contained in the operation information is greater than the second threshold, the corresponding third user experience parameter is 0. Of course, the information processing system may also obtain the third user experience parameter in other ways, which are not elaborated in this embodiment.
For example, in one scenario, a user wants to find a function entry through input operations and therefore looks around for its location; the range of change of the user's point-of-interest information obtained by the information processing system will then be greater than the first threshold. Correspondingly, while the user searches for the location of the function entry, multiple trigger operations occur, so the number of operations contained in the operation information obtained by the information processing system is greater than the second threshold. Likewise, after spending a long time searching for the function entry, the user will show an unhappy expression. In this scenario, by analyzing the obtained facial expression information, point-of-interest information, and operation information, the information processing system determines that the current user experience is poor; that is, it takes the user a long time to find the function entry, indicating that the application delivers a poor operating experience at the current operation position and needs to be optimized or improved.
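A minimal sketch of the three enumerated cases above. The threshold values are made up for illustration; the description fixes only their ordering (third below first, fourth below second), not their values:

```python
# Illustrative threshold values only; the ordering matches the description.
FIRST, SECOND, THIRD, FOURTH = 0.6, 20, 0.2, 5  # THIRD < FIRST, FOURTH < SECOND

def experience_parameter(expression, focus_change_ratio, operation_count):
    """Map one preset time period's observations to the 5/3/0 user
    experience parameter. Combinations outside the three enumerated
    cases fall through to None, since the description leaves them to
    'other ways' of scoring."""
    if (expression == "pleasure"
            and focus_change_ratio < THIRD
            and operation_count < FOURTH):
        return 5
    if (expression == "calm"
            and THIRD < focus_change_ratio < FIRST
            and FOURTH < operation_count < SECOND):
        return 3
    if (expression == "dissatisfaction"
            and focus_change_ratio > FIRST
            and operation_count > SECOND):
        return 0
    return None

print(experience_parameter("pleasure", 0.05, 2))         # 5
print(experience_parameter("dissatisfaction", 0.8, 30))  # 0
```

The "lost user hunting for a function entry" scenario corresponds to the last branch: a large focus-change ratio plus many trigger operations plus an unhappy expression yields the lowest score.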

Those skilled in the art should understand that the functions of the processing modules in the information processing system of the embodiments of the present invention can be understood with reference to the foregoing description of the information processing method. The processing modules in the information processing system of the embodiments of the present invention may be implemented by analog circuits that realize the functions described in the embodiments of the present invention, or by running, on an intelligent terminal, software that performs those functions.

In Embodiments 4 to 6 of the present invention, the image processing unit 52 and the information generating unit 53 of the information processing system may, in practical applications, be implemented by a central processing unit (CPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA) in the system; the obtaining unit 51 of the information processing system may, in practical applications, be implemented by a transceiver antenna or a transceiver in the system.

In the several embodiments provided in this application, it should be understood that the disclosed device and method may be implemented in other ways. The device embodiments described above are merely illustrative. For example, the division into units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the coupling, direct coupling, or communication connections between the components shown or discussed may be indirect coupling or communication connections through interfaces, devices, or units, and may be electrical, mechanical, or of other forms.

The units described above as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.

In addition, the functional units in the embodiments of the present invention may all be integrated into one processing unit, each unit may stand alone as a unit, or two or more units may be integrated into one unit. The integrated unit may be implemented either in the form of hardware or in the form of hardware plus software functional units.

Those of ordinary skill in the art will understand that all or part of the steps of the above method embodiments may be completed by hardware under the control of program instructions. The aforementioned program may be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes any medium that can store program code, such as a removable storage device, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Alternatively, if the above integrated unit of the present invention is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium. Based on this understanding, the technical solutions of the embodiments of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions that cause a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a removable storage device, a ROM, a RAM, a magnetic disk, or an optical disc.

The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any change or substitution readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. An information processing method, characterized in that the method comprises:
obtaining operation information and image data; the operation information is obtained by detecting a trigger operation on a display interface of a first application while the first application in a mobile terminal is in an activated state; the image data is acquired by an image acquisition unit of the mobile terminal; the image acquisition unit and a display unit are on the same plane;
identifying facial feature data in the image data, and obtaining facial expression information based on the facial feature data;
obtaining relative position information and direction information of the mobile terminal;
associating eye feature data in the facial feature data with the relative position information and the direction information to obtain point-of-interest information; specifically, gaze direction information of eyes is obtained based on the eye feature data, a relative positional relationship between the mobile terminal and the eyes is determined based on the relative position information and the direction information, a focus range in which the eyes are focused on the mobile terminal is obtained based on the gaze direction information and the relative positional relationship, and the point-of-interest information is generated based on the focus range; the point-of-interest information represents focus position information of the eyes on the mobile terminal;
generating first feedback information based on the facial expression information, the range of change of the point-of-interest information, and the operation information.
2. The method of claim 1, further comprising:
generating second feedback information based on the point-of-interest information and the operation information.
3. The method of claim 2, wherein generating the second feedback information based on the point-of-interest information and the operation information comprises:
obtaining a corresponding second user experience parameter based on the point-of-interest information in a preset time period and the operation information in the preset time period, and generating second feedback information of an operation position corresponding to the operation information based on the second user experience parameter.
4. The method of claim 1, wherein generating the first feedback information based on the facial expression information, the point-of-interest information, and the operation information comprises:
obtaining a corresponding third user experience parameter based on the facial expression information and the point-of-interest information in a preset time period in combination with the operation information in the preset time period, and generating first feedback information of an operation position corresponding to the operation information based on the third user experience parameter.
5. An information processing system, the system comprising: an acquisition unit, an image processing unit and an information generation unit; wherein,
the acquisition unit is configured to obtain operation information and image data; the operation information is obtained by detecting a trigger operation on a display interface of a first application while the first application in a mobile terminal is in an activated state; the image data is acquired by an image acquisition unit of the mobile terminal; the image acquisition unit and a display unit are on the same plane; the acquisition unit is further configured to obtain relative position information and direction information of the mobile terminal;
the image processing unit is configured to identify facial feature data in the image data and obtain facial expression information based on the facial feature data; the image processing unit is further configured to associate eye feature data in the facial feature data with the relative position information and the direction information of the mobile terminal obtained by the acquisition unit to obtain point-of-interest information; specifically, gaze direction information of eyes is obtained based on the eye feature data, a relative positional relationship between the mobile terminal and the eyes is determined based on the relative position information and the direction information, a focus range in which the eyes are focused on the mobile terminal is obtained based on the gaze direction information and the relative positional relationship, and the point-of-interest information is generated based on the focus range; the point-of-interest information represents focus position information of the eyes on the mobile terminal;
the information generating unit is configured to generate first feedback information based on the facial expression information obtained by the image processing unit, the range of change of the point-of-interest information, and the operation information obtained by the acquisition unit.
6. The system of claim 5, wherein the information generating unit is further configured to generate second feedback information based on the point-of-interest information and the operation information.
7. The system according to claim 6, wherein the information generating unit is configured to obtain a corresponding second user experience parameter based on the point-of-interest information in a preset time period and the operation information in the preset time period, and to generate second feedback information of an operation position corresponding to the operation information based on the second user experience parameter.
8. The system according to claim 5, wherein the information generating unit is configured to obtain a corresponding third user experience parameter based on the facial expression information and the point-of-interest information in a preset time period in combination with the operation information in the preset time period, and to generate first feedback information of an operation position corresponding to the operation information based on the third user experience parameter.
CN201510869366.7A 2015-12-02 2015-12-02 Information processing method and system Active CN106815264B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510869366.7A CN106815264B (en) 2015-12-02 2015-12-02 Information processing method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510869366.7A CN106815264B (en) 2015-12-02 2015-12-02 Information processing method and system

Publications (2)

Publication Number Publication Date
CN106815264A CN106815264A (en) 2017-06-09
CN106815264B true CN106815264B (en) 2020-08-04

Family

ID=59107979

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510869366.7A Active CN106815264B (en) 2015-12-02 2015-12-02 Information processing method and system

Country Status (1)

Country Link
CN (1) CN106815264B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108848416A (en) * 2018-06-21 2018-11-20 北京密境和风科技有限公司 The evaluation method and device of audio-video frequency content
WO2021050595A1 (en) 2019-09-09 2021-03-18 Apple Inc. Multimodal inputs for computer-generated reality
CN114218113B (en) * 2021-12-22 2025-02-21 深圳万兴软件有限公司 Software product experience monitoring method, device and related components based on physiological parameters

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104462468A (en) * 2014-12-17 2015-03-25 百度在线网络技术(北京)有限公司 Information supply method and device
CN104699769A (en) * 2015-02-28 2015-06-10 北京京东尚科信息技术有限公司 Interacting method based on facial expression recognition and equipment executing method
CN104881350A (en) * 2015-04-30 2015-09-02 百度在线网络技术(北京)有限公司 Method and device for confirming user experience and method and device for assisting in user experience confirmation


Also Published As

Publication number Publication date
CN106815264A (en) 2017-06-09

Similar Documents

Publication Publication Date Title
EP3284011B1 (en) Two-dimensional infrared depth sensing
US11650659B2 (en) User input processing with eye tracking
US10429944B2 (en) System and method for deep learning based hand gesture recognition in first person view
CN110209273B (en) Gesture recognition method, interactive control method, device, medium and electronic device
JP6662876B2 (en) Avatar selection mechanism
CN111881763B (en) Method, device, storage medium and electronic device for determining user gaze position
CN113015984A (en) Error correction in convolutional neural networks
JP2015526927A (en) Context-driven adjustment of camera parameters
CN104364733A (en) Position-of-interest detection device, position-of-interest detection method, and position-of-interest detection program
CN108198159A (en) A kind of image processing method, mobile terminal and computer readable storage medium
KR102094953B1 (en) Method for eye-tracking and terminal for executing the same
CN106815264B (en) Information processing method and system
CN110866940A (en) Control method, device, terminal device and storage medium for virtual screen
CN108140124A (en) Prompt information determination method and device, electronic equipment and computer program product
Bâce et al. Accurate and robust eye contact detection during everyday mobile device interactions
CN107844734B (en) Monitoring target determination method and device, video monitoring method and device
US10091436B2 (en) Electronic device for processing image and method for controlling the same
CN115393962A (en) Action recognition method, head-mounted display device and storage medium
CN109040588A (en) Face image photographing method and device, storage medium and terminal
Delabrida et al. Towards a wearable device for monitoring ecological environments
CN115062131B (en) A human-computer interaction method and device based on multimodality
CN114779944B (en) Control command generation method and apparatus
US20260038147A1 (en) Hand touch detection using images
CN120302093A (en) Information processing method and electronic device
KR20190103570A (en) Method for eye-tracking and terminal for executing the same

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant