WO2019062852A1 - Display content control method and apparatus, and computer-readable medium - Google Patents
Display content control method and apparatus, and computer-readable medium
- Publication number
- WO2019062852A1 (PCT application No. PCT/CN2018/108320)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- face
- relative
- user
- terminal screen
- angle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
Definitions
- the present disclosure relates to, but is not limited to, the field of communication technology.
- a display content control method refers to a method of controlling the content shown on a terminal display device.
- at present, display content control methods fall mainly into active and passive types. In the passive type, the terminal provides a human-computer interaction interface and the user controls the display through, for example, touch operations.
- the active type uses the various sensors in the terminal to actively detect user behavior, and judges and implements display control according to preset parameters.
- because it offers a better user experience, the active display control approach is the current mainstream research direction and is widely applied in products such as mobile phones and VR (Virtual Reality) terminals.
- the present disclosure provides a method of controlling display content.
- the method includes: separately acquiring sensor data and a face image; calculating a relative position of the face and the terminal screen according to the sensor data, wherein the relative position of the face and the terminal screen includes a relative distance and a relative angle between the face and the terminal screen; extracting facial feature data from the face image and, according to the facial feature data combined with the relative position of the face and the terminal screen, respectively obtaining a viewing angle adjustment signal and a depth of field adjustment signal; and controlling the display content of the terminal screen according to the viewing angle adjustment signal and the depth of field adjustment signal.
- the present disclosure also provides a control device for displaying content.
- the control device for displaying content includes an acquisition module, a relative position analysis module, a data analysis module, and a display module.
- the acquisition module is configured to separately collect sensor data and face images.
- the relative position analysis module is configured to calculate a relative position of the face and the terminal screen according to the sensor data, wherein the relative position of the face and the terminal screen includes a relative distance and a relative angle between the face and the terminal screen.
- the data analysis module is configured to extract facial feature data from the face image and, according to the facial feature data combined with the relative position of the face and the terminal screen, respectively obtain a viewing angle adjustment signal and a depth of field adjustment signal.
- the display module is configured to control the display content of the terminal screen according to the viewing angle adjustment signal and the depth of field adjustment signal.
- the present disclosure also provides a computer readable storage medium having stored thereon a program, the program being executed by a processor to implement the steps of any of the methods described herein.
- FIG. 1 is a flowchart of a display content control method according to an embodiment of the present disclosure
- FIG. 2 is a schematic flow chart of a VR terminal display content control method according to an embodiment of the present disclosure
- FIG. 3 is a schematic structural diagram of a display content control apparatus according to an embodiment of the present disclosure.
- in active control methods, the main technical difference lies in how user behavior is detected and analyzed; two types of methods are representative.
- one type controls the terminal display according to the movement of the user's head and body, such as Chinese Patent Application No. 201610527481.0, titled "Virtual reality system and control method for virtual reality glasses" (Patent Document 1);
- the other type controls the terminal display according to the direction of eye movement, such as Chinese Patent Application No. 201510159113.0, titled "Virtual reality imaging system and method with eye tracking" (Patent Document 2).
- the solution of Patent Document 1 includes two parts: virtual reality glasses and a terminal.
- the virtual reality glasses are the carrier of the terminal: in use, the glasses are worn on the user's head and the terminal is placed and fixed inside them. The terminal contains sensing devices such as speed sensors and gyroscopes; when the user's head moves, these sensors generate parameters describing the terminal's motion, which are passed to a processor for calculation and analysis and ultimately control the content displayed by the terminal. Therefore, compared with a mature terminal such as a mobile phone, what this virtual reality system actually needs to implement is the virtual reality glasses, which serve only as a fixture between the user's head and the terminal.
- although the terminal implemented in Patent Document 1 can control the display, it has the following disadvantages:
- the terminal requires the user to wear a helmet-like device, and this additional wearing device affects the user's comfort.
- the terminal controls the display based on detection of the direction of the user's head and body, which requires the user to use it in a relatively open space and places an environmental restriction on use.
- the terminal can only produce rotation effects in the horizontal and vertical directions, i.e. it can only control changes of the display viewing angle.
- the solution of Patent Document 2 includes a helmet, an eyeball scanner, an eyelid tracking device, a receiving module, a processing module, and a display device.
- the helmet is the carrier of the whole virtual reality system, and the other modules and components are mounted on it;
- the eyeball scanner is used to detect the iris and retina information of the user's eyeballs;
- the eyelid tracking device is used to detect the form of the user's eyelid movements;
- the receiving module is configured to receive the detection data of the eyeball scanner and the eyelid tracking device;
- the processing module is configured to compare and analyze the eye movement information to determine the movement state of the eyeballs; and the display device displays information according to the movement state of the eyeballs.
- although the terminal implemented in Patent Document 2 can control the display, it has the following disadvantages:
- the terminal requires the user to wear a helmet-like device, and this additional wearing device affects the user's comfort.
- the terminal can only detect the horizontal direction and cannot control the full range of viewing angles or the depth of field.
- in short, the above two types of methods either require a wearable device, which affects user comfort, or can detect only the horizontal direction and cannot control the full range of viewing angles and the depth of field, giving a poor user experience.
- the present disclosure therefore provides a display content control method and apparatus, and a computer readable medium, which avoid one or more of the problems caused by the limitations and disadvantages of the related art.
- the display content control method mainly realizes dynamic adjustment of the display viewing angle and depth of field according to the user's behavior, so that both are adjusted toward the target the user expects.
- its core is the method of analyzing the user's intent.
- generally, movements of the user's head, body, and eyeballs are direct reflections of the user's intent; changes in the user's facial expression also reflect that intent.
- therefore, analysis of the movements of the user's body and organs, or of the user's expression, can in principle serve as a data source for display control.
- FIG. 1 is a flowchart of a display content control method according to an embodiment of the present disclosure. As shown in FIG. 1, the method for controlling display content according to an embodiment of the present disclosure includes the following steps S101 to S104.
- at step S101, sensor data and a face image are separately collected.
- in some embodiments, the sensor data includes data collected by an accelerometer, a gyroscope, and a distance sensor.
- the face image can be acquired, for example, by an imaging device.
- at step S102, a relative position of the face and the terminal screen is calculated according to the sensor data, wherein the relative position of the face and the terminal screen includes a relative distance and a relative angle between the face and the terminal screen.
- in some embodiments, after the relative position is calculated, the method further comprises: determining whether the sensor data, or the change in the sensor data, exceeds a preset threshold; and, when the preset threshold is exceeded, recalculating the relative position of the face and the terminal screen.
- calculating the relative position of the face and the terminal screen includes the following steps: obtaining the relative distance between the face and the terminal screen from the collected distance between the terminal screen and the face; obtaining the normal and tangential directions of the terminal screen from the collected attitude information of the terminal; obtaining a feature plane from a plurality of feature points of the collected face image; and calculating the deflection angles of the feature plane in the normal and tangential directions to obtain the relative angle between the face and the terminal screen.
- the plurality of feature points of the face may be the corners of the two eyes and the tip of the nose.
- in the structure of the human face, the eye corners and the nose tip approximately form an isosceles triangle; hence, when the face is perpendicular to the normal of the terminal screen, the observed shape should also be such a triangle. As the user's face moves, this triangle changes accordingly. From the changes in the lengths of its three sides, the deflection angles of the triangle in the normal and tangential directions can be calculated in three-dimensional space, yielding the relative angle between the face and the terminal screen when the user looks straight ahead.
- at step S103, facial feature data is extracted from the face image, and a viewing angle adjustment signal and a depth of field adjustment signal are respectively obtained according to the facial feature data combined with the relative position of the face and the terminal screen.
- obtaining the viewing angle adjustment signal according to the facial feature data combined with the relative position of the face and the terminal screen includes the following steps: calculating the angle of the user's line of sight relative to the face according to the facial feature data; obtaining the angle of the user's line of sight relative to the terminal screen from the relative angle between the face and the terminal screen and the angle of the line of sight relative to the face; obtaining the position of the user's line of sight on the terminal screen from the angle of the line of sight relative to the terminal screen and the relative position of the face and the terminal screen; and determining, from that on-screen position, the user's viewing angle adjustment requirement to obtain the viewing angle adjustment signal.
- obtaining the depth of field adjustment signal according to the facial feature data combined with the relative position of the face and the terminal screen includes the following steps: analyzing the change trend of the facial feature data and, in combination with the relative position of the face and the terminal screen, determining the user's depth of field adjustment requirement to obtain the depth of field adjustment signal.
- examples of the facial feature data include one or more of the following: the distance of each eyeball from the eye corners, the shape of the figure formed by the eye socket and the eyeball, the shape of the eyebrows, forehead muscle texture features, eye socket muscle texture features, nose muscle texture features, and the distance between the eyebrows and the hairline.
- accordingly, calculating the angle of the user's line of sight relative to the face according to the facial feature data includes the following step: calculating that angle from the distance of each eyeball from the eye corners and from the shape of the figure formed by the eye socket and the eyeball.
- this calculation includes the following steps: calculating the horizontal angle of the user's line of sight relative to the face from the distance of each eyeball from the eye corners; calculating the vertical angle of the line of sight relative to the face from the shape of the figure formed by the eye socket and the eyeball; and obtaining the angle of the user's line of sight relative to the face from the horizontal and vertical angles.
- in the structure of the human eye, the eyeball is symmetrically centered. When the eyeballs deflect left or right, the distances from each eyeball to its respective eye corners change in opposite directions, from which the horizontal angle of the user's line of sight relative to the face can be calculated. When the eyeballs deflect up or down, the shapes of the eyeball and eye socket change, and the vertical angle of the line of sight relative to the face is calculated from the shape of the eye socket and the visible proportion of the eyeball.
- the depth of field is controlled in the present disclosure mainly by analyzing the expression of the user's eye region.
- human eye-region expressions are produced mainly by the corrugator, orbicularis oculi, nasalis, and frontalis muscles, and appear in an expression as muscle changes around the eyebrows, eye sockets, nose, and forehead, together with the strength and trend of those changes. For example, when the user's eyebrow and forehead muscles squeeze inward and the eye socket and nose muscles contract, and this is held beyond a preset time, it can be determined that the user intends to pull the depth of field closer; when the eyebrow and forehead muscles push outward and the eye socket and nose muscles expand beyond the preset time, it can be determined that the user intends to push the depth of field farther away.
- the change trends analyzed include one or more of the following: the squeezing trend of the eyebrow shape and forehead muscle texture (or of the eyebrow muscle texture and forehead muscle texture), changes in the distance between the eyebrows and the hairline, and contraction or expansion trends of the eye socket muscle texture features and nose muscle texture features.
- at step S104, the display content of the terminal screen is controlled according to the viewing angle adjustment signal and the depth of field adjustment signal.
- the display content control method provided by the embodiments of the present disclosure can obtain a viewing angle adjustment signal and a depth of field adjustment signal from a face image combined with the relative position of the face and the terminal screen, and control the display content of the terminal screen accordingly, improving the user experience.
- in the following examples, depth of field refers to how near or far the virtual video appears; viewing angle refers to the direction in which the virtual video is displayed; and attitude refers to the angles between different planes in space.
- FIG. 2 is a flow chart showing a method of controlling display content of a VR terminal according to an embodiment of the present disclosure.
- the control method of the display content of the VR terminal of the embodiment of the present disclosure may include the following steps S201 to S208.
- at step S201, after VR video playback starts, each sensor is checked and relevant data is collected; the relative position of the terminal screen and the user's face is calculated, and the viewing angle and depth of field of video playback are initialized.
- at step S202, the system checks whether the data of the accelerometer, the distance sensor, and the gyroscope exceeds a preset threshold; if so, the flow proceeds to step S203, otherwise to step S204.
- at step S203, the relative position of the terminal screen and the user's face is calculated from the sensor data and used as a calculation parameter for depth of field and viewing angle control.
- at step S204, the user's facial feature data is extracted from the image captured by the camera, including the feature triangle formed by the eye corners and the nose tip, the figure formed by the eye socket and the eyeball, the shape of the eyebrows, the shape of the eye sockets, forehead muscle texture features, the distance between the eyebrows and the hairline, nose muscle texture features, and so on.
- at step S205, from the user's facial feature parameters, the squeezing trend of the eyebrow and forehead muscle texture, the change in the distance between the eyebrows and the hairline, and the contraction and expansion trends of the eye socket and nose muscle texture are calculated, and the direction and speed of depth of field control are computed according to preset parameters.
- at step S206, the angle between the user's face plane and the terminal screen is calculated from the user's facial feature parameters, and the angle of the eyes' line of sight relative to the user's face plane is calculated from the positions of the eyeballs relative to the eye sockets and the shapes of the eyeball and eye socket. The two spatial angles are then combined to calculate the angle of the user's line of sight relative to the display plane; combined with the relative position of the terminal screen and the user's face, the position of the user's viewpoint on the terminal display plane is calculated, and the direction and speed of the viewing angle change are analyzed according to preset parameters.
- at step S207, the depth of field and viewing angle adjustment parameters are converted into corresponding control signals, and the display device is controlled to adjust the display content.
- at step S208, it is judged whether an end-of-playback signal has been detected; if not, the flow returns to step S202, otherwise playback ends and the flow ends.
- at present, display content control methods are implemented mainly through wearable devices, chiefly to fix the relative position of the terminal and the user and to make the terminal follow the user's motion so that the terminal's sensors can perceive the type of the user's motion.
- in the present disclosure, a relative attitude approach is adopted instead.
- the terminal uses sensor devices such as an accelerometer, a gyroscope, a camera, and a distance sensor to measure and analyze the relative position and attitude between the user and the terminal screen and to analyze the user's relative motion type, and controls the terminal display on that basis.
- FIG. 3 is a schematic structural diagram of a display content control apparatus according to an embodiment of the present disclosure.
- the display content control apparatus of the embodiment of the present disclosure includes: an acquisition module 30, a relative position analysis module 32, a data analysis module 34, and a display module 36.
- the acquisition module 30 is configured to separately collect sensor data and a face image.
- the acquisition module includes an accelerometer, a gyroscope, a distance sensor, and a camera.
- the distance sensor collects the distance between the terminal and the face; the gyroscope collects the attitude information of the terminal; the camera collects the face image; and the accelerometer collects the motion information of the terminal.
- the relative position analysis module 32 is configured to calculate a relative position of the face and the terminal screen according to the sensor data, wherein the relative position of the face and the terminal screen includes a relative distance and a relative angle between the face and the terminal screen.
- the relative position analysis module 32 is further configured to: determine whether the sensor data, or the change in the sensor data, exceeds a preset threshold, and recalculate the relative position of the face and the terminal screen when the preset threshold is exceeded.
- the data analysis module 34 is configured to extract facial feature data from the face image and, according to the facial feature data combined with the relative position of the face and the terminal screen, respectively obtain a viewing angle adjustment signal and a depth of field adjustment signal.
- the data analysis module 34 includes a face analysis module, a viewing angle analysis control module, and a depth of field analysis control module.
- the face analysis module is configured to extract face feature data in the face image.
- the viewing angle analysis control module is configured to: calculate the angle of the user's line of sight relative to the face according to the facial feature data; obtain the angle of the user's line of sight relative to the terminal screen from the relative angle between the face and the terminal screen and the angle of the line of sight relative to the face; obtain the position of the user's line of sight on the terminal screen using the angle of the line of sight relative to the terminal screen and the relative position of the face and the terminal screen; and determine, from that on-screen position, the user's viewing angle adjustment requirement to obtain the viewing angle adjustment signal.
- the depth of field analysis control module is configured to: analyze the change trend of the facial feature data and, in combination with the relative position of the face and the terminal screen, determine the user's depth of field adjustment requirement to obtain the depth of field adjustment signal.
- the display module 36 is configured to control display content of the terminal screen according to the viewing angle adjustment signal and the depth of field adjustment signal.
- examples of the facial feature data include one or more of the following: the distance of each eyeball from the eye corners, the shape of the figure formed by the eye socket and the eyeball, the shape of the eyebrows, forehead muscle texture features, eye socket muscle texture features, nose muscle texture features, and the distance between the eyebrows and the hairline.
- accordingly, the viewing angle analysis control module is configured to calculate the angle of the user's line of sight relative to the face from the distance of each eyeball from the eye corners and from the shape of the figure formed by the eye socket and the eyeball.
- the viewing angle analysis control module is configured to: calculate the horizontal angle of the user's line of sight relative to the face from the distance of each eyeball from the eye corners; calculate the vertical angle of the line of sight relative to the face from the shape of the figure formed by the eye socket and the eyeball; and obtain the angle of the user's line of sight relative to the face from the horizontal and vertical angles.
- for a specific implementation of the display content control apparatus provided by the embodiments of the present disclosure, reference may be made to the description of the foregoing method embodiments; details are not repeated here.
- the display content control apparatus provided by the embodiments of the present disclosure requires no wearable equipment: from the face image alone, combined with the relative position of the face and the terminal screen, it can obtain the viewing angle adjustment signal and the depth of field adjustment signal and control the display content of the terminal screen accordingly, improving the user experience.
- the present disclosure also provides a computer readable storage medium.
- the computer readable storage medium stores a display content control program which, when executed by a processor, implements the steps of the display content control method described herein.
- the computer readable storage medium provided by the embodiments of the present disclosure can obtain the viewing angle adjustment signal and the depth of field adjustment signal from the face image combined with the relative position of the face and the terminal screen, and thereby control the display content of the terminal screen, improving the user experience.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Controls And Circuits For Display Device (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present disclosure provides a display content control method and apparatus, and a computer-readable medium. The display content control method includes: separately collecting sensor data and a face image; calculating a relative position of the face and the terminal screen according to the sensor data, wherein the relative position of the face and the terminal screen includes a relative distance and a relative angle between the face and the terminal screen; extracting facial feature data from the face image and, according to the facial feature data combined with the relative position of the face and the terminal screen, respectively obtaining a viewing angle adjustment signal and a depth of field adjustment signal; and controlling the display content of the terminal screen according to the viewing angle adjustment signal and the depth of field adjustment signal.
Description
The present disclosure relates to, but is not limited to, the field of communication technology.
A display content control method is a method of controlling the content displayed on a terminal display device. At present, display content control methods fall mainly into active and passive types. In the passive type, the terminal provides a human-computer interaction interface and the user controls the display through, for example, touch operations. The active type uses the various sensors in the terminal to actively detect user behavior and judges and implements display control according to preset parameters. Because it offers a better user experience, the active display control approach is the current mainstream research direction and is widely applied in products such as mobile phones and VR (Virtual Reality) terminals.
Summary of the Invention
In one aspect, the present disclosure provides a display content control method. The method includes: separately collecting sensor data and a face image; calculating a relative position of the face and the terminal screen according to the sensor data, wherein the relative position of the face and the terminal screen includes a relative distance and a relative angle between the face and the terminal screen; extracting facial feature data from the face image and, according to the facial feature data combined with the relative position of the face and the terminal screen, respectively obtaining a viewing angle adjustment signal and a depth of field adjustment signal; and controlling the display content of the terminal screen according to the viewing angle adjustment signal and the depth of field adjustment signal.
In another aspect, the present disclosure further provides a display content control apparatus. The apparatus includes an acquisition module, a relative position analysis module, a data analysis module, and a display module. The acquisition module is configured to separately collect sensor data and a face image. The relative position analysis module is configured to calculate a relative position of the face and the terminal screen according to the sensor data, wherein the relative position of the face and the terminal screen includes a relative distance and a relative angle between the face and the terminal screen. The data analysis module is configured to extract facial feature data from the face image and, according to the facial feature data combined with the relative position of the face and the terminal screen, respectively obtain a viewing angle adjustment signal and a depth of field adjustment signal. The display module is configured to control the display content of the terminal screen according to the viewing angle adjustment signal and the depth of field adjustment signal.
In another aspect, the present disclosure further provides a computer-readable storage medium having a program stored thereon which, when executed by a processor, implements the steps of any of the methods described herein.
FIG. 1 is a flowchart of a display content control method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of a VR terminal display content control method according to an embodiment of the present disclosure;
FIG. 3 is a schematic structural diagram of a display content control apparatus according to an embodiment of the present disclosure.
Exemplary embodiments of the present disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present disclosure, it should be understood that the present disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present disclosure will be understood more thoroughly and so that its scope can be fully conveyed to those skilled in the art.
First, display content control methods in the related art are described below.
In active control methods, the main technical difference lies in how user behavior is detected and analyzed. Two types of methods are representative: one controls the terminal display according to the movement of the user's head and body, such as Chinese Patent Application No. 201610527481.0, titled "Virtual reality system and control method for virtual reality glasses" (Patent Document 1); the other controls the terminal display according to the direction of eye movement, such as Chinese Patent Application No. 201510159113.0, titled "Virtual reality imaging system and method with eye tracking" (Patent Document 2).
The solution of Patent Document 1 includes two parts: virtual reality glasses and a terminal. The virtual reality glasses are the carrier of the terminal: in use, the glasses are worn on the user's head and the terminal is placed and fixed inside them. The terminal contains sensing devices such as speed sensors and gyroscopes; when the user's head moves, these sensors generate parameters describing the terminal's motion, which are passed to a processor for calculation and analysis and ultimately control the content displayed by the terminal. Therefore, compared with a mature terminal such as a mobile phone, what this virtual reality system actually needs to implement is the virtual reality glasses, which serve only as a fixture between the user's head and the terminal.
Although the terminal implemented in Patent Document 1 can control the display, it has the following disadvantages:
1. The terminal requires the user to wear a helmet-like device, and this additional wearable device affects the user's comfort.
2. The terminal controls the display based on detection of the direction of the user's head and body, which requires the user to use it in a relatively open space and places an environmental restriction on use.
3. The terminal can only control rotation effects in the horizontal and vertical directions, i.e. it can only control changes of the display viewing angle.
The solution of Patent Document 2 includes a helmet, an eyeball scanner, an eyelid tracking device, a receiving module, a processing module, and a display device. The helmet is the carrier of the whole virtual reality system, and the other modules and components are mounted on it. The eyeball scanning module detects the iris and retina information of the user's eyeballs; the eyelid tracking device detects the form of the user's eyelid movements; the receiving module receives the detection data of the eyeball scanner and the eyelid tracking device; the processing module compares and analyzes the eye movement information to determine the movement state of the eyeballs; and the display device displays information according to the movement state of the eyeballs.
Although the terminal implemented in Patent Document 2 can control the display, it has the following disadvantages:
1. The terminal requires the user to wear a helmet-like device, and this additional wearable device affects the user's comfort.
2. The terminal can only detect the horizontal direction and cannot control the full range of viewing angles or the depth of field.
The above two types of methods either require a wearable device, which affects user comfort, or can detect only the horizontal direction and cannot control the full range of viewing angles and the depth of field, giving a poor user experience.
Therefore, the present disclosure provides a display content control method and apparatus, and a computer-readable medium, which avoid one or more of the problems caused by the limitations and disadvantages of the related art.
The present disclosure is further described in detail below with reference to the accompanying drawings. It should be understood that the specific embodiments described here merely explain the present disclosure and do not limit it.
The display content control method mainly realizes dynamic adjustment of the display viewing angle and depth of field according to the user's behavior, so that both are adjusted toward the target the user expects. Its core is the method of analyzing the user's intent. Generally, movements of the user's head, body, and eyeballs are direct reflections of the user's intent; in addition, changes in the user's facial expression also reflect that intent. Therefore, analysis of the movements of the user's body and organs, or of the user's expression, can in principle serve as a data source for display control.
In one aspect, the present disclosure provides a display content control method. FIG. 1 is a flowchart of a display content control method according to an embodiment of the present disclosure. As shown in FIG. 1, the method includes the following steps S101 to S104.
At step S101, sensor data and a face image are separately collected.
In some embodiments, the sensor data includes data collected by an accelerometer, a gyroscope, and a distance sensor. The face image may be collected by, for example, a camera.
At step S102, a relative position of the face and the terminal screen is calculated according to the sensor data, wherein the relative position of the face and the terminal screen includes a relative distance and a relative angle between the face and the terminal screen.
In some embodiments, after the relative position of the face and the terminal screen is calculated according to the sensor data, the method further includes: determining whether the sensor data, or the change in the sensor data, exceeds a preset threshold; and, when the preset threshold is exceeded, recalculating the relative position of the face and the terminal screen.
In some embodiments, calculating the relative position of the face and the terminal screen includes the following steps: obtaining the relative distance between the face and the terminal screen from the collected distance between the terminal screen and the face; obtaining the normal and tangential directions of the terminal screen from the collected attitude information of the terminal; obtaining a feature plane from a plurality of feature points of the collected face image; and calculating the deflection angles of the feature plane in the normal and tangential directions to obtain the relative angle between the face and the terminal screen.
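For the attitude step, one conventional way to turn gyroscope-derived attitude angles into the screen's normal direction is sketched below in Python; the rotation order and axis conventions are illustrative assumptions, not details given in the disclosure. Any tangential direction of the screen is then simply a vector perpendicular to this normal.

```python
import math

def screen_normal(roll, pitch, yaw):
    """Unit normal of the screen in world coordinates from terminal
    attitude angles in radians. Assumes the normal is (0, 0, 1) when the
    device lies flat and the rotation order R = Rz(yaw) @ Ry(pitch) @
    Rx(roll); both conventions are assumptions for this sketch."""
    sa, ca = math.sin(roll), math.cos(roll)
    sb, cb = math.sin(pitch), math.cos(pitch)
    sc, cc = math.sin(yaw), math.cos(yaw)
    # Rotate the rest normal (0, 0, 1) by Rx, then Ry, then Rz.
    x, y, z = ca * sb, -sa, ca * cb
    return (x * cc - y * sc, x * sc + y * cc, z)
```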
In some embodiments, the plurality of feature points of the face may be the corners of the two eyes and the tip of the nose. In the structure of the human face, the eye corners and the nose tip approximately form an isosceles triangle; hence, when the face is perpendicular to the normal of the terminal screen, the observed shape should also be such a triangle. As the user's face moves, this triangle changes accordingly. From the changes in the lengths of its three sides, the deflection angles of the triangle in the normal and tangential directions can be calculated in three-dimensional space, yielding the relative angle between the face and the terminal screen when the user looks straight ahead.
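A minimal Python sketch of such a triangle-based estimate follows. It assumes 2D landmark coordinates from the camera image that have already been normalized for the face-screen distance (for example, using the distance-sensor reading), and a reference triangle captured while the user faced the screen straight on; the function name, parameters, and the cosine-foreshortening model are illustrative assumptions rather than the patented implementation.

```python
import math

def triangle_deflection(left_eye, right_eye, nose_tip, ref_base, ref_height):
    """Estimate face deflection angles from the eye-corner/nose-tip triangle.

    left_eye, right_eye, nose_tip: (x, y) landmark coordinates in the
    current frame, assumed normalized for the face-screen distance.
    ref_base, ref_height: base width and height of the same triangle
    captured while the user faced the screen straight on.
    """
    base = math.dist(left_eye, right_eye)
    mid = ((left_eye[0] + right_eye[0]) / 2.0,
           (left_eye[1] + right_eye[1]) / 2.0)
    height = math.dist(mid, nose_tip)

    # A left/right (yaw) turn foreshortens the horizontal base and an
    # up/down (pitch) turn foreshortens the vertical height, roughly as
    # cos(angle); clamp the ratios into acos's domain.
    yaw = math.acos(max(-1.0, min(1.0, base / ref_base)))
    pitch = math.acos(max(-1.0, min(1.0, height / ref_height)))

    # The half-base on the side turned away from the camera appears
    # shorter, which supplies the sign of the yaw.
    if math.dist(left_eye, nose_tip) < math.dist(right_eye, nose_tip):
        yaw = -yaw
    return yaw, pitch  # radians, relative to the screen normal
```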
At step S103, facial feature data is extracted from the face image, and a viewing angle adjustment signal and a depth of field adjustment signal are respectively obtained according to the facial feature data combined with the relative position of the face and the terminal screen.
In some embodiments, obtaining the viewing angle adjustment signal according to the facial feature data combined with the relative position of the face and the terminal screen includes the following steps: calculating the angle of the user's line of sight relative to the face according to the facial feature data; obtaining the angle of the user's line of sight relative to the terminal screen from the relative angle between the face and the terminal screen and the angle of the user's line of sight relative to the face; obtaining the position of the user's line of sight on the terminal screen using the angle of the user's line of sight relative to the terminal screen and the relative position of the face and the terminal screen; and determining, from the position of the user's line of sight on the terminal screen, the user's viewing angle adjustment requirement to obtain the viewing angle adjustment signal.
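As a rough illustration of the last three steps, the gaze ray can be intersected with the screen plane; the small-angle summation and all parameter names below are assumptions made for the sketch, not values from the disclosure.

```python
import math

def gaze_point_on_screen(face_yaw, face_pitch, gaze_yaw, gaze_pitch,
                         distance_mm, origin_px, px_per_mm):
    """Project the user's line of sight onto the screen plane.

    face_yaw, face_pitch: face-vs-screen angles in radians.
    gaze_yaw, gaze_pitch: line-of-sight-vs-face angles in radians.
    distance_mm: relative face-screen distance.
    origin_px: pixel directly facing the user; px_per_mm: display scale.
    """
    # Small-angle simplification: the total deflection from the screen
    # normal is approximated by summing the two component angles.
    yaw = face_yaw + gaze_yaw
    pitch = face_pitch + gaze_pitch

    # Intersect the gaze ray with the screen plane.
    x = origin_px[0] + distance_mm * math.tan(yaw) * px_per_mm
    y = origin_px[1] - distance_mm * math.tan(pitch) * px_per_mm  # y grows downward
    return x, y  # viewpoint position in screen pixels
```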
In some embodiments, obtaining the depth of field adjustment signal according to the facial feature data combined with the relative position of the face and the terminal screen includes the following steps: analyzing the change trend of the facial feature data and, in combination with the relative position of the face and the terminal screen, determining the user's depth of field adjustment requirement to obtain the depth of field adjustment signal.
In embodiments of the present disclosure, examples of the facial feature data include one or more of the following: the distance of each eyeball from the eye corners, the shape of the figure formed by the eye socket and the eyeball, the shape of the eyebrows, forehead muscle texture features, eye socket muscle texture features, nose muscle texture features, and the distance between the eyebrows and the hairline.
Accordingly, in some embodiments, calculating the angle of the user's line of sight relative to the face according to the facial feature data includes the following step: calculating the angle of the user's line of sight relative to the face from the distance of each eyeball from the eye corners and from the shape of the figure formed by the eye socket and the eyeball.
In some embodiments, calculating the angle of the user's line of sight relative to the face from the distance of each eyeball from the eye corners and the shape of the figure formed by the eye socket and the eyeball includes the following steps: calculating the horizontal angle of the user's line of sight relative to the face from the distance of each eyeball from the eye corners; calculating the vertical angle of the user's line of sight relative to the face from the shape of the figure formed by the eye socket and the eyeball; and obtaining the angle of the user's line of sight relative to the face from the horizontal and vertical angles.
In the structure of the human eye, the eyeball is symmetrically centered. When the eyeballs deflect left or right, the distances from each eyeball to its respective eye corners change in opposite directions, from which the horizontal angle of the user's line of sight relative to the face can be calculated. When the eyeballs deflect up or down, the shapes of the eyeball and eye socket change, and the vertical angle of the line of sight relative to the face is calculated from the shape of the eye socket and the visible proportion of the eyeball.
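A sketch of how these two cues could be mapped to angles; the normalization and the angular limits are assumptions made for illustration, not values given in the disclosure.

```python
def gaze_angles(d_inner, d_outer, visible_iris_ratio,
                max_yaw=0.6, max_pitch=0.4):
    """Rough gaze angles relative to the face from 2D eye features.

    d_inner, d_outer: distances from the iris centre to the inner and
    outer eye corners; they are equal when the user looks straight
    ahead, so their normalized asymmetry tracks horizontal deflection.
    visible_iris_ratio: fraction of the iris visible inside the eye
    socket, about 0.5 at rest; its deviation tracks vertical deflection.
    max_yaw, max_pitch: assumed physiological limits in radians.
    """
    horizontal = (d_inner - d_outer) / (d_inner + d_outer)  # in [-1, 1]
    vertical = 2.0 * (visible_iris_ratio - 0.5)             # in [-1, 1]
    return horizontal * max_yaw, vertical * max_pitch
```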
The depth of field is controlled in the present disclosure mainly by analyzing the expression of the user's eye region. Human eye-region expressions are produced mainly by the corrugator, orbicularis oculi, nasalis, and frontalis muscles, and appear in an expression as muscle changes around the eyebrows, eye sockets, nose, and forehead, together with the strength and trend of those changes. For example, when the user's eyebrow and forehead muscles squeeze inward and the eye socket and nose muscles contract, and this is held beyond a preset time, it can be determined that the user intends to pull the depth of field closer; when the eyebrow and forehead muscles push outward and the eye socket and nose muscles expand beyond the preset time, it can be determined that the user intends to push the depth of field farther away.
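One way to debounce that expression cue in code is sketched below; the scalar squeeze score, the threshold, and the hold time are illustrative assumptions, not parameters from the disclosure.

```python
import time

class DepthOfFieldIntent:
    """Emit +1 (pull the depth of field closer) or -1 (push it away)
    only after the squeeze trend has persisted beyond hold_s seconds,
    mirroring the preset-time check described above."""

    def __init__(self, threshold=0.2, hold_s=0.8):
        self.threshold = threshold      # minimum trend strength
        self.hold_s = hold_s            # preset hold time in seconds
        self._state = 0                 # current trend: -1, 0, or +1
        self._since = time.monotonic()  # when the current trend began

    def update(self, squeeze):
        """squeeze > 0: brow/forehead pressed inward with eye-socket and
        nose contraction; squeeze < 0: the opposite, expanding trend."""
        state = 1 if squeeze > self.threshold else (
            -1 if squeeze < -self.threshold else 0)
        now = time.monotonic()
        if state != self._state:
            self._state, self._since = state, now  # restart the timer
            return 0
        if state != 0 and now - self._since >= self.hold_s:
            return state  # held long enough: emit the adjustment signal
        return 0
```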
Based on the above examples of facial feature data, when the change trend of the facial feature data is analyzed and, in combination with the relative position of the face and the terminal screen, the user's depth of field adjustment requirement is determined to obtain the depth of field adjustment signal, the change trend of the facial feature data includes one or more of the following: the squeezing trend of the eyebrow shape and forehead muscle texture (or of the eyebrow muscle texture and forehead muscle texture), changes in the distance between the eyebrows and the hairline, and contraction or expansion trends of the eye socket muscle texture features and nose muscle texture features.
At step S104, the display content of the terminal screen is controlled according to the viewing angle adjustment signal and the depth of field adjustment signal.
The display content control method provided by the embodiments of the present disclosure can obtain a viewing angle adjustment signal and a depth of field adjustment signal from a face image combined with the relative position of the face and the terminal screen, and control the display content of the terminal screen accordingly, improving the user experience.
Below, the display content control method of the embodiments of the present disclosure is described in more detail by way of example. In the following examples, depth of field refers to how near or far the virtual video appears; viewing angle refers to the direction in which the virtual video is displayed; and attitude refers to the angles between different planes in space.
FIG. 2 is a schematic flowchart of a display content control method for a VR terminal according to an embodiment of the present disclosure. As shown in FIG. 2, the method may include the following steps S201 to S208.
At step S201, after VR video playback starts, each sensor is checked and relevant data is collected; the relative position of the terminal screen and the user's face is calculated, and the viewing angle and depth of field of video playback are initialized.
At step S202, the system checks whether the data of the accelerometer, the distance sensor, and the gyroscope exceeds a preset threshold; if so, the flow proceeds to step S203, otherwise to step S204.
At step S203, the relative position of the terminal screen and the user's face is calculated from the sensor data and used as a calculation parameter for depth of field and viewing angle control.
At step S204, the user's facial feature data is extracted from the image captured by the camera, including the feature triangle formed by the eye corners and the nose tip, the figure formed by the eye socket and the eyeball, the shape of the eyebrows, the shape of the eye sockets, forehead muscle texture features, the distance between the eyebrows and the hairline, nose muscle texture features, and so on.
At step S205, from the user's facial feature parameters, the squeezing trend of the eyebrow and forehead muscle texture, the change in the distance between the eyebrows and the hairline, and the contraction and expansion trends of the eye socket and nose muscle texture are calculated, and the direction and speed of depth of field control are computed according to preset parameters.
At step S206, the angle between the user's face plane and the terminal screen is calculated from the user's facial feature parameters, and the angle of the eyes' line of sight relative to the user's face plane is calculated from the positions of the eyeballs relative to the eye sockets and the shapes of the eyeball and eye socket. The two spatial angles are then combined to calculate the angle of the user's line of sight relative to the display plane; combined with the relative position of the terminal screen and the user's face, the position of the user's viewpoint on the terminal display plane is calculated, and the direction and speed of the viewing angle change are analyzed according to preset parameters.
At step S207, the depth of field and viewing angle adjustment parameters are converted into corresponding control signals, and the display device is controlled to adjust the display content.
At step S208, it is judged whether an end-of-playback signal has been detected; if not, the flow returns to step S202; otherwise, playback ends and the flow ends.
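Putting steps S201 to S208 together, the playback loop can be sketched as follows; every collaborator object and method name is a duck-typed placeholder for a platform API the flow assumes, not part of the disclosure.

```python
def playback_loop(sensors, camera, analyzer, display, threshold):
    """Skeleton of the S201-S208 control flow."""
    rel_pos = analyzer.relative_position(sensors.read())      # S201
    display.init_view_and_depth(rel_pos)
    while not display.playback_ended():                       # S208
        data = sensors.read()
        if analyzer.exceeds(data, threshold):                 # S202
            rel_pos = analyzer.relative_position(data)        # S203
        features = analyzer.face_features(camera.frame())     # S204
        depth = analyzer.depth_signal(features, rel_pos)      # S205
        view = analyzer.view_signal(features, rel_pos)        # S206
        display.apply(view, depth)                            # S207
```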
At present, display content control methods are implemented mainly through wearable devices, chiefly to fix the relative position of the terminal and the user and to make the terminal follow the user's motion so that the terminal's sensors can perceive the type of the user's motion. In the present disclosure, a relative attitude approach is adopted: the terminal uses sensor devices such as an accelerometer, a gyroscope, a camera, and a distance sensor to measure and analyze the relative position and attitude between the user and the terminal screen and to analyze the user's relative motion type, and controls the terminal display on that basis.
In another aspect, the present disclosure further provides a display content control apparatus. FIG. 3 is a schematic structural diagram of a display content control apparatus according to an embodiment of the present disclosure. As shown in FIG. 3, the apparatus includes an acquisition module 30, a relative position analysis module 32, a data analysis module 34, and a display module 36.
The acquisition module 30 is configured to separately collect sensor data and a face image.
In some embodiments, the acquisition module includes an accelerometer, a gyroscope, a distance sensor, and a camera. The distance sensor collects the distance between the terminal and the face; the gyroscope collects the attitude information of the terminal; the camera collects the face image; and the accelerometer collects the motion information of the terminal.
The relative position analysis module 32 is configured to calculate a relative position of the face and the terminal screen according to the sensor data, wherein the relative position of the face and the terminal screen includes a relative distance and a relative angle between the face and the terminal screen.
In some embodiments, the relative position analysis module 32 is further configured to: determine whether the sensor data, or the change in the sensor data, exceeds a preset threshold; and, when the preset threshold is exceeded, recalculate the relative position of the face and the terminal screen.
The data analysis module 34 is configured to extract facial feature data from the face image and, according to the facial feature data combined with the relative position of the face and the terminal screen, respectively obtain a viewing angle adjustment signal and a depth of field adjustment signal.
In some embodiments, the data analysis module 34 includes a face analysis module, a viewing angle analysis control module, and a depth of field analysis control module.
The face analysis module is configured to extract facial feature data from the face image.
The viewing angle analysis control module is configured to: calculate the angle of the user's line of sight relative to the face according to the facial feature data; obtain the angle of the user's line of sight relative to the terminal screen from the relative angle between the face and the terminal screen and the angle of the user's line of sight relative to the face; obtain the position of the user's line of sight on the terminal screen using the angle of the user's line of sight relative to the terminal screen and the relative position of the face and the terminal screen; and determine, from the position of the user's line of sight on the terminal screen, the user's viewing angle adjustment requirement to obtain the viewing angle adjustment signal.
The depth of field analysis control module is configured to: analyze the change trend of the facial feature data and, in combination with the relative position of the face and the terminal screen, determine the user's depth of field adjustment requirement to obtain the depth of field adjustment signal.
The display module 36 is configured to control the display content of the terminal screen according to the viewing angle adjustment signal and the depth of field adjustment signal.
In embodiments of the present disclosure, examples of the facial feature data include one or more of the following: the distance of each eyeball from the eye corners, the shape of the figure formed by the eye socket and the eyeball, the shape of the eyebrows, forehead muscle texture features, eye socket muscle texture features, nose muscle texture features, and the distance between the eyebrows and the hairline.
Accordingly, in some embodiments, the viewing angle analysis control module is configured to: calculate the angle of the user's line of sight relative to the face from the distance of each eyeball from the eye corners and from the shape of the figure formed by the eye socket and the eyeball.
In some embodiments, the viewing angle analysis control module is configured to: calculate the horizontal angle of the user's line of sight relative to the face from the distance of each eyeball from the eye corners; calculate the vertical angle of the user's line of sight relative to the face from the shape of the figure formed by the eye socket and the eyeball; and obtain the angle of the user's line of sight relative to the face from the horizontal and vertical angles.
For a specific implementation of the display content control apparatus provided by the embodiments of the present disclosure, reference may be made to the description of the foregoing method embodiments; details are not repeated here.
The display content control apparatus provided by the embodiments of the present disclosure requires no wearable equipment: from the face image alone, combined with the relative position of the face and the terminal screen, it can obtain the viewing angle adjustment signal and the depth of field adjustment signal and control the display content of the terminal screen accordingly, improving the user experience.
In another aspect, the present disclosure further provides a computer-readable storage medium. In some embodiments, the computer-readable storage medium stores a display content control program which, when executed by a processor, implements the steps of the display content control method described herein.
The computer-readable storage medium provided by the embodiments of the present disclosure can obtain the viewing angle adjustment signal and the depth of field adjustment signal from the face image combined with the relative position of the face and the terminal screen, and thereby control the display content of the terminal screen, improving the user experience.
The above are merely exemplary embodiments of the present disclosure and are not intended to limit it; various modifications and variations will occur to those skilled in the art. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present disclosure shall fall within the scope of the claims of the present disclosure.
Claims (19)
- 1. A display content control method applied to a terminal, comprising: separately collecting sensor data and a face image; calculating a relative position of a face and a terminal screen according to the sensor data, wherein the relative position of the face and the terminal screen comprises a relative distance and a relative angle between the face and the terminal screen; extracting facial feature data from the face image and, according to the facial feature data combined with the relative position of the face and the terminal screen, respectively obtaining a viewing angle adjustment signal and a depth of field adjustment signal; and controlling display content of the terminal screen according to the viewing angle adjustment signal and the depth of field adjustment signal.
- 2. The display content control method according to claim 1, wherein, after the relative position of the face and the terminal screen is calculated according to the sensor data, the method further comprises: determining whether the sensor data, or the change in the sensor data, exceeds a preset threshold; and, when the preset threshold is exceeded, recalculating the relative position of the face and the terminal screen.
- 3. The display content control method according to claim 1 or 2, wherein calculating the relative position of the face and the terminal screen comprises: obtaining the relative distance between the face and the terminal screen from the collected distance between the terminal screen and the face; obtaining the normal and tangential directions of the terminal screen from the collected attitude information of the terminal; obtaining a feature plane from a plurality of feature points of the collected face image; and calculating the deflection angles of the feature plane in the normal and tangential directions to obtain the relative angle between the face and the terminal screen.
- 4. The display content control method according to claim 1, wherein obtaining the viewing angle adjustment signal according to the facial feature data combined with the relative position of the face and the terminal screen comprises: calculating the angle of the user's line of sight relative to the face according to the facial feature data; obtaining the angle of the user's line of sight relative to the terminal screen from the relative angle between the face and the terminal screen and the angle of the user's line of sight relative to the face; obtaining the position of the user's line of sight on the terminal screen using the angle of the user's line of sight relative to the terminal screen and the relative position of the face and the terminal screen; and determining, from the position of the user's line of sight on the terminal screen, the user's viewing angle adjustment requirement to obtain the viewing angle adjustment signal.
- 5. The display content control method according to claim 1, wherein obtaining the depth of field adjustment signal according to the facial feature data combined with the relative position of the face and the terminal screen comprises: analyzing the change trend of the facial feature data and, in combination with the relative position of the face and the terminal screen, determining the user's depth of field adjustment requirement to obtain the depth of field adjustment signal.
- 6. The display content control method according to any one of claims 1 to 5, wherein the facial feature data comprises one or more of the following: the distance of each eyeball from the eye corners, the shape of the figure formed by the eye socket and the eyeball, the shape of the eyebrows, forehead muscle texture features, eye socket muscle texture features, nose muscle texture features, and the distance between the eyebrows and the hairline.
- 7. The display content control method according to claim 4, wherein the facial feature data comprises the distance of each eyeball from the eye corners and the shape of the figure formed by the eye socket and the eyeball, and wherein calculating the angle of the user's line of sight relative to the face according to the facial feature data comprises: calculating the angle of the user's line of sight relative to the face from the distance of each eyeball from the eye corners and the shape of the figure formed by the eye socket and the eyeball.
- 8. The display content control method according to claim 7, wherein calculating the angle of the user's line of sight relative to the face from the distance of each eyeball from the eye corners and the shape of the figure formed by the eye socket and the eyeball comprises: calculating the horizontal angle of the user's line of sight relative to the face from the distance of each eyeball from the eye corners; calculating the vertical angle of the user's line of sight relative to the face from the shape of the figure formed by the eye socket and the eyeball; and obtaining the angle of the user's line of sight relative to the face from the horizontal angle of the user's line of sight relative to the face and the vertical angle of the user's line of sight relative to the face.
- 9. The display content control method according to claim 5, wherein the change trend of the facial feature data comprises one or more of the following: the squeezing trend of the eyebrow shape and forehead muscle texture, or of the eyebrow muscle texture and forehead muscle texture; changes in the distance between the eyebrows and the hairline; and contraction or expansion trends of the eye socket muscle texture features and nose muscle texture features.
- 10. A display content control apparatus, comprising: an acquisition module configured to separately collect sensor data and a face image; a relative position analysis module configured to calculate a relative position of a face and a terminal screen according to the sensor data, wherein the relative position of the face and the terminal screen comprises a relative distance and a relative angle between the face and the terminal screen; a data analysis module configured to extract facial feature data from the face image and, according to the facial feature data combined with the relative position of the face and the terminal screen, respectively obtain a viewing angle adjustment signal and a depth of field adjustment signal; and a display module configured to control display content of the terminal screen according to the viewing angle adjustment signal and the depth of field adjustment signal.
- 11. The display content control apparatus according to claim 10, wherein the relative position analysis module is further configured to: determine whether the sensor data, or the change in the sensor data, exceeds a preset threshold; and, when the preset threshold is exceeded, recalculate the relative position of the face and the terminal screen.
- 12. The display content control apparatus according to claim 10 or 11, wherein the relative position analysis module is configured to: obtain the relative distance between the face and the terminal screen from the collected distance between the terminal screen and the face; obtain the normal and tangential directions of the terminal screen from the collected attitude information of the terminal; obtain a feature plane from a plurality of feature points of the collected face image; and calculate the deflection angles of the feature plane in the normal and tangential directions to obtain the relative angle between the face and the terminal screen.
- 13. The display content control apparatus according to claim 10, wherein the data analysis module is configured to: calculate the angle of the user's line of sight relative to the face according to the facial feature data; obtain the angle of the user's line of sight relative to the terminal screen from the relative angle between the face and the terminal screen and the angle of the user's line of sight relative to the face; obtain the position of the user's line of sight on the terminal screen using the angle of the user's line of sight relative to the terminal screen and the relative position of the face and the terminal screen; and determine, from the position of the user's line of sight on the terminal screen, the user's viewing angle adjustment requirement to obtain the viewing angle adjustment signal.
- 14. The display content control apparatus according to claim 10, wherein the data analysis module is configured to: analyze the change trend of the facial feature data and, in combination with the relative position of the face and the terminal screen, determine the user's depth of field adjustment requirement to obtain the depth of field adjustment signal.
- 15. The display content control apparatus according to any one of claims 10 to 14, wherein the facial feature data comprises one or more of the following: the distance of each eyeball from the eye corners, the shape of the figure formed by the eye socket and the eyeball, the shape of the eyebrows, forehead muscle texture features, eye socket muscle texture features, nose muscle texture features, and the distance between the eyebrows and the hairline.
- 16. The display content control apparatus according to claim 13, wherein the facial feature data comprises the distance of each eyeball from the eye corners and the shape of the figure formed by the eye socket and the eyeball, and wherein the data analysis module is configured to: calculate the angle of the user's line of sight relative to the face from the distance of each eyeball from the eye corners and the shape of the figure formed by the eye socket and the eyeball.
- 17. The display content control apparatus according to claim 16, wherein the data analysis module is configured to: calculate the horizontal angle of the user's line of sight relative to the face from the distance of each eyeball from the eye corners; calculate the vertical angle of the user's line of sight relative to the face from the shape of the figure formed by the eye socket and the eyeball; and obtain the angle of the user's line of sight relative to the face from the horizontal angle of the user's line of sight relative to the face and the vertical angle of the user's line of sight relative to the face.
- 18. The display content control apparatus according to claim 14, wherein the change trend of the facial feature data comprises one or more of the following: the squeezing trend of the eyebrow shape and forehead muscle texture, or of the eyebrow muscle texture and forehead muscle texture; changes in the distance between the eyebrows and the hairline; and contraction or expansion trends of the eye socket muscle texture features and nose muscle texture features.
- 19. A computer-readable storage medium having a program stored thereon, wherein the program, when executed by a processor, implements the method according to any one of claims 1 to 9.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201710908991.7A CN109584285B (zh) | 2017-09-29 | 2017-09-29 | Display content control method and apparatus, and computer-readable medium |
| CN201710908991.7 | 2017-09-29 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019062852A1 (zh) | 2019-04-04 |
Family
ID=65900842
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2018/108320 Ceased WO2019062852A1 (zh) | Display content control method and apparatus, and computer-readable medium |
Country Status (2)
| Country | Link |
|---|---|
| CN (1) | CN109584285B (zh) |
| WO (1) | WO2019062852A1 (zh) |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113253831A (zh) * | 2020-02-12 | 2021-08-13 | 北京七鑫易维信息技术有限公司 | Eye tracking method and apparatus for screen-rotating terminal devices |
| CN114119851A (zh) * | 2021-12-07 | 2022-03-01 | 上海完美时空软件有限公司 | Light and shadow rendering method, apparatus, device, and storage medium |
Families Citing this family (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN111931742A (zh) * | 2020-09-30 | 2020-11-13 | 苏宁金融科技(南京)有限公司 | APP login verification method and apparatus, and computer-readable storage medium |
| CN113760156A (zh) * | 2021-02-08 | 2021-12-07 | 北京沃东天骏信息技术有限公司 | Method and apparatus for adjusting terminal screen display |
| CN113377216A (zh) * | 2021-06-24 | 2021-09-10 | 深圳市梵盛达技术有限公司 | Thermometer data transmission method and apparatus, thermometer, and storage medium |
| CN115933874B (zh) * | 2022-11-23 | 2023-08-29 | 深圳市江元智造科技有限公司 | Intelligent sliding control method, system, and storage medium based on face control |
Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101989126A (zh) * | 2009-08-07 | 2011-03-23 | 深圳富泰宏精密工业有限公司 | Handheld electronic device and method for automatically rotating its screen display |
| CN102752438A (zh) * | 2011-04-20 | 2012-10-24 | 中兴通讯股份有限公司 | Method and apparatus for automatically adjusting terminal interface display |
| US20150128075A1 (en) * | 2012-05-11 | 2015-05-07 | Umoove Services Ltd. | Gaze-based automatic scrolling |
| CN106843821A (zh) * | 2015-12-07 | 2017-06-13 | 百度在线网络技术(北京)有限公司 | Method and apparatus for automatically adjusting a screen |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| AU2011364912B2 (en) * | 2011-04-06 | 2014-03-20 | Hisense Electric Co., Ltd | Method, device, system, television and stereo glasses for adjusting stereo image |
| CN102917232B (zh) * | 2012-10-23 | 2014-12-24 | 深圳创维-Rgb电子有限公司 | Adaptive 3D display adjustment method and apparatus based on face recognition |
| CN103248822B (zh) * | 2013-03-29 | 2016-12-07 | 东莞宇龙通信科技有限公司 | Focusing method for a camera terminal, and camera terminal |
| CN103412647B (zh) * | 2013-08-13 | 2016-07-06 | 广东欧珀移动通信有限公司 | Page display control method based on face recognition, and mobile terminal |
| CN104581113B (zh) * | 2014-12-03 | 2018-05-15 | 深圳市魔眼科技有限公司 | Adaptive holographic display method and holographic display apparatus based on viewing angle |
| CN104539924A (zh) * | 2014-12-03 | 2015-04-22 | 深圳市亿思达科技集团有限公司 | Holographic display method and holographic display apparatus based on human eye tracking |
| CN107132920A (zh) * | 2017-05-03 | 2017-09-05 | 三星电子(中国)研发中心 | Method and apparatus for adjusting the viewing angle of screen content |
- 2017-09-29: filed in China as application CN201710908991.7A; granted as patent CN109584285B (Active)
- 2018-09-28: international application PCT/CN2018/108320 filed; published as WO2019062852A1 (Ceased)
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101989126A (zh) * | 2009-08-07 | 2011-03-23 | 深圳富泰宏精密工业有限公司 | Handheld electronic device and method for automatically rotating its screen display |
| CN102752438A (zh) * | 2011-04-20 | 2012-10-24 | 中兴通讯股份有限公司 | Method and apparatus for automatically adjusting terminal interface display |
| US20150128075A1 (en) * | 2012-05-11 | 2015-05-07 | Umoove Services Ltd. | Gaze-based automatic scrolling |
| CN106843821A (zh) * | 2015-12-07 | 2017-06-13 | 百度在线网络技术(北京)有限公司 | Method and apparatus for automatically adjusting a screen |
Cited By (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113253831A (zh) * | 2020-02-12 | 2021-08-13 | 北京七鑫易维信息技术有限公司 | Eye tracking method and apparatus for screen-rotating terminal devices |
| CN114119851A (zh) * | 2021-12-07 | 2022-03-01 | 上海完美时空软件有限公司 | Light and shadow rendering method, apparatus, device, and storage medium |
Also Published As
| Publication number | Publication date |
|---|---|
| CN109584285A (zh) | 2019-04-05 |
| CN109584285B (zh) | 2024-03-29 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US11836289B2 (en) | Use of eye tracking to adjust region-of-interest (ROI) for compressing images for transmission | |
| WO2019062852A1 (zh) | Display content control method and apparatus, and computer-readable medium | |
| US10739849B2 (en) | Selective peripheral vision filtering in a foveated rendering system | |
| US10720128B2 (en) | Real-time user adaptive foveated rendering | |
| CN109086726B (zh) | Local image recognition method and system based on AR smart glasses | |
| US20200341284A1 (en) | Information processing apparatus, information processing method, and recording medium | |
| US10372205B2 (en) | Reducing rendering computation and power consumption by detecting saccades and blinks | |
| KR101892735B1 (ko) | Intuitive interaction apparatus and method | |
| WO2011158511A1 (ja) | Instruction input device, instruction input method, program, recording medium and integrated circuit | |
| KR101613091B1 (ko) | Gaze tracking apparatus and method | |
| JP6294054B2 (ja) | Video display device, video presentation method, and program | |
| CN104464579A (zh) | Data display method, apparatus and terminal, and display control method and apparatus | |
| US11579690B2 (en) | Gaze tracking apparatus and systems | |
| JP2018196730A (ja) | Method and system for monitoring eye position | |
| CN114661152B (zh) | AR display control system and method for reducing visual fatigue | |
| CN112585673B (zh) | Information processing device, information processing method, and program | |
| CN111367405A (zh) | Adjustment method and apparatus for a head-mounted display device, computer device, and storage medium | |
| US20170160797A1 (en) | User-input apparatus, method and program for user-input | |
| US20250238078A1 (en) | Method and system of gaze-mapping in real-world environment | |
| JP2006285531A (ja) | Gaze direction detection device, gaze direction detection method, and program for causing a computer to execute the method | |
| US12405662B2 (en) | Screen interaction using EOG coordinates | |
| US12346497B1 (en) | Filtering of gaze tracking information to trigger reading control mode | |
| CN117930970A (zh) | Electronic device control method, apparatus, device and medium |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18861829; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 18861829; Country of ref document: EP; Kind code of ref document: A1 |