
CN110673720A - Eye protection display method and learning machine with eye protection mode - Google Patents

Info

Publication number
CN110673720A
CN110673720A (application CN201910806532.7A)
Authority
CN
China
Prior art keywords
viewer
display
face
preset
evaluation information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910806532.7A
Other languages
Chinese (zh)
Inventor
郑艳霞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unisound Intelligent Technology Co Ltd
Original Assignee
Unisound Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unisound Intelligent Technology Co Ltd filed Critical Unisound Intelligent Technology Co Ltd
Priority to CN201910806532.7A priority Critical patent/CN110673720A/en
Publication of CN110673720A publication Critical patent/CN110673720A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/70Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer
    • G06F21/82Protecting input, output or interconnection devices
    • G06F21/84Protecting input, output or interconnection devices output devices, e.g. displays or monitors
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09BEDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00Electrically-operated educational appliances
    • G09B5/02Electrically-operated educational appliances with visual presentation of the material to be studied, e.g. using film strip

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Educational Technology (AREA)
  • Educational Administration (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention provides an eye protection display method and a learning machine with an eye protection mode. The method adaptively adjusts the current display state by acquiring attribute information about any one of the current display content, the viewer, and the external environment. The adaptive adjustment covers several distinct display adjustment modes (whether to terminate the display operation, adjusting the display brightness, and adjusting the display duration) and can also adjust the display mode in a targeted way for the current viewer's viewing action and the external environment. This avoids applying a single, uniform display adjustment to all viewers, so the learning machine can make an appropriate eye protection adjustment for each different viewer.

Description

Eye protection display method and learning machine with eye protection mode
Technical Field
The invention relates to the technical field of intelligent learning machines, in particular to an eye protection display method and a learning machine with an eye protection mode.
Background
At present, learning machines with screens are widely used to help children study, but when children watch video images on a learning machine for long periods, their eyesight is inevitably harmed. To protect children's eye health, existing learning machines usually limit the usage time and monitor the child's viewing distance during use, so as to prevent bad viewing habits while the child uses the machine. Although this control mode can effectively protect children's eyesight, it cannot perform personalized control according to the actual conditions of different children; it can only impose a single limit on usage duration. In addition, a distance sensor must be fitted to the learning machine to detect the viewing distance, which inevitably increases its production cost. Therefore, existing learning machines cannot adapt the eye protection mode to the actual situations of different individuals and must rely on different types of sensors to monitor them.
Disclosure of Invention
Aiming at the defects in the prior art, the invention provides an eye protection display method and a learning machine with an eye protection mode. The method adaptively adjusts the current display state by acquiring attribute information about any one of the current display content, the viewer, and the external environment. The adaptive adjustment covers several distinct display adjustment modes (whether to terminate the display operation, adjusting the display brightness, and adjusting the display duration) and can also adjust the display mode in a targeted way for the current viewer's viewing action and the external environment. This avoids applying a single, uniform display adjustment to all viewers, so the learning machine can make an appropriate eye protection adjustment for each different viewer. In addition, the method and the learning machine do not need extra external devices such as a distance sensor: the relevant information about the viewer and the current environment is obtained directly through the learning machine's built-in camera and processed by an adaptive algorithm to determine a display mode suitable for the current viewer, which effectively reduces the cost of the learning machine and improves its operating efficiency.
The invention provides an eye protection display method which is characterized by comprising the following steps:
the method comprises the steps of (1) obtaining attribute information of any one of display content, a viewer and an external environment, and determining evaluation information of a current display state according to the attribute information;
step (2), judging the validity of any one of the current display content, the identity of the viewer and the viewing action of the viewer according to the evaluation information;
step (3), according to the judgment result about the legality of any one of the current display content, the viewer identity and the viewer watching action, adjusting the current display state;
further, in the step (1), the acquiring attribute information on any one of the display content, the viewer and the external environment, and determining evaluation information on the current display state according to the attribute information specifically includes,
step (101), acquiring current display text and/or image information as attribute information of the display content, or acquiring facial images of different angles of a viewer as attribute information of the viewer, or acquiring brightness data of a face area and a non-face area of the viewer as attribute information of the external environment;
step (102), when the attribute information is the display text and/or image information, extracting symbol pixel characteristics of the display text and/or image information, and calculating the extracted symbol pixel characteristics through a first preset evaluation model to obtain the evaluation information;
step (103), when the attribute information is the face images of different angles of the viewer, extracting the face related features of the face images, and calculating the extracted face related features through a second preset evaluation model to obtain the evaluation information;
step (104), when the attribute information is the brightness data of the face region and the non-face region of the viewer, extracting brightness-related features from the brightness data, and calculating the extracted brightness-related features through a third preset evaluation model to obtain the evaluation information;
further, in the step (102), the calculating the extracted symbol pixel characteristics by a first preset evaluation model to obtain the evaluation information specifically includes,
matching the symbol pixel characteristics with preset display content characteristics through the first preset evaluation model to obtain matching degree between the symbol pixel characteristics and the preset display content characteristics as the evaluation information;
or,
in the step (103), the calculating the extracted face-related features by a second preset evaluation model to obtain the evaluation information specifically includes,
converting the face-related feature into a face recognition feature or a face-display face distance value through the second preset evaluation model to serve as the evaluation information;
or,
in the step (104), the calculating the extracted brightness-related feature by using a third preset evaluation model to obtain the evaluation information specifically includes,
converting the brightness-related features into brightness difference values of a face region and a non-face region of a viewer through the third preset evaluation model to serve as the evaluation information;
further, in the step (2), the judging the validity of any one of the currently displayed content, the viewer identity and the viewer viewing action according to the evaluation information specifically includes,
when the evaluation information is the matching degree between the symbol pixel characteristics and the preset display content characteristics, if the matching degree meets a preset matching degree range, determining that the current display content is legal, otherwise, determining that the current display content is not legal;
or,
when the evaluation information is the face recognition feature, if the face recognition feature is matched with a preset face feature library, determining that the identity of the viewer is legal, otherwise, determining that the identity of the viewer is not legal;
or,
when the evaluation information is the face-display face distance value, if the face-display face distance value meets a preset distance range, determining that the watching action of the viewer is legal, otherwise, determining that the watching action of the viewer is not legal;
or,
when the evaluation information is the brightness difference value between the face area and the non-face area of the viewer, if the brightness difference value between the face area and the non-face area of the viewer meets a preset brightness difference range, determining that the watching action of the viewer is legal, otherwise, determining that the watching action of the viewer is not legal;
further, in the step (3), the adjusting the current display state according to the judgment result on the validity of any one of the current display content, the viewer identity, and the viewer viewing action specifically includes,
when the current display content is determined to be legal, the normal operation of the current display operation is maintained, and when the current display content is determined not to be legal, all the current display operations are terminated;
or,
when the identity of the viewer is determined to be legal, setting a preset display time length for the current viewer, terminating all current display operations under the condition that the actual viewing time length of the current viewer exceeds the preset display time length, and directly terminating all current display operations when the identity of the viewer is determined not to be legal;
or,
and when the watching action of the viewer is determined to be legal, adjusting the display brightness in real time, and when the watching action of the viewer is determined not to be legal, directly terminating all current display operations.
The invention also provides a learning machine with an eye protection mode, which is characterized in that:
the learning machine with the eye protection mode comprises an attribute information acquisition module, an evaluation information determination module, a legality judgment module and a display state adjustment module; wherein,
the attribute information acquisition module is used for acquiring attribute information about any one of the display content of the learning machine, a corresponding viewer and the external environment;
the evaluation information determining module can be used for determining evaluation information about the current display state of the learning machine according to the attribute information;
the legality judging module is used for judging the legality of any one of the currently displayed content, the identity of the viewer and the viewing action of the viewer of the learning machine according to the evaluation information;
the display state adjusting module is used for adjusting the current display state of the learning machine according to the judgment result about the legality of any one of the current display content, the identity of the viewer and the viewing action of the viewer;
further, the attribute information acquisition module comprises a display content acquisition submodule, a face image acquisition submodule and a brightness data acquisition submodule; wherein,
the display content acquisition submodule is used for acquiring current display text and/or image information as attribute information of the display content;
the facial image acquisition sub-module is used for acquiring facial images of different angles of a viewer as attribute information of the viewer;
the brightness data acquisition submodule is used for acquiring brightness data of a face area and a non-face area of a viewer respectively as attribute information of the external environment;
further, the evaluation information determination module comprises a first evaluation information determination submodule, a second evaluation information determination submodule and a third evaluation information determination submodule; wherein,
the first evaluation information determining submodule is used for matching the symbol pixel characteristics corresponding to the display text and/or image information with preset display content characteristics through a first preset evaluation model so as to obtain the matching degree between the symbol pixel characteristics and the preset display content characteristics as the evaluation information;
the second evaluation information determination submodule is used for converting face related features corresponding to face images of different angles of a viewer into face recognition features or face-display face distance values through a second preset evaluation model to serve as the evaluation information;
the third evaluation information determination submodule is used for converting brightness related characteristics corresponding to brightness data of a face area and a non-face area of a viewer into brightness difference values of the face area and the non-face area of the viewer through a third preset evaluation model to serve as the evaluation information;
further, the validity judging module comprises a first validity judging submodule, a second validity judging submodule, a third validity judging submodule and a fourth validity judging submodule; wherein,
the first validity judging submodule is used for judging whether the currently displayed content of the learning machine has validity or not according to the matching degree between the symbol pixel characteristics and the preset display content characteristics;
the second legality judging submodule is used for judging whether the identity of the viewer corresponding to the learning machine is legal or not according to the face recognition feature;
the third legality judging submodule is used for judging whether the viewing action of the viewer corresponding to the learning machine is legal or not according to whether the face-display face distance value meets a preset distance range;
the fourth legality judging submodule is used for judging whether the watching action of the viewer corresponding to the learning machine is legal or not according to the brightness difference value of the face area and the non-face area of the viewer;
further, the display state adjusting module comprises a display operation adjusting submodule and a display brightness adjusting submodule; wherein,
the display operation adjusting submodule is used for directly terminating all current display operations of the learning machine when the current display content is determined not to have legality, or the identity of a viewer is determined not to have legality, or the viewing action of the viewer is determined not to have legality;
and the display brightness adjusting submodule is used for adjusting the display brightness of the learning machine when the watching action of the viewer is determined to be legal.
Compared with the prior art, the eye protection display method and the learning machine with the eye protection mode adaptively adjust the current display state by acquiring attribute information about any one of the current display content, the viewer, and the external environment. The adaptive adjustment covers several distinct display adjustment modes (whether to terminate the display operation, adjusting the display brightness, and adjusting the display duration) and can also adjust the display mode in a targeted way for the current viewer's viewing action and the external environment. This avoids applying a single, uniform display adjustment to all viewers, so the learning machine can make an appropriate eye protection adjustment for each different viewer. In addition, the method and the learning machine do not need extra external devices such as a distance sensor: the relevant information about the viewer and the current environment is obtained directly through the learning machine's built-in camera and processed by an adaptive algorithm to determine a display mode suitable for the current viewer, which effectively reduces the cost of the learning machine and improves its operating efficiency.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flow chart of an eye protection display method provided by the invention.
Fig. 2 is a schematic structural view of a learning machine with an eye protection mode according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art from these embodiments without creative effort fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of an eye protection display method according to an embodiment of the present invention. The eye protection display method comprises the following steps:
and (1) acquiring attribute information about any one of the display content, the viewer and the external environment, and determining evaluation information about the current display state according to the attribute information.
Preferably, in the step (1), the acquiring attribute information on any one of the display content, the viewer and the external environment, and determining the evaluation information on the current display state according to the attribute information specifically includes,
step (101), acquiring current display text and/or image information as attribute information of the display content, or acquiring face images of different angles of a viewer as attribute information of the viewer, or acquiring brightness data of a face area and a non-face area of the viewer as attribute information of the external environment;
step (102), when the attribute information is the display text and/or image information, extracting symbol pixel characteristics of the display text and/or image information, and calculating the extracted symbol pixel characteristics through a first preset evaluation model to obtain the evaluation information;
step (103), when the attribute information is the face images of different angles of the viewer, extracting the face related features of the face images, and calculating the extracted face related features through a second preset evaluation model to obtain the evaluation information;
and (104) when the attribute information is the brightness data of the face area and the non-face area of the viewer, extracting brightness-related features of the brightness data, and calculating the extracted brightness-related features through a third preset evaluation model to obtain the evaluation information.
Preferably, in the step (102), the calculating the extracted symbol pixel feature by using a first preset evaluation model to obtain the evaluation information specifically includes,
and matching the symbol pixel characteristics with preset display content characteristics through the first preset evaluation model to obtain the matching degree between the symbol pixel characteristics and the preset display content characteristics as the evaluation information.
Preferably, in the step (103), the calculating the extracted face-related features by the second preset evaluation model to obtain the evaluation information specifically includes,
converting the face-related feature into a face recognition feature or a face-display face distance value as the evaluation information through the second preset evaluation model.
Preferably, in the step (104), the performing, by a third preset evaluation model, a calculation process on the extracted brightness-related feature to obtain the evaluation information specifically includes,
and converting the brightness related characteristics into brightness difference values of the face area and the non-face area of the viewer through the third preset evaluation model to serve as the evaluation information.
And (2) judging the validity of any one of the current display content, the identity of the viewer and the viewing action of the viewer according to the evaluation information.
Preferably, in the step (2), the judging the validity of any one of the currently displayed content, the viewer identity and the viewer viewing action based on the evaluation information specifically includes,
when the evaluation information is the matching degree between the symbol pixel feature and the preset display content feature, if the matching degree meets a preset matching degree range, the current display content is determined to be legal; otherwise it is determined not to be legal. Specifically, the preset display content features may include the information features of the text, image or video content in a database of material the viewer is allowed to study and watch on the learning machine. This ensures that the current viewer's viewing behavior is legal only when the content currently watched on the learning machine is content specified in the database, thereby preventing the viewer from watching content outside the database.
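As a rough sketch (not the patent's actual implementation, which does not specify the first preset evaluation model), the matching-degree check above could be modeled as cosine similarity between an extracted symbol/pixel feature vector and each feature in the allowed-content database; the function name and the `min_match` threshold are hypothetical:

```python
import math

def cosine_similarity(u, v):
    # Standard cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def content_is_legal(symbol_features, preset_features, min_match=0.8):
    """Return True when the best match against the allowed-content
    database falls inside the preset matching-degree range."""
    best = max(cosine_similarity(symbol_features, p) for p in preset_features)
    return best >= min_match
```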
Preferably, in the step (2), the judging the validity of any one of the currently displayed content, the viewer identity and the viewer viewing action based on the evaluation information specifically includes,
when the evaluation information is the face recognition feature, if the face recognition feature matches a preset face feature library, the viewer's identity is determined to be legal; otherwise it is determined not to be legal. Specifically, the preset face feature library registers and stores in advance the face feature information of several designated legitimate users. By matching the face recognition feature against this library, the learning machine enters the corresponding display operation state only when its current viewer is a designated legitimate user; otherwise the learning machine remains in a closed state, so that users who are not designated legitimate users cannot operate it.
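The library-matching step above is commonly realized by comparing face embeddings under a distance threshold. The following is a minimal sketch under that assumption; the function name and the `max_dist` threshold are hypothetical, not specified by the patent:

```python
import math

def viewer_is_legal(face_feature, feature_library, max_dist=0.6):
    """Viewer identity is legal if the face recognition feature lies
    within a distance threshold of any registered user's feature."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return any(dist(face_feature, f) <= max_dist for f in feature_library)
```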
Preferably, in the step (2), the judging the validity of any one of the currently displayed content, the viewer identity and the viewer viewing action based on the evaluation information specifically includes,
when the evaluation information is the face-display face distance value, if this value meets a preset distance range, the viewer's viewing action is determined to be legal; otherwise it is determined not to be legal. Specifically, a face image of the viewer is obtained through the front camera of the learning machine. The smaller the distance between the viewer and the screen of the learning machine, the larger the proportion of the face image occupied by the viewer's face region, so according to this principle the face-display face distance value can be obtained by calculating that proportion. In the face image, the area of the viewer's face box can be taken as w*h, where w and h are respectively the width and height of the face box, and the whole image area can be taken as W*H, where W and H are respectively the width and height of the face image.
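Writing w, h for the face box and W, H for the whole image as above, the distance proxy reduces to an area ratio. A minimal sketch, assuming (as a stand-in for the patent's preset distance range) that a ratio above a hypothetical `max_ratio` means the viewer is too close:

```python
def face_area_ratio(w, h, W, H):
    """Ratio of the face bounding box (w x h) to the full image (W x H);
    a larger ratio implies a shorter face-to-screen distance."""
    return (w * h) / (W * H)

def viewing_distance_legal(w, h, W, H, max_ratio=0.25):
    # max_ratio is a hypothetical stand-in for the preset distance range.
    return face_area_ratio(w, h, W, H) <= max_ratio
```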
Preferably, in the step (2), the judging the validity of any one of the currently displayed content, the viewer identity and the viewer viewing action based on the evaluation information specifically includes,
when the evaluation information is the brightness difference value between the face region and the non-face region of the viewer, if this difference meets a preset brightness difference range, the viewer's viewing action is determined to be legal; otherwise it is determined not to be legal. Specifically, the front camera of the learning machine obtains a face image of the viewer. Under different ambient illumination, the brightness a of the viewer's face region in the image differs from the brightness b of the non-face region: under daytime light or the light of an illumination source the two are substantially the same, while in a relatively dark environment they differ markedly. By calculating the brightness difference between the face region and the non-face region, it can therefore be accurately judged whether the current viewer is viewing under normal external environment brightness. The brightness a of the face region is calculated as a = L1/(w*h), where L1 is the fitted brightness distribution value of the face region and w and h are respectively the width and height of the viewer's face box; the brightness b of the non-face region is calculated as b = L2/(W*H - w*h), where L2 is the fitted brightness distribution value of the non-face region and W and H are respectively the width and height of the face image.
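Using the formulas above (a = L1/(w*h), b = L2/(W*H - w*h), with w, h the face box and W, H the whole image), the ambient-light check can be sketched as follows; the `max_diff` tolerance is a hypothetical stand-in for the preset brightness difference range:

```python
def region_brightness(L1, L2, w, h, W, H):
    """a = L1/(w*h): mean brightness of the face box;
    b = L2/(W*H - w*h): mean brightness of the rest of the frame."""
    a = L1 / (w * h)
    b = L2 / (W * H - w * h)
    return a, b

def ambient_light_legal(L1, L2, w, h, W, H, max_diff=30.0):
    # Viewing is legal when face and background brightness are close,
    # i.e. the viewer is not watching in a dark room.
    a, b = region_brightness(L1, L2, w, h, W, H)
    return abs(a - b) <= max_diff
```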
And (3) adjusting the current display state according to the judgment result about the legality of any one of the current display content, the identity of the viewer and the viewing action of the viewer.
Preferably, in the step (3), the adjusting the current display state according to the judgment result about the validity of any one of the current display content, the viewer identity and the viewer viewing action specifically includes,
When the current display content is determined to be legal, the current display operation is maintained in normal operation; when the current display content is determined not to be legal, all current display operations are terminated.
Preferably, in the step (3), the adjusting the current display state according to the judgment result about the validity of any one of the current display content, the viewer identity and the viewer viewing action specifically includes,
when the identity of the viewer is determined to be legal, a preset display time length is set for the current viewer, all current display operations are terminated under the condition that the actual watching time length of the current viewer exceeds the preset display time length, and when the identity of the viewer is determined not to be legal, all current display operations are directly terminated.
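As a sketch of the identity branch above, the following Python fragment returns the implied display action; the function name and the use of seconds as the time unit are illustrative assumptions, not from the patent.

```python
def adjust_for_identity(identity_is_legal, viewed_seconds, preset_seconds):
    """An illegal identity terminates display immediately; a legal
    identity is limited to the preset display duration."""
    if not identity_is_legal:
        return "terminate"
    if viewed_seconds > preset_seconds:
        return "terminate"
    return "continue"
```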
Preferably, in the step (3), the adjusting the current display state according to the judgment result about the validity of any one of the current display content, the viewer identity and the viewer viewing action specifically includes,
when the viewer viewing action is determined to be legal, the display brightness is adjusted in real time; when the viewer viewing action is determined not to be legal, all current display operations are directly terminated. Specifically, the ratio (w × h)/(W × H) of the viewer face box area to the overall image area of the face image is calculated, wherein if the ratio (w × h)/(W × H) is greater than a preset ratio threshold M, all current display operations are directly terminated, and if the ratio (w × h)/(W × H) is less than or equal to the preset ratio threshold M, the display brightness is adjusted in real time. In addition, the brightness difference value a − b between the brightness a of the viewer face region and the brightness b of the non-face region in the face image can be calculated, wherein if the brightness difference value a − b is greater than a preset brightness threshold N, all current display operations are directly terminated, and if the brightness difference value a − b is less than or equal to the preset brightness threshold N, the display brightness is adjusted in real time.
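The two termination checks above can be sketched as a small Python decision function, where w and h are the face box dimensions, W and H the face image dimensions, a and b the region brightness values, and M and N the preset thresholds. This is an illustrative reading of the logic, not a definitive implementation.

```python
def adjust_for_viewing_action(w, h, W, H, a, b, M, N):
    """Terminate when the face box fills too much of the frame
    ((w*h)/(W*H) > M, i.e. the viewer is too close) or when the
    face/background brightness gap is too large (a - b > N, i.e.
    the environment is too dark); otherwise adjust brightness."""
    if (w * h) / (W * H) > M:
        return "terminate"
    if a - b > N:
        return "terminate"
    return "adjust_brightness"
```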
Fig. 2 is a schematic structural diagram of a learning machine with an eye protection mode according to an embodiment of the present invention. The learning machine with the eye protection mode comprises an attribute information acquisition module, an evaluation information determination module, a legality judgment module and a display state adjustment module; wherein,
the attribute information acquisition module is used for acquiring attribute information about any one of the display content of the learning machine, a corresponding viewer and the external environment;
the evaluation information determining module is used for determining evaluation information about the current display state of the learning machine according to the attribute information;
the legality judging module is used for judging the legality of any one of the currently displayed content, the identity of the viewer and the viewing action of the viewer of the learning machine according to the evaluation information;
the display state adjusting module is used for adjusting the current display state of the learning machine according to the judgment result about the legality of any one of the current display content, the identity of the viewer and the viewing action of the viewer.
Preferably, the attribute information acquisition module comprises a display content acquisition submodule, a face image acquisition submodule and a brightness data acquisition submodule; wherein,
the display content acquisition submodule is used for acquiring current display text and/or image information as attribute information of the display content;
the face image acquisition submodule is used for acquiring face images of different angles of a viewer as attribute information of the viewer;
the brightness data acquisition submodule is used for acquiring brightness data on each of a face region and a non-face region of the viewer as attribute information of the external environment.
Preferably, the evaluation information determination module comprises a first evaluation information determination submodule, a second evaluation information determination submodule and a third evaluation information determination submodule; wherein,
the first evaluation information determination submodule is used for matching the symbol pixel features corresponding to the display text and/or image information with preset display content features through a first preset evaluation model, so as to obtain the matching degree between the symbol pixel features and the preset display content features as the evaluation information;
the second evaluation information determination submodule is used for converting, through a second preset evaluation model, the face-related features corresponding to the face images of different angles of the viewer into face recognition features or a face-display surface distance value as the evaluation information;
the third evaluation information determination submodule is used for converting, through a third preset evaluation model, the brightness-related features corresponding to the brightness data of each of the viewer face region and the non-face region into a brightness difference value between the viewer face region and the non-face region as the evaluation information.
Preferably, the legality judgment module comprises a first legality judgment submodule, a second legality judgment submodule, a third legality judgment submodule and a fourth legality judgment submodule; wherein,
the first legality judgment submodule is used for judging whether the currently displayed content of the learning machine is legal according to the matching degree between the symbol pixel features and the preset display content features;
the second legality judgment submodule is used for judging whether the identity of the viewer corresponding to the learning machine is legal according to the face recognition features;
the third legality judgment submodule is used for judging whether the viewing action of the viewer corresponding to the learning machine is legal according to whether the face-display surface distance value satisfies a preset distance range;
the fourth legality judgment submodule is used for judging whether the viewing action of the viewer corresponding to the learning machine is legal according to the brightness difference value between the face region and the non-face region of the viewer.
Preferably, the display state adjustment module comprises a display operation adjustment submodule and a display brightness adjustment submodule; wherein,
the display operation adjustment submodule is used for directly terminating all current display operations of the learning machine when the current display content is determined not to be legal, or the identity of the viewer is determined not to be legal, or the viewing action of the viewer is determined not to be legal;
the display brightness adjustment submodule is used for adjusting the display brightness of the learning machine when the viewing action of the viewer is determined to be legal.
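The four-module chain described above can be sketched as a minimal Python pipeline. The callables stand in for the preset evaluation models and judgment rules, and all names are illustrative assumptions rather than the patent's implementation.

```python
class EyeProtectionPipeline:
    """Chains the four modules: acquire attribute information,
    derive evaluation information, judge legality, adjust display."""

    def __init__(self, acquire, evaluate, judge, adjust):
        self.acquire = acquire    # attribute information acquisition module
        self.evaluate = evaluate  # evaluation information determination module
        self.judge = judge        # legality judgment module
        self.adjust = adjust      # display state adjustment module

    def step(self):
        attributes = self.acquire()
        evaluation = self.evaluate(attributes)
        is_legal = self.judge(evaluation)
        return self.adjust(is_legal)
```

For example, a pipeline wired with simple stand-in callables yields a "terminate" or "continue" decision per step, mirroring the display state adjustment described above.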
It can be seen from the above embodiments that the eye protection display method and the learning machine with the eye protection mode adaptively adjust the current display state by obtaining attribute information about any one of the current display content, the viewer and the external environment. The adaptive adjustment not only covers different display adjustment modes (deciding whether to terminate the display operation, adjusting the display brightness, and adjusting the display duration), but can also perform targeted display mode adjustment for the viewing action of the current viewer and the external environment, so that applying a single, uniform display adjustment mode to all viewers is avoided and the learning machine can apply an appropriate eye protection mode to different viewers. In addition, the eye protection display method and the learning machine with the eye protection mode need no additional external equipment such as a distance sensor: the relevant information about the viewer and the current environment is obtained directly through the camera built into the learning machine and processed by an adaptive algorithm to determine a display mode suitable for the current viewer, which effectively reduces the cost of the learning machine and improves its operating efficiency.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. An eye protection display method is characterized by comprising the following steps:
the method comprises the steps of (1) obtaining attribute information of any one of display content, a viewer and an external environment, and determining evaluation information of a current display state according to the attribute information;
step (2), judging the validity of any one of the current display content, the identity of the viewer and the viewing action of the viewer according to the evaluation information;
and (3) adjusting the current display state according to the judgment result about the legality of any one of the current display content, the viewer identity and the viewer watching action.
2. The eye-protected display method of claim 1, wherein:
in the step (1), attribute information on any one of the display content, the viewer, and the external environment is acquired, and it is determined that evaluation information on the current display state specifically includes, based on the attribute information,
step (101), acquiring current display text and/or image information as attribute information of the display content, or acquiring facial images of different angles of a viewer as attribute information of the viewer, or acquiring brightness data of a face area and a non-face area of the viewer as attribute information of the external environment;
step (102), when the attribute information is the display text and/or image information, extracting symbol pixel characteristics of the display text and/or image information, and calculating the extracted symbol pixel characteristics through a first preset evaluation model to obtain the evaluation information;
step (103), when the attribute information is the face images of different angles of the viewer, extracting the face-related features of the face images, and calculating the extracted face-related features through a second preset evaluation model to obtain the evaluation information;
and (104) when the attribute information is the brightness data of the face area and the non-face area of the viewer, extracting brightness-related features of the brightness data, and calculating the extracted brightness-related features through a third preset evaluation model to obtain the evaluation information.
3. The eye-protected display method of claim 2, wherein:
in the step (102), the calculating the extracted symbol pixel characteristics by a first preset evaluation model to obtain the evaluation information specifically includes,
matching the symbol pixel characteristics with preset display content characteristics through the first preset evaluation model to obtain matching degree between the symbol pixel characteristics and the preset display content characteristics as the evaluation information;
or,
in the step (103), the calculating the extracted face-related features through a second preset evaluation model to obtain the evaluation information specifically includes,
converting the face-related feature into a face recognition feature or a face-display face distance value through the second preset evaluation model to serve as the evaluation information;
or,
in the step (104), the calculating the extracted brightness-related feature by using a third preset evaluation model to obtain the evaluation information specifically includes,
and converting the brightness related characteristics into brightness difference values of the face area and the non-face area of the viewer through the third preset evaluation model to serve as the evaluation information.
4. The eye-protected display method of claim 3, wherein:
in the step (2), the judging the validity of any one of the currently displayed content, the viewer identity and the viewer viewing action according to the evaluation information specifically includes,
when the evaluation information is the matching degree between the symbol pixel characteristics and the preset display content characteristics, if the matching degree meets a preset matching degree range, determining that the current display content is legal, otherwise, determining that the current display content is not legal;
or,
when the evaluation information is the face recognition feature, if the face recognition feature is matched with a preset face feature library, determining that the identity of the viewer is legal, otherwise, determining that the identity of the viewer is not legal;
or,
when the evaluation information is the face-display surface distance value, if the face-display surface distance value meets a preset distance range, determining that the watching action of the viewer is legal, otherwise, determining that the watching action of the viewer is not legal;
or,
and when the evaluation information is the brightness difference value between the face region and the non-face region of the viewer, if the brightness difference value between the face region and the non-face region of the viewer meets a preset brightness difference range, determining that the watching action of the viewer is legal, otherwise, determining that the watching action of the viewer is not legal.
5. The eye-protected display method of claim 4, wherein:
in the step (3), the adjusting the current display state according to the determination result on the validity of any one of the current display content, the viewer identity, and the viewer viewing action specifically includes,
when the current display content is determined to be legal, the normal operation of the current display operation is maintained, and when the current display content is determined not to be legal, all the current display operations are terminated;
or,
when the identity of the viewer is determined to be legal, setting a preset display time length for the current viewer, terminating all current display operations under the condition that the actual viewing time length of the current viewer exceeds the preset display time length, and directly terminating all current display operations when the identity of the viewer is determined not to be legal;
or,
and when the watching action of the viewer is determined to be legal, adjusting the display brightness in real time, and when the watching action of the viewer is determined not to be legal, directly terminating all current display operations.
6. A learning machine with an eye protection mode, characterized in that:
the learning machine with the eye protection mode comprises an attribute information acquisition module, an evaluation information determination module, a legality judgment module and a display state adjustment module; wherein,
the attribute information acquisition module is used for acquiring attribute information about any one of the display content of the learning machine, a corresponding viewer and the external environment;
the evaluation information determining module can be used for determining evaluation information about the current display state of the learning machine according to the attribute information;
the legality judging module is used for judging the legality of any one of the currently displayed content, the identity of the viewer and the viewing action of the viewer of the learning machine according to the evaluation information;
the display state adjusting module is used for adjusting the current display state of the learning machine according to the judgment result about the legality of any one of the current display content, the identity of the viewer and the viewing action of the viewer.
7. The learning machine with eye-shielding mode of claim 6, wherein:
the attribute information acquisition module comprises a display content acquisition submodule, a face image acquisition submodule and a brightness data acquisition submodule; wherein,
the display content acquisition submodule is used for acquiring current display text and/or image information as attribute information of the display content;
the facial image acquisition sub-module is used for acquiring facial images of different angles of a viewer as attribute information of the viewer;
the luminance data acquisition sub-module is configured to acquire luminance data on each of a face region and a non-face region of the viewer as attribute information of the external environment.
8. The learning machine with eye-shielding mode of claim 7, wherein:
the evaluation information determining module comprises a first evaluation information determining submodule, a second evaluation information determining submodule and a third evaluation information determining submodule; wherein,
the first evaluation information determining submodule is used for matching the symbol pixel characteristics corresponding to the display text and/or image information with preset display content characteristics through a first preset evaluation model so as to obtain the matching degree between the symbol pixel characteristics and the preset display content characteristics as the evaluation information;
the second evaluation information determination submodule is used for converting face related features corresponding to face images of different angles of a viewer into face recognition features or face-display face distance values through a second preset evaluation model to serve as the evaluation information;
the third evaluation information determination submodule is configured to convert, through a third preset evaluation model, luminance-related features corresponding to luminance data of each of the viewer face region and the non-face region into a luminance difference value between the viewer face region and the non-face region as the evaluation information.
9. The learning machine with eye-shielding mode of claim 8, wherein:
the legality judging module comprises a first legality judging submodule, a second legality judging submodule, a third legality judging submodule and a fourth legality judging submodule; the first validity judging submodule is used for judging whether the currently displayed content of the learning machine is valid or not according to the matching degree between the symbol pixel characteristics and the preset display content characteristics;
the second legality judging submodule is used for judging whether the identity of the viewer corresponding to the learning machine is legal or not according to the face recognition feature;
the third legality judging submodule is used for judging whether the viewing action of the viewer corresponding to the learning machine is legal according to whether the face-display surface distance value satisfies a preset distance range;
and the fourth legality judging submodule is used for judging whether the watching action of the viewer corresponding to the learning machine is legal or not according to the brightness difference value of the face region and the non-face region of the viewer.
10. The learning machine with eye-shielding mode of claim 9, wherein:
the display state adjusting module comprises a display operation adjusting submodule and a display brightness adjusting submodule; wherein,
the display operation adjusting submodule is used for directly terminating all current display operations of the learning machine when the current display content is determined not to have legality, or the identity of a viewer is determined not to have legality, or the viewing action of the viewer is determined not to have legality;
and the display brightness adjusting submodule is used for adjusting the display brightness of the learning machine when the watching action of the viewer is determined to be legal.
CN201910806532.7A 2019-08-28 2019-08-28 Eye protection display method and learning machine with eye protection mode Pending CN110673720A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910806532.7A CN110673720A (en) 2019-08-28 2019-08-28 Eye protection display method and learning machine with eye protection mode

Publications (1)

Publication Number Publication Date
CN110673720A true CN110673720A (en) 2020-01-10

Family

ID=69076406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910806532.7A Pending CN110673720A (en) 2019-08-28 2019-08-28 Eye protection display method and learning machine with eye protection mode

Country Status (1)

Country Link
CN (1) CN110673720A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015184810A1 (en) * 2014-11-18 2015-12-10 中兴通讯股份有限公司 Method and device for adjusting screen display
CN106231419A (en) * 2016-08-30 2016-12-14 北京小米移动软件有限公司 Operation performs method and device
CN106878780A (en) * 2017-04-28 2017-06-20 张青 It is capable of the intelligent TV set and its control system and control method of Intelligent adjustment brightness
CN106990828A (en) * 2017-03-31 2017-07-28 努比亚技术有限公司 A kind of apparatus and method for controlling screen display
CN107424584A (en) * 2016-05-24 2017-12-01 富泰华工业(深圳)有限公司 Eyes protecting system and method
CN109451164A (en) * 2018-11-21 2019-03-08 惠州Tcl移动通信有限公司 Intelligent terminal and its eye care method, the device with store function


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115250554A (en) * 2022-07-15 2022-10-28 北京觅机科技有限公司 Eye protection area display method, device, equipment and computer readable storage medium
CN116305049A (en) * 2023-05-11 2023-06-23 深圳市欧度利方科技有限公司 Visual control system and method for tablet personal computer
CN116305049B (en) * 2023-05-11 2023-09-08 深圳市欧度利方科技有限公司 Visual control system and method for tablet personal computer

Similar Documents

Publication Publication Date Title
US10528810B2 (en) Detecting user viewing difficulty from facial parameters
US8913005B2 (en) Methods and systems for ergonomic feedback using an image analysis module
EP2806373A2 (en) Image processing system and method of improving human face recognition
CN107945766A (en) Display device
US11232584B2 (en) Line-of-sight estimation device, line-of-sight estimation method, and program recording medium
CN110059666B (en) Attention detection method and device
CN104916271A (en) Display device capable of adjusting subject patterns automatically, television and control method for display device
CN115171024A (en) Face multi-feature fusion fatigue detection method and system based on video sequence
CN112183200A (en) Eye movement tracking method and system based on video image
WO2017113619A1 (en) Method and apparatus for adjusting brightness of display interface
WO2014030405A1 (en) Display device, display method, television receiver, and display control device
CN111757082A (en) Image processing method and system applied to AR intelligent device
CN110673720A (en) Eye protection display method and learning machine with eye protection mode
CN118337980A (en) XR simulation method, medium and system suitable for brightness greatly changing environment
CN110536044B (en) Automatic certificate photo shooting method and device
US20140140624A1 (en) Face component extraction apparatus, face component extraction method and recording medium in which program for face component extraction method is stored
US10733706B2 (en) Mobile device, and image processing method for mobile device
CN113673378B (en) Face recognition method and device based on binocular camera and storage medium
CN117435058B (en) Interactive control method and system for intelligent exhibition hall
CN109493293A (en) A kind of image processing method and device, display equipment
CN106775527B (en) Adjust the method, apparatus and display equipment of the display parameters of display panel
US20170084047A1 (en) System and method for determining colors of foreground, and computer readable recording medium therefor
CN115509351B (en) Sensory linkage situational digital photo frame interaction method and system
CN116524877A (en) Vehicle-mounted screen brightness adjustment method and device, electronic equipment and storage medium
CN110728630A (en) Internet image processing method based on augmented reality and augmented reality glasses

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20200110