Control method and display device
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to a control method and a display device.
Background
Prior-art display devices come in various shapes. For example, once the screen of a cylindrical display device is lit up, a blind area exists no matter from which visual angle the user views the screen, and when the displayed content is distributed over the whole screen, the user cannot view the content in the blind area.
Disclosure of Invention
In view of this, embodiments of the present application provide a control method and a display device.
A control method of a display device is provided. The display device comprises a flexible display screen and a camera rotatable relative to the flexible display screen, and the control method comprises the following steps: controlling the camera to rotate and acquiring a scene image; judging whether a human body exists in the scene image; when a human body exists in the scene image, acquiring the relative position of the human body relative to the flexible display screen according to the scene image and motion information of the camera; and controlling a partial area of the flexible display screen to display according to the relative position.
A display device is also provided. The display device comprises a flexible display screen and a camera rotatable relative to the flexible display screen, as well as a first control module, a first judging module, a first obtaining module, and a second control module. The first control module is configured to control the camera to rotate and acquire a scene image; the first judging module is configured to judge whether a human body exists in the scene image; the first obtaining module is configured to acquire, when a human body exists in the scene image, the relative position of the human body relative to the flexible display screen according to the scene image and motion information of the camera; and the second control module is configured to control a partial area of the flexible display screen to display according to the relative position.
A display device is further provided, comprising a processor, a flexible display screen, and a camera rotatable relative to the flexible display screen. The processor is configured to control the camera to rotate and acquire a scene image, judge whether a human body exists in the scene image, acquire, when a human body exists in the scene image, the relative position of the human body relative to the flexible display screen according to the scene image and motion information of the camera, and control a partial area of the flexible display screen to display according to the relative position.
According to the control method and the display device, when the rotatable camera captures a user, the flexible display screen can display in the area visible to the user based on the human body in the scene image and the motion information of the camera, so that the content displayed by the flexible display screen is prevented from falling in the user's blind area. In addition, areas of the flexible display screen other than the user's visible area do not display, which reduces the power consumption of the display device and improves its battery life.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic perspective view of a display device of an embodiment of the present application;
FIG. 2 is a schematic flow chart of a control method of an embodiment of the present application;
FIG. 3 is a schematic block diagram of a display device of an embodiment of the present application;
FIG. 4 is another schematic block diagram of a display device of an embodiment of the present application;
FIG. 5 is a schematic perspective view of a display device of another embodiment of the present application;
FIG. 6 is a schematic flow chart of a control method of another embodiment of the present application;
FIG. 7 is a schematic block diagram of a display device of another embodiment of the present application;
FIG. 8 is a schematic flow chart of a control method of another embodiment of the present application;
FIG. 9 is a schematic block diagram of a display device of another embodiment of the present application;
FIG. 10 is a schematic top view of a display device of an embodiment of the present application;
FIG. 11 is a schematic top view of another state of the display device of the embodiment of the present application;
FIG. 12 is a schematic diagram of a scene image of a display device of an embodiment of the present application;
FIG. 13 is a schematic diagram of another scene image of the display device of the embodiment of the present application;
FIG. 14 is a schematic diagram of still another scene image of the display device of the embodiment of the present application;
FIG. 15 is a schematic top view of still another state of the display device of the embodiment of the present application;
FIG. 16 is a schematic flow chart of a control method of another embodiment of the present application;
FIG. 17 is a schematic block diagram of a display device of another embodiment of the present application;
FIG. 18 is a schematic diagram of an application scenario of a display device of an embodiment of the present application;
FIG. 19 is a schematic diagram of an application scenario of a display device of another embodiment of the present application;
FIG. 20 is a schematic flow chart of a control method of another embodiment of the present application;
FIG. 21 is a schematic block diagram of a display device of another embodiment of the present application;
FIG. 22 is a schematic flow chart of a control method of still another embodiment of the present application;
FIG. 23 is a schematic block diagram of a display device of still another embodiment of the present application;
FIG. 24 is a schematic flow chart of a control method of still another embodiment of the present application;
FIG. 25 is a schematic block diagram of a display device of still another embodiment of the present application; and
FIG. 26 is a schematic block diagram of a computer device of an embodiment of the present application.
Detailed Description
Embodiments of the present application will be further described below with reference to the accompanying drawings. The same or similar reference numbers in the drawings identify the same or similar elements or elements having the same or similar functionality throughout.
In the description of the present application, it is to be understood that terms such as "center", "upper", and "outer" indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are used only for convenience in describing the present application and simplifying the description; they do not indicate or imply that the referred devices or elements must have a specific orientation or be constructed and operated in a specific orientation, and thus should not be construed as limiting the present application. Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of the described features. In the description of the present application, "plurality" means two or more unless specifically limited otherwise.
The following disclosure provides many different embodiments or examples for implementing different features of the application. To simplify the disclosure of the present application, specific example components and arrangements are described below; of course, they are merely examples and are not intended to limit the present application. Moreover, the present application may repeat reference numerals and/or letters in the various examples; such repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. In addition, examples of various specific processes and materials are provided herein, but one of ordinary skill in the art may recognize applications of other processes and/or uses of other materials.
Referring to fig. 1 to 2, in an embodiment of the present application, a control method of a display device 100 is provided, where the display device 100 includes a flexible display screen 80 and a camera 81 capable of rotating relative to the flexible display screen 80. The control method comprises the following steps:
s10: controlling the camera 81 to rotate and acquiring a scene image;
s20: judging whether a human body exists in the scene image;
s30: when a human body exists in the scene image, acquiring the relative position of the human body relative to the flexible display screen 80 according to the scene image and the motion information of the camera 81; and
s40: and controlling a partial area of the flexible display screen 80 to display according to the relative position.
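The sweep-and-detect portion of the steps above (S10 and S20) can be sketched as follows. This is a minimal illustration, not the patent's implementation: `FakeCamera`, the fixed 30° rotation step, and the `has_human` callback are all assumed names standing in for camera 81 and the image recognition algorithm.

```python
class FakeCamera:
    """Hypothetical stand-in for camera 81: advances a fixed rotation step
    per capture and returns pre-scripted scene frames (illustrative only)."""
    def __init__(self, frames, step_deg=30.0):
        self.frames = frames
        self.step_deg = step_deg
        self.angle = 0.0   # current rotation angle, clockwise from the X axis
        self._i = 0

    def rotate_and_capture(self):
        # S10: rotate one step, then acquire a scene image at the new angle
        self.angle = (self.angle + self.step_deg) % 360.0
        frame = self.frames[self._i % len(self.frames)]
        self._i += 1
        return frame


def sweep_until_human(camera, has_human, max_steps=12):
    """Steps S10-S20: keep the camera sweeping until a captured frame
    contains a human body; return (rotation angle, frame) for steps S30/S40
    to use, or (None, None) if no human appeared within max_steps frames."""
    for _ in range(max_steps):
        frame = camera.rotate_and_capture()
        if has_human(frame):        # S20: e.g. human-body contour matching
            return camera.angle, frame
    return None, None
```

On a hit, the returned rotation angle and frame feed step S30; on a miss, the caller may keep sweeping or apply the shut-off rule of S60.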
Continuing to refer to fig. 1, the present embodiment provides a display device 100. The display device 100 includes a flexible display screen 80 and a camera 81 that can rotate relative to the flexible display screen 80. Referring to fig. 3, the display device 100 further includes a first control module 10, a first judging module 20, a first obtaining module 30, and a second control module 40. S10 may be implemented by the first control module 10, S20 by the first judging module 20, S30 by the first obtaining module 30, and S40 by the second control module 40. That is, the first control module 10 can be used to control the camera 81 to rotate and acquire a scene image; the first judging module 20 can be used to judge whether a human body exists in the scene image; the first obtaining module 30 can be used to obtain, when a human body exists in the scene image, the relative position of the human body with respect to the flexible display screen 80 according to the scene image and the motion information of the camera 81; and the second control module 40 can be used to control a partial area of the flexible display screen 80 to display according to the relative position.
Referring to fig. 4, the present embodiment provides a display device 100. The display device 100 includes a processor 82, a flexible display 80, and a camera 81 that is rotatable relative to the flexible display 80. S10, S20, S30, and S40 may be implemented by the processor 82. That is, the processor 82 may be configured to control the camera 81 to rotate and acquire a scene image, determine whether a human body exists in the scene image, and when a human body exists in the scene image, acquire a relative position of the human body with respect to the flexible display 80 according to the motion information of the scene image and the camera 81, and control a partial area of the flexible display 80 to display according to the relative position.
The display device 100 of the embodiment of the present application includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a smart speaker, a wearable smart device (such as a bracelet or a head-mounted display), and the like. In this embodiment, the display device 100 is described by taking a smart speaker as an example.
Specifically, the camera 81 may operate automatically after the user turns on the display device 100. Referring to fig. 1, the flexible display screen 80 is cylindrical, and the camera 81 may rotate about a rotation shaft 83 parallel to the central axis of the flexible display screen 80. Alternatively, referring to fig. 5, the camera 81 may rotate about a rotation shaft 83 parallel to the central axis of the flexible display screen 80 while spaced from the shaft at a certain distance (i.e., the moving track of the camera 81 is a circle); the moving track of the camera 81 may also be an ellipse, a square, or the like.
Since the camera 81 continuously acquires scene images during the rotation, the camera 81 captures the user (human body) in a scene image when the user approaches the display device 100. The display device 100 recognizes the scene images acquired by the camera 81 one by one through an image recognition algorithm, so as to identify whether a human body exists in the scene images. The image recognition algorithm used by the display device 100 to detect the human body may be based on a human-body contour detection algorithm: if a contour similar to a human body contour is detected in a scene image, it can be determined that a human body exists in that scene image.
It should be noted that, when the display device 100 detects that a human body exists in the scene image, the first control module 10 (the processor 82) may control the camera 81 to stop rotating. When no human body exists in the scene image, the display device 100 may continue to control the camera 81 to rotate and continue to acquire the scene image, and re-determine whether a human body exists in the scene image. In other embodiments, the camera 81 may be in a rotating state all the time after the display device 100 is turned on, and specifically, when the display device 100 detects that a human body exists in the scene image, the first control module 10 (the processor 82) may control the camera 81 to keep rotating and continuously detect whether another human body exists in the acquired scene image.
The display device 100 can obtain the relative position of the human body with respect to the flexible display screen 80 according to the position information of the human body in the scene image and the motion information of the camera 81 when the scene image is acquired.
Generally, the second control module 40 or the processor 82 may control the visible area of the flexible display 80 that can be seen by the human body to display according to the relative position of the human body with respect to the flexible display 80, and control the area of the flexible display 80 that cannot be seen by the human body not to display a picture. The relative position of the human body with respect to the flexible display screen 80 may include: the distance between the human body and the origin of coordinates of a coordinate system associated with the flexible display screen 80, and the angle formed, in that coordinate system, between the line connecting the human body to the origin and the line connecting a preset position of the flexible display screen 80 to the origin (as shown in fig. 10 or fig. 14, the preset position is the intersection point of the flexible display screen 80 and the X axis).
The control method and the display device 100 of the embodiment of the application acquire scene images around the display device 100 through the rotatable camera 81, determine whether a human body exists according to the scene images, and, when a human body exists around the display device 100, control the area of the flexible display screen 80 that can be seen by the human body to display the image picture, so that the user can see a complete image on the flexible display screen 80. In addition, the control method and the display device 100 also control the areas other than the visible area in the flexible display screen 80 not to display, which is beneficial to reducing the power consumption of the display device 100 and improving its battery life.
Referring to fig. 6, in some embodiments, when a human body exists in the scene image, the control method further includes:
s80: judging whether the scene image contains a human face image or a human eye image; and
when the scene image contains a human face image and/or a human eye image, the step (S30) of acquiring the relative position of the human body with respect to the flexible display screen 80 according to the scene image and the motion information of the camera 81 is performed.
Referring to fig. 7, in some embodiments, the display device 100 further includes a second determining module 21. S80 may be implemented by the second judging module 21. That is, the second determining module 21 can be used to determine whether the scene image includes a human face image or a human eye image.
Referring to fig. 4 in conjunction, in some embodiments, S80 may be implemented by processor 82. That is, the processor 82 may also be used to determine whether the scene image includes a human face image or a human eye image.
If the human body in the scene image includes a face image, it can be determined that the user is viewing the display device 100. After it is determined that the user is viewing the display device 100, the relative position of the human body relative to the flexible display screen 80 is acquired, and the flexible display screen 80 is controlled to display in the visible area that can be seen by the user. This prevents the display device 100 from displaying images when no human body is watching, reduces the power consumption of the display device 100, and improves the battery life of the display device 100. In other embodiments, if the human body in the scene image includes a human eye image, it can be determined even more reliably that the user is viewing the display device 100.
The second judging module 21 (processor 82) can detect whether a human face or human eyes exist in the scene image through an image recognition algorithm. The detection of the human face and the human eyes can be based on the face contour or the eye contour, respectively.
It should be noted that, when the second determining module 21 (processor 82) determines that the scene image does not contain the face image or the eye image, the first control module 10 (processor 82) may continue to control the camera 81 to rotate and re-acquire the scene image, and the second determining module 21 (processor 82) re-identifies whether the acquired scene image contains the face image or the eye image.
Referring to fig. 8, in some embodiments, obtaining the relative position of the human body with respect to the flexible display screen 80 according to the scene image and the motion information of the camera 81 includes:
s31: establishing a coordinate system by taking a rotating shaft of the camera 81 as an original point;
s32: acquiring the rotation angle of the camera 81 in the coordinate system;
s33: acquiring the pixel position of a human body in a scene image; and
s34: the relative position of the human body with respect to the flexible display screen 80 in the coordinate system is obtained from the pixel position and the rotation angle.
Referring to fig. 9, in some embodiments, the first obtaining module 30 includes a creating unit 31, a first obtaining unit 32, a second obtaining unit 33, and a third obtaining unit 34. S31 may be implemented by the establishing unit 31, S32 may be implemented by the first obtaining unit 32, S33 may be implemented by the second obtaining unit 33, and S34 may be implemented by the third obtaining unit 34. That is, the establishing unit 31 may be configured to establish a coordinate system with the rotating shaft of the camera 81 as an origin; the first obtaining unit 32 may be configured to obtain a rotation angle of the camera 81 in the coordinate system; the second obtaining unit 33 may be configured to obtain pixel positions of the human body in the scene image; the third obtaining unit 34 may be configured to obtain a relative position of the human body with respect to the flexible display screen 80 in the coordinate system according to the pixel position and the rotation angle.
Referring to fig. 4, in some embodiments, S31, S32, S33 and S34 may be implemented by the processor 82. That is, the processor 82 is configured to establish a coordinate system with the rotation axis of the camera 81 as an origin, obtain a rotation angle of the camera 81 in the coordinate system, obtain a pixel position of the human body in the scene image, and obtain a relative position of the human body with respect to the flexible display 80 in the coordinate system according to the pixel position and the rotation angle.
The operation of the camera 81 according to the embodiment of the present application will be described by taking rotation about the central axis of the flexible display 80 as an example, with the rotation direction of the camera 81 taken to be clockwise. Of course, in other embodiments, the rotation direction of the camera 81 may be counterclockwise.
Referring to fig. 10, specifically, the establishing unit 31 (processor 82) establishes a coordinate system with the central axis of the flexible display screen 80 as the origin O of the coordinate system and the optical axis 84 of the camera 81 at the initial position as the X-axis of the coordinate system. It should be noted that the initial position of the camera 81 is preset by factory setting, so the coordinate system is a preset coordinate system, and the coordinate system is not changed with the rotation of the camera 81 after being established.
Referring to fig. 11 and 12, when a human body exists in the scene image acquired by the camera 81 and the included angle between the optical axis of the camera 81 and the X axis in the clockwise direction is β, if the pixel position of the human body is at the center of the scene image, the included angle between the line connecting the position of the human body (point A) in the coordinate system to the origin O and the X axis is also β; at this time, the camera 81 directly faces the human body. The relative position of the human body with respect to the flexible display screen 80 is then: the human body is located in the direction that forms an angle β (in the clockwise direction) with the coordinate axis X.
Referring to fig. 13 and fig. 14 in combination, when a human body exists in the scene image acquired by the camera 81 and the pixel position of the human body is to the right or to the left of the center of the scene image, the camera 81 is not directly facing the user (human body). Within one scene image, the deviation angle of the center pixel from the optical axis 84 of the camera 81 is zero, and the third obtaining unit 34 (processor 82) can determine the deviation angle of any other pixel from the optical axis 84 based on the distance of that pixel from the center pixel, the wide angle of the camera 81, and the size of the scene image. Thus, the third obtaining unit 34 (processor 82) calculates the pixel position of the human body relative to the center of the scene image through an image recognition algorithm, obtains the deviation angle of the human body relative to the optical axis 84 of the camera 81 by combining the wide angle of the camera 81 and the size of the scene image, and then obtains the relative position of the human body with respect to the flexible display screen 80 in the coordinate system according to the deviation angle and the rotation angle of the camera 81.
Referring to fig. 15, in the present embodiment, when a human body exists in the scene image acquired by the camera 81 and the rotation angle of the camera 81 is β (where 0° ≤ β ≤ 360°): if the pixel position of the human body is to the left of the center of the scene image, as shown in fig. 13, the included angle between the line connecting the human body to the origin of coordinates and the X axis is the rotation angle β minus the deviation angle θ of the pixel position of the human body relative to the optical axis 84 of the camera 81, that is, the human body is located in the direction at the angle β − θ (in the clockwise direction) from the coordinate axis X; if the pixel position of the human body is to the right of the center of the scene image, as shown in fig. 14, the included angle is the sum of the two, that is, the human body is located in the direction at the angle β + θ (in the clockwise direction) from the coordinate axis X.
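The β ± θ computation described above can be sketched concretely. The linear pixel-to-angle mapping, the 640-pixel frame width, and the 70° wide angle are illustrative assumptions, not values from the patent; by giving θ a sign (negative left of center, positive right of center), the two cases fold into one sum.

```python
def deviation_angle(pixel_x, image_width, wide_angle_deg):
    """Deviation theta of the detected body from the optical axis 84: zero
    for a center pixel, negative when the body is left of center, positive
    when right of center (linear approximation over the camera's wide angle)."""
    return (pixel_x - image_width / 2.0) / image_width * wide_angle_deg


def body_bearing(beta_deg, pixel_x, image_width=640, wide_angle_deg=70.0):
    """Clockwise angle between the body-to-origin line and the X axis.
    Because theta carries its own sign, beta - theta (body left of center)
    and beta + theta (body right of center) are both covered by one sum."""
    theta = deviation_angle(pixel_x, image_width, wide_angle_deg)
    return (beta_deg + theta) % 360.0
```

For example, with β = 90° and the body a quarter-frame right of center, the bearing is 90° + 17.5°; a quarter-frame left of center gives 90° − 17.5°.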
Referring to fig. 16, in some embodiments, controlling a partial area of the flexible display 80 to display according to the relative position includes:
s41: acquiring a human body visible region 85 of the flexible display screen 80 covered by the angle of view according to the relative position and the angle of view of the human body to serve as a display region of the flexible display screen 80; and
s42: and controlling the display area of the flexible display screen 80 to display the picture.
Referring to fig. 17, in some embodiments, the second control module 40 includes a calculating unit 41 and a first control unit 42. S41 may be implemented by the calculation unit 41, and S42 may be implemented by the first control unit 42. That is, the calculating unit 41 may be configured to acquire the human body visible region 85 of the flexible display screen 80 covered by the angle of view of the human body according to the relative position and the angle of view of the human body as the display area of the flexible display screen 80; the first control unit 42 can be used to control the display area of the flexible display 80 to display a picture.
Referring to fig. 4, in some embodiments, S41 and S42 may be implemented by the processor 82. That is, the processor 82 is operable to acquire the human body visible region 85 of the flexible display 80 covered by the angle of view of the human body according to the relative position and the angle of view of the human body as the display area of the flexible display 80; and controlling the display area of the flexible display screen 80 to display the picture.
Specifically, the calculating unit 41 (processor 82) calculates, from the relative position and the field angle of the human body, the size of the portion of the flexible display screen 80 that the human body can see (i.e., the size of the visible region 85). The first control unit 42 (processor 82) then controls the flexible display 80 to display a picture of that size in the region on the side of the flexible display 80 facing the user. It will be appreciated that how the size of the human visible region 85 is calculated varies with the shape of the flexible display 80. The relative position includes the distance between the human body and the flexible display screen 80, and the included angle formed, in the same coordinate system, between the line connecting the human body to the origin of coordinates and the line connecting a preset position of the flexible display screen 80 (as shown in fig. 15, the preset position is the intersection point of the flexible display screen 80 and the X axis) to the origin of coordinates.
Referring to fig. 18, in some embodiments, the display device 100 further includes a cylindrical main body 89, the flexible display 80 is at least partially disposed around the outer circumference of the main body 89, and the area of the human body visible region 85 is S = h × l, with l = πr(180° − α)/180°, where h is the height of the flexible display 80, l is the arc length of the human body visible region 85, r is the radius of the main body 89, and α is the field angle of the human body. In this way, the size of the human body visible region 85 is suitable, which can provide a good viewing experience for the user.
Referring to fig. 19, in some embodiments, the display device 100 further includes a main body 89 in the shape of a regular pentagonal prism, the flexible display 80 is disposed on the outer peripheral surface of the main body 89, and the area of the human body visible region 85 is S = h × l, with l = 5a(180° − α)/360°, where h is the height of the flexible display 80, l is the length of the human body visible region 85, a is the length of one side of the display device 100, and α is the field angle of the human body. In other embodiments, the area of the human body visible region 85 is also related to the distance between the human body and the flexible display 80: the farther the human body is from the flexible display 80, the larger the area of the visible region 85.
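The two area formulas are garbled in the source text; a plausible reconstruction, sketched below, scales the screen's perimeter (2πr for the cylinder, 5a for the pentagonal prism) by the fraction (180° − α)/360°. Treat the exact constants as an assumption of this sketch rather than the patent's definitive formula.

```python
import math


def cylinder_visible_area(h, r, alpha_deg):
    """Visible region of the cylindrical screen: S = h * l with arc length
    l = pi * r * (180 - alpha) / 180, i.e. the perimeter 2*pi*r scaled by
    (180 - alpha)/360. h: screen height, r: cylinder radius, alpha: the
    human field angle in degrees (reconstructed formula)."""
    l = math.pi * r * (180.0 - alpha_deg) / 180.0
    return h * l


def pentagon_visible_area(h, a, alpha_deg):
    """Same rule for the regular-pentagonal-prism screen, whose perimeter
    is 5*a: l = 5 * a * (180 - alpha) / 360 (reconstructed formula)."""
    l = 5.0 * a * (180.0 - alpha_deg) / 360.0
    return h * l
```

As a sanity check, α = 180° yields a zero-width region (the viewer sees an edge-on sliver), while α = 0° yields exactly half the screen's perimeter, which matches the intuition that at most half of a convex screen is visible from one side.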
Referring to fig. 20, in some embodiments, the control method further includes:
s50: detecting whether the human body moves according to the scene image;
s51: when the human body moves, the camera 81 is controlled to rotate;
s52: acquiring rotation information of the camera 81; and
s53: the position of the partial area displayed by the flexible display 80 is adjusted according to the rotation information.
Referring to fig. 21, in some embodiments, the display device 100 further includes a detection module 50, a fifth control module 51, a second obtaining module 52, and an adjustment module 53. S50 may be implemented by the detection module 50, S51 by the fifth control module 51, S52 by the second obtaining module 52, and S53 by the adjustment module 53. That is, the detection module 50 can be used to detect whether the human body moves according to the scene image; the fifth control module 51 can be used to control the camera 81 to rotate when the human body moves; the second obtaining module 52 can be used to obtain the rotation information of the camera 81; and the adjustment module 53 can be used to adjust the position of the partial area displayed by the flexible display 80 according to the rotation information.
Referring to fig. 4, in some embodiments, S50, S51, S52 and S53 may be implemented by the processor 82. That is, the processor 82 is configured to detect whether a human body moves according to the scene image, control the camera 81 to rotate when the human body moves, acquire rotation information of the camera 81, and adjust the position of the partial region displayed on the flexible display 80 according to the rotation information.
Specifically, when the first judging module 20 (processor 82) determines that a human body exists in the scene image, the first control module 10 (processor 82) controls the camera 81 to stop rotating. When the detection module 50 (processor 82) detects that the position of the user (human body) relative to the flexible display screen 80 has changed, or has changed by more than a predetermined value, the detection module 50 (processor 82) determines that the human body has moved. The detection module 50 (processor 82) may also calculate the speed and direction of the movement of the human body through an image recognition algorithm, by calculating the change of the pixel position of the human body in the current scene image relative to its pixel position in the previous scene image. The fifth control module 51 can then control the camera 81 to rotate correspondingly according to the moving speed and direction of the human body so as to lock onto and track the user, so that the optical axis of the camera 81 always faces the human body and the area of the flexible display screen 80 displaying the picture changes correspondingly with the user.
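The tracking loop of S50–S53 can be sketched with the same linear pixel-to-angle mapping; the frame width, wide angle, and movement threshold below are illustrative assumptions, not values from the patent.

```python
def drift_degrees(prev_px, curr_px, image_width=640, wide_angle_deg=70.0):
    """S50: how far (in degrees, clockwise positive) the body moved between
    two consecutive frames, estimated from its pixel displacement."""
    return (curr_px - prev_px) / image_width * wide_angle_deg


def follow_user(camera_angle, display_bearing, prev_px, curr_px,
                threshold_deg=1.0):
    """S51-S53: if the drift exceeds a small threshold, rotate the camera by
    the same amount to keep the body centred, and shift the displayed region
    of the screen along with the camera. Returns the updated pair of angles."""
    d = drift_degrees(prev_px, curr_px)
    if abs(d) < threshold_deg:          # user effectively still: no change
        return camera_angle, display_bearing
    return (camera_angle + d) % 360.0, (display_bearing + d) % 360.0
```

The threshold implements the "larger than a predetermined value" test, so small detection jitter does not make the displayed region flicker around the screen.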
Referring to fig. 1 and 22, in some embodiments, the camera 81 is disposed on an end surface of the main body 89, and the rotation shaft 83 of the camera 81 is parallel to the axis of the main body 89. The control method further comprises the following steps:
S60: after the camera 81 rotates for a preset number of turns and no human body appears in the scene images, control the display device 100 to enter the sleep mode or control the display device 100 to stop working.
Referring to fig. 23 in combination, in some embodiments, the display device 100 further includes a third control module 60. S60 may be implemented by the third control module 60. That is, the third control module 60 may be configured to control the display apparatus 100 to enter the sleep mode or control the display apparatus 100 to stop working after the camera 81 rotates for a preset number of turns and no human body appears in the scene images.
Referring to fig. 4 in conjunction, in some embodiments, S60 may be implemented by processor 82. That is, the processor 82 may be configured to control the display apparatus 100 to enter the sleep mode or control the display apparatus 100 to stop operating after the camera 81 rotates a preset number of turns and no human body exists in the scene image.
Specifically, after the camera 81 rotates for a preset number of full turns (for example, three turns) and no human body exists in any acquired scene image, indicating that no user is near the display device 100, the third control module 60 (processor 82) may, in order to save power, control the camera 81 to stop rotating and control the flexible display screen 80 to enter the sleep mode or control the display device 100 to stop working. Thus, the control method of this embodiment helps make the display device 100 more intelligent: when the user does not need the display device 100, it can automatically enter the sleep mode or stop working, which helps save power and extend the battery life of the display device 100.
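The patrol-then-sleep logic of S60 can be sketched as follows. This is an illustrative sketch, not the claimed implementation: `rotate_one_turn`, `capture_image`, and `detect_human` are hypothetical callbacks standing in for the camera and recognition hardware.

```python
def patrol_and_sleep(rotate_one_turn, capture_image, detect_human, preset_turns=3):
    """Rotate the camera a preset number of full turns; if no human body is
    found in any captured scene image, report that the device should sleep.

    The three callback arguments are placeholders for the hardware
    interfaces described in the embodiment (not real APIs).
    """
    for _ in range(preset_turns):
        rotate_one_turn()                 # one full rotation of the camera
        if detect_human(capture_image()):
            return "working"              # user found: keep displaying
    return "sleep"                        # no user after preset turns: save power
```

A caller would map `"sleep"` to entering the sleep mode (or powering down) and `"working"` to resuming the normal display flow of S30/S40.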
Referring to fig. 24, in some embodiments, the control method further includes:
s70: when the display apparatus 100 is in the sleep mode and the display apparatus 100 receives the wake-up signal, the display apparatus 100 is controlled to enter the operation mode.
Referring to fig. 25, in some embodiments, the display device 100 further includes a fourth control module 70. S70 may be implemented by the fourth control module 70. That is, the fourth control module 70 may be configured to control the display apparatus 100 to enter the operating mode when the display apparatus 100 is in the sleep mode and the display apparatus 100 receives the wake-up signal.
Referring to fig. 4 in conjunction, in some embodiments, S70 may be implemented by processor 82. That is, the processor 82 may be configured to control the display apparatus 100 to enter the operating mode when the display apparatus 100 is in the sleep mode and the display apparatus 100 receives the wake-up signal.
Specifically, after the display device 100 is in the sleep mode, when the user needs to reuse the display device 100, the user can directly operate to wake up the display device 100, thereby avoiding the waiting time for the display device 100 to be powered on again. Thus, the user experience is improved.
In some embodiments, the wake-up signal includes a signal generated by a touch operation or a gesture operation received by the flexible display 80 or a timing signal set by the display device 100, wherein the sensor for sensing the touch operation and the gesture operation may be a flexible sensor.
Specifically, the user may directly touch the screen of the flexible display screen 80 to wake up the display device 100, perform a preset gesture operation on the flexible display screen 80 to wake up the display device 100, or set a timing signal to wake up the display device 100 automatically; the user may also send a wake-up signal through a controller wirelessly connected to the display device 100.
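The wake-up handling of S70 amounts to a small state transition. The sketch below is illustrative only; the string tags for the signal sources (touch, gesture, timer, wireless controller) are hypothetical labels for the sources named in the embodiment.

```python
def handle_wake_signal(state, signal):
    """Return the next device state given a possible wake-up signal.

    Only a sleeping device reacts to a recognized wake-up source; any
    other combination leaves the state unchanged.
    """
    wake_sources = {"touch", "gesture", "timer", "controller"}
    if state == "sleep" and signal in wake_sources:
        return "working"   # S70: enter the operation mode
    return state           # ignore unknown signals, or already working
```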
Referring to fig. 26, the present embodiment further provides a computer device 200. The computer device 200 may be a mobile phone, a tablet computer, a notebook computer, a smart speaker, etc. The computer device 200 includes a memory 87, an internal memory 88, and a processor 82 connected by a system bus 86. The memory 87 stores computer readable instructions. The internal memory 88 provides an environment for executing the computer readable instructions in the memory 87. The processor 82 may be used to provide computing and control capabilities to support the operation of the overall computer device. The processor 82 can execute the instructions in the memory 87, and in doing so performs the control method of the display device of any of the above embodiments, for example, S10: controlling the camera to rotate and acquiring a scene image; S20: judging whether a human body exists in the scene image; S30: when a human body exists in the scene image, obtaining the relative position of the human body relative to the flexible display screen according to the scene image and the motion information of the camera; and S40: controlling a partial area of the flexible display screen to display according to the relative position. It will be understood by those skilled in the art that the configuration shown in fig. 26 is only a schematic diagram of the part of the configuration related to the present application and does not constitute a limitation on the computer device 200 to which the present application is applied; a specific computer device 200 may include more or fewer components than those shown in the drawings, combine some components, or have a different arrangement of components.
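One pass of steps S10 through S40 can be summarized in a single control-loop sketch. As above, this is not the claimed implementation; all five callback parameters are hypothetical stand-ins for the camera, recognition, positioning, and display interfaces.

```python
def control_loop(rotate_camera, capture_image, detect_human, locate_human, display_region):
    """One pass of the control method, hardware access factored into callbacks:
      S10: rotate the camera and acquire a scene image;
      S20: judge whether a human body exists in the scene image;
      S30: obtain the body's position relative to the flexible screen from
           the image and the camera's motion information;
      S40: display on only the partial screen area facing that position.
    """
    motion_info = rotate_camera()                  # S10: e.g. current rotation angle
    image = capture_image()
    if not detect_human(image):                    # S20
        return None                               # no user: nothing to display
    position = locate_human(image, motion_info)    # S30
    display_region(position)                       # S40
    return position
```

Factoring the hardware into callbacks keeps the method testable without a physical cylindrical screen: each step can be stubbed independently.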
It will be understood by those skilled in the art that all or part of the processes of the methods of the above embodiments may be implemented by a computer program, which can be stored in a non-volatile computer readable storage medium, and when executed, can include the processes of the above embodiments of the methods. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), or the like.
In the description herein, references to the description of the terms "one embodiment," "some embodiments," "an illustrative embodiment," "an example," "a specific example," or "some examples" or the like mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The above-mentioned embodiments express only several embodiments of the present application, and although their description is specific and detailed, this should not be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.