Disclosure of Invention
Embodiments of the invention aim to provide a display driving method, a display driving apparatus, an electronic device and an intelligent display system, so that the definition of different areas on a display can be adjusted according to a user's field of view. The specific technical scheme is as follows:
in a first aspect of embodiments of the present invention, there is provided a display driving method, the method including:
determining a visual field area of a user on a display as a gazing area;
sequentially scanning each pixel row in the gazing area in the display;
and sequentially, for each pixel group in the display that is not in the gazing area, simultaneously scanning all pixel rows in the pixel group, wherein each pixel group comprises N adjacent pixel rows, and N is a positive integer greater than 1.
In a possible embodiment, the determining the visual field area of the user on the display as the gazing area includes:
determining codes corresponding to pixel values in a preset information line of an image to be displayed according to a preset correspondence between codes and pixel values, to obtain encoded information, wherein the encoded information includes position sub-information, and the preset information line is added to the image to be displayed by an encoding device according to the encoded information and the correspondence;
and determining the area represented by the position sub-information, to obtain the gazing area.
In one possible embodiment, the location sub-information is obtained by:
determining a position and an azimuth angle of the user's eyes relative to the display according to a captured image of the user's eyes;
determining the visual field area of the user on the display according to the position and the azimuth angle;
and generating position sub-information representing the visual field area.
In a possible embodiment, the encoded information further includes one or more of: region adjustment sub-information and compression mode sub-information;
the method further comprises the following steps:
determining N according to the compression mode represented by the compression mode sub information;
the determining the area represented by the position sub-information to obtain the gazing area includes:
adjusting the area represented by the position sub-information according to the region adjustment sub-information, to obtain the gazing area.
In one possible embodiment, the sequentially scanning each pixel row in the gazing area in the display includes:
determining a first scanning order of the first pixels located in the gazing area when each pixel row in the gazing area in the display is scanned sequentially;
determining a first turn-on order of the first switches controlling the first pixels according to the pixel island to which each first pixel belongs and the first scanning order;
sequentially turning on the first switches in the pixel islands according to the first turn-on order;
the simultaneously scanning, sequentially for each pixel group in the display that is not in the gazing area, all pixel rows in the pixel group includes:
determining a second scanning order of the second pixels not in the gazing area when, sequentially for each pixel group in the display that is not in the gazing area, all pixel rows in the pixel group are scanned simultaneously;
determining a second turn-on order of the second switches controlling the second pixels according to the pixel island to which each second pixel belongs and the second scanning order;
and sequentially turning on the second switches in the pixel islands according to the second turn-on order.
In a second aspect of embodiments of the present invention, there is provided a display driving apparatus, the apparatus including:
the gazing area determining module is used for determining a visual field area of a user on the display as a gazing area;
the first scanning module is used for scanning each pixel row in the gazing area in the display in sequence;
and a second scanning module, configured to, sequentially for each pixel group in the display that is not in the gazing area, simultaneously scan all pixel rows in the pixel group, where each pixel group includes N adjacent pixel rows, and N is a positive integer greater than 1.
In one possible embodiment, the gazing area determining module determines a visual field area of the user on the display as the gazing area, including:
determining codes corresponding to pixel values in a preset information line of an image to be displayed according to a preset correspondence between codes and pixel values, to obtain encoded information, wherein the encoded information includes position sub-information, and the preset information line is added to the image to be displayed by an encoding device according to the encoded information and the correspondence;
and determining the area represented by the position sub-information, to obtain the gazing area.
In one possible embodiment, the location sub-information is obtained by:
determining a position and an azimuth angle of the user's eyes relative to the display according to a captured image of the user's eyes;
determining the visual field area of the user on the display according to the position and the azimuth angle;
and generating position sub-information representing the visual field area.
In a possible embodiment, the encoded information further includes one or more of: region adjustment sub-information and compression mode sub-information;
the first scanning module is further configured to determine N according to the compression mode represented by the compression mode sub information;
the gazing area determining module determining the area represented by the position sub-information to obtain the gazing area includes:
adjusting the area represented by the position sub-information according to the region adjustment sub-information, to obtain the gazing area.
In one possible embodiment, the first scanning module sequentially scanning each pixel row in the gazing area in the display includes:
determining a first scanning order of the first pixels located in the gazing area when each pixel row in the gazing area in the display is scanned sequentially;
determining a first turn-on order of the first switches controlling the first pixels according to the pixel island to which each first pixel belongs and the first scanning order;
sequentially turning on the first switches in the pixel islands according to the first turn-on order;
the second scanning module, sequentially for each pixel group in the display that is not in the gazing area, simultaneously scanning all pixel rows in the pixel group includes:
determining a second scanning order of the second pixels not in the gazing area when, sequentially for each pixel group in the display that is not in the gazing area, all pixel rows in the pixel group are scanned simultaneously;
determining a second turn-on order of the second switches controlling the second pixels according to the pixel island to which each second pixel belongs and the second scanning order;
and sequentially turning on the second switches in the pixel islands according to the second turn-on order.
In a third aspect of the embodiments of the present invention, an electronic device is provided, which includes a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor adapted to perform the method steps of any of the above first aspects when executing a program stored in the memory.
In a fourth aspect of the embodiments of the present invention, there is provided an intelligent display system, including: a host and a display;
the host comprises image acquisition equipment and a processor, and the display comprises a panel and a Field Programmable Gate Array (FPGA);
the image acquisition equipment is used for shooting human eye images of a user;
the processor is used for determining a visual field area of the user on the display according to the shot human eye image;
the FPGA is used for acquiring the visual field area determined by the processor and taking the visual field area as a gazing area; sequentially scanning each pixel row in the gazing area in the panel; and sequentially, for each pixel group in the panel that is not in the gazing area, simultaneously scanning all pixel rows in the pixel group, wherein each pixel group comprises N adjacent pixel rows, and N is a positive integer greater than 1;
the panel is used for displaying an image to be displayed under the driving of the FPGA.
In a fifth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored therein a computer program which, when executed by a processor, performs the method steps of any one of the above-mentioned first aspects.
The embodiment of the invention has the following beneficial effects:
The display driving method, display driving apparatus, electronic device and intelligent display system provided by the embodiments of the invention can adjust the scanning timing of each pixel row of the display according to the gazing area. Because each pixel row in the gazing area is scanned sequentially, the image data input when scanning different pixel rows is different; that is, each pixel row in the gazing area displays different image data. For the pixel rows not in the gazing area, the N rows in each pixel group are scanned simultaneously, so the image data input when scanning the N pixel rows in the same pixel group is the same; that is, the N pixel rows in the same pixel group display the same image data. Therefore, the non-gazing area of the display displays only 1/N of the image data, while the gazing area displays the complete image data. In other words, the image displayed in the non-gazing area has lower definition and the image displayed in the gazing area has higher definition, so that the definition of different areas on the display can be adjusted according to the user's field of view.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived from the embodiments given herein by one of ordinary skill in the art, are within the scope of the invention.
In order to describe the display driving method provided by the embodiments of the present invention more clearly, the execution body of the method is described first. The display driving method provided by the invention can be applied to any electronic device with display driving capability; the electronic device may be integrated in a display, or may be independent of the display. For convenience of description, an electronic device integrated in a display is taken as an example.
In one possible embodiment, the electronic device is an FPGA (Field Programmable Gate Array) integrated inside the display. Illustratively, as shown in fig. 1, the FPGA is connected to the host through a DP (DisplayPort) interface and to the panel in the display.
The host is used for sending image data representing an image to be displayed to the FPGA, and the FPGA drives the panel to display the image data. In light field display, the definition of different areas on the display needs to be adjusted according to the user's field of view, whereas in the related art only the overall definition of the display can be adjusted.
Based on this, an embodiment of the present invention provides a display driving method, as shown in fig. 2, including:
S201, determining a visual field area of the user on the display as a gazing area.
S202, scanning each pixel row in the gazing area in the display in sequence.
S203, sequentially, for each pixel group in the display that is not in the gazing area, simultaneously scanning all pixel rows in the pixel group, wherein each pixel group comprises N adjacent pixel rows, and N is a positive integer greater than 1.
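The flow of S201 to S203 can be sketched in software as follows. This is a minimal illustrative model, not the patent's actual implementation (the helper name `build_scan_schedule` is hypothetical, and the real scanning is performed by drive circuitry such as an FPGA): rows inside the gazing area get one scan step each, while adjacent rows outside it are grouped into blocks of N that are driven together.

```python
def build_scan_schedule(total_rows, gaze_rows, n):
    """Return a list of scan steps; each step is a tuple of row indices
    driven simultaneously. Rows in `gaze_rows` (a set) get one step each;
    the remaining rows are grouped into blocks of `n` adjacent rows."""
    schedule = []
    pending = []  # adjacent non-gazing rows waiting to form a group
    for row in range(total_rows):
        if row in gaze_rows:
            if pending:               # flush a partial group before the gazing area
                schedule.append(tuple(pending))
                pending = []
            schedule.append((row,))   # gazing-area rows are scanned individually
        else:
            pending.append(row)
            if len(pending) == n:     # a full group of N rows is scanned at once
                schedule.append(tuple(pending))
                pending = []
    if pending:
        schedule.append(tuple(pending))
    return schedule

# Rows 2-5 are in the gazing area; N = 2.
steps = build_scan_schedule(8, {2, 3, 4, 5}, 2)
```

With these inputs the schedule is [(0, 1), (2,), (3,), (4,), (5,), (6, 7)]: the non-gazing rows are driven two at a time and thus display only half of the image data in the row dimension.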
With this embodiment, the scanning timing of each pixel row of the display can be adjusted according to the gazing area. Because each pixel row in the gazing area is scanned sequentially, the image data input when scanning different pixel rows is different; that is, the pixel rows in the gazing area display different image data. For the pixel rows not in the gazing area, the N rows in each pixel group are scanned simultaneously, so the image data input when scanning the N pixel rows in the same pixel group is the same; that is, the N pixel rows in the same pixel group display the same image data. Therefore, the non-gazing area of the display displays only 1/N of the image data, while the gazing area displays the complete image data: the definition of the image displayed in the non-gazing area is low, the definition of the image displayed in the gazing area is high, and the definition of different areas on the display is adjusted according to the user's field of view.
The foregoing S201 to S203 will be explained below, respectively:
In S201, the position of the gazing area may be calculated by the execution body, or may be calculated by another device and sent to the execution body. Illustratively, taking the architecture shown in fig. 1 as an example, the host collects eye data of the user through a sensor connected to the host, calculates the position of the gazing area from the collected eye data, and sends information indicating that position to the FPGA; the FPGA determines the gazing area by parsing the information. Alternatively, the host may send the collected eye data to the FPGA so that the FPGA determines the gazing area from the eye data. The sensor can be any sensor capable of acquiring the position or azimuth angle of the user's eyes, such as an image acquisition device. For example, in one possible embodiment, an image acquisition device is externally connected to the host; the image acquisition device captures images of the user's eyes and sends them to the host, and the host determines the position and azimuth angle of the user's eyes relative to the display from the captured images, determines the visual field area of the user on the display from the position and azimuth angle, and generates position sub-information representing that area.
In S202, if at least one pixel in a pixel row is in the gazing area, the pixel row is in the gazing area; if no pixel in a pixel row is in the gazing area, the pixel row is not in the gazing area. Sequential scanning means that at most one of the pixels at the same position in the pixel rows is in an on state at any time. For example, assume that four pixel rows are in the gazing area, referred to as the first, second, third and fourth pixel rows. If the first pixel of the first pixel row is in an on state during the period t0 to t1, then the first pixels of the second, third and fourth pixel rows are in an off state during t0 to t1. The scanning timing of the first, second, third and fourth pixel rows is shown in fig. 3a.
In S203, simultaneous scanning means that the pixels at the same position in the pixel rows are in an on state during the same period. For example, assume that four pixel rows are not in the gazing area, referred to as the fifth, sixth, seventh and eighth pixel rows. If the first pixel of the fifth pixel row is in an on state during t0 to t1, then the first pixels of the sixth, seventh and eighth pixel rows are also in an on state during t0 to t1. The scanning timing of the fifth, sixth, seventh and eighth pixel rows is shown in fig. 3b.
The value of N may differ according to the application scenario, for example 3, 4, 5, 8 or 16, which is not limited in this embodiment. It can be understood that, since the pixel rows in the same pixel group are scanned simultaneously, the pixels at the same position in those rows are turned on simultaneously and therefore display the same image data. The image data displayed by the pixel rows in the same pixel group is thus identical, the image displayed in the non-gazing area is compressed to 1/N in the row dimension, and its definition is correspondingly lower.
As described above, in some embodiments the position of the gazing area is calculated by the host, and information indicating the position of the gazing area (hereinafter referred to as position sub-information) is transmitted to the execution body. The position sub-information may take different forms depending on the connection between the host and the execution body, but it should be data that this connection can transmit.
Illustratively, since the DP (DisplayPort) interface through which the host and the execution body are connected can transmit image data, the position sub-information is represented in the form of an image. How information is transferred between the host and the execution body via image data is described below.
In a possible embodiment, the host adds a preset information line to the image to be displayed. The preset information line is one or more rows of pixels used for representing the encoded information: the pixel value of each pixel in the preset information line corresponds to an M-bit code in the encoded information, where M is any positive integer; the x-th pixel in the preset information line corresponds to bits (x-1)*M+1 through x*M of the encoded information, where x is any positive integer ranging from 1 to L, and L is the total number of pixels in the preset information line.
For convenience of description, M = 2 is taken as an example; the principle is the same for M = 1 and M > 2 and is not repeated here. When M = 2, each pixel corresponds to two bits of the encoded information. With the encoded information represented in binary, a two-bit code has four possible values: "00", "01", "10" and "11", to which four colors are associated in advance. Assume the first color corresponds to "00", the second to "01", the third to "10" and the fourth to "11". If the code corresponding to a pixel in the preset information line is "00", the host sets that pixel's value to the first color; if the code is "01", the host sets it to the second color; and so on. For example, if the first and second bits of the encoded information are both 0, the code corresponding to the first pixel in the preset information line is "00", so the host sets the value of that pixel to the first color.
The first, second, third and fourth colors may be any four colors, but the color difference between any two of them should be as large as possible. Illustratively, in one possible embodiment, the first, second, third and fourth colors may be black, blue, red and white, respectively.
After receiving the image to be displayed with the preset information line added, the execution body determines the codes corresponding to the pixel values in the preset information line according to the preset correspondence between codes and pixel values, to obtain the encoded information. Information is thus transmitted between the host and the execution body through image data.
For example, if the first pixel in the preset information line is the first color, the execution body determines that the first and second bits of the encoded information are "0" and "0"; if the second pixel is the second color, the execution body determines that the third and fourth bits are "0" and "1"; and so on, until the execution body has determined every bit of the encoded information.
According to the length of the encoded information and the number of pixels in the preset information line, the execution body may determine the codes corresponding to all pixel values in the preset information line, or only those corresponding to part of them. For example, if the preset information line contains 1920 pixels and the encoded information is 80 bits long, the execution body only needs to determine the codes corresponding to the first 40 pixels; the encoded information can then be determined without decoding the remaining 1880 pixels.
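The encoding and decoding of the preset information line described above can be sketched as follows, assuming M = 2 and hypothetical RGB values for the four colors (the text only requires four colors with large mutual color differences; the concrete values and helper names here are illustrative):

```python
# Hypothetical RGB pixel values for the four colors; any four colors
# with large mutual color differences would work.
CODE_TO_COLOR = {
    "00": (0, 0, 0),        # first color: black
    "01": (0, 0, 255),      # second color: blue
    "10": (255, 0, 0),      # third color: red
    "11": (255, 255, 255),  # fourth color: white
}
COLOR_TO_CODE = {color: code for code, color in CODE_TO_COLOR.items()}

def encode_info_line(bits):
    """Host side: map each 2-bit group of a binary string to a pixel value."""
    return [CODE_TO_COLOR[bits[i:i + 2]] for i in range(0, len(bits), 2)]

def decode_info_line(pixels, num_bits):
    """Execution-body side: read back only as many pixels as needed
    to recover `num_bits` bits of encoded information."""
    needed = num_bits // 2
    return "".join(COLOR_TO_CODE[p] for p in pixels[:needed])

line = encode_info_line("0001")   # first pixel black ("00"), second blue ("01")
```

Note that `decode_info_line` reads only the first `num_bits / 2` pixels, matching the point above that the remaining pixels of the information line need not be decoded.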
The encoded information should include at least the position sub-information, so that the execution body can determine the gazing area from it. According to actual requirements, the encoded information may also include other information besides the position sub-information. Illustratively, the encoded information may further include one or more of the following:
region adjustment sub-information and compression mode sub-information.
The execution body determines the value of N according to the compression mode represented by the compression mode sub-information. As described above, since the non-gazing area displays only 1/N of the image data, this corresponds to compressing the image data to 1/N. Therefore, if the compression mode indicated by the compression mode sub-information is one-half compression, the execution body determines N = 2; if it is one-quarter compression, N = 4; and so on.
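The mapping from compression mode to N can be sketched as a simple lookup; the mode labels used here are illustrative names, not values defined by the text:

```python
# Hypothetical labels for the compression modes carried by the
# compression mode sub-information, mapped to the group size N.
COMPRESSION_TO_N = {
    "none": 1,     # no compression
    "half": 2,     # image data compressed to 1/2 in the row dimension
    "quarter": 4,  # compressed to 1/4
    "eighth": 8,   # compressed to 1/8
}

def determine_n(mode):
    """Return the number of adjacent rows scanned together outside the gazing area."""
    return COMPRESSION_TO_N[mode]
```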
The region adjustment sub-information indicates an adjustment mode. When determining the gazing area, the execution body adjusts the area indicated by the position sub-information according to this adjustment mode, and uses the adjusted area as the gazing area.
It will be appreciated that in some application scenarios the gazing area may undergo minor changes due to fine adjustments of the pose of the user's eyes; for example, as the eyes move backwards, the gazing area expands to a certain size. Although updating the position sub-information would keep the indicated area consistent with the changed gazing area, re-determining the position sub-information occupies system resources. For example, if the position sub-information is expressed as the vertex coordinates of the two vertices on a diagonal of the gazing area, re-determining it means re-computing those vertex coordinates for the changed area. Instead, the host can determine how the gazing area changes from the adjustment of the eye pose and generate region adjustment sub-information indicating that change, so that the execution body adjusts the original gazing area into the changed gazing area according to the region adjustment sub-information. Since the position sub-information does not need to be re-determined, fewer system resources are occupied.
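One plausible form of the region adjustment described above is sketched below. The concrete encoding of the adjustment mode is not specified in the text, so the diagonal-vertex representation and per-edge offsets used here are purely illustrative assumptions:

```python
def adjust_gaze_area(area, adjustment):
    """Hypothetical region adjustment: `area` is (x1, y1, x2, y2), the
    coordinates of the two vertices on a diagonal of the gazing area, and
    `adjustment` is a per-edge offset (dx1, dy1, dx2, dy2) assumed to be
    carried by the region adjustment sub-information."""
    return tuple(a + d for a, d in zip(area, adjustment))

# Expand the area by 10 pixels on every side, e.g. as the eyes move backwards.
new_area = adjust_gaze_area((100, 100, 500, 400), (-10, -10, 10, 10))
```

Only the small offset tuple is transmitted, so the full vertex coordinates need not be re-determined, matching the resource-saving argument above.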
In addition to the position sub-information, the region adjustment sub-information and the compression mode sub-information, the encoded information may include other information according to actual requirements, which is not limited in this embodiment. For example, in one possible embodiment, the information represented by each bit of the encoded information is shown in Table 1:
TABLE 1 Meanings of the bits of the encoded information
Taking the second row of Table 1 as an example: [79:64] indicates that bits 79 to 64 of the encoded information form one piece of sub-information, 16 indicates that this sub-information is 16 bits long, and "flag bit" indicates that this sub-information is used as flag bits; the third to ninth rows are read in the same way. Bits [63:56] carry the compression mode sub-information, bits [39:8] carry the position sub-information, and bits [7:0] carry the region adjustment sub-information.
The codes of [63:56] are shown in Table 2:
Bit     | Code | Information represented
[63:62] | 00   | Sequential scanning
[63:62] | 01   | Non-sequential scanning
[61:60] | 00   | No compression
[61:60] | 01   | One-half compression
[61:60] | 10   | One-quarter compression
[61:60] | 11   | One-eighth compression
[59:56] | 0000 | Reserved
TABLE 2 Meanings of the compression mode sub-information codes
The second row of Table 2 indicates that when the code of [63:62] is "00", the scanning order is sequential scanning; the third row indicates that when the code of [63:62] is "01", the scanning order is non-sequential scanning; the fourth row indicates that when the code of [61:60] is "00", no compression is applied; the fifth row indicates that when the code of [61:60] is "01", the compression mode is one-half compression; and so on.
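Extracting the fields of Table 2 from an 80-bit encoded-information word can be sketched as follows, a minimal illustration assuming the [79:0] bit layout described above:

```python
def parse_compression_sub_info(word):
    """Extract the scan-order and compression fields of the compression
    mode sub-information from an 80-bit encoded-information word,
    following the code assignments of Table 2."""
    scan_code = (word >> 62) & 0b11   # bits [63:62]
    comp_code = (word >> 60) & 0b11   # bits [61:60]
    scan_order = {0b00: "sequential", 0b01: "non-sequential"}[scan_code]
    # No compression / one-half / one-quarter / one-eighth -> N = 1/2/4/8.
    n = {0b00: 1, 0b01: 2, 0b10: 4, 0b11: 8}[comp_code]
    return scan_order, n

# Example word with [63:62] = 01 (non-sequential) and [61:60] = 10 (quarter).
word = (0b01 << 62) | (0b10 << 60)
```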
Sequential scanning means scanning in the order of the row coordinates of the pixel rows; non-sequential scanning means scanning the pixel rows in the gazing area first and then the pixel rows not in the gazing area. For example, assume four pixel rows, referred to as the first, second, third and fourth pixel rows, where the second and third pixel rows are in the gazing area, the first and fourth pixel rows are not, and the order of the row coordinates is: first pixel row → second pixel row → third pixel row → fourth pixel row. Under sequential scanning, the scanning order is first pixel row → second pixel row → third pixel row → fourth pixel row; under non-sequential scanning, it is: second pixel row → third pixel row → first pixel row → fourth pixel row.
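The difference between the two scanning orders can be sketched as follows (a hypothetical helper, using the four-row example above):

```python
def scan_row_order(rows, gaze_rows, non_sequential):
    """Return the order in which pixel rows are scanned. Sequential
    scanning follows the row coordinates; non-sequential scanning visits
    the gazing-area rows first, then the remaining rows."""
    if not non_sequential:
        return list(rows)
    in_gaze = [r for r in rows if r in gaze_rows]
    out_gaze = [r for r in rows if r not in gaze_rows]
    return in_gaze + out_gaze

rows = [1, 2, 3, 4]  # first..fourth pixel rows, in row-coordinate order
order = scan_row_order(rows, {2, 3}, non_sequential=True)
```

With rows 2 and 3 in the gazing area, the non-sequential order is [2, 3, 1, 4], matching the example above.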
In some application scenarios, in order to control each pixel in the display more conveniently, the pixels in the display are divided into a plurality of pixel islands, each pixel island including a plurality of pixels. In these application scenarios, the foregoing S202 may be implemented in the manner shown in fig. 4:
S2021, determining a first scanning order of the first pixels located in the gazing area when each pixel row in the gazing area in the display is scanned sequentially.
For example, assuming that two pixel rows are in the gazing area, referred to as the first pixel row and the second pixel row, the first scanning order is: the first pixel of the first pixel row, the first pixel of the second pixel row, the second pixel of the first pixel row, the second pixel of the second pixel row, and so on.
S2022, determining a first turn-on order of the first switches controlling the first pixels according to the pixel island to which each first pixel belongs and the first scanning order.
The division of the pixel islands may differ according to the application scenario; for example, in one possible embodiment, every 11 pixels form one pixel island. Each pixel is controlled by one switch, and different pixels can be controlled by the same switch, as shown in fig. 5 for example. In fig. 6, there are two pixel islands of 11 pixels each: the 1st, 3rd, 5th, 7th, 9th and 11th pixels of the first pixel island and the 2nd, 4th, 6th, 8th and 10th pixels of the second pixel island are controlled by the MUX (data selector) 1 switch, while the 2nd, 4th, 6th, 8th and 10th pixels of the first pixel island and the 1st, 3rd, 5th, 7th, 9th and 11th pixels of the second pixel island are controlled by the MUX2 switch.
S2023, sequentially turning on the first switches of the pixel islands according to the first turn-on order.
Because the first turn-on order is determined from the first scanning order, sequentially turning on the first switches of the pixel islands according to the first turn-on order turns on the first pixels in the first scanning order, thereby scanning each pixel row in the gazing area in the display sequentially.
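The mapping from pixels to MUX switches in the two-island example, and the derivation of a turn-on order from a scanning order (S2022), can be sketched as follows. The helper names are hypothetical; the island/position-to-switch assignment follows the example above:

```python
def mux_for_pixel(island, position):
    """Switch assignment from the two-island example: odd positions of
    island 1 and even positions of island 2 share MUX1; the remaining
    pixels share MUX2. `island` is 1 or 2; `position` ranges over 1..11."""
    odd = position % 2 == 1
    if (island == 1 and odd) or (island == 2 and not odd):
        return "MUX1"
    return "MUX2"

def turn_on_order(scan_order):
    """Map a scanning order of (island, position) pixels to the order in
    which their controlling switches must be turned on."""
    return [mux_for_pixel(i, p) for i, p in scan_order]

# Scan the first pixel of each island, then the second pixel of each island.
seq = turn_on_order([(1, 1), (2, 1), (1, 2), (2, 2)])
```

For this scanning order the switches are driven MUX1, MUX2, MUX2, MUX1, illustrating how the turn-on order depends on both the pixel island and the scanning order.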
Similarly to the pixel rows in the gazing area, for the pixel rows not in the gazing area, the foregoing S203 can be implemented in the manner shown in fig. 5:
S2031, determining a second scanning order of the second pixels not in the gazing area when, sequentially for each pixel group in the display that is not in the gazing area, all pixel rows in the pixel group are scanned simultaneously.
S2032, determining a second turn-on order of the second switches controlling the second pixels according to the pixel island to which each second pixel belongs and the second scanning order.
S2033, sequentially turning on the second switches in the pixel islands according to the second turn-on order.
With this embodiment, the scanning of pixel rows is converted, through pixel rearrangement, into control of the individual pixels in each pixel island, which effectively improves scanning accuracy.
Corresponding to the foregoing display driving method, an embodiment of the present invention further provides a display driving apparatus, as shown in fig. 7, including:
a gazing region determining module 701, configured to determine a field of view region of the user on the display as a gazing region;
a first scanning module 702, configured to scan each pixel row in the gazing region in the display in turn;
a second scanning module 703, configured to, sequentially for each pixel group in the display that is not in the gazing area, simultaneously scan all pixel rows in the pixel group, where each pixel group includes N adjacent pixel rows, and N is a positive integer greater than 1.
In one possible embodiment, the gazing area determining module determines a visual field area of the user on the display as the gazing area, including:
determining codes corresponding to pixel values in a preset information line in an image to be displayed according to a preset correspondence between codes and pixel values, so as to obtain encoded information, where the encoded information includes position sub-information, and the preset information line is added to the image to be displayed by a decoding device that decodes the image to be displayed according to the encoded information and the correspondence;
and determining the area represented by the position sub-information to obtain the gazing area.
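As a rough sketch of how the encoded information might be recovered from the preset information line: each pixel value maps to a code through the preset correspondence, and the codes are concatenated into the encoded bytes. The 2-bits-per-pixel correspondence table and the byte layout of the position sub-information below are illustrative assumptions, not taken from the embodiment.

```python
# Assumed correspondence between pixel values and 2-bit codes.
PIXEL_TO_CODE = {0: 0b00, 85: 0b01, 170: 0b10, 255: 0b11}
CODE_TO_PIXEL = {code: pixel for pixel, code in PIXEL_TO_CODE.items()}

def decode_info_line(info_line):
    """Recover the encoded information (bytes) from the pixel values
    of the preset information line."""
    bits = "".join(f"{PIXEL_TO_CODE[v]:02b}" for v in info_line)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits) - 7, 8))

# Round trip: encode assumed position sub-information (x, y, w, h)
# into a pixel line, then decode it back.
payload = bytes([16, 8, 64, 32])
line = [CODE_TO_PIXEL[(b >> s) & 0b11] for b in payload for s in (6, 4, 2, 0)]
assert decode_info_line(line) == payload
```

Because the correspondence is preset on both sides, the driving device can read the gazing-area position directly out of the image data without a separate control channel.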
In one possible embodiment, the location sub-information is obtained by:
determining the position and the azimuth angle of the user's eyes relative to the display according to a captured human-eye image of the user;
determining a field-of-view area of the user on the display based on the position and the azimuth angle;
generating position sub-information representing the field-of-view area.
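The three steps above might be sketched as follows, under simplifying assumptions that are not part of the embodiment: the display lies in the plane z = 0 of the eye-tracking coordinate system, the azimuth and elevation angles are measured from the display normal, and the field-of-view area is a fixed-size square around the gaze point.

```python
import math

def field_of_view_region(eye_pos, azimuth_deg, elevation_deg, half_size=64):
    """Project the gaze direction onto the display plane (z = 0) and
    return the field-of-view region (x, y, w, h) around the gaze point."""
    ex, ey, ez = eye_pos  # eye position relative to the display, ez > 0
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    # Gaze point: walk from the eye along the gaze direction down to z = 0.
    gx = ex + ez * math.tan(az)
    gy = ey + ez * math.tan(el)
    return (gx - half_size, gy - half_size, 2 * half_size, 2 * half_size)

# Eye 500 units in front of the display origin, looking straight ahead:
print(field_of_view_region((0, 0, 500), 0, 0))  # → (-64.0, -64.0, 128, 128)
```

The resulting (x, y, w, h) tuple is what would then be serialized as the position sub-information.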
In a possible embodiment, the encoded information further includes one or both of region adjustment sub-information and compression mode sub-information;
the first scanning module is further configured to determine N according to the compression mode represented by the compression mode sub-information;
the gazing area determining module determining the area represented by the position sub-information to obtain the gazing area includes:
adjusting the area represented by the position sub-information according to the region adjustment sub-information to obtain the gazing area.
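A toy sketch of how these two optional sub-information fields could be consumed; the margin encoding of the region adjustment sub-information and the mode-to-N table are assumptions for illustration only.

```python
# Assumed mapping from compression mode sub-information to N
# (the number of adjacent pixel rows per pixel group).
MODE_TO_N = {0: 2, 1: 4, 2: 8}

def gazing_area(position_sub, adjust_sub):
    """Expand the (x, y, w, h) area from the position sub-information
    by the margin carried in the region adjustment sub-information."""
    x, y, w, h = position_sub
    margin = adjust_sub  # assumed: a margin in pixels
    return (x - margin, y - margin, w + 2 * margin, h + 2 * margin)

print(gazing_area((16, 8, 64, 32), 4))  # → (12, 4, 72, 40)
print(MODE_TO_N[1])                     # → 4
```

Enlarging the area slightly gives the gaze estimate some tolerance, and picking N per compression mode lets the same driver trade off sharpness against scan time outside the gazing area.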
In one possible embodiment, the first scanning module sequentially scanning each pixel row in the gazing area in the display includes:
determining a first scanning sequence of the first pixels in the gazing area when each pixel row in the gazing area in the display is scanned in turn;
determining a first turn-on sequence of the first switches that control the first pixels according to the pixel island to which each first pixel belongs and the first scanning sequence;
sequentially turning on the first switches in the pixel islands according to the first turn-on sequence;
the second scanning module scanning, for each pixel group in the display that is not in the gazing area, all pixel rows in the pixel group simultaneously includes:
determining a second scanning sequence of the second pixels not in the gazing area when, for each pixel group in the display that is not in the gazing area in turn, all pixel rows in the pixel group are scanned simultaneously;
determining a second turn-on sequence of the second switches that control the second pixels according to the pixel island to which each second pixel belongs and the second scanning sequence;
and sequentially turning on the second switches in the pixel islands according to the second turn-on sequence.
Referring to fig. 8, an embodiment of the present invention further provides an intelligent display system, including:
a host 810 and a display 820.
The host 810 comprises an image acquisition device 811 and a processor 812, the display 820 comprises a panel 822 and a field programmable gate array FPGA 821;
the image acquisition device 811 is configured to capture human-eye images of the user;
the processor 812 is configured to determine a visual field area of the user on the display according to the captured human eye image;
the FPGA 821 is configured to acquire the visual field area determined by the processor as a gazing area; sequentially scan each pixel row in the gazing area in the panel 822; and, for each pixel group in the panel 822 that is not in the gazing area in turn, scan all pixel rows in the pixel group simultaneously, where each pixel group includes N adjacent pixel rows, and N is a positive integer greater than 1;
the panel 822 is used for displaying an image to be displayed under the driving of the FPGA 821.
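The drive scheme the FPGA 821 applies to the panel 822 can be summarized with a small scheduling sketch: rows inside the gazing area get one gate event each, while rows outside it are grouped N at a time and driven together. The row count, gazing-area bounds, and N below are illustrative values, not from the embodiment.

```python
def scan_schedule(total_rows, gaze_start, gaze_end, n):
    """Return the scan events in order; each event is the tuple of
    rows whose gate lines are driven simultaneously."""
    events, row = [], 0
    while row < total_rows:
        if gaze_start <= row < gaze_end:
            events.append((row,))  # gazing area: one row per event
            row += 1
        else:
            # Group of up to N rows, stopping at the gazing area boundary.
            limit = gaze_start if row < gaze_start else total_rows
            stop = min(row + n, total_rows, limit)
            events.append(tuple(range(row, stop)))
            row = stop
    return events

# 12 rows, gazing area covering rows 4-7, N = 2:
print(scan_schedule(12, 4, 8, 2))
# → [(0, 1), (2, 3), (4,), (5,), (6,), (7,), (8, 9), (10, 11)]
```

Because grouped rows share one gate event, a frame needs fewer events than it has rows (8 instead of 12 here), which is what lets the non-gazing area trade resolution for drive time.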
The aforementioned processor 812 may refer to a CPU and/or a GPU in the host 810. For example, in one possible embodiment, the processor 812 includes a CPU and a GPU, where the CPU is configured to determine the visual field area of the user on the display according to the captured human-eye image, and the GPU is configured to add a preset information line representing encoded information to the image to be displayed, where the encoded information includes position sub-information representing the position of the visual field area. For the encoded information and the preset information line, reference may be made to the related description above, which is not repeated here.
The architecture of the FPGA 821 may be as shown in fig. 9, and includes a mode control module 8211, a GOA (Gate On Array, array substrate row driver) timing module 8212, a MUX timing module 8213, an image compression module 8214, a data rearrangement module 8215, and a CEDS (Clock Embedded Differential Signaling) module 8216.
The mode control module 8211 is configured to switch the signal source; for example, the signal source may be switched to a DP signal source or to a BIST signal source. The GOA timing module 8212 is configured to control the GOA timing of the panel 822, and the MUX timing module 8213 is configured to control the panel MUX timing; together with the GOA timing module 8212, it implements the scanning of each pixel on the panel. The image compression module 8214 is configured to compress image data, and the data rearrangement module 8215 is configured to implement steps S2021-S2023 and S2031-S2033 described above. The CEDS module 8216 is configured to transmit the compressed image data to the panel 822, so that the panel 822 displays the image data under the driving of the FPGA 821.
The embodiment of the present invention further provides an electronic device, as shown in fig. 10, which includes a processor 1001, a communication interface 1002, a memory 1003, and a communication bus 1004, where the processor 1001, the communication interface 1002, and the memory 1003 communicate with one another via the communication bus 1004;
a memory 1003 for storing a computer program;
the processor 1001 is configured to implement the following steps when executing the program stored in the memory 1003:
determining a visual field area of a user on a display as a gazing area;
sequentially scanning each pixel row in the gazing area in the display;
and scanning all pixel rows in the pixel groups simultaneously sequentially aiming at each pixel group which is not in the gazing area in the display, wherein each pixel group comprises N adjacent pixel rows, and N is a positive integer greater than 1.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic device and other devices.
The memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
In still another embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program realizes the steps of any one of the above display driving methods when executed by a processor.
In yet another embodiment, a computer program product containing instructions is provided, which when run on a computer, causes the computer to perform any of the above-described display driving methods.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When implemented in software, it may be realized wholly or partially in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, they may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device, such as a server or data center, that integrates one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second may be used solely to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a(n) …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
All the embodiments in this specification are described in a related manner; the same or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, for the embodiments of the apparatus, the electronic device, the system, the computer-readable storage medium, and the computer program product, which are substantially similar to the method embodiments, the description is relatively brief; for related matters, reference may be made to the corresponding parts of the description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.