
CN114217691A - A display driving method, device, electronic device and intelligent display system - Google Patents


Info

Publication number
CN114217691A
CN114217691A (application CN202111521857.4A)
Authority
CN
China
Prior art keywords
pixel
display
information
gazing
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111521857.4A
Other languages
Chinese (zh)
Other versions
CN114217691B (en)
Inventor
孙高明
朱文涛
毕育欣
段欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BOE Technology Group Co Ltd
Original Assignee
BOE Technology Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co Ltd filed Critical BOE Technology Group Co Ltd
Priority to CN202111521857.4A priority Critical patent/CN114217691B/en
Publication of CN114217691A publication Critical patent/CN114217691A/en
Application granted granted Critical
Publication of CN114217691B publication Critical patent/CN114217691B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G3/00Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
    • G09G3/20Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
    • G09G3/2007Display of intermediate tones
    • G09G3/2074Display of intermediate tones using sub-pixels

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • Human Computer Interaction (AREA)
  • Control Of Indicators Other Than Cathode Ray Tubes (AREA)

Abstract

Embodiments of the present invention provide a display driving method, an apparatus, an electronic device, and an intelligent display system. The method includes: determining the user's field-of-view area on a display as a gazing area; sequentially scanning each pixel row of the display that is in the gazing area; and, for each pixel group of the display that is not in the gazing area in turn, scanning all pixel rows in that pixel group simultaneously, where each pixel group includes N adjacent pixel rows and N is a positive integer greater than 1. Pixels in the gazing area and the non-gazing area are thus scanned in different ways, preserving the image quality of the gazing area while compressing that of the non-gazing area; that is, the definition of different areas of the display is adjusted according to the user's field of view.


Description

Display driving method and device, electronic equipment and intelligent display system
Technical Field
The invention relates to the technical field of displays, in particular to a display driving method and device, electronic equipment and an intelligent display system.
Background
In some application scenarios, such as light field display, it is desirable for the image inside the user's field of view to be displayed at relatively high resolution and the image outside it at relatively low resolution; that is, the resolution of different areas of the display should follow the user's field of view. In the prior art, however, the definition of every area of the display is uniform, so the definition of different areas cannot be adjusted according to the user's field of view.
Therefore, how to adjust the definition of different areas of the display according to the user's field of view is a technical problem to be solved urgently.
Disclosure of Invention
The embodiment of the invention aims to provide a display driving method, a display driving device, electronic equipment and an intelligent display system, so that definition of different areas on a display can be adjusted according to the visual field of a user. The specific technical scheme is as follows:
in a first aspect of embodiments of the present invention, there is provided a display driving method, the method including:
determining a visual field area of a user on a display as a gazing area;
sequentially scanning each pixel row in the gazing area in the display;
and, for each pixel group of the display that is not in the gazing area in turn, simultaneously scanning all pixel rows in that pixel group, wherein each pixel group comprises N adjacent pixel rows and N is a positive integer greater than 1.
In a possible embodiment, the determining the user's view area on the display as the gazing area includes:
determining, according to a preset correspondence between codes and pixel values, the codes corresponding to the pixel values in a preset information line of an image to be displayed, to obtain encoded information, wherein the encoded information comprises position sub-information, and the preset information line is added to the image to be displayed, according to the encoded information and the correspondence, by the device that supplies the image;
and determining the area represented by the position sub-information to obtain the gazing area.
In one possible embodiment, the location sub-information is obtained by:
determining the position and azimuth angle of the user's eyes relative to the display from a captured image of the user's eyes;
determining the user's field-of-view area on the display based on the position and the azimuth angle;
and generating position sub-information representing the field-of-view area.
In a possible embodiment, the encoded information further includes one or both of region adjustment sub-information and compression mode sub-information;
the method further comprises the following steps:
determining N according to the compression mode represented by the compression mode sub information;
the determining the area represented by the position sub information to obtain a gazing area includes:
and adjusting the area represented by the position sub-information according to the region adjustment sub-information to obtain the gazing area.
In one possible embodiment, said scanning each row of pixels in said display in said gazing region in turn comprises:
determining a first scanning order of first pixels located in a gazing region when each pixel row in the gazing region in the display is scanned in sequence;
determining a first opening sequence of a first switch for controlling each first pixel according to the pixel island to which each first pixel belongs and the first scanning sequence;
sequentially turning on first switches in the pixel islands according to the first turning-on sequence;
the scanning, sequentially for each pixel group of the display not in the gazing area, of all pixel rows in that pixel group simultaneously comprises:
determining a second scanning order of the second pixels not in the gazing area, applied when, for each pixel group not in the gazing area in turn, all pixel rows in that pixel group are scanned simultaneously;
determining a second opening sequence of a second switch for controlling each second pixel according to the pixel island to which each second pixel belongs and the second scanning sequence;
and sequentially starting the second switches in the pixel islands according to the second starting sequence.
In a second aspect of embodiments of the present invention, there is provided a display driving apparatus, the apparatus including:
the gazing area determining module is used for determining a visual field area of a user on the display as a gazing area;
the first scanning module is used for scanning each pixel row in the gazing area in the display in sequence;
and a second scanning module, configured to, for each pixel group of the display not in the gazing area in turn, simultaneously scan all pixel rows in that pixel group, where each pixel group includes N adjacent pixel rows and N is a positive integer greater than 1.
In one possible embodiment, the gazing area determining module determines a visual field area of the user on the display as the gazing area, including:
determining, according to a preset correspondence between codes and pixel values, the codes corresponding to the pixel values in a preset information line of an image to be displayed, to obtain encoded information, wherein the encoded information comprises position sub-information, and the preset information line is added to the image to be displayed, according to the encoded information and the correspondence, by the device that supplies the image;
and determining the area represented by the position sub-information to obtain the gazing area.
In one possible embodiment, the location sub-information is obtained by:
determining the position and the azimuth angle of the eyes of the user relative to the display according to the shot human eye image of the user;
determining a field of view of the user on a display based on the position and the azimuth;
generating position sub-information representing the field of view region.
In a possible embodiment, the encoded information further includes one or both of region adjustment sub-information and compression mode sub-information;
the first scanning module is further configured to determine N according to the compression mode represented by the compression mode sub information;
the gazing area determining module determining the area represented by the position sub-information to obtain the gazing area comprises:
adjusting the area represented by the position sub-information according to the region adjustment sub-information to obtain the gazing area.
In one possible embodiment, the first scanning module sequentially scans each pixel row in the display in the gazing zone, and includes:
determining a first scanning order of first pixels located in a gazing region when each pixel row in the gazing region in the display is scanned in sequence;
determining a first opening sequence of a first switch for controlling each first pixel according to the pixel island to which each first pixel belongs and the first scanning sequence;
sequentially turning on first switches in the pixel islands according to the first turning-on sequence;
the second scanning module scanning, for each pixel group of the display not in the gazing area in turn, all pixel rows in that pixel group simultaneously comprises:
determining a second scanning order of the second pixels not in the gazing area, applied when, for each pixel group not in the gazing area in turn, all pixel rows in that pixel group are scanned simultaneously;
determining a second opening sequence of a second switch for controlling each second pixel according to the pixel island to which each second pixel belongs and the second scanning sequence;
and sequentially starting the second switches in the pixel islands according to the second starting sequence.
In a third aspect of the embodiments of the present invention, an electronic device is provided, which includes a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
a processor adapted to perform the method steps of any of the above first aspects when executing a program stored in the memory.
In a fourth aspect of the embodiments of the present invention, there is provided an intelligent display system, including: a host and a display;
the host comprises image acquisition equipment and a processor, and the display comprises a panel and a Field Programmable Gate Array (FPGA);
the image acquisition equipment is used for shooting human eye images of a user;
the processor is used for determining a visual field area of the user on the display according to the shot human eye image;
the FPGA is used for acquiring the field-of-view area determined by the processor and taking it as the gazing area; sequentially scanning each pixel row of the panel in the gazing area; and, for each pixel group of the panel not in the gazing area in turn, simultaneously scanning all pixel rows in that pixel group, wherein each pixel group comprises N adjacent pixel rows and N is a positive integer greater than 1;
the panel is used for displaying an image to be displayed under the driving of the FPGA.
In a fifth aspect of embodiments of the present invention, there is provided a computer-readable storage medium having stored therein a computer program which, when executed by a processor, performs the method steps of any one of the above-mentioned first aspects.
The embodiment of the invention has the following beneficial effects:
the display driving method, apparatus, electronic device, and intelligent display system provided by the embodiments of the invention can adjust the scan timing of each pixel row of the display according to the gazing area. Because each pixel row in the gazing area is scanned in sequence, different image data is input when different rows are scanned; that is, each pixel row in the gazing area displays a different image. For the pixel rows not in the gazing area, the N rows of each pixel group are scanned simultaneously, so the same image data is input to all N rows of a group; that is, the N pixel rows of the same group display the same image. As a result, the non-gazing area of the display shows only 1/N of the image data while the gazing area shows the complete image data: the image displayed in the non-gazing area has lower definition and the image displayed in the gazing area has higher definition, so the definition of different areas of the display can be adjusted according to the user's field of view.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present invention; those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a display driving method according to an embodiment of the present invention;
FIG. 3a is a schematic diagram illustrating a scanning timing sequence of a pixel row in a gazing region according to an embodiment of the present invention;
FIG. 3b is a schematic diagram illustrating a scanning timing sequence of a pixel row not in the gazing region according to an embodiment of the present invention;
fig. 4 is a schematic diagram illustrating an implementation manner of S202 according to an embodiment of the present invention;
fig. 5 is a schematic diagram of an implementation manner of S203 according to the embodiment of the present invention;
fig. 6 is a schematic diagram of a pixel island structure according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of a display driving apparatus according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of an intelligent display system according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of an FPGA according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived from the embodiments given herein by one of ordinary skill in the art, are within the scope of the invention.
In order to more clearly describe the display driving method provided by the embodiments of the present invention, its execution body is introduced first. The display driving method provided by the invention can be applied to any electronic device with display driving capability; the electronic device may be integrated into a display or be independent of it. For convenience of description, the case where the electronic device is integrated into the display is taken as an example.
In one possible embodiment, the electronic device is an FPGA (Field Programmable Gate Array) integrated inside the display, and illustratively, as shown in fig. 1, the FPGA is connected to the host via the DP interface and to the panel in the display.
The host is used for sending image data used for representing an image to be displayed to the FPGA, and the FPGA drives the panel to display the image data. In the light field display, the definition of different areas on the display needs to be adjusted according to the visual field of the user, while in the related art, only the overall definition of the display can be adjusted.
Based on this, an embodiment of the present invention provides a display driving method, as shown in fig. 2, including:
s201, determining a visual field area of the user on the display as a gazing area.
S202, scanning each pixel row in the gazing area in the display in sequence.
S203, sequentially aiming at each pixel group which is not in the gazing area in the display, simultaneously scanning all pixel rows in the pixel group, wherein each pixel group comprises N adjacent pixel rows, and N is a positive integer larger than 1.
With this embodiment, the scan timing of each pixel row of the display can be adjusted according to the gazing area. Because each pixel row in the gazing area is scanned in sequence, different image data is input when different rows are scanned; that is, the pixel rows in the gazing area display different images. For the pixel rows not in the gazing area, the N rows of each pixel group are scanned simultaneously, so the same image data is input to all N rows of a group; that is, the N pixel rows of the same group display the same image. The non-gazing area of the display therefore shows only 1/N of the image data while the gazing area shows the complete image data: the definition of the image displayed in the non-gazing area is low, the definition in the gazing area is high, and the definition of different areas of the display is adjusted according to the user's field of view.
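The steps S201 to S203 can be sketched as follows. This is a minimal Python illustration with hypothetical names; a real driver sets gate-line timings rather than building lists:

```python
# Minimal sketch of S201-S203: build the scan schedule for one frame.
# Rows in the gazing area are scanned one at a time; rows outside it are
# scanned in groups of up to N adjacent non-gazing rows.

def scan_display(num_rows, gaze_rows, n):
    """Return the scan schedule: each entry lists the rows driven together."""
    schedule = []
    row = 0
    while row < num_rows:
        if row in gaze_rows:
            schedule.append([row])      # gazing area: one row per scan slot
            row += 1
        else:
            # non-gazing area: up to N adjacent non-gazing rows per slot
            group = []
            while row < num_rows and row not in gaze_rows and len(group) < n:
                group.append(row)
                row += 1
            schedule.append(group)
    return schedule

# 8 rows, rows 2-5 in the gazing area, N = 2:
print(scan_display(8, {2, 3, 4, 5}, 2))
# → [[0, 1], [2], [3], [4], [5], [6, 7]]
```

Each inner list of the schedule corresponds to one scan slot; gazing-area rows occupy a slot each, while non-gazing rows share slots in groups of N.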
The foregoing S201 to S203 will be explained below, respectively:
in S201, the position of the gazing area may be calculated by the execution subject, or may be calculated by another device and sent to the execution subject. Exemplarily, taking the architecture shown in fig. 1 as an example, the host collects the eye data of the user through a sensor connected to the host, calculates the position of the gazing region according to the collected eye data, and sends information indicating the position of the gazing region to the FPGA, and the FPGA determines the gazing region by analyzing the information. The host can also send the collected human eye data to the FPGA so that the FPGA can determine the gazing area according to the human eye data. The sensor can be any sensor capable of acquiring the position or azimuth angle of human eyes, such as an image acquisition device. For example, in one possible embodiment, the host is externally connected with an image acquisition device, the image acquisition device is used for shooting human eye images and sending the human eye images to the host, and the host determines the position and the azimuth angle of the eyes of the user relative to the display according to the shot human eye images and determines the visual field area of the user on the display according to the position and the azimuth angle, so as to generate the position sub-information for representing the visual field area.
In S202, if at least one pixel of a pixel row is in the gazing area, the pixel row is in the gazing area; if no pixel of a pixel row is in the gazing area, the pixel row is not in the gazing area. Sequential scanning means that, at any moment, at most one of the pixels in the same order position of each pixel row is in the on state. For example, assume four pixel rows are in the gazing area, referred to as the first, second, third, and fourth pixel rows: if the first pixel of the first pixel row is in the on state during the period t0 to t1, then the first pixels of the second, third, and fourth pixel rows are in the off state during t0 to t1. The scan timing of the first, second, third, and fourth pixel rows is shown in Fig. 3a.
In S203, simultaneous scanning means that the pixels in the same order position of each pixel row are in the on state during the same period. For example, assume four pixel rows are not in the gazing area, referred to as the fifth, sixth, seventh, and eighth pixel rows: if the first pixel of the fifth pixel row is in the on state during t0 to t1, then the first pixels of the sixth, seventh, and eighth pixel rows are also in the on state during t0 to t1. The scan timing of the fifth, sixth, seventh, and eighth pixel rows is shown in Fig. 3b.
The value of N may be different according to different application scenarios, such as 3, 4, 5, 8, 16, and the like, which is not limited in this embodiment. It can be understood that, since the pixel rows in the same pixel group are scanned simultaneously, the pixels in the same order in the pixel rows in the same pixel group are turned on simultaneously, so that the pixels in the same order will display the same image data, and therefore the image data displayed by the pixel rows in the same pixel group is the same, and therefore the image displayed by the non-gazing area is compressed to 1/N in the row dimension, so that the image displayed by the non-gazing area has lower definition.
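The compression effect described above can be illustrated with a small sketch (names are illustrative, not from the patent): every row of a non-gazing pixel group displays the data of the group's first row, so the non-gazing area retains only 1/N of its image data in the row dimension.

```python
# Sketch of the 1/N compression in the row dimension: non-gazing pixel
# groups of up to N adjacent rows all latch the same row of image data.

def compress_non_gaze(image_rows, gaze_rows, n):
    """Return the rows actually displayed: gazing rows keep their own data,
    each non-gazing group repeats the data of its first row."""
    out = []
    row = 0
    while row < len(image_rows):
        if row in gaze_rows:
            out.append(image_rows[row])      # gazing area: own data
            row += 1
        else:
            first = image_rows[row]
            count = 0
            while row < len(image_rows) and row not in gaze_rows and count < n:
                out.append(first)            # whole group shows the same data
                row += 1
                count += 1
    return out

rows = ["r0", "r1", "r2", "r3", "r4", "r5"]
print(compress_non_gaze(rows, {2, 3}, 2))
# → ['r0', 'r0', 'r2', 'r3', 'r4', 'r4']
```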
As described above, in some embodiments the position of the gazing area is calculated by the host, and information indicating that position (hereinafter, position sub-information) is transmitted to the execution body. The position sub-information may take different forms depending on how the host and the execution body are connected, but it should be in a form that the connection between them can carry.
Illustratively, since the DP (DisplayPort) interface through which the host and the execution body are connected can transmit image data, the position sub-information is represented in the form of an image. How information is transferred between the host and the execution body via image data is described below.
In a possible embodiment, the host adds a preset information line to the image to be displayed. The preset information line is one or more rows of pixels used to represent the encoded information: the pixel value of each pixel in the preset information line corresponds to an M-bit code in the encoded information, where M is any positive integer; the x-th pixel of the preset information line corresponds to bits (x-1)M+1 through xM of the encoded information, where x is any positive integer from 1 to L and L is the total number of pixels in the preset information line.
For convenience of description, M = 2 is taken as an example; the principle is the same for M = 1 and M > 2, and those cases are not repeated here. With M = 2, each pixel corresponds to two bits of the encoded information. When the encoded information is in binary form there are four possible two-bit codes ("00", "01", "10", "11"), and four colors are associated with them in advance. Suppose a first color corresponds to "00", a second color to "01", a third color to "10", and a fourth color to "11". If the code corresponding to a pixel of the preset information line is "00", the host sets that pixel's value to the first color; if the code is "01", the host sets it to the second color; and so on. For example, if the first bit of the encoded information is 0 and the second bit is 0, the code for the first pixel of the preset information line is "00", so the host sets the first pixel's value to the first color.
The first, second, third, and fourth colors may be any four colors, but the color difference between any two of them should be as large as possible. In one possible embodiment, for example, the first, second, third, and fourth colors may be black, blue, red, and white, respectively.
After receiving the image to be displayed with the preset information line added, the execution body determines the codes corresponding to the pixel values in the preset information line according to the preset correspondence between codes and pixel values, obtaining the encoded information; information is thus transferred between the host and the execution body through image data.
For example, if the first pixel of the preset information line is the first color, the execution body determines that the first bit of the encoded information is "0" and the second bit is "0"; if the second pixel is the second color, the execution body determines that the third bit is "0" and the fourth bit is "1"; and so on, the execution body determines each bit of the encoded information, i.e., recovers the encoded information.
Depending on the length of the encoded information and the number of pixels in the preset information line, the execution body may determine the codes corresponding to all pixel values in the line, or only to some of them. For example, if the preset information line contains 1920 pixels and the encoded information is 80 bits long, the execution body need only determine the codes corresponding to the first 40 pixels; the encoded information is then fully determined without examining the remaining 1880 pixels.
The encoded information should include at least the position sub-information, so that the execution body can determine the gazing area from it. Depending on actual requirements, the encoded information may also include other information besides the position sub-information. Illustratively, it may further include one or more of the following:
region adjustment sub-information, compression mode sub-information.
The execution body determines the value of N from the compression method indicated by the compression mode sub-information. As described above, the non-gazing area displays only 1/N of the image data, which amounts to compressing the image data to 1/N. Therefore, if the compression mode sub-information indicates half compression, the execution body sets N to 2; if it indicates quarter compression, N is 4; and so on.
The region adjustment sub-information indicates an adjustment mode; when determining the gazing area, the execution body adjusts the area indicated by the position sub-information according to that adjustment mode, and takes the adjusted area as the gazing area.
It will be appreciated that in some application scenarios the gazing area may change slightly with fine adjustments of the eye pose; for example, as the eyes move backwards, the gazing area expands to a certain size. Although updating the position sub-information would keep the indicated area consistent with the changed gazing area, re-determining the position sub-information occupies system resources. For example, if the position sub-information is expressed as the vertex coordinates of the two vertices on a diagonal of the gazing area, re-determining it means recomputing those coordinates for the changed area. Instead, the host can determine how the gazing area changes from the adjustment of the eye pose and generate region adjustment sub-information indicating that change, so that the execution body adjusts the original gazing area into the changed one according to the region adjustment sub-information. Since the position sub-information need not be re-determined, fewer system resources are occupied.
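A minimal sketch of this idea, assuming the gazing area is given by its two diagonal vertices and the adjustment is a uniform grow or shrink amount in pixels (both are assumptions; the patent does not fix the format of the region adjustment sub-information):

```python
# Hypothetical region adjustment: instead of re-sending full vertex
# coordinates, the host sends a small per-side grow/shrink amount.

def adjust_region(region, adjustment):
    """region: (x0, y0, x1, y1), the two diagonal vertices of the gazing area.
    adjustment: pixels to grow (positive) or shrink (negative) each side."""
    x0, y0, x1, y1 = region
    d = adjustment
    return (x0 - d, y0 - d, x1 + d, y1 + d)

print(adjust_region((100, 100, 300, 200), 10))  # → (90, 90, 310, 210)
```

The execution body applies this to the area it already holds, so the 32-bit position sub-information need not be recomputed for small pose changes.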
In addition to the position sub-information, the mode sub-information, the region adjustment sub-information, and the compression mode sub-information, the encoded information may further include other information according to actual requirements, which is not limited in this embodiment. For example, in one possible embodiment, the information represented by each bit of the encoded information is shown in Table 1:
[Table 1 is presented as an image in the original publication and is not reproduced here.]
TABLE 1 Meanings of the bits of the encoded information
Take the second row of Table 1 as an example: [79:64] indicates that bits 79 to 64 of the encoded information are encoded as one piece of sub-information, 16 indicates that the length of that sub-information is 16 bits, and the flag bit entry indicates that the sub-information is used as flag bits; the third to ninth rows are read in the same way. [63:56] is encoded as the compression mode sub-information, [39:8] is encoded as the position sub-information, and [7:0] is the region adjustment sub-information.
The encoding of [63:56] is shown in Table 2:
Bit      Encoding  Information expressed
[63:62]  00        Sequential scanning
[63:62]  01        Non-sequential scanning
[61:60]  00        No compression
[61:60]  01        One-half compression
[61:60]  10        One-quarter compression
[61:60]  11        One-eighth compression
[59:56]  0000      Reserved
TABLE 2 Meanings of the bits of the compression mode sub-information
The second row of Table 2 indicates that the scanning order is sequential scanning when [63:62] is encoded as "00"; the third row indicates that the scanning order is non-sequential scanning when [63:62] is encoded as "01"; the fourth row indicates that no compression is applied when [61:60] is encoded as "00"; the fifth row indicates that the compression mode is one-half compression when [61:60] is encoded as "01"; and so on.
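The field extraction described above can be sketched with simple bit masks; this is an illustrative decoding, assuming the bit layout of Table 2:

```python
# Sketch: extracting the scan-order and compression fields from the
# compression-mode sub-information byte ([63:56]) per Table 2.
# The helper name is an assumption for illustration.
def parse_compression_byte(byte63_56: int):
    scan_order = (byte63_56 >> 6) & 0b11   # bits [63:62]
    compression = (byte63_56 >> 4) & 0b11  # bits [61:60]
    # bits [59:56] are reserved (0000)
    return scan_order, compression

# 0b01_10_0000: non-sequential scanning, one-quarter compression
assert parse_compression_byte(0b01100000) == (0b01, 0b10)
```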
Sequential scanning refers to scanning the pixel rows in order of their row coordinates, while non-sequential scanning refers to scanning the pixel rows in the gazing region first and then the pixel rows not in the gazing region. For example, assume there are four pixel rows, referred to as the first, second, third, and fourth pixel rows, where the second and third pixel rows are in the gazing region, the first and fourth pixel rows are not, and the order of the row coordinates is: first pixel row → second pixel row → third pixel row → fourth pixel row. With sequential scanning, the scanning order is first pixel row → second pixel row → third pixel row → fourth pixel row; with non-sequential scanning, the scanning order is: second pixel row → third pixel row → first pixel row → fourth pixel row.
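The non-sequential ordering in the example above can be expressed as a simple reordering; the row labels and the in/out split here are illustrative:

```python
# Sketch of the non-sequential scan order described above: rows inside
# the gazing region are scanned first, then the remaining rows, each
# part kept in ascending row order.
def non_sequential_order(rows, in_gaze):
    return ([r for r in rows if r in in_gaze]
            + [r for r in rows if r not in in_gaze])

rows = ["row1", "row2", "row3", "row4"]
# row2 and row3 lie in the gazing region:
assert non_sequential_order(rows, {"row2", "row3"}) == [
    "row2", "row3", "row1", "row4"]
```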
In some application scenarios, in order to control each pixel in the display more conveniently, the pixels in the display are divided into a plurality of pixel islands, each pixel island including a plurality of pixels. In these application scenarios, the foregoing S202 may be implemented in the manner shown in fig. 4:
S2021, determining a first scanning order of the first pixels located in the gazing region when sequentially scanning each pixel row in the gazing region in the display.
For example, assuming that two pixel rows in total are in the gazing region, referred to as the first pixel row and the second pixel row, the first scanning order is: the first pixel of the first pixel row, the first pixel of the second pixel row, the second pixel of the first pixel row, the second pixel of the second pixel row, and so on.
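The interleaving in this example amounts to taking the pixels column by column across the rows in the gazing region; a minimal sketch (pixel labels are illustrative):

```python
# Sketch of the first scanning order described above: pixels of the rows
# in the gazing region are taken column by column across the rows.
def first_scan_order(rows):
    """rows: list of pixel rows, each a list of pixel identifiers
    of equal length."""
    order = []
    for col in range(len(rows[0])):
        for row in rows:
            order.append(row[col])
    return order

row1 = ["r1p1", "r1p2", "r1p3"]
row2 = ["r2p1", "r2p2", "r2p3"]
assert first_scan_order([row1, row2]) == [
    "r1p1", "r2p1", "r1p2", "r2p2", "r1p3", "r2p3"]
```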
S2022, determining a first turn-on sequence of the first switch for controlling each first pixel according to the pixel island to which the first pixel belongs and the first scanning sequence.
The division into pixel islands may differ according to the application scenario; for example, in one possible embodiment, every 11 pixels may be divided into one pixel island. Each pixel is controlled by one switch, and different pixels can be controlled by the same switch, as shown for example in fig. 5. In fig. 5, there are two pixel islands, each including 11 pixels, where the 1st, 3rd, 5th, 7th, 9th, and 11th pixels of the first pixel island and the 2nd, 4th, 6th, 8th, and 10th pixels of the second pixel island are controlled by the MUX (data selector) 1 switch, and the 2nd, 4th, 6th, 8th, and 10th pixels of the first pixel island and the 1st, 3rd, 5th, 7th, 9th, and 11th pixels of the second pixel island are controlled by the MUX2 switch.
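The switch assignment in this example can be captured by a small parity rule; this is a sketch of the described wiring, with 1-based positions as in the text:

```python
# Sketch of the MUX assignment described above for two 11-pixel islands:
# odd-position pixels of island 1 and even-position pixels of island 2
# are controlled by MUX1; the remaining pixels by MUX2.
def mux_for_pixel(island: int, position: int) -> str:
    """island: 1 or 2; position: 1..11 within the island (1-based)."""
    odd = position % 2 == 1
    if (island == 1 and odd) or (island == 2 and not odd):
        return "MUX1"
    return "MUX2"

assert mux_for_pixel(1, 3) == "MUX1"   # 3rd pixel of island 1
assert mux_for_pixel(2, 3) == "MUX2"   # 3rd pixel of island 2
assert mux_for_pixel(2, 10) == "MUX1"  # 10th pixel of island 2
```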
S2023, sequentially turning on the first switches of the pixel islands according to the first turn-on sequence.
Because the first turn-on sequence is determined according to the first scanning order, sequentially turning on the first switches of the pixel islands according to the first turn-on sequence turns on the first pixels in the first scanning order, thereby sequentially scanning each pixel row in the gazing region in the display.
Similarly to the pixel rows in the gazing region, for the pixel rows not in the gazing region, the foregoing S203 can be implemented in the manner shown in fig. 6:
S2031, for each pixel group in the display that is not in the gazing region in turn, determining a second scanning order of the second pixels not in the gazing region when all pixel rows in the pixel group are scanned simultaneously.
S2032, determining a second turn-on sequence of the second switch for controlling each second pixel according to the pixel island to which each second pixel belongs and the second scanning sequence.
S2033, sequentially turning on the second switches in the pixel islands according to the second turn-on sequence.
With this embodiment, the scanning of pixel rows can be converted, through pixel rearrangement, into the control of each pixel in the pixel islands, so that the scanning accuracy is effectively improved.
Corresponding to the foregoing display driving method, an embodiment of the present invention further provides a display driving apparatus, as shown in fig. 7, including:
a gazing region determining module 701, configured to determine a field of view region of the user on the display as a gazing region;
a first scanning module 702, configured to scan each pixel row in the gazing region in the display in turn;
a second scanning module 703, configured to, for each pixel group in the display that is not in the gazing region in turn, scan all pixel rows in the pixel group simultaneously, where each pixel group includes N adjacent pixel rows, and N is a positive integer greater than 1.
In one possible embodiment, the gazing area determining module determines a visual field area of the user on the display as the gazing area, including:
determining codes corresponding to the pixel values in a preset information line of the image to be displayed according to a preset correspondence between codes and pixel values, to obtain the encoded information, where the encoded information includes position sub-information, and the preset information line is added to the image to be displayed so that a decoding device can decode the image to be displayed according to the encoded information and the correspondence;
and determining the area represented by the position sub information to obtain the gazing area.
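As a purely hypothetical sketch of this decoding step: each pixel value in the preset information line is mapped back to a code bit via the preset correspondence. The concrete correspondence used here (pixel value 0 → bit "0", pixel value 255 → bit "1") is an assumption for illustration; the patent does not fix a particular mapping:

```python
# Hypothetical sketch: recovering the encoded information from the
# preset information line. The pixel-value-to-bit correspondence below
# is an illustrative assumption, not taken from the patent.
PIXEL_TO_BIT = {0: "0", 255: "1"}

def decode_information_line(pixel_values):
    """Map each pixel value of the information line back to a code bit."""
    return "".join(PIXEL_TO_BIT[v] for v in pixel_values)

assert decode_information_line([255, 0, 255, 255]) == "1011"
```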
In one possible embodiment, the location sub-information is obtained by:
determining the position and the azimuth angle of the eyes of the user relative to the display according to the shot human eye image of the user;
determining a field of view of the user on a display based on the position and the azimuth;
generating position sub-information representing the field of view region.
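A hypothetical geometric sketch of the step above: the eye position and gaze angles are projected onto the display plane to obtain the centre of the field-of-view region. The patent gives no formulas; this assumes the display lies in the z = 0 plane and that the gaze direction is described by azimuth/elevation angles under the convention noted in the comments:

```python
# Hypothetical sketch: intersecting the gaze ray (from the eye position,
# along the direction given by azimuth/elevation) with the display plane
# z = 0. The angle convention and coordinate frame are assumptions.
import math

def gaze_point_on_display(eye_xyz, azimuth_deg, elevation_deg):
    ex, ey, ez = eye_xyz
    # Unit gaze direction from the angles (assumed convention:
    # azimuth 0 / elevation 0 means looking straight at the screen).
    dx = math.cos(math.radians(elevation_deg)) * math.sin(math.radians(azimuth_deg))
    dy = math.sin(math.radians(elevation_deg))
    dz = -math.cos(math.radians(elevation_deg)) * math.cos(math.radians(azimuth_deg))
    t = -ez / dz  # ray parameter at the display plane z = 0
    return (ex + t * dx, ey + t * dy)

# Eye 0.5 m in front of the screen centre, looking straight ahead:
x, y = gaze_point_on_display((0.0, 0.0, 0.5), 0.0, 0.0)
assert abs(x) < 1e-9 and abs(y) < 1e-9
```

The field-of-view region would then be a region of the display around this point, from which the position sub-information (e.g. diagonal vertex coordinates) is generated.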
In a possible embodiment, the encoded information further includes one or more of region adjustment sub-information and compression mode sub-information;
the first scanning module is further configured to determine N according to the compression mode represented by the compression mode sub information;
the gazing region determining module determining the region represented by the position sub-information to obtain the gazing region includes:
and adjusting the area represented by the position sub information according to the area adjusting sub information to obtain the gazing area.
In one possible embodiment, the first scanning module sequentially scanning each pixel row in the gazing region in the display includes:
determining a first scanning order of first pixels located in a gazing region when each pixel row in the gazing region in the display is scanned in sequence;
determining a first opening sequence of a first switch for controlling each first pixel according to the pixel island to which each first pixel belongs and the first scanning sequence;
sequentially turning on first switches in the pixel islands according to the first turning-on sequence;
The second scanning module scanning, in turn for each pixel group in the display that is not in the gazing region, all pixel rows in the pixel group simultaneously includes:
determining a second scanning sequence of second pixels which are not in the gazing area when all pixel rows in the pixel groups are scanned simultaneously for each pixel group which is not in the gazing area in sequence in the display;
determining a second opening sequence of a second switch for controlling each second pixel according to the pixel island to which each second pixel belongs and the second scanning sequence;
and sequentially starting the second switches in the pixel islands according to the second starting sequence.
Referring to fig. 8, an embodiment of the present invention further provides an intelligent display system, including:
host 810, display 820.
The host 810 comprises an image acquisition device 811 and a processor 812, the display 820 comprises a panel 822 and a field programmable gate array FPGA 821;
the image acquisition equipment 811 is used for shooting human eye images of a user;
the processor 812 is configured to determine a visual field area of the user on the display according to the captured human eye image;
the FPGA 821 is used for acquiring the visual field area determined by the processor as the gazing region; sequentially scanning each pixel row in the gazing region in the panel 822; and, in turn for each pixel group in the panel 822 that is not in the gazing region, scanning all pixel rows in the pixel group simultaneously, wherein each pixel group comprises N adjacent pixel rows, and N is a positive integer greater than 1;
the panel 822 is used for displaying an image to be displayed under the driving of the FPGA 821.
The aforementioned processor 812 may refer to a CPU and/or a GPU in the host 810, and for example, in one possible embodiment, the aforementioned processor 812 includes a CPU and a GPU, where the CPU is configured to determine a visual field area of the user on the display according to a captured human eye image, and the GPU is configured to add a preset information line for representing encoded information in an image to be displayed, where the encoded information includes position sub-information for representing a position of the visual field area. For the encoded information and the preset information row, reference may be made to the related description, which is not repeated herein.
The architecture of the FPGA 821 may be as shown in fig. 9, and includes a mode control module 8211, a GOA (Gate driver On Array, array substrate row driver) timing module 8212, a MUX timing module 8213, an image compression module 8214, a data rearrangement module 8215, and a CEDS (Clock Embedded Differential Signaling) module 8216.
The mode control module 8211 is configured to switch the signal source; for example, the signal source may be switched to a DP signal source or to a BIST signal source. The GOA timing module 8212 is configured to control the GOA timing of the panel 822, and the MUX timing module 8213 is configured to control the panel MUX timing; together with the GOA timing module 8212, it implements the scanning of each pixel on the panel. The image compression module 8214 is used for compressing image data, and the data rearrangement module 8215 is used for implementing the steps S2021-S2023 and S2031-S2033 described above. The CEDS module 8216 is used to transmit the compressed image data to the panel 822, so that the panel 822 displays the image data under the driving of the FPGA 821.
The embodiment of the present invention further provides an electronic device, as shown in fig. 10, which includes a processor 1001, a communication interface 1002, a memory 1003 and a communication bus 1004, wherein the processor 1001, the communication interface 1002 and the memory 1003 complete mutual communication through the communication bus 1004,
a memory 1003 for storing a computer program;
the processor 1001 is configured to implement the following steps when executing the program stored in the memory 1003:
determining a visual field area of a user on a display as a gazing area;
sequentially scanning each pixel row in the gazing area in the display;
and, in turn for each pixel group in the display that is not in the gazing region, scanning all pixel rows in the pixel group simultaneously, wherein each pixel group comprises N adjacent pixel rows, and N is a positive integer greater than 1.
The communication bus mentioned in the electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The communication interface is used for communication between the electronic equipment and other equipment.
The Memory may include a Random Access Memory (RAM) or a Non-Volatile Memory (NVM), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but also Digital Signal Processors (DSPs), Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs) or other Programmable logic devices, discrete Gate or transistor logic devices, discrete hardware components.
In still another embodiment of the present invention, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program realizes the steps of any one of the above display driving methods when executed by a processor.
In yet another embodiment, a computer program product containing instructions is provided, which when run on a computer, causes the computer to perform any of the above-described display driving methods.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When loaded and executed on a computer, cause the processes or functions described in accordance with the embodiments of the invention to occur, in whole or in part. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable device. The computer instructions may be stored in a computer readable storage medium or transmitted from one computer readable storage medium to another, for example, from one website site, computer, server, or data center to another website site, computer, server, or data center via wired (e.g., coaxial cable, fiber optic, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, wireless, microwave, etc.). The computer-readable storage medium can be any available medium that can be accessed by a computer or a data storage device, such as a server, a data center, etc., that incorporates one or more of the available media. The usable medium may be a magnetic medium (e.g., floppy Disk, hard Disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., Solid State Disk (SSD)), among others.
It is noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
All the embodiments in the present specification are described in a related manner, and the same and similar parts among the embodiments may be referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for embodiments of the apparatus, the electronic device, the system, the computer-readable storage medium, and the computer program product, which are substantially similar to the method embodiments, the description is relatively simple, and related matters can be found in the partial description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (13)

1. A display driving method, the method comprising:
determining a visual field area of a user on a display as a gazing area;
sequentially scanning each pixel row in the gazing area in the display;
and, in turn for each pixel group in the display that is not in the gazing region, scanning all pixel rows in the pixel group simultaneously, wherein each pixel group comprises N adjacent pixel rows, and N is a positive integer greater than 1.
2. The method of claim 1, wherein determining the area of the user's field of view on the display as the gazing zone comprises:
determining codes corresponding to the pixel values in a preset information line of the image to be displayed according to a preset correspondence between codes and pixel values, to obtain the encoded information, wherein the encoded information comprises position sub-information, and the preset information line is added to the image to be displayed so that a decoding device can decode the image to be displayed according to the encoded information and the correspondence;
and determining the area represented by the position sub information to obtain the gazing area.
3. The method of claim 2, wherein the position sub-information is obtained by:
determining the position and the azimuth angle of the eyes of the user relative to the display according to the shot human eye image of the user;
determining a field of view of the user on a display based on the position and the azimuth;
generating position sub-information representing the field of view region.
4. The method of claim 2, wherein the encoded information further comprises one or more of region adjustment sub-information and compression mode sub-information;
the method further comprises the following steps:
determining N according to the compression mode represented by the compression mode sub information;
the determining the area represented by the position sub information to obtain a gazing area includes:
and adjusting the area represented by the position sub information according to the area adjusting sub information to obtain the gazing area.
5. The method of claim 1, wherein said sequentially scanning each row of pixels in the display in the gazing zone comprises:
determining a first scanning order of first pixels located in a gazing region when each pixel row in the gazing region in the display is scanned in sequence;
determining a first opening sequence of a first switch for controlling each first pixel according to the pixel island to which each first pixel belongs and the first scanning sequence;
sequentially turning on first switches in the pixel islands according to the first turning-on sequence;
the scanning, in turn for each pixel group in the display that is not in the gazing region, all pixel rows in the pixel group simultaneously comprises:
determining a second scanning sequence of second pixels which are not in the gazing area when all pixel rows in the pixel groups are scanned simultaneously for each pixel group which is not in the gazing area in sequence in the display;
determining a second opening sequence of a second switch for controlling each second pixel according to the pixel island to which each second pixel belongs and the second scanning sequence;
and sequentially starting the second switches in the pixel islands according to the second starting sequence.
6. A display driving apparatus, comprising:
the gazing area determining module is used for determining a visual field area of a user on the display as a gazing area;
the first scanning module is used for scanning each pixel row in the gazing area in the display in sequence;
and a second scanning module, configured to, in turn for each pixel group in the display that is not in the gazing region, scan all pixel rows in the pixel group simultaneously, wherein each pixel group comprises N adjacent pixel rows, and N is a positive integer greater than 1.
7. The apparatus of claim 6, wherein the gazing zone determining module determines a field of view of the user on the display as the gazing zone, comprising:
determining codes corresponding to the pixel values in a preset information line of the image to be displayed according to a preset correspondence between codes and pixel values, to obtain the encoded information, wherein the encoded information comprises position sub-information, and the preset information line is added to the image to be displayed so that a decoding device can decode the image to be displayed according to the encoded information and the correspondence;
and determining the area represented by the position sub information to obtain the gazing area.
8. The apparatus of claim 7, wherein the position sub-information is obtained by:
determining the position and the azimuth angle of the eyes of the user relative to the display according to the shot human eye image of the user;
determining a field of view of the user on a display based on the position and the azimuth;
generating position sub-information representing the field of view region.
9. The apparatus of claim 7, wherein the encoded information further comprises one or more of region adjustment sub-information and compression mode sub-information;
the first scanning module is further configured to determine N according to the compression mode represented by the compression mode sub information;
the gazing region determining module determining the region represented by the position sub-information to obtain the gazing region comprises:
and adjusting the area represented by the position sub information according to the area adjusting sub information to obtain the gazing area.
10. The apparatus of claim 6, wherein the first scanning module sequentially scanning each pixel row in the gazing region in the display comprises:
determining a first scanning order of first pixels located in a gazing region when each pixel row in the gazing region in the display is scanned in sequence;
determining a first opening sequence of a first switch for controlling each first pixel according to the pixel island to which each first pixel belongs and the first scanning sequence;
sequentially turning on first switches in the pixel islands according to the first turning-on sequence;
the second scanning module scanning, in turn for each pixel group in the display that is not in the gazing region, all pixel rows in the pixel group simultaneously comprises:
determining a second scanning sequence of second pixels which are not in the gazing area when all pixel rows in the pixel groups are scanned simultaneously for each pixel group which is not in the gazing area in sequence in the display;
determining a second opening sequence of a second switch for controlling each second pixel according to the pixel island to which each second pixel belongs and the second scanning sequence;
and sequentially starting the second switches in the pixel islands according to the second starting sequence.
11. An electronic device, comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the method steps of any one of claims 1 to 5 when executing a program stored in the memory.
12. An intelligent display system, comprising: a host and a display;
the host comprises image acquisition equipment and a processor, and the display comprises a panel and a Field Programmable Gate Array (FPGA);
the image acquisition equipment is used for shooting human eye images of a user;
the processor is used for determining a visual field area of the user on the display according to the shot human eye image;
the FPGA is used for acquiring the visual field area determined by the processor as the gazing region; sequentially scanning each pixel row in the gazing region in the panel; and, in turn for each pixel group in the panel that is not in the gazing region, scanning all pixel rows in the pixel group simultaneously, wherein each pixel group comprises N adjacent pixel rows, and N is a positive integer greater than 1;
the panel is used for displaying an image to be displayed under the driving of the FPGA.
13. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, which computer program, when being executed by a processor, carries out the method steps of any one of the claims 1-5.
CN202111521857.4A 2021-12-13 2021-12-13 Display driving method and device, electronic equipment and intelligent display system Active CN114217691B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111521857.4A CN114217691B (en) 2021-12-13 2021-12-13 Display driving method and device, electronic equipment and intelligent display system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111521857.4A CN114217691B (en) 2021-12-13 2021-12-13 Display driving method and device, electronic equipment and intelligent display system

Publications (2)

Publication Number Publication Date
CN114217691A true CN114217691A (en) 2022-03-22
CN114217691B CN114217691B (en) 2023-12-26

Family

ID=80701603

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111521857.4A Active CN114217691B (en) 2021-12-13 2021-12-13 Display driving method and device, electronic equipment and intelligent display system

Country Status (1)

Country Link
CN (1) CN114217691B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115547264A (en) * 2022-11-07 2022-12-30 北京显芯科技有限公司 Backlight dimming method and system based on human eye tracking

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11212517A (en) * 1997-11-18 1999-08-06 Matsushita Electric Ind Co Ltd Multi-tone image display device
CN106920501A (en) * 2017-05-12 2017-07-04 京东方科技集团股份有限公司 Display device and its driving method and drive circuit
US20190005884A1 (en) * 2017-06-30 2019-01-03 Lg Display Co., Ltd. Display device and gate driving circuit thereof, control method and virtual reality device
CN112102172A (en) * 2020-09-21 2020-12-18 京东方科技集团股份有限公司 Image processing method, image processing apparatus, display system, and storage medium
US20200412983A1 (en) * 2018-03-08 2020-12-31 Sony Interactive Entertainment Inc. Electronic device, head-mounted display, gaze point detector, and pixel data readout method
US20210174724A1 (en) * 2017-11-13 2021-06-10 Beijing Boe Optoelectronics Technology Co., Ltd. Method for driving a display panel, display drive circuit and display device



Also Published As

Publication number Publication date
CN114217691B (en) 2023-12-26

Similar Documents

Publication Publication Date Title
CN106531073B (en) Processing circuit, display methods and the display device of display screen
US11763440B2 (en) Electronic apparatus and control method thereof
CN110235176B (en) Image processing method and device, data transmission method and device, storage medium
CN108109570A (en) Low resolution RGB for effectively transmitting is rendered
CN113506545A (en) Backlight driving method, device, computer equipment and storage medium
CN112102172A (en) Image processing method, image processing apparatus, display system, and storage medium
CN108076384B (en) An image processing method, device, device and medium based on virtual reality
CN105979201A (en) Intelligent wearable device based on parallel processor
US20200167896A1 (en) Image processing method and device, display device and virtual reality display system
CN105718047A (en) Visual data processing method and visual data processing system
EP2787738B1 (en) Tile-based compression for graphic applications
US7724396B2 (en) Method for dithering image data
CN114217691A (en) A display driving method, device, electronic device and intelligent display system
WO2025161593A1 (en) Image processing method and apparatus, computer device and image display method
US6919902B2 (en) Method and apparatus for fetching pixel data from memory
TW202141427A (en) Image processing method and device, camera equipment and storage medium
US9077606B2 (en) Data transmission device, data reception device, and data transmission method
US20110221775A1 (en) Method for transforming displaying images
CN112185312B (en) Image data processing method and device
CN114495771B (en) Virtual reality display device, host device, system and data processing method
US20110002538A1 (en) Method and apparatus for graphical data compression
Leung et al. Hardware realization of steganographic techniques
CN100395959C (en) Optimized data transmission system and method
CN112150345A (en) Image processing method and device, video processing method and sending card
TWI410136B (en) Data compression method and video processing system and display using thereof

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant