
CN111147934A - Electronic device and output picture determining method - Google Patents


Info

Publication number
CN111147934A
CN111147934A
Authority
CN
China
Prior art keywords
coordinates
sight line
coordinate system
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911050093.8A
Other languages
Chinese (zh)
Other versions
CN111147934B (en)
Inventor
余祥瑞
张立人
欧葳
戴佳琪
陈佳志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Aten International Co Ltd
Original Assignee
Aten International Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Aten International Co Ltd filed Critical Aten International Co Ltd
Publication of CN111147934A
Application granted
Publication of CN111147934B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013Eye tracking input arrangements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Social Psychology (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Studio Devices (AREA)

Abstract



An electronic device and an output frame determining method are provided. The output frame determining method includes: obtaining one or more photographic frames through a photographic device; obtaining a plurality of frame coordinates of the one or more photographic frames in a system coordinate system; determining one or more frame boundaries of the one or more photographic frames in the system coordinate system according to the frame coordinates; obtaining a sight line coordinate in the system coordinate system through an eye tracker; and, in response to the sight line coordinate being located within the one or more frame boundaries, determining a local output frame corresponding to the sight line coordinate in the one or more photographic frames. Thus, the live broadcaster can change the live frame seen by the viewer with his or her line of sight, without any manual operation.


Description

Electronic device and output picture determining method
Technical Field
The present disclosure relates to electronic devices and methods. More particularly, the present disclosure relates to an electronic device and an output frame determining method.
Background
With the development of electronic technology, photographic devices (such as live broadcasting devices) have been widely used in human life.
Typically, live broadcasting is performed by capturing pictures through the camera lens of a live broadcaster's mobile phone or computer and transmitting the pictures to a viewer's display device, thereby enabling interaction between the viewer and the live broadcaster via the broadcast frame.
However, the live broadcaster's mobile phone or computer usually has to be placed in a fixed position, so the live frame is also fixed and cannot be changed flexibly, which causes inconvenience. Therefore, a solution is needed.
Disclosure of Invention
An embodiment of the present disclosure relates to an electronic device.
According to one embodiment of the present disclosure, an electronic device includes one or more processing elements, a memory, and one or more programs. The one or more programs are stored in the memory and are executable by the one or more processing elements to cause the one or more processing elements to: obtain one or more photographic frames; obtain a plurality of frame coordinates of the one or more photographic frames in a system coordinate system; determine one or more frame boundaries of the one or more photographic frames in the system coordinate system according to the frame coordinates; obtain a sight line coordinate in the system coordinate system; and, in response to the sight line coordinate being located within the one or more frame boundaries, determine a local output frame corresponding to the sight line coordinate in the one or more photographic frames.
In one embodiment, executing the one or more programs further comprises: obtaining a plurality of reference coordinates of a plurality of reference points in the system coordinate system; obtaining a plurality of reference eyeball tracking data respectively corresponding to the reference points; establishing a first corresponding relation according to the reference coordinates and the reference eyeball tracking data; and converting the eyeball tracking data into the sight line coordinate according to the first corresponding relation.
In one embodiment, executing the one or more programs further comprises: obtaining a plurality of reference coordinates of a plurality of reference points in the system coordinate system; obtaining the positions of the reference points in the one or more photographic pictures; establishing one or more second corresponding relations according to the reference coordinates and the positions of the reference points in the one or more photographic pictures; and generating the picture coordinates according to the one or more second corresponding relations.
In one embodiment, executing the one or more programs further comprises: acquiring one or more sight line positions corresponding to the sight line coordinates in the one or more photographing pictures according to the one or more second corresponding relations and the sight line coordinates; and determining the local output picture according to the one or more sight line positions.
In an embodiment, when the gaze coordinate is located within only one of the one or more frame boundaries, the local output frame is a portion of the photographic frame corresponding to the frame boundary where the gaze coordinate is located.
Another embodiment of the present disclosure relates to an output frame determining method. According to an embodiment of the present disclosure, an output frame determining method includes: obtaining one or more photographic frames through one or more photographic devices; obtaining a plurality of frame coordinates of the one or more photographic frames in a system coordinate system; determining one or more frame boundaries of the one or more photographic frames in the system coordinate system according to the frame coordinates; obtaining a sight line coordinate in the system coordinate system through an eye tracker; and, in response to the sight line coordinate being located within the one or more frame boundaries, determining a local output frame corresponding to the sight line coordinate in the one or more photographic frames.
In one embodiment, the operation of obtaining the gaze coordinate in the system coordinate system comprises: obtaining a plurality of reference coordinates of a plurality of reference points in the system coordinate system; obtaining a plurality of reference eyeball tracking data respectively corresponding to the reference points; establishing a first corresponding relation according to the reference coordinates and the reference eyeball tracking data; and converting the eyeball tracking data into the sight line coordinate according to the first corresponding relation.
In one embodiment, the operation of obtaining the frame coordinates of the one or more frames in the system coordinate system comprises: obtaining a plurality of reference coordinates of a plurality of reference points in the system coordinate system; obtaining the positions of the reference points in the one or more photographic pictures; establishing one or more second corresponding relations according to the plurality of reference coordinates and the positions of the reference points in the one or more photographic pictures; and generating the plurality of picture coordinates according to the one or more second corresponding relations.
In an embodiment, the operation of determining the local output frame corresponding to the gaze coordinate in the one or more frames comprises: acquiring one or more sight line positions corresponding to the sight line coordinates in the one or more photographing pictures according to the one or more second corresponding relations and the sight line coordinates; and determining the local output picture according to the one or more sight line positions.
In an embodiment, when the gaze coordinate is located within only one of the one or more frame boundaries, the local output frame is a portion of the photographic frame corresponding to the frame boundary where the gaze coordinate is located.
By applying any of the above embodiments, a local output frame corresponding to the sight line coordinate can be determined within the one or more photographic frames. The live broadcaster therefore does not need to operate the device manually: the live frame seen by the viewer can be changed with the broadcaster's line of sight, which increases interaction between the broadcaster and the viewers and improves convenience for the broadcaster.
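The flow summarized above can be sketched in miniature. This outline is illustrative only: the `Frame` structure, the rectangular boundaries, and the function names are assumptions made for the example (the disclosure also allows non-rectangular boundaries and multiple frames).

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Frame:
    name: str
    boundary: tuple  # (xmin, ymin, xmax, ymax) in the system coordinate system SYC

def inside(gaze, boundary):
    """True if the gaze coordinate lies within a rectangular frame boundary."""
    x, y = gaze
    xmin, ymin, xmax, ymax = boundary
    return xmin <= x <= xmax and ymin <= y <= ymax

def determine_local_output(frames, gaze) -> Optional[Frame]:
    """The method in miniature: return the photographic frame whose
    boundary contains the gaze coordinate (None if the gaze is outside all)."""
    for frame in frames:
        if inside(gaze, frame.boundary):
            return frame
    return None

# Illustrative boundaries; the first uses the example coordinates from the text
frames = [Frame("CMI", (-15, -15, 100, 100)),
          Frame("CMI2", (200, 0, 300, 100))]
print(determine_local_output(frames, (10, 20)).name)  # -> CMI
```

In a real device the returned frame would then be cropped around the gaze position to form the local output frame.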
Drawings
FIG. 1 is a schematic diagram of an electronic device according to an embodiment of the invention;
FIG. 2 is a flowchart illustrating an output frame determining method according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating an output frame determining method according to an exemplary embodiment of the present invention;
FIG. 4 is a diagram illustrating an output frame determining method according to an exemplary embodiment of the present invention;
FIG. 5 is a diagram illustrating an output frame determining method according to an exemplary embodiment of the present invention;
FIG. 6 is a diagram illustrating an output frame determining method according to an exemplary embodiment of the present invention;
FIG. 7 is a flowchart illustrating the sub-operations of a method for determining an output frame according to an embodiment of the present invention;
FIG. 8 is a flowchart illustrating the sub-operations of a method for determining an output frame according to an embodiment of the present invention; and
FIG. 9 is a flowchart illustrating the sub-operations of a method for determining an output frame according to an embodiment of the present invention.
Description of the main element symbols:
20: eye tracker
30: photographic device
100: electronic device
110: processing element
120: memory device
200: method of producing a composite material
S1-S5: operation of
CMI: photographic picture
And (3) OBJ: target
NA: reference point
NB: reference point
NC: reference point
SYC: system coordinate system
CNT: picture borders
USI: user signal
ORI: local output frame
600: output target device
Detailed Description
The spirit of the present disclosure will be apparent from the accompanying drawings and the following detailed description. Any person skilled in the art who understands the embodiments of the present disclosure may make changes and modifications based on the technology taught herein without departing from its spirit and scope.
As used herein, "coupled" may mean that two or more elements are in direct or indirect physical or electrical contact with each other, and may also mean that two or more elements operate or interact with each other.
As used herein, the terms "first", "second", etc. do not denote any order or sequence, nor are they used to limit the invention; they are only used to distinguish elements or operations described with the same technical term.
As used herein, the terms "comprising," "including," "having," "containing," and the like are open-ended terms that mean including, but not limited to.
As used herein, "and/or" includes any and all combinations of the described items.
With respect to the terms used herein, unless otherwise noted, each term has its ordinary meaning in the art, in the context of this disclosure, and in the specific context in which it is used. Certain terms used to describe the present disclosure are discussed below or elsewhere in this specification to provide additional guidance to those skilled in the art.
As used herein, the terms "substantially", "about", and the like generally refer to any value or range close to the stated value or range; how close may vary with the particular art involved, and the terms should be given the broadest interpretation understood by those skilled in the art. In some embodiments, the range of slight variation or error modified by these terms is 20%, in some preferred embodiments 10%, and in some more preferred embodiments 5%. In addition, all numerical values mentioned herein are approximate; where not otherwise stated, the words "substantially" and "about" are implied.
An example of the present disclosure relates to an electronic device. For clarity of explanation, in the following paragraphs, details of the electronic device will be described by taking a live streaming box as an example. However, other electronic devices, such as tablet computers, smart phones, and desktop computers, are also within the scope of the present disclosure.
Fig. 1 is a schematic diagram of an electronic device 100 according to an embodiment of the disclosure. In the present embodiment, the electronic device 100 is electrically connected to the eye tracker 20 and the one or more photographing devices 30. In the present embodiment, the electronic device 100 is configured to receive the eyeball tracking data from the eyeball tracker 20. In the present embodiment, the electronic device 100 is configured to receive one or more pictures from one or more cameras 30.
In the present embodiment, the electronic device 100 includes one or more processing elements 110 and a memory 120. In the present embodiment, the one or more processing elements 110 are electrically connected to the memory 120. In fig. 1 and the following description, only one processing element 110 is taken as an example, and the disclosure is not limited thereto.
In one embodiment, the processing element 110 may be implemented by, but is not limited to, a central processing unit and/or a processor such as a microprocessor. In one embodiment, the memory 120 may include one or more memory devices, each memory device or set of memory devices comprising a computer-readable recording medium. The memory 120 may include a read-only memory, a flash memory, a floppy disk, a hard disk, a compact disc, a flash drive, a magnetic tape, a database accessible via a network, or any other computer-readable recording medium with the same function known to those skilled in the art.
In one embodiment, the processing element 110 may run various software programs and/or instruction sets stored in the memory 120 to perform the various functions of the electronic device 100.
It should be noted that the implementation manner of the components in the electronic device 100 is not limited to the embodiments disclosed above, and the connection relationship is not limited to the embodiments, and all the connection manners and implementation manners that are sufficient for the electronic device 100 to implement the following technical contents can be applied to the present invention.
In one embodiment, the processing element 110 can obtain one or more photographic frames through one or more photographic devices 30. The processing element 110 can map the photographed image to a system coordinate system and obtain the image boundary of the photographed image in the system coordinate system.
In addition, the processing element 110 can obtain a gaze coordinate in the system coordinate system according to the eye tracking data from the eye tracker 20, wherein the gaze coordinate corresponds to the gazing position of the user. In one embodiment, the processing element 110 is capable of analyzing and calculating eye tracking data to obtain eye gaze coordinates.
In the case that the sight line coordinate is located within the frame boundary of the one or more frames in the system coordinate system, the processing element 110 may determine a local output frame corresponding to the user's gaze location in the one or more frames. In an embodiment, the processing element 110 may output the local output frame to an output object device (e.g., the output object device 600 in fig. 6), such as a live broadcast server, an image capture box, a personal computer, but not limited thereto.
By such operation, the live broadcast person can change the live broadcast picture seen by the viewer by using the sight line, and further the interaction between the live broadcast person and the viewer can be increased.
The system coordinate system may be a desktop coordinate system or another similar coordinate system. For example, the system coordinate system may be based on the desktop on which the target OBJ and the reference points NA, NB, and NC shown in fig. 3 are located: a first edge of the desktop may be aligned with the x-axis of the system coordinate system, and a second edge with the y-axis. It should be noted that the above is only an example, and the disclosure is not limited thereto.
Further details of the present disclosure are provided below in conjunction with the output frame determining method shown in fig. 2, although the present disclosure is not limited to the following embodiments.
It should be noted that the output frame determining method can be applied to an electronic device having the same or similar structure as that shown in FIG. 1. For simplicity, the method for determining the output screen will be described below by taking the electronic device 100 in fig. 1 as an example according to an embodiment of the present invention, but the present invention is not limited to this application.
In addition, the output frame determining method can also be implemented as a computer program and stored in a non-transitory computer-readable recording medium, so that a computer or an electronic device executes the output frame determining method after reading the recording medium. The non-transitory computer-readable recording medium can be a read-only memory, a flash memory, a floppy disk, a hard disk, a compact disc, a flash drive, a magnetic tape, a database accessible through a network, or any other non-transitory computer-readable recording medium with the same function known to those skilled in the art.
In addition, it should be understood that, unless their order is specifically described, the operations of the output frame determining method in the present embodiment may be reordered, performed simultaneously, or performed partially simultaneously according to actual requirements.
Moreover, such operations may be adaptively added, replaced, and/or omitted in various embodiments.
Referring to fig. 1 and 2, the output frame determining method 200 includes the following operations.
In operation S1, the processing element 110 obtains one or more photographic frames. In one embodiment, the one or more pictures are from one or more cameras 30, but the disclosure is not limited thereto.
In operation S2, the processing element 110 obtains frame coordinates of the one or more frames in a system coordinate system. In an embodiment, the origin and the coordinate axis of the system coordinate system may be a predetermined origin and coordinate axis, but not limited thereto. In one embodiment, the frame coordinates may correspond to a vertex, a boundary, a center, and/or other reference positions of the one or more frames, but not limited thereto. In one embodiment, the processing element 110 utilizes a plurality of reference points to obtain the frame coordinates.
For example, referring to fig. 3 and 4, in the present example, reference points NA, NB, and NC exist in the photographing screen CMI. As shown in fig. 4, the reference points NA, NB, and NC have their coordinates (hereinafter referred to as reference coordinates) in the system coordinate system SYC. In some embodiments, the reference coordinate may be a preset value. In some embodiments, the reference coordinates may be pre-stored in the electronic device 100, and the user may set the reference points NA, NB, and NC according to a preset instruction when the environment is initialized. It should be noted that the above description is only exemplary and the disclosure is not limited to the above embodiments.
Further, suppose the coordinates of the reference points NA, NB, and NC in the photographic frame CMI are (1, 3), (2, 4), and (3, 6); their coordinates in the system coordinate system SYC may then be (-1, -3), (10, 20), and (30, 40). Similarly, if the coordinates of the vertices of the photographic frame CMI in the frame itself are (0, 0), (50, 0), (0, 50), and (50, 50), the coordinates of these vertices in the system coordinate system SYC may be (-15, -15), (-15, 100), (100, -15), and (100, 100).
In the present embodiment, the processing element 110 obtains the reference coordinates of the reference points NA, NB, NC in the system coordinate system SYC (operation S21 in fig. 7). On the other hand, the processing element 110 obtains respective positions of the reference points NA, NB, and NC in the photographing screen CMI (see fig. 3) (operation S22 in fig. 7). Then, the processing element 110 establishes a correspondence relationship (hereinafter referred to as a second correspondence relationship) between the photographing screen CMI and the system coordinate system SYC based on the reference coordinates of the reference points NA, NB, and NC in the system coordinate system SYC and the positions of the reference points NA, NB, and NC in the photographing screen CMI (fig. 7 operation S23). According to the second corresponding relationship, the processing element 110 may generate frame coordinates of the vertex, the boundary, the center, and/or other reference points of the photographed frame CMI in the system coordinate system SYC (fig. 7 operation S24). In an embodiment, the second correspondence relationship may be implemented by a transformation matrix, a look-up table, a mathematical function, or any other feasible manner, which is not limited by the disclosure.
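One feasible realization of the second correspondence relationship is a 2-D affine transform fitted from the three reference-point pairs. The following is a minimal pure-Python sketch (the function names and the use of Cramer's rule are illustrative choices, not the patent's prescribed implementation), using the example reference-point coordinates from the preceding paragraphs. A real camera setup with perspective distortion would instead need a homography fitted from at least four point pairs.

```python
def solve3(M, v):
    """Solve a 3x3 linear system M @ x = v by Cramer's rule."""
    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(M)
    result = []
    for i in range(3):
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = v[r]
        result.append(det(Mi) / d)
    return result

def fit_affine(src, dst):
    """Fit x' = a*x + b*y + c, y' = d*x + e*y + f from three point pairs."""
    M = [[x, y, 1.0] for x, y in src]
    abc = solve3(M, [p[0] for p in dst])
    def_ = solve3(M, [p[1] for p in dst])
    return abc, def_

def apply_affine(T, pt):
    (a, b, c), (d, e, f) = T
    x, y = pt
    return (a * x + b * y + c, d * x + e * y + f)

# Reference points NA, NB, NC: positions in frame CMI -> system coordinates SYC
T = fit_affine([(1, 3), (2, 4), (3, 6)], [(-1, -3), (10, 20), (30, 40)])
print(apply_affine(T, (2, 4)))  # -> (10.0, 20.0), reproducing NB's system coordinates
```

With the transform fitted, the frame coordinates of the vertices, boundary, or center of the frame CMI in the system coordinate system are obtained by applying it to the corresponding in-frame positions.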
Similarly, in other embodiments there may be other photographic frames besides the frame CMI (i.e., a plurality of photographic frames). The processing element 110 may establish a correspondence relationship (each also referred to as a second correspondence relationship) between each photographic frame and the system coordinate system SYC, and generate the frame coordinates of the vertices, boundary, center, and/or other reference positions of each photographic frame in the system coordinate system SYC according to the corresponding second correspondence relationship.
In this way, the processing element 110 may also obtain the frame coordinates of other photographed frames (if any) in the system coordinate system SYC.
In operation S3, the processing element 110 determines one or more frame boundaries CNT of the one or more frames in the system coordinate system SYC according to the frame coordinates of the one or more frames.
For example, referring to fig. 4, the processing element 110 may determine the frame boundary CNT of the photographic frame CMI in the system coordinate system SYC according to the frame coordinates (in the embodiment, the system coordinate system may be a coordinate system corresponding to the desktop in fig. 4, for example). Several embodiments are described herein, but the disclosure is not limited thereto. In one embodiment, the processing element 110 determines the frame boundary CNT in the system coordinate system SYC according to frame coordinates corresponding to at least some of the vertices (e.g., two diagonal vertices, a center point, and at least one vertex) of the captured frame CMI. In another embodiment, the processing element 110 may determine the frame boundary CNT in the system coordinate system SYC according to the frame coordinates corresponding to the boundary of the photographed frame CMI. In another embodiment, the processing element 110 may estimate the size of the photographic image CMI in the system coordinate system SYC according to the second corresponding relationship between the photographic image CMI and the system coordinate system SYC, and then determine the image boundary CNT in the system coordinate system SYC according to the image coordinates corresponding to the center of the photographic image CMI.
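Once the frame boundary CNT is expressed as vertices in the system coordinate system, checking whether the sight line coordinate falls inside it can be done with a standard point-in-polygon (ray-casting) test, which also handles non-rectangular boundaries. A sketch, where the parallelogram-shaped boundary vertices are made-up values for the example:

```python
def point_in_polygon(pt, vertices):
    """Ray-casting test: is pt inside the polygon given by vertices
    (e.g. a frame boundary CNT expressed in system coordinates)?"""
    x, y = pt
    inside = False
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray through pt
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Illustrative non-rectangular (parallelogram-like) boundary
cnt = [(-15, -15), (100, -15), (120, 100), (5, 100)]
print(point_in_polygon((10, 20), cnt))    # -> True
print(point_in_polygon((300, 300), cnt))  # -> False
```

For a circular boundary such as the one drawn in fig. 4, the same decision reduces to comparing the distance from the gaze coordinate to the boundary's center against its radius.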
It should be noted that although the frame boundary CNT is shown as a circular dotted line in fig. 4, the frame boundary CNT may have other shapes, such as a fan shape, a parallelogram shape, or an irregular shape, and thus the shape of the frame boundary CNT in the present disclosure is not limited to that shown in fig. 4.
In operation S4, the processing element 110 obtains a gaze coordinate in the system coordinate system through the eye tracker 20. In one embodiment, the gaze coordinate corresponds to a specific location in the photographic frame CMI, namely the location the user is gazing at in the frame. It should be noted that operation S4 may be performed simultaneously with operations S1, S2, and S3, or in the reverse order.
In one embodiment, the processing element 110 may obtain the aforementioned gaze coordinate in the system coordinate system according to a correspondence relationship (hereinafter referred to as a first correspondence relationship) between the system coordinate system and the eye tracking data from the eye tracker 20. In one embodiment, the processing element 110 utilizes a plurality of reference points to obtain a first corresponding relationship between the system coordinate system and the eye tracking data.
For example, referring to fig. 4 and 5, first, the processing element 110 obtains the reference coordinates of the reference points NA, NB, and NC in the system coordinate system SYC (operation S41 in fig. 8). The user can use the controller or other user input interface to transmit the user signal USI to the electronic device 100 while looking at the reference point NA. In one embodiment, the user signal USI is a trigger signal. In one embodiment, a user may press a controller, for example, to generate the user signal USI. The processing element 110 may retrieve the eye tracking data from the eye tracker 20 corresponding to the user signal USI as the reference eye tracking data corresponding to the reference point NA (operation S42 in fig. 8). In a similar manner, the processing element 110 may obtain reference eye tracking data corresponding to the reference points NB, NC. Then, the processing element 110 may obtain a first corresponding relationship between the system coordinate system SYC and the eye tracking data according to the reference coordinates of the reference points NA, NB, and NC in the system coordinate system SYC and the reference eye tracking data corresponding to the reference points NA, NB, and NC (fig. 8 operation S43). It is noted that in different embodiments, the processing element 110 may also establish the first correspondence between the system coordinate system SYC and the eye tracking data using other reference points than the reference points NA, NB, NC.
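The calibration flow described above (the user fixates a reference point, sends the user signal USI, and the raw eye-tracking sample is recorded against that point's system coordinates) can be sketched as follows. The tracker and signal interfaces are hypothetical placeholders, simulated here with plain callables rather than a real eye-tracker API:

```python
def collect_reference_samples(reference_points, wait_for_user_signal, read_tracker):
    """For each reference point (e.g. NA, NB, NC), wait until the user
    signal USI arrives while the user fixates the point, then pair the raw
    eye-tracking sample with the point's system coordinates."""
    pairs = []
    for name, system_xy in reference_points:
        wait_for_user_signal()  # e.g. a controller button press generating USI
        raw = read_tracker()    # raw gaze sample from the eye tracker
        pairs.append((raw, system_xy))
    return pairs

# Simulated tracker samples and a no-op signal wait, for illustration only
samples = iter([(0.10, 0.20), (0.50, 0.25), (0.80, 0.70)])
pairs = collect_reference_samples(
    [("NA", (-1, -3)), ("NB", (10, 20)), ("NC", (30, 40))],
    wait_for_user_signal=lambda: None,
    read_tracker=lambda: next(samples))
print(pairs[0])  # -> ((0.1, 0.2), (-1, -3))
```

The collected pairs are exactly the data needed to establish the first correspondence relationship between the eye-tracking data and the system coordinate system SYC.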
Then, using the first correspondence relationship between the system coordinate system SYC and the eye tracking data, the processing element 110 may obtain the line-of-sight coordinate of the user's gaze position in the system coordinate system SYC from the eye tracking data provided by the eye tracker 20 (operation S44 in fig. 8).
In operation S5, in response to the line-of-sight coordinate being located within a frame boundary corresponding to the one or more photographing frames (e.g., when the target the user is gazing at is within the range covered by any one of the photographing frames), the processing element 110 determines, in the one or more photographing frames, a partial output frame corresponding to the line-of-sight coordinate.
For example, referring to figs. 3, 4, and 6, when the user gazes at the target OBJ, the processing element 110 may obtain the line-of-sight coordinate corresponding to the target OBJ in the system coordinate system SYC. In one embodiment, the target OBJ may include, but is not limited to, an object, text, a picture frame, and the like. The processing element 110 can determine whether the line-of-sight coordinate is located within the frame boundary CNT. When the line-of-sight coordinate is located within the frame boundary CNT, the processing element 110 may obtain the line-of-sight position corresponding to the line-of-sight coordinate in the photographing frame CMI (e.g., the position of the target OBJ in the photographing frame CMI in fig. 3) according to the second correspondence relationship between the photographing frame CMI and the system coordinate system SYC (operation S51 in fig. 9). Then, the processing element 110 may determine the partial output frame ORI (refer to fig. 6) according to the line-of-sight position corresponding to the line-of-sight coordinate in the photographing frame CMI (operation S52 in fig. 9).
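Operations S51-S52 can be sketched as follows. This sketch assumes rectangular, axis-aligned frame boundaries, a normalization-based second correspondence, and a crop window centered on the gaze position; all names and these modeling choices are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of operations S5/S51-S52: test the line-of-sight
# coordinate against a frame boundary and, if inside, map it into the
# photographing frame and center a crop window (the partial output frame)
# on it. Rectangular boundaries and all names are illustrative assumptions.

def gaze_in_boundary(gaze, boundary):
    # boundary: (x_min, y_min, x_max, y_max) in the system coordinate system.
    x, y = gaze
    x0, y0, x1, y1 = boundary
    return x0 <= x <= x1 and y0 <= y <= y1

def to_frame_position(gaze, boundary, frame_size):
    # Second correspondence (S51): normalize within the boundary,
    # then scale to the photographing frame's pixel dimensions.
    x, y = gaze
    x0, y0, x1, y1 = boundary
    w, h = frame_size
    return ((x - x0) / (x1 - x0) * w, (y - y0) / (y1 - y0) * h)

def crop_window(pos, frame_size, out_size):
    # Operation S52: center the partial output frame on the line-of-sight
    # position, clamped so the window stays inside the photographing frame.
    (px, py), (w, h), (ow, oh) = pos, frame_size, out_size
    left = min(max(px - ow / 2, 0), w - ow)
    top = min(max(py - oh / 2, 0), h - oh)
    return (int(left), int(top), int(left) + ow, int(top) + oh)
```

For a gaze at the center of a boundary mapped onto a 1920x1080 frame, the 640x360 crop lands centered; a gaze near an edge is clamped rather than letting the window spill outside the frame.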
In other embodiments, besides the photographing frame CMI and the frame boundary CNT, there may be other photographing frames and frame boundaries (that is, the system coordinate system SYC contains a plurality of photographing frames and a plurality of corresponding frame boundaries). In that case, the processing element 110 may obtain the line-of-sight position corresponding to the line-of-sight coordinate in a photographing frame according to the frame boundary in which the line-of-sight coordinate is located and the corresponding second correspondence relationship, and then determine the partial output frame according to that line-of-sight position.
In contrast, in an embodiment, when the line-of-sight coordinate is located outside the frame boundaries of all the photographing frames in the system coordinate system, the processing element 110 may determine the output frame to be one of the one or more photographing frames (e.g., the photographing frame CMI in fig. 3). That is, when there are photographing frames A, B, and C and the user's line-of-sight coordinate is outside the ranges covered by all of them, the processing element 110 can set the output frame to a predetermined one of the photographing frames A, B, and C (e.g., the photographing frame aimed at the main photographing target). Therefore, when the user's gaze temporarily leaves all of the photographing frames, the output frame returns to the predetermined main photographing frame, and the output of a meaningless frame is avoided.
In one embodiment, when the line-of-sight coordinate is located within only one of the one or more frame boundaries (e.g., the boundary of the photographing frame CMI in fig. 3), the partial output frame is a part of the photographing frame corresponding to that frame boundary (e.g., the part of the photographing frame CMI in fig. 3 corresponding to the target OBJ). For example, when there are photographing frames A, B, and C (i.e., a plurality of photographing frames) and the target the user is gazing at is within the range covered only by the photographing frame A, the partial output frame shows only a part of the photographing frame A and no part of the photographing frames B and C.
In other embodiments, when, for example, only the photographing frame B exists (i.e., there is only one photographing frame) and the line-of-sight coordinate is located within the frame boundary corresponding to the photographing frame B, the partial output frame is a part of that sole photographing frame (in this example, a part of the photographing frame B). In one embodiment, when the line-of-sight coordinate is located within more than one of the one or more frame boundaries, the processing element 110 may determine the partial output frame to be a part of the photographing frame corresponding to one of those frame boundaries according to the actual requirement or a predetermined priority. That is, when the photographing frames A, B, and C exist and the target the user is gazing at is within the ranges covered by more than one of them, the processing element 110 may decide, according to the actual requirement or the predetermined priority, whether the partial output frame shows a part of the photographing frame A, B, or C.
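The selection rules described above (a predetermined priority among overlapping boundaries, and a predetermined default frame when the gaze is outside all of them) can be sketched as follows; the rectangular-boundary model and every name here are illustrative assumptions, not the patent's implementation.

```python
# Hypothetical sketch of the frame-selection rules: pick the photographing
# frame whose boundary contains the gaze, using a predetermined priority
# when several boundaries overlap, and a predetermined default frame
# (e.g. the main photographing frame) when no boundary contains the gaze.

def gaze_in_boundary(gaze, boundary):
    # boundary: (x_min, y_min, x_max, y_max) in the system coordinate system.
    x, y = gaze
    x0, y0, x1, y1 = boundary
    return x0 <= x <= x1 and y0 <= y <= y1

def select_frame(gaze, boundaries, priority, default):
    """boundaries: {frame_id: boundary}; priority: frame ids, highest first."""
    for frame_id in priority:
        if gaze_in_boundary(gaze, boundaries[frame_id]):
            return frame_id   # this frame is then cropped to the partial output
    return default            # gaze outside all boundaries: main frame, uncropped
```

When the gaze sits in the overlap of two boundaries, the frame listed first in the priority order wins; when it sits in none, the whole default frame is output, matching the fallback behavior described above.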
It should be noted that although the present disclosure is described with live broadcasting as an example, the application field of the present disclosure is not limited to live broadcasting; other applications in which a partial output frame corresponding to the line-of-sight coordinate can be determined in a photographing frame are also within the scope of the present disclosure.
By applying one of the above embodiments, a partial output frame corresponding to the line-of-sight coordinate can be determined in the photographing frame CMI. Therefore, a live-streaming user can change the frame seen by viewers without manual operation, which increases the interaction between the streamer and the viewers and improves the streamer's convenience of use.
While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (10)

1. An electronic device, comprising:
a processing element; and
a memory electrically connected to the processing element;
wherein the memory stores a program, and the processing element executes the program to perform the following operations:
obtaining one or more photographing frames;
obtaining a plurality of frame coordinates of the one or more photographing frames in a system coordinate system;
determining one or more frame boundaries of the one or more photographing frames in the system coordinate system according to the plurality of frame coordinates of the one or more photographing frames;
obtaining a line-of-sight coordinate in the system coordinate system; and
in response to the line-of-sight coordinate being located within the one or more frame boundaries, determining, in the one or more photographing frames, a partial output frame corresponding to the line-of-sight coordinate.

2. The electronic device according to claim 1, wherein executing the program further comprises the following operations:
obtaining a plurality of reference coordinates of a plurality of reference points in the system coordinate system;
obtaining a plurality of pieces of reference eye tracking data respectively corresponding to the plurality of reference points;
establishing a first correspondence relationship according to the plurality of reference coordinates and the plurality of pieces of reference eye tracking data; and
converting eye tracking data into the line-of-sight coordinate according to the first correspondence relationship.

3. The electronic device according to claim 1, wherein executing the program further comprises the following operations:
obtaining a plurality of reference coordinates of a plurality of reference points in the system coordinate system;
obtaining positions of the plurality of reference points in the one or more photographing frames;
establishing one or more second correspondence relationships according to the plurality of reference coordinates and the positions of the plurality of reference points in the one or more photographing frames; and
generating the frame coordinates according to the one or more second correspondence relationships.

4. The electronic device according to claim 3, wherein executing the program further comprises the following operations:
obtaining one or more line-of-sight positions corresponding to the line-of-sight coordinate in the one or more photographing frames according to the one or more second correspondence relationships and the line-of-sight coordinate; and
determining the partial output frame according to the one or more line-of-sight positions.

5. The electronic device according to any one of claims 1 to 4, wherein, when the line-of-sight coordinate is located within only one of the one or more frame boundaries, the partial output frame is a part of the photographing frame corresponding to the frame boundary in which the line-of-sight coordinate is located.

6. An output frame determining method, comprising:
obtaining one or more photographing frames through a photographing device;
obtaining a plurality of frame coordinates of the one or more photographing frames in a system coordinate system;
determining one or more frame boundaries of the one or more photographing frames in the system coordinate system according to the plurality of frame coordinates of the one or more photographing frames;
obtaining a line-of-sight coordinate in the system coordinate system through an eye tracker; and
in response to the line-of-sight coordinate being located within the one or more frame boundaries, determining, in the one or more photographing frames, a partial output frame corresponding to the line-of-sight coordinate.

7. The output frame determining method according to claim 6, wherein the operation of obtaining the line-of-sight coordinate in the system coordinate system comprises:
obtaining a plurality of reference coordinates of a plurality of reference points in the system coordinate system;
obtaining a plurality of pieces of reference eye tracking data respectively corresponding to the reference points;
establishing a first correspondence relationship according to the reference coordinates and the reference eye tracking data; and
converting eye tracking data into the line-of-sight coordinate according to the first correspondence relationship.

8. The output frame determining method according to claim 6, wherein the operation of obtaining the plurality of frame coordinates of the one or more photographing frames in the system coordinate system comprises:
obtaining a plurality of reference coordinates of a plurality of reference points in the system coordinate system;
obtaining positions of the plurality of reference points in the one or more photographing frames;
establishing one or more second correspondence relationships according to the plurality of reference coordinates and the positions of the plurality of reference points in the one or more photographing frames; and
generating the plurality of frame coordinates according to the one or more second correspondence relationships.

9. The output frame determining method according to claim 8, wherein the operation of determining the partial output frame corresponding to the line-of-sight coordinate in the one or more photographing frames comprises:
obtaining one or more line-of-sight positions corresponding to the line-of-sight coordinate in the one or more photographing frames according to the one or more second correspondence relationships and the line-of-sight coordinate; and
determining the partial output frame according to the one or more line-of-sight positions.

10. The output frame determining method according to any one of claims 6 to 9, wherein, when the line-of-sight coordinate is located within only one of the one or more frame boundaries, the partial output frame is a part of the photographing frame corresponding to the frame boundary in which the line-of-sight coordinate is located.
CN201911050093.8A 2018-11-02 2019-10-31 Electronic device and output picture determining method Active CN111147934B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
TW107139053 2018-11-02
TW107139053A TWI725351B (en) 2018-11-02 2018-11-02 Electronic device and output image determination method

Publications (2)

Publication Number Publication Date
CN111147934A true CN111147934A (en) 2020-05-12
CN111147934B CN111147934B (en) 2022-02-25

Family

ID=70516964

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911050093.8A Active CN111147934B (en) 2018-11-02 2019-10-31 Electronic device and output picture determining method

Country Status (2)

Country Link
CN (1) CN111147934B (en)
TW (1) TWI725351B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101499253A (en) * 2008-01-28 2009-08-05 宏达国际电子股份有限公司 Output screen adjustment method and device
CN102456137A (en) * 2010-10-20 2012-05-16 上海青研信息技术有限公司 Sight line tracking preprocessing method based on near-infrared reflection point characteristic
CN104915013A (en) * 2015-07-03 2015-09-16 孙建德 Eye tracking and calibrating method based on usage history
US20160018645A1 (en) * 2014-01-24 2016-01-21 Osterhout Group, Inc. See-through computer display systems
US20160116979A1 (en) * 2014-01-21 2016-04-28 Osterhout Group, Inc. Eye glint imaging in see-through computer display systems
CN106445104A (en) * 2016-08-25 2017-02-22 蔚来汽车有限公司 HUD display system and method for vehicle
CN107003737A (en) * 2014-12-03 2017-08-01 微软技术许可有限责任公司 The indicator projection inputted for natural user
US20180067317A1 (en) * 2016-09-06 2018-03-08 Allomind, Inc. Head mounted display with reduced thickness using a single axis optical system
CN107884947A (en) * 2017-11-21 2018-04-06 中国人民解放军海军总医院 Auto-stereoscopic mixed reality operation simulation system
CN107991775A (en) * 2016-10-26 2018-05-04 中国科学院深圳先进技术研究院 It can carry out the wear-type visual device and human eye method for tracing of people's ocular pursuit
CN108417171A (en) * 2017-02-10 2018-08-17 宏碁股份有限公司 Display device and display parameter adjusting method thereof

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ITTO20030662A1 (en) * 2003-08-29 2005-02-28 Fiat Ricerche VIRTUAL VISUALIZATION ARRANGEMENT FOR A FRAMEWORK
CN102842301B (en) * 2012-08-21 2015-05-20 京东方科技集团股份有限公司 Display frame adjusting device, display device and display method
TW201438940A (en) * 2013-04-11 2014-10-16 Compal Electronics Inc Image display method and image display system
TW201539251A (en) * 2014-04-09 2015-10-16 Utechzone Co Ltd Electronic apparatus and method for operating electronic apparatus
CN106799994A (en) * 2017-01-13 2017-06-06 曾令鹏 A kind of method and apparatus for eliminating motor vehicle operator vision dead zone


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GUANG-DAH CHEN: "The study on motion message of rotational motion with eye tracking", 2018 IEEE International Conference on Applied System Invention *
YANG XIAOHUI: "Research on Multi-View Video Enhancement and Tracking Methods in Stereoscopic Television" (in Chinese), China Doctoral Dissertations Full-text Database *

Also Published As

Publication number Publication date
TWI725351B (en) 2021-04-21
CN111147934B (en) 2022-02-25
TW202018463A (en) 2020-05-16

Similar Documents

Publication Publication Date Title
EP3579544B1 (en) Electronic device for providing quality-customized image and method of controlling the same
CN111052727B (en) Electronic device and control method thereof
EP3326360B1 (en) Image capturing apparatus and method of operating the same
KR102666977B1 (en) Electronic device and method for photographing image thereof
US20180131869A1 (en) Method for processing image and electronic device supporting the same
US10284817B2 (en) Device for and method of corneal imaging
CN110636218B (en) Focusing method, focusing device, storage medium and electronic equipment
US10863077B2 (en) Image photographing method, apparatus, and terminal
US10970821B2 (en) Image blurring methods and apparatuses, storage media, and electronic devices
CN108377342A (en) Double-camera shooting method and device, storage medium and terminal
US10506221B2 (en) Field of view rendering control of digital content
CN106454086B (en) Image processing method and mobile terminal
KR20190032818A (en) An electronic device including a plurality of camera using a rolling shutter system
CN107637063B (en) Method and camera for controlling functions based on user's gestures
CN109040524A (en) Artifact eliminating method, device, storage medium and terminal
CN112541553B (en) Target object status detection method, device, medium and electronic device
CN105430269B (en) A kind of photographic method and device applied to mobile terminal
CN110706283A (en) Calibration method, device, mobile terminal and storage medium for gaze tracking
JP6283329B2 (en) Augmented Reality Object Recognition Device
US10009545B2 (en) Image processing apparatus and method of operating the same
CN108665510B (en) Rendering method, device, storage medium and terminal for continuous shooting images
CN114241127A (en) Panoramic image generation method and device, electronic equipment and medium
CN111147934B (en) Electronic device and output picture determining method
US11917295B2 (en) Method for correcting shaking at high magnification and electronic device therefor
CN111221410A (en) Method, head mounted display and computer device for transmitting eye tracking information

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant