CN119545060A - Image display method, device and electronic equipment - Google Patents
Image display method, device and electronic equipment
- Publication number
- CN119545060A CN119545060A CN202411700875.2A CN202411700875A CN119545060A CN 119545060 A CN119545060 A CN 119545060A CN 202411700875 A CN202411700875 A CN 202411700875A CN 119545060 A CN119545060 A CN 119545060A
- Authority
- CN
- China
- Prior art keywords
- event
- key frame
- image
- images
- key
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
- H04N21/43076—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen of the same content streams on multiple devices, e.g. when family members are watching the same movie on different devices
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44008—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
- H04N21/440281—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the temporal resolution, e.g. by frame skipping
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Security & Cryptography (AREA)
- Television Signal Processing For Recording (AREA)
Abstract
The application provides an image display method, an image display device and electronic equipment, and relates to the technical field of artificial intelligence, wherein the method comprises the steps of obtaining M key frame images in a first live stream, wherein the M key frame images are images corresponding to key events, and M is an integer larger than 1; acquiring transition frame images between two adjacent key frame images, wherein the two key frame images are two adjacent frame images in the M key frame images; and generating a first dynamic image based on the M key frame images and the transition frame images, and displaying the first dynamic image. According to the embodiment of the application, when the user needs to pay attention to a plurality of live streams at the same time, at least part of live streams can be displayed in a dynamic image mode, so that the user can conveniently obtain key contents in the plurality of live streams, and the display effect of the plurality of live streams can be improved.
Description
Technical Field
The present application relates to the field of artificial intelligence technologies, and in particular, to an image display method, an image display device, and an electronic device.
Background
With the increase of network bandwidth and the development of electronic devices, a plurality of high-definition live streams, such as sports events, news reports, etc., can be played on an electronic device at the same time. Taking sporting events as an example, when multiple sporting events take place simultaneously, some spectators are interested in several of them and wish to follow them at the same time, so the electronic device may be used to follow multiple live sporting events on the same screen.
In the related art, multi-threading or parallel processing is generally adopted for processing a plurality of live streams: each live stream is processed in an independent thread, and the live streams do not interfere with each other. However, this method is affected by the access network bandwidth and the performance of the playback device, and the live streams may load slowly, stutter, or be interrupted.
Disclosure of Invention
The embodiment of the application provides an image display method, an image display device and electronic equipment, which are used for solving the problems of slow loading, stuttering, or interruption of live streams when a plurality of live streams are played.
In order to solve the technical problems, the application is realized as follows:
in a first aspect, an embodiment of the present application provides an image display method, including:
obtaining M key frame images in a first live stream, wherein the M key frame images are images corresponding to key events, and M is an integer greater than 1;
acquiring transition frame images between two adjacent key frame images, wherein the two key frame images are two adjacent frame images in the M key frame images;
and generating a first dynamic image based on the M key frame images and the transition frame images, and displaying the first dynamic image.
Optionally, the key event includes a first sub-event and a second sub-event, the M key frame images include a first key frame image corresponding to the first sub-event and a second key frame image corresponding to the second sub-event, and the acquiring M key frame images in the first live stream includes:
acquiring a first moment when the first sub-event actually occurs and a second moment when the second sub-event actually occurs, wherein the first sub-event is the sub-event which occurs first in the key event;
Determining a first timestamp corresponding to the first sub-event in the first live stream, and calculating a time offset of the second moment relative to the first moment;
Determining a second timestamp corresponding to the second sub-event in the first live stream based on the first timestamp and the time offset;
And determining the first key frame image based on the video images in the first time range corresponding to the first time stamp, and determining the second key frame image based on the video images in the second time range corresponding to the second time stamp.
Optionally, the key event is a key event in a sporting event, and before the determining of the first timestamp corresponding to the first sub-event in the first live stream, the method further comprises:
identifying at least one of an action of a target object in the first live stream and score information for the sporting event;
And determining the current image of the first live stream as a video image corresponding to the first sub-event under the condition that the action of the target object is detected to be matched with a preset action or the score information of the sports event is detected to be matched with preset score information, wherein the first sub-event is an event used for representing the start of the sports event.
Optionally, the determining the first key frame image based on the video images in the first time range corresponding to the first timestamp includes:
in the first live stream, determining K frames of video images in a first time range corresponding to the first time stamp, wherein K is an integer greater than 1;
And determining, among the K frames of video images, the video frame image with the smallest amount of optical flow motion as the first key frame image.
Optionally, the two adjacent keyframe images include a third keyframe image and a fourth keyframe image, and the acquiring a transitional frame image between the two adjacent keyframe images includes:
acquiring a first action state of a target object in the third key frame image and a second action state of the target object in the fourth key frame image;
Acquiring the transition frame image between the third key frame image and the fourth key frame image under the condition that the difference degree between the first action state and the second action state is larger than a preset value;
the generating a first dynamic image based on the M key frame images and the transition frame image includes:
Acquiring the sequence number and the frame inserting position of the transition frame image;
and sequentially combining the M key frame images and the transition frame images according to the sequence numbers and the frame inserting positions of the transition frame images to generate the first dynamic image.
Optionally, the displaying the first dynamic image includes:
Circularly displaying the first dynamic image under the condition that no newly added key frame image sequence is detected;
After the cyclically displaying the first dynamic image, the method further includes:
and generating a second dynamic image based on the newly added key frame image sequence and displaying the second dynamic image under the condition that the newly added key frame image sequence is detected.
Optionally, before the acquiring M key frame images in the first live stream, the method further includes:
receiving a first input of information of the second live broadcast stream and a second input of information of the first live broadcast stream by a user in sequence under the condition that a display screen of the electronic equipment displays the information of the first live broadcast stream and the information of the second live broadcast stream;
Displaying live images of the second live stream in a first window according to a first display mode corresponding to the input sequence of the first input;
the displaying the first dynamic image includes:
Displaying the dynamic image of the first live stream in a second window according to a second display mode corresponding to the input sequence of the second input;
The first display mode is a mode of displaying live images, and the second display mode is a mode of displaying dynamic images.
In a second aspect, an embodiment of the present application provides an image display apparatus, including:
The first acquisition module is used for acquiring M key frame images in the first live stream, wherein the M key frame images are images corresponding to key events, and M is an integer greater than 1;
The second acquisition module is used for acquiring a transition frame image between two adjacent key frame images, wherein the two key frame images are two adjacent frame images in the M key frame images;
And the first display module is used for generating a first dynamic image based on the M key frame images and the transition frame images and displaying the first dynamic image.
Optionally, the key event includes a first sub-event and a second sub-event, the M key frame images include a first key frame image corresponding to the first sub-event and a second key frame image corresponding to the second sub-event, and the first acquisition module includes:
The first acquisition sub-module is used for acquiring a first moment when the first sub-event actually occurs and a second moment when the second sub-event actually occurs, wherein the first sub-event is the sub-event that occurs first in the key event;
A first determining sub-module, configured to determine a first timestamp corresponding to the first sub-event in the first live stream, and calculate a time offset of the second time relative to the first time;
A second determining sub-module, configured to determine, based on the first timestamp and the time offset, a second timestamp corresponding to the second sub-event in the first live stream;
And the third determining submodule is used for determining the first key frame image based on the video images in the first time range corresponding to the first time stamp and determining the second key frame image based on the video images in the second time range corresponding to the second time stamp.
Optionally, the key event is a key event in a sporting event, and the apparatus further comprises:
An identification module for identifying at least one of an action of a target object in the first live stream and score information of the sports event;
The determining module is configured to determine that the current image of the first live stream is a video image corresponding to the first sub-event when the motion of the target object is detected to be matched with a preset motion or score information of the sports event is detected to be matched with preset score information, where the first sub-event is an event for indicating the start of the sports event.
Optionally, the third determining submodule includes:
The first determining unit is used for determining K frames of video images in a first time range corresponding to the first time stamp in the first live stream, wherein K is an integer greater than 1;
And the second determining unit is used for determining, among the K frames of video images, the video frame image with the smallest amount of optical flow motion as the first key frame image.
Optionally, the two adjacent key frame images include a third key frame image and a fourth key frame image, and the second acquisition module includes:
A second obtaining sub-module, configured to obtain a first action state of the target object in the third key frame image and a second action state of the target object in the fourth key frame image;
and the third acquisition sub-module is used for acquiring the transition frame image between the third key frame image and the fourth key frame image under the condition that the difference degree between the first action state and the second action state is larger than a preset value.
The first display module includes:
A fourth obtaining sub-module, configured to obtain a sequence number and an interpolation position of the transition frame image;
And the combining sub-module is used for sequentially combining the M key frame images and the transition frame images according to the sequence numbers and the frame inserting positions of the transition frame images to generate the first dynamic image.
Optionally, the first display module includes:
The first display sub-module is used for circularly displaying the first dynamic image under the condition that the newly added key frame image sequence is not detected;
the apparatus further comprises:
and the second display sub-module is used for generating a second dynamic image based on the newly added key frame image sequence and displaying the second dynamic image under the condition that the newly added key frame image sequence is detected.
Optionally, the apparatus further comprises:
the receiving module is used for receiving a first input of the information of the second live broadcast stream and a second input of the information of the first live broadcast stream from a user in sequence under the condition that the display screen of the electronic equipment displays the information of the first live broadcast stream and the information of the second live broadcast stream;
The second display module is used for displaying live images of the second live stream in a first window according to a first display mode corresponding to the input sequence of the first input;
The first display module includes:
A third display sub-module, configured to display, in a second window, the dynamic image of the first live stream according to a second display mode corresponding to an input sequence of the second input;
The first display mode is a mode of displaying live images, and the second display mode is a mode of displaying dynamic images.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program stored in the memory and executable on the processor, where the program when executed by the processor implements the steps of the image display method according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium, on which a computer program is stored, the computer program implementing the steps of the image display method according to the first aspect when being executed by a processor.
In a fifth aspect, there is provided a computer program product comprising computer instructions which, when executed by a processor, implement the steps of the image display method according to the first aspect.
In the embodiment of the application, a key frame image and a transition frame image in a first direct current stream are acquired, and a first dynamic image is generated based on the key frame image and the transition frame image. Therefore, when the user needs to pay attention to a plurality of live streams at the same time, at least part of the live streams can be displayed in a dynamic image mode, so that the user can conveniently acquire key contents in the live streams, and the display effect of the live streams can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments of the present application will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort to a person of ordinary skill in the art.
FIG. 1 is a flowchart of an image display method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of timestamps of key events on a live stream timeline according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating extraction of key frames of a key event according to an embodiment of the present application;
Fig. 4 is a play flow chart of a dynamic image of a key event according to an embodiment of the present application;
FIG. 5 is an interface schematic diagram of an electronic device according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a process for generating a dynamic image of a key event according to an embodiment of the present application;
FIG. 7 is a schematic diagram of recognition of a kick-off player and a football according to an embodiment of the present application;
FIG. 8 is a schematic diagram of posture detection of a kick-off player according to an embodiment of the present application;
FIG. 9 is a schematic diagram of recognition of a kick-off action according to an embodiment of the present application;
FIG. 10 is a schematic diagram of identifying a score plate according to an embodiment of the present application;
FIG. 11 is a schematic diagram of character recognition according to an embodiment of the present application;
fig. 12 is a schematic structural view of an image display device according to an embodiment of the present application;
Fig. 13 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The embodiment of the application provides an image display method, an image display device and electronic equipment, which are used for solving the problems of slow loading, stuttering, or interruption of live streams when a plurality of live streams are played.
Referring to fig. 1, fig. 1 is a flowchart of an image display method according to an embodiment of the present application, as shown in fig. 1, the method includes the following steps:
Step 101, obtaining M key frame images in a first live stream, wherein the M key frame images are images corresponding to key events, and M is an integer greater than 1;
Step 102, acquiring transition frame images between two adjacent key frame images, wherein the two key frame images are two adjacent frame images in the M key frame images;
Step 103, generating a first dynamic image based on the M key frame images and the transition frame images, and displaying the first dynamic image.
The first live stream may be video images collected and transmitted in real time, or may be a source stream corresponding to video images collected in real time.
When the first live stream needs to be displayed, M key frame images in the first live stream are acquired, where the M key frame images are images corresponding to key events in the first live stream. For example, in a live stream corresponding to a sports event, the key events may include a goal event, a penalty event, an end-of-first-half event, a start-of-second-half event, etc., and a key frame image may be an image containing information corresponding to a key event. For each key event, one or more frames of images may be acquired as key frame images to present the key event.
A key frame image can be acquired in various ways. For example, the time at which each key event occurs in the actual game is converted into the timestamp corresponding to that key event on the first live stream, and according to the time period corresponding to the key event in the actual game and the timestamp, a video image corresponding to that time period is obtained from the live stream as a key frame image. When a plurality of key events are included, a plurality of key frame images corresponding to the plurality of key events are acquired. In addition, key events in the live stream can also be identified, and key frame images of the key events can be generated in combination with the duration of the key events in the actual game.
When the key events corresponding to the plurality of key frame images are discontinuous, the situation of action discontinuity exists among the plurality of key frame images. In order to make the motion of the object between the key frames smooth, the intermediate frames are acquired from the adjacent key frames as transition, that is, transition frames. For example, since the difference in motion between two key frames corresponding to the foul event and the goal event is large, an image frame between the foul event and the goal event is acquired as a transition frame, so that the moving image can clearly show the rough progress from the foul event to the goal event. The number of transition frame images may be specifically determined according to factors such as a difference between key events, a difference between transition frame images and key frame images, and the like.
After the key frame and the transition frame are acquired, the key frame and the transition frame are combined according to the time stamp of the key frame and the frame inserting position and sequence corresponding to the transition frame, and a first dynamic image is generated.
When a user needs to pay attention to a plurality of live streams using the same electronic device, at least part of the live streams may be displayed as corresponding dynamic images. For example, a user needs to pay attention to live streams of two individual sports events at the same time, one of the sports events can be displayed by means of live streams, and the other sports event can be displayed by means of dynamic images.
In addition, under conditions such as a poor network environment, limited performance of the electronic device, or insufficient data traffic, when a user needs to watch a single live stream or multiple live streams, the images corresponding to the live streams can be displayed as dynamic images, or the live streams can be switched to dynamic images, so that the display effect of the live streams is improved and the user's needs are met.
Optionally, the key event includes a first sub-event and a second sub-event, the M key frame images include a first key frame image corresponding to the first sub-event and a second key frame image corresponding to the second sub-event, and the acquiring M key frame images in the first live stream includes:
acquiring a first moment when the first sub-event actually occurs and a second moment when the second sub-event actually occurs, wherein the first sub-event is the sub-event which occurs first in the key event;
Determining a first timestamp corresponding to the first sub-event in the first live stream, and calculating a time offset of the second moment relative to the first moment;
Determining a second timestamp corresponding to the second sub-event in the first live stream based on the first timestamp and the time offset;
And determining the first key frame image based on the video images in the first time range corresponding to the first time stamp, and determining the second key frame image based on the video images in the second time range corresponding to the second time stamp.
The key event may include a plurality of sub-events. The following takes acquiring the key frames corresponding to the first sub-event and the second sub-event as an example; the key frames corresponding to other sub-events may be acquired in the same manner.
Considering that the time of each event provided by the event data is the time when the event actually occurs, that time does not correspond one-to-one to the timestamps of the video frames in the first live stream. Accordingly, the actual occurrence time of each sub-event in the key event is converted into a timestamp in the first live stream, and the key frame image is determined based on the video images of the time period corresponding to that timestamp.
The first sub-event is the sub-event that occurs first in the key event and may be understood as a sub-event indicating the start of the event. For example, in a football match, a kick-off event is determined to be the first sub-event; when it occurs, it indicates that the football match is about to begin. The kick-off event may be determined by identifying a person's actions and the motion state of the ball. When the live stream is a news report, the start time may be determined by identifying keywords that represent the start of the news.
A first moment when the first sub-event actually occurs and a second moment when the second sub-event actually occurs are acquired, and the time offset of the second moment relative to the first moment is calculated. When these times are mapped onto the first live stream, the first moment is taken as the starting point of the first live stream, i.e., its timestamp in the first live stream is 00:00:00, and the timestamp of the second sub-event in the first live stream is determined according to the time offset.
Taking a football match or a basketball match as an example, with the kick-off event as the start of the match, the actual occurrence time of the kick-off event (yyyy-MM-dd HH:mm:ss) is acquired and used as the time base point 00:00:00 of the live stream. Based on this base point, the offsets of the actual occurrence times (yyyy'-MM-dd' HH':mm':ss') of the other sub-events (the second sub-event, the third sub-event, and the fourth sub-event) relative to the actual occurrence time of the kick-off event are obtained, yielding the data shown in the following table:
Table 1
The time offsets of the other sub-events (the second sub-event, the third sub-event, and the fourth sub-event) relative to the first sub-event can be determined in the above manner, and the first sub-event and the other sub-events can then be mapped to timestamps on the first live stream and marked on it one by one, as shown in fig. 2.
For example, the actual occurrence time of the kick-off event is 19:00:00, the actual occurrence time of the yellow-card event is 19:14:09, and the time offset of the yellow-card event relative to the kick-off event is 00:14:09. Taking the kick-off event as the time base point of the live stream, the timestamp of the kick-off event in the live stream is determined to be 00:00:00, and the timestamp of the yellow-card event in the live stream is determined to be 00:14:09.
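As an illustration of this mapping (not part of the patent; the date and function name are assumed for the example), the following sketch converts actual occurrence times into live stream timestamps by taking the kick-off event as the 00:00:00 base point:

```python
from datetime import datetime

TIME_FMT = "%Y-%m-%d %H:%M:%S"

def to_stream_timestamp(kickoff_time: str, event_time: str) -> str:
    """Return the event's timestamp on the live stream (HH:MM:SS), i.e. its
    offset relative to the kick-off event used as the 00:00:00 base point."""
    base = datetime.strptime(kickoff_time, TIME_FMT)
    actual = datetime.strptime(event_time, TIME_FMT)
    total = int((actual - base).total_seconds())
    h, rem = divmod(total, 3600)
    m, s = divmod(rem, 60)
    return f"{h:02d}:{m:02d}:{s:02d}"

# Values from the example above; the date itself is assumed for illustration.
print(to_stream_timestamp("2024-06-01 19:00:00", "2024-06-01 19:14:09"))  # -> 00:14:09
```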
After the timestamp corresponding to each sub-event in the first live stream is determined, the video images within the first time range containing the first timestamp can be acquired and the first key frame image determined among them, and the video images within the second time range containing the second timestamp can be acquired and the second key frame image determined among them. When determining the first time range, a time t1 before the first timestamp and a time t2 after the first timestamp may be taken, and the period between t1 and t2, which contains the first timestamp, is used as the first time range; the second time range may be determined in the same manner.
By the above method, the time of each sub-event is converted into the corresponding timestamp in the first live stream, so that the key frame image is determined based on the timestamp, which can improve the efficiency of acquiring key frame images and reduce the computational difficulty of identifying each sub-event.
Optionally, the key event is a key event in a sporting event, and before the determining of the first timestamp corresponding to the first sub-event in the first live stream, the method further comprises:
identifying at least one of an action of a target object in the first live stream and score information for the sporting event;
And determining the current image of the first live stream as a video image corresponding to the first sub-event under the condition that the action of the target object is detected to be matched with a preset action or the score information of the sports event is detected to be matched with preset score information, wherein the first sub-event is an event used for representing the start of the sports event.
In order to map the time of each sub-event onto the first live stream, the key frame image corresponding to each sub-event is identified, thereby determining the time position of the image in the first live stream.
Specifically, whether the action of the target object in the first live stream matches the preset action can be identified, and when an action matching the preset action is detected, the image corresponding to the detected first sub-event is determined. The preset action may be an action indicating the start of an event; for example, it is detected whether a player's action matches a preset kick-off action, where the kick-off action includes actions such as standing, bending, and striking the ball, and after the kick-off action is identified, the keyframe image in which it is located is further identified.
Whether an event is the first sub-event may also be determined by identifying the data of a real-time score board, including the names of the two competing teams, the match time, and the score. The preset score information may be information indicating the start of the game, for example, a score of 00:00 with the match clock at the start time.
In the case where the score data matches the preset score information or the action of the target object matches the preset action, the current frame image is determined to be the video image corresponding to the first sub-event, i.e., the key frame image where the first sub-event is located, and the time position (yyyy-MM-dd HH:mm:ss) of that key frame in the first live stream is acquired. The first sub-event is the start event of the key event corresponding to the first live stream and represents the start of the event. For example, when a kick-off action is detected, the ball game has started.
By identifying the first sub-event in the first live stream, the actual occurrence time of the sub-event is converted into a timestamp of the live stream: the timestamp of the image corresponding to the sub-event in the live stream is identified and used as the start time of the key event, which reduces manual involvement, simplifies the process, and improves identification efficiency.
Optionally, the determining the first key frame image based on the video images in the first time range corresponding to the first timestamp includes:
in the first live stream, determining K frames of video images in a first time range corresponding to the first time stamp, wherein K is an integer greater than 1;
And determining, among the K frames of video images, the video frame image with the smallest amount of optical flow motion as the first key frame image.
Wherein the first time range may be determined based on the first timestamp by taking a fixed length of time before and after it, i.e., subtracting X seconds from and adding X seconds to the first timestamp (HH:mm:ss), where the value of X may be set according to the type of event. The first time range is then:
HH:mm:ss − X ≤ event time range ≤ HH:mm:ss + X
Within this time range, key frames are extracted from the video stream of each event based on the motion characteristics of objects: the optical flow of object motion in the video stream is analyzed, and the video frame with the smallest amount of optical flow motion is selected as the extracted key frame.
Wherein the optical flow method calculates the amount of motion of a video frame as:
M(k) = ∑i ∑j ( |Lx(i, j, k)| + |Ly(i, j, k)| )
where M(k) represents the amount of motion of the k-th frame, Lx(i, j, k) represents the x-component of the optical flow at pixel (i, j) of the k-th frame, and Ly(i, j, k) represents the y-component of the optical flow at pixel (i, j) of the k-th frame.
After the calculation is completed, the frame at a local minimum is taken as the key frame to be extracted, with the formula:
M(ki) = min[ M(k) ]
For example, as shown in fig. 3, within the time range corresponding to a certain sub-event, one frame image is extracted per second as a key frame, so that a plurality of key frame images are extracted over the seconds of the time range and arranged according to their sequence numbers. The key frame images of the plurality of sub-events are then arranged in the same way to obtain the ordered key frame images.
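As a hedged illustration of this selection step, the sketch below assumes OpenCV's Farneback optical flow is available and computes M(k) for each candidate frame, keeping the frame with the smallest amount of motion; it is a minimal example rather than the patent's exact algorithm:

```python
import cv2
import numpy as np

def select_keyframe(frames):
    """frames: list of BGR images within the event's time range.
    Returns the index of the frame whose optical-flow motion M(k) is smallest."""
    grays = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    motion = []
    for k in range(1, len(grays)):
        flow = cv2.calcOpticalFlowFarneback(grays[k - 1], grays[k], None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # M(k) = sum over pixels (i, j) of |Lx| + |Ly|
        motion.append(np.abs(flow[..., 0]).sum() + np.abs(flow[..., 1]).sum())
    return int(np.argmin(motion)) + 1  # +1 because motion[0] corresponds to frame index 1
```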
By the method, the key frame image corresponding to each sub-event is determined, and the moment in the live stream is determined based on the key frame image, so that the precision of moment determination can be improved, and the display effect of the dynamic image can be improved.
Optionally, the two adjacent keyframe images include a third keyframe image and a fourth keyframe image, and the acquiring a transitional frame image between the two adjacent keyframe images includes:
acquiring a first action state of a target object in the third key frame image and a second action state of the target object in the fourth key frame image;
Acquiring the transition frame image between the third key frame image and the fourth key frame image under the condition that the difference degree between the first action state and the second action state is larger than a preset value;
the generating a first dynamic image based on the M key frame images and the transition frame image includes:
Acquiring the sequence number and the frame inserting position of the transition frame image;
and sequentially combining the M key frame images and the transition frame images according to the sequence numbers and the frame inserting positions of the transition frame images to generate the first dynamic image.
Since there may be a difference such as object motion inconsistency between two neighboring key frame images, the difference is solved by an inter-frame interpolation technique.
Taking the adjacent third key frame and fourth key frame as an example, the first action state of the target object in the third key frame image and the second action state of the target object in the fourth key frame image are detected, and it is judged whether the degree of difference between the first action state and the second action state is large, i.e., the degree of difference is considered large if it is greater than a preset value, for example, greater than 50% or greater than 40%. When the difference of the target object between the third key frame image and the fourth key frame image is large, in order to ensure the continuity of the target object's actions, a transition frame image between the third key frame image and the fourth key frame image is acquired and inserted between them.
When performing interpolation, the types of interpolation that may be used include the following (a sketch of the linear option appears after this list):
1) Linear interpolation: the value of an intermediate frame is a simple linear combination of the values of the two adjacent key frames (e.g., for two key frames A and B with 10 intermediate frames in between, the value of the first intermediate frame is A plus one tenth of (B − A), the value of the second intermediate frame is A plus two tenths of (B − A), and so on);
2) Bezier curve interpolation: typically used to define the path of the intermediate frames (e.g., for two key frames A and B, an intermediate frame between them can be determined by taking a point on the Bezier curve between A and B);
3) Spline interpolation: curves are used to approximate the path between key frames (e.g., the curve determines the values of the intermediate frames from the key frames' position and velocity information, ensuring the smoothness and naturalness of the transitions in the dynamic image).
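The following is a minimal sketch of option 1), linear interpolation, assuming the frames are NumPy arrays of identical shape; the weight i/(n + 1) is used here so that the last transition frame does not duplicate keyframe B, whereas the description's example counts in tenths for 10 intermediate frames:

```python
import numpy as np

def linear_transition_frames(frame_a: np.ndarray, frame_b: np.ndarray, n: int):
    """Generate n intermediate frames between keyframes A and B by linear blending."""
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    frames = []
    for i in range(1, n + 1):
        t = i / (n + 1)  # blending weight; the i-th frame is A + t * (B - A)
        frames.append((a + t * (b - a)).astype(np.uint8))
    return frames
```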
After the transition frame images are acquired, the sequence number of each transition frame image and the interpolation position are determined.
For example, if the images to be inserted between the third key frame and the fourth key frame include three frames with serial numbers C1, C2, and C3, they are inserted between the third key frame and the fourth key frame in the order C1, C2, C3, according to the timestamps of the third and fourth key frames.
The M key frames can be arranged according to their timestamps and sequence numbers, the transition frames are placed according to their interpolation positions and sequence numbers, and after the M key frames and the transition frames are recombined, the dynamic image is synthesized at the set order and speed, as shown in the sketch below.
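A sketch of this recombination step follows; the data layout (keyframes sorted by timestamp, transition frames grouped by their insertion position) is an assumption made for the example:

```python
def assemble_sequence(keyframes, transitions):
    """keyframes: list of (timestamp, image) pairs sorted by timestamp.
    transitions: dict mapping keyframe index i to the ordered list of transition
    images to be inserted between keyframe i and keyframe i + 1."""
    sequence = []
    for i, (_, image) in enumerate(keyframes):
        sequence.append(image)                   # keyframe in timestamp order
        sequence.extend(transitions.get(i, []))  # transition frames at position i
    return sequence
```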
By means of the method, transition frame interpolation is carried out, generated dynamic images can be smoothly transitioned, and the effect of the dynamic images is improved.
Optionally, the displaying the first dynamic image includes:
Circularly displaying the first dynamic image under the condition that no newly added key frame image sequence is detected;
After the cyclically displaying the first dynamic image, the method further includes:
and generating a second dynamic image based on the newly added key frame image sequence and displaying the second dynamic image under the condition that the newly added key frame image sequence is detected.
As shown in fig. 4, after the first moving image is generated, if no newly added key frame image sequence is detected, the first moving image is displayed in a loop, and if the newly added key frame image sequence is detected, the second moving image is displayed. After the second dynamic image is displayed, if no newly added key frame image sequence is detected, the first dynamic image and the second dynamic image are circularly displayed, if the newly added key frame image sequence is detected, a third dynamic image is displayed based on the newly added key frame image sequence, and so on.
Therefore, the dynamic images corresponding to the updated key events can be displayed in real time, so that the user can obtain the updated key events in time, and the display effect of the dynamic images is improved.
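The playback logic of fig. 4 can be summarized by the following sketch; the callback names are assumptions, and a real implementation would likely be event-driven rather than a simple loop:

```python
def playback_loop(first_dynamic_image, poll_new_keyframe_sequence,
                  generate_dynamic_image, display):
    """Cycle through the dynamic images generated so far; whenever a newly added
    keyframe sequence is detected, generate a new dynamic image and add it to the loop."""
    dynamic_images = [first_dynamic_image]
    while True:
        new_sequence = poll_new_keyframe_sequence()  # returns None if nothing new
        if new_sequence is not None:
            dynamic_images.append(generate_dynamic_image(new_sequence))
        for image in dynamic_images:
            display(image)
```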
Optionally, before the acquiring M key frame images in the first live stream, the method further includes:
receiving a first input of information of the second live broadcast stream and a second input of information of the first live broadcast stream by a user in sequence under the condition that a display screen of the electronic equipment displays the information of the first live broadcast stream and the information of the second live broadcast stream;
Displaying live images of the second live stream in a first window according to a first display mode corresponding to the input sequence of the first input;
the displaying the first dynamic image includes:
Displaying the dynamic image of the first live stream in a second window according to a second display mode corresponding to the input sequence of the second input;
The first display mode is a mode of displaying live images, and the second display mode is a mode of displaying dynamic images.
In the case where the display screen of the electronic device includes a plurality of display windows, the live stream may be displayed as a moving image in one or more of the display windows.
Taking the first live stream and the second live stream as examples, when the display screen of the electronic device displays the information of the first live stream and the information of the second live stream, the user can operate on the information of the first live stream and the information of the second live stream so as to play the first live stream or the second live stream. The information of the first live stream and the second live stream may be introduction information, preview information, etc. of the live broadcast.
In the case where the user performs the first input on the information of the second live stream and the second input on the information of the first live stream, the electronic device acquires the input order of the first input and the second input and determines the display mode of each live stream according to that order. For example, the user first clicks the information of the second live stream and then clicks the information of the first live stream; according to the operation order, the second live stream (clicked first) is displayed in a live manner, and the first live stream (clicked later) is displayed as a dynamic image, that is, in the manner of extracting key frames and transition frames to display a dynamic image described in the above embodiments of the present application. When the user performs an input on the second live stream again, its display mode can be switched from the first display mode to the second display mode, and when the user performs an input on the first live stream again, its display mode can likewise be switched from the second display mode to the first display mode.
In addition, the display mode of the live stream can be determined according to the input modes according to the different input modes of the user. For example, when the user performs a click operation, the first display mode is the first display mode, and when the user performs a drag operation, the second display mode is the second display mode.
When a user needs to watch a plurality of live streams, a larger display window is used to display the second live stream of primary interest according to the user's needs, and the first live stream of secondary interest can be displayed as a dynamic image in a smaller display window. The information of the first live stream and the information of the second live stream can also be dragged, according to the user's degree of attention, to the display windows corresponding to different display modes.
As shown in fig. 5, a plurality of display windows are displayed on the screen of the electronic device, and each display window displays in the image display mode corresponding to that window, either live stream display or dynamic image display. Preview information of football matches is displayed on the screen, and the user can operate on the preview information of a match, for example, by clicking the "+" displayed beside the first match; when the match starts, the dynamic image corresponding to the match's source stream is displayed in the corresponding display window. When the user needs to follow multiple matches, the information of those matches can be added to the display windows corresponding to the live stream and/or the dynamic image according to the degree of attention paid to each match. When the user is in a poor network environment or needs to save data traffic, the user can operate on the display interface so that all live stream data is displayed as dynamic images.
In order to facilitate understanding of the above embodiments, the following description is given by way of example with reference to specific application scenarios.
When the electronic device is used to play multiple video streams simultaneously, problems such as playback stuttering and interruption occur, and the electronic device tends to heat up, its power consumption increases, and its performance decreases. Without Wi-Fi, playing multiple video streams simultaneously on a mobile phone accelerates the consumption of data traffic.
If multiple live streams are merged into a single live stream, the video segments must be compressed or transcoded, which reduces video quality; when the resolutions and bit rates of the original live streams differ, the merged video suffers quality loss; and flexibility is lost, since the user cannot select the specific content to watch.
To solve these problems, in the present application the multiple live streams are treated differently according to the user's degree of attention (primary attention and secondary attention), so that the user's need to watch multiple live streams on the same screen is met even when the access network bandwidth is limited. Taking a live stream of a ball game as an example, as shown in fig. 6, the image display method may include the following steps:
The event that the match starts can be identified in the live stream in real time, specifically by identifying the kick-off action in football or the jump-ball action in basketball, and the real-time score board can also be used to assist in identifying the start of the match.
When a kick-off is identified, the timestamp of the video keyframe in which the kick-off event is located is determined, and whether it is the game start event is determined based on that timestamp.
For other key events of the ball game, such as a yellow card, a penalty kick, or a goal, the data of each key event is obtained via the Event Data Management System (EDMS), and the time offset of the key event relative to the game start event is calculated to determine the timestamp of the video keyframe where the key event is located.
Key frames are extracted according to the acquired game start event and key events, intermediate transition frames are acquired, inter-frame interpolation and key frame combination are performed to form dynamic images, and the dynamic images are named, output, and stored on the server.
In the above process, mainly comprises the following steps:
1. Real-time identification of the "kick-off action" in the live stream
Considering implementation complexity and efficiency, not all key events need to be identified one by one; only the kick-off action in the live stream is identified. Identifying the "kick-off action" in a live sports stream requires distinguishing between different sports: for football the kick-off is the starting point of the game, while for basketball the jump ball is the starting point. In addition, recognition of the real-time score can be used as an auxiliary means.
The players and the ball on the field are detected using an object detection algorithm (as shown in fig. 7), while the player's pose and motion are detected using a pose estimation model, such as one based on the OpenPose pose estimation algorithm. As shown in fig. 8, by detecting the key points corresponding to the main body parts of the player, including the head, shoulders, knees, and feet, the player's posture is detected as standing upright. Next, the player's actions (as shown in fig. 9) are analyzed by an action recognition model (e.g., a deep-learning-based convolutional neural network (CNN)) to determine whether they constitute a kick-off, and further to identify the keyframe in which the kick-off is located.
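As a purely illustrative sketch of such an action check (the keypoint names, thresholds, and the rule itself are assumptions, not the patent's recognition model), a frame might be flagged as a possible kick-off when the ball sits near the centre spot and an upright player stands at the ball:

```python
def looks_like_kickoff(keypoints: dict, ball_center, center_spot, max_dist: float = 40.0):
    """keypoints: e.g. {"head": (x, y), "hip": (x, y), "ankle": (x, y)} in pixels.
    ball_center / center_spot: (x, y) pixel coordinates. All values are illustrative."""
    def dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

    ball_on_spot = dist(ball_center, center_spot) <= max_dist           # ball near midfield
    player_at_ball = dist(keypoints["ankle"], ball_center) <= max_dist  # foot near ball
    # In image coordinates y grows downwards, so an upright player has head above hip above ankle.
    upright = keypoints["head"][1] < keypoints["hip"][1] < keypoints["ankle"][1]
    return ball_on_spot and player_at_ball and upright
```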
Recognition of the score board is shown in fig. 10: the score board area is first extracted by image segmentation of the game video, detected with an object detection algorithm (e.g., the YOLO (You Only Look Once) algorithm) and distinguished from other elements of the video, and then character recognition is performed on the score board area using optical character recognition (OCR); the recognition process is shown in fig. 11.
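A hedged sketch of the character recognition step follows, assuming the scoreboard bounding box has already been obtained from the detector and that the pytesseract OCR binding is available; the score pattern is an assumption for the example:

```python
import re
import cv2
import pytesseract

def read_score(frame, box):
    """frame: BGR video frame; box: (x, y, w, h) of the detected scoreboard region."""
    x, y, w, h = box
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    text = pytesseract.image_to_string(roi)
    match = re.search(r"(\d+)\s*[:\-]\s*(\d+)", text)  # e.g. "0 : 0" or "0-0"
    return (int(match.group(1)), int(match.group(2))) if match else None
```

A 0:0 score read in this way, together with the match clock at its start value, can then be compared against the preset score information that marks the start of the game.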
2. Determining the timestamp of the video keyframe where the "kick-off event" is located
Through the above steps, the "kick-off event" and the dynamic changes of the score board are acquired in the live stream, and the corresponding moment (yyyy-MM-dd HH:mm:ss) of the keyframe of the "kick-off event" in the live stream is determined.
3. Calculating the time offsets of other key events relative to the "kick-off event"
By calculating the time offset of each other key event (other than the kick-off event) relative to the kick-off event, the time point of the keyframe where each key event is located is identified in the live stream in turn.
Before dynamic map generation is performed, the actual occurrence time of an "tee-off event" provided by event data is required to be used as a time base point (note: yyyy-mm-dd HH: mm: ss) of the beginning of a game, other key events (note: the event actual occurrence time is yyyy '-mm' -dd 'HH': mm ': ss') are respectively calculated according to the time base point, and corresponding time offsets (note: HH: mm: ss = yyyy '-mm' -dd 'HH': mm ': ss' -yyyyy-mm-dd HH: mm: ss) are respectively calculated according to the time base point to form data shown in the table.
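A minimal sketch of this offset calculation, assuming the event times are exchanged as yyyy-MM-dd HH:mm:ss strings and that the event data system exposes a simple name-to-time mapping (both assumptions for illustration):

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"   # matches the yyyy-MM-dd HH:mm:ss notation above

def event_offsets(kickoff_time: str, event_times: dict) -> dict:
    """Return each key event's offset, in seconds, from the kick-off base point."""
    base = datetime.strptime(kickoff_time, FMT)
    return {name: (datetime.strptime(t, FMT) - base).total_seconds()
            for name, t in event_times.items()}

# Example with made-up times:
# event_offsets("2024-11-25 20:00:00",
#               {"goal_1": "2024-11-25 20:12:34", "yellow_card_1": "2024-11-25 20:30:02"})
# -> {"goal_1": 754.0, "yellow_card_1": 1802.0}
```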
4. Determining the timestamps of the video key frames of the other key events
According to the time offset of each key event relative to the "kick-off event" calculated in the previous step and the timestamp of the "kick-off event" on the source stream, the timestamp of each key event is calculated and marked on the source stream, as shown in fig. 2.
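Continuing the sketch above, the stream-side timestamp of each key event can then be obtained by adding its offset to the position of the kick-off key frame on the source stream; the function and parameter names are illustrative assumptions:

```python
def stream_timestamps(kickoff_stream_ts: float, offsets: dict) -> dict:
    """Map key events onto the source stream.

    kickoff_stream_ts: the kick-off key frame's position in the stream, in
    seconds from the start of the stream; offsets: output of event_offsets().
    """
    return {name: kickoff_stream_ts + off for name, off in offsets.items()}
```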
5. Flow of generating moving pictures from the live stream based on key events
A sports match consists of a series of key events that are consecutive in time. Because the key events differ considerably from one another (for example goal events, offside events, and so on), relying on the system alone to identify all key events and automatically generate moving pictures is technically difficult, and the presence of concurrent matches further increases the processing burden. At present, operators generally identify key events manually and generate the moving pictures by hand, which places high demands on both the skill and the number of operators.
The present application identifies the timestamp of each key event on the live stream by recognizing the "kick-off action", segmenting the scoreboard, and combining sports event data through the processing and calculation steps described above; for each key event, the system then automatically generates a moving picture using techniques such as key frame extraction and inter-frame interpolation.
For key frame extraction, the timestamp of each key event in the source stream is first determined; as shown in fig. 2, this timestamp is fixed as the time range in which the key event occurs. Within that time range, for the video stream of each match, a key frame extraction algorithm based on object motion features analyzes the optical flow of object motion in the video stream and each time selects the video frame with the smallest amount of optical-flow motion as the extracted key frame, as shown in fig. 3.
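The selection rule could be sketched as follows, using dense Farneback optical flow from OpenCV as the motion measure; the embodiment does not prescribe a particular optical-flow algorithm, so this choice is an assumption:

```python
import cv2
import numpy as np

def least_motion_keyframe(frames):
    """Return the index of the frame with the least optical-flow motion in a window of frames."""
    best_idx, best_motion = None, float("inf")
    prev_gray = None
    for idx, frame in enumerate(frames):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            motion = np.linalg.norm(flow, axis=2).mean()   # mean flow magnitude
            if motion < best_motion:
                best_idx, best_motion = idx, motion
        prev_gray = gray
    return best_idx
```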
After the key frames are extracted, the state of the intermediate frames between two adjacent key frames is calculated so as to transition smoothly between key frames, fill the gap between frames, and ensure the fluency and consistency of the moving picture. In implementation, frames can be interpolated using linear interpolation, Bezier curve interpolation, spline interpolation, and similar methods. The extracted and interpolated frames are then recombined according to the key frame timestamps, sequence numbers, and interpolation positions, and synthesized into a moving picture at the corresponding order and speed. The generated moving picture is named according to the convention PID-serial number.GIF (note: PID is the unique match ID, and the serial number is the sequence number of the generated moving picture, increasing from 0000); after being named according to this convention, it is automatically uploaded to the picture server.
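The assembly step might look like the sketch below, which uses simple linear blending for the intermediate frames and imageio for the GIF output; the interpolation method, the number of inserted frames, the frame duration, and the omission of the upload step are all assumptions:

```python
import cv2
import imageio.v2 as imageio    # assumed available for GIF writing

def build_gif(keyframes, pid: str, serial: int, n_inter: int = 2) -> str:
    """Interpolate between key frames and write a moving picture named PID-serial.gif."""
    frames = []
    for a, b in zip(keyframes, keyframes[1:]):
        frames.append(a)
        for i in range(1, n_inter + 1):
            alpha = i / (n_inter + 1)
            # Linear blend stands in for the linear / Bezier / spline options above.
            frames.append(cv2.addWeighted(a, 1 - alpha, b, alpha, 0))
    frames.append(keyframes[-1])
    name = f"{pid}-{serial:04d}.gif"   # naming convention: PID-serial number.GIF
    imageio.mimsave(name,
                    [cv2.cvtColor(f, cv2.COLOR_BGR2RGB) for f in frames],
                    duration=0.1)      # 0.1 s per frame (~10 fps)
    return name
```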
6. Multi-screen display logic of the video client for the live stream and moving pictures
Referring to fig. 5, the display interface includes a plurality of display windows. The match the user selects first (representing primary attention) is played in the first display window in the form of a live stream, and this window can output a prompt indicating that a live stream is being displayed; the match the user selects second (representing secondary attention) is shown in the second display window in the form of a moving picture, and the moving-picture window can likewise output prompt information.
When the match ends, all generated moving pictures are displayed cyclically in order of their sequence numbers, as shown in fig. 4.
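A sketch of this window-assignment and loop-playback logic is given below; the match identifiers, window numbering, and the use of an iterator for cyclic playback are illustrative assumptions:

```python
import itertools

def assign_windows(selected_matches):
    """Map user selections to display modes: first pick plays live, later picks show moving pictures."""
    layout = {}
    for order, match_id in enumerate(selected_matches, start=1):
        mode = "live_stream" if order == 1 else "moving_picture"
        layout[match_id] = (mode, order)   # (display mode, window number)
    return layout

def loop_moving_pictures(gif_names):
    """After the match ends, cycle all moving pictures in ascending serial-number order."""
    return itertools.cycle(sorted(gif_names))   # PID-0000.gif, PID-0001.gif, ...
```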
The embodiment of the application identifies the "kick-off action" in the live stream of a sports event based on an AI model and takes the corresponding timestamp of that action in the live stream as the start time of the event; it then combines real-time sports event data to identify, by calculation, the timestamps of the remaining key sports events in the live stream, and uses techniques such as key frame extraction to generate a moving picture of each key event in real time. Finally, according to the user's degree of attention to each live stream, the video APP plays in sequence the moving pictures generated in real time for the key events contained in the event, so as to meet the user's need to watch concurrent events.
By generating moving pictures of the key events of an event in real time, the embodiment of the application offers users another option for following events synchronously. It places lower demands on network bandwidth and on the CPU and memory of the access terminal, and can distinguish, according to the user's attention, which events are played as live streams and which are displayed as moving pictures, thereby meeting the user's needs when watching multiple events on the same screen.
Referring to fig. 12, fig. 12 is a schematic structural diagram of an image display device according to an embodiment of the present application, and as shown in fig. 12, an image display device 1200 includes:
A first obtaining module 1201, configured to obtain M key frame images in a first live stream, where the M key frame images are images corresponding to a key event, and M is an integer greater than 1;
A second obtaining module 1202, configured to obtain a transition frame image located between two adjacent key frame images, where the two key frame images are two adjacent frame images in the M key frame images;
The first display module 1203 is configured to generate a first dynamic image based on the M key frame images and the transition frame image, and display the first dynamic image.
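For illustration only, the three modules of device 1200 could be composed roughly as in the sketch below; the module interfaces shown are assumptions rather than the embodiment's actual implementation:

```python
class ImageDisplayDevice:
    """Illustrative composition of the key-frame, transition-frame and display modules."""

    def __init__(self, first_acquirer, second_acquirer, displayer):
        self.first_acquirer = first_acquirer      # cf. module 1201: key frame images
        self.second_acquirer = second_acquirer    # cf. module 1202: transition frame images
        self.displayer = displayer                # cf. module 1203: generate and display

    def run(self, live_stream, key_event):
        keyframes = self.first_acquirer.get_keyframes(live_stream, key_event)
        transitions = [self.second_acquirer.get_transition(a, b)
                       for a, b in zip(keyframes, keyframes[1:])]
        moving_image = self.displayer.generate(keyframes, transitions)
        self.displayer.show(moving_image)
        return moving_image
```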
Optionally, the key event includes a first sub-event and a second sub-event, the M key frame images include a first key frame image corresponding to the first sub-event and a second key frame image corresponding to the second sub-event, and the first acquisition module includes:
The first acquisition sub-module is used for acquiring a first moment when the first sub-event actually occurs and a second moment when the second sub-event actually occurs, wherein the first sub-event is the sub-event that occurs first in the key event;
A first determining sub-module, configured to determine a first timestamp corresponding to the first sub-event in the first live stream, and calculate a time offset of the second time relative to the first time;
A second determining sub-module, configured to determine, based on the first timestamp and the time offset, a second timestamp corresponding to the second sub-event in the first live stream;
And the third determining submodule is used for determining the first key frame image based on the video images in the first time range corresponding to the first time stamp and determining the second key frame image based on the video images in the second time range corresponding to the second time stamp.
Optionally, the key event is a key event in a sporting event, and the apparatus further comprises:
An identification module for identifying at least one of an action of a target object in the first live stream and score information of the sports event;
The determining module is configured to determine that the current image of the first live stream is a video image corresponding to the first sub-event when the motion of the target object is detected to be matched with a preset motion or score information of the sports event is detected to be matched with preset score information, where the first sub-event is an event for indicating the start of the sports event.
Optionally, the third determining submodule includes:
The first determining unit is used for determining K frames of video images in a first time range corresponding to the first time stamp in the first live stream, wherein K is an integer greater than 1;
And the second determining unit is used for determining the video frame image with the least optical flow moving times as the first key frame image in the K frame video images.
Optionally, the two adjacent key frame images include a third key frame image and a fourth key frame image, and the second acquisition module includes:
A second obtaining sub-module, configured to obtain a first action state of the target object in the third key frame image and a second action state of the target object in the fourth key frame image;
A third obtaining sub-module, configured to obtain the transition frame image between the third key frame image and the fourth key frame image when the degree of difference between the first action state and the second action state is greater than a preset value;
The first display module includes:
A fourth obtaining sub-module, configured to obtain a sequence number and an interpolation position of the transition frame image;
And the combining sub-module is used for sequentially combining the M key frame images and the transition frame images according to the sequence numbers and the frame inserting positions of the transition frame images to generate the first dynamic image.
Optionally, the first display module includes:
The first display sub-module is used for circularly displaying the first dynamic image under the condition that the newly added key frame image sequence is not detected;
the apparatus further comprises:
and the second display sub-module is used for generating a second dynamic image based on the newly added key frame image sequence and displaying the second dynamic image under the condition that the newly added key frame image sequence is detected.
Optionally, the apparatus further comprises:
the receiving module is used for receiving, in sequence, a first input from a user on the information of the second live stream and a second input on the information of the first live stream under the condition that the display screen of the electronic equipment displays the information of the first live stream and the information of the second live stream;
The second display module is used for displaying live images of the second live stream in a first window according to a first display mode corresponding to the input sequence of the first input;
The first display module includes:
A third display sub-module, configured to display, in a second window, the dynamic image of the first live stream according to a second display mode corresponding to an input sequence of the second input;
The first display mode is a mode of displaying live images, and the second display mode is a mode of displaying dynamic images.
The image display device can implement each process of the method embodiment of fig. 1 and achieve the same technical effects; to avoid repetition, a detailed description is omitted here.
As shown in fig. 13, the embodiment of the present application further provides an electronic device 1300, which includes a processor 1301, a memory 1302, and a program stored in the memory 1302 and capable of running on the processor 1301. When executed by the processor 1301, the program implements each process of the above image display method embodiment and can achieve the same technical effects; to avoid repetition, details are not repeated here.
The embodiment of the application also provides a computer readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements each process of the above image display method embodiment and can achieve the same technical effects, which are not repeated here to avoid repetition. The computer readable storage medium may be, for example, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
The embodiment of the present application further provides a computer program product, which includes computer instructions, where the computer instructions, when executed by a processor, implement each process of the embodiment of the method shown in fig. 1 and achieve the same technical effects, and in order to avoid repetition, are not described herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
From the above description of the embodiments, it will be clear to those skilled in the art that the methods of the above embodiments may be implemented by means of software plus a necessary general hardware platform, and may of course also be implemented by hardware, although in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (e.g., ROM/RAM, magnetic disk, optical disk) and comprising instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, etc.) to perform the methods described in the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive. Those of ordinary skill in the art may make many other forms without departing from the spirit of the present application and the scope of protection of the claims, all of which fall within the protection of the present application.
Claims (10)
1. An image display method, comprising:
obtaining M key frame images in a first live stream, wherein the M key frame images are images corresponding to key events, and M is an integer greater than 1;
acquiring transition frame images between two adjacent key frame images, wherein the two key frame images are two adjacent frame images in the M key frame images;
and generating a first dynamic image based on the M key frame images and the transition frame images, and displaying the first dynamic image.
2. The method of claim 1, wherein the key event comprises a first sub-event and a second sub-event, the M key frame images comprise a first key frame image corresponding to the first sub-event and a second key frame image corresponding to the second sub-event, and the obtaining M key frame images in the first live stream comprises:
acquiring a first moment when the first sub-event actually occurs and a second moment when the second sub-event actually occurs, wherein the first sub-event is the sub-event which occurs first in the key event;
Determining a first timestamp corresponding to the first sub-event in the first live stream, and calculating a time offset of the second moment relative to the first moment;
Determining a second timestamp corresponding to the second sub-event in the first live stream based on the first timestamp and the time offset;
And determining the first key frame image based on the video images in the first time range corresponding to the first time stamp, and determining the second key frame image based on the video images in the second time range corresponding to the second time stamp.
3. The method of claim 2, wherein the key event is a key event in a sporting event, and wherein before the determining a first timestamp corresponding to the first sub-event in the first live stream, the method further comprises:
identifying at least one of an action of a target object in the first live stream and score information for the sporting event;
And determining the current image of the first live stream as a video image corresponding to the first sub-event under the condition that the action of the target object is detected to match a preset action or the score information of the sports event is detected to match preset score information, wherein the first sub-event is an event used for representing the start of the sports event.
4. The method of any one of claims 1 to 3, wherein the adjacent two key frame images include a third key frame image and a fourth key frame image, wherein the acquiring a transition frame image between the adjacent two key frame images includes:
acquiring a first action state of a target object in the third key frame image and a second action state of the target object in the fourth key frame image;
Acquiring the transition frame image between the third key frame image and the fourth key frame image under the condition that the difference degree between the first action state and the second action state is larger than a preset value;
the generating a first dynamic image based on the M key frame images and the transition frame image includes:
Acquiring the sequence number and the frame inserting position of the transition frame image;
and sequentially combining the M key frame images and the transition frame images according to the sequence numbers and the frame inserting positions of the transition frame images to generate the first dynamic image.
5. A method according to any one of claims 1 to 3, wherein said displaying said first dynamic image comprises:
Circularly displaying the first dynamic image under the condition that no newly added key frame image sequence is detected;
After the cyclically displaying the first dynamic image, the method further includes:
and generating a second dynamic image based on the newly added key frame image sequence and displaying the second dynamic image under the condition that the newly added key frame image sequence is detected.
6. A method according to any one of claims 1 to 3, wherein prior to said obtaining M key frame images in the first live stream, the method further comprises:
receiving, in sequence, a first input from a user on the information of the second live stream and a second input on the information of the first live stream under the condition that a display screen of the electronic equipment displays the information of the first live stream and the information of the second live stream;
Displaying live images of the second live stream in a first window according to a first display mode corresponding to the input sequence of the first input;
the displaying the first dynamic image includes:
Displaying the dynamic image of the first live stream in a second window according to a second display mode corresponding to the input sequence of the second input;
The first display mode is a mode of displaying live images, and the second display mode is a mode of displaying dynamic images.
7. An image display device, comprising:
The first acquisition module is used for acquiring M key frame images in the first live stream, wherein the M key frame images are images corresponding to key events, and M is an integer greater than 1;
The second acquisition module is used for acquiring a transition frame image between two adjacent key frame images, wherein the two key frame images are two adjacent frame images in the M key frame images;
And the first display module is used for generating a first dynamic image based on the M key frame images and the transition frame images and displaying the first dynamic image.
8. An electronic device comprising a processor, a memory and a program stored on the memory and executable on the processor, the program when executed by the processor implementing the steps of the image display method according to any one of claims 1 to 6.
9. A computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the image display method according to any one of claims 1 to 6.
10. A computer program product comprising computer instructions which, when executed by a processor, implement the steps of the image display method of any one of claims 1 to 6.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202411700875.2A | 2024-11-25 | 2024-11-25 | Image display method, device and electronic equipment
Publications (1)

Publication Number | Publication Date
---|---
CN119545060A (en) | 2025-02-28
Family
ID=94702037
Legal Events

Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination