
WO2008063112A1 - A method for combining video sequences and an apparatus thereof - Google Patents


Info

Publication number
WO2008063112A1
WO2008063112A1 (PCT/SE2007/001013)
Authority
WO
WIPO (PCT)
Prior art keywords
video sequence
video
frame
mark
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/SE2007/001013
Other languages
French (fr)
Inventor
Staffan Sölve
Jörgen Lagerstedt
Peter Addin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from SE0602478A external-priority patent/SE0602478L/en
Application filed by Individual filed Critical Individual
Priority to EP07835211A priority Critical patent/EP2100440A4/en
Publication of WO2008063112A1 publication Critical patent/WO2008063112A1/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Links

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; studio devices; studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; cameras specially adapted for the electronic generation of special effects
    • H04N5/272: Means for inserting a foreground image in a background image, i.e. inlay, outlay

Definitions

  • the present invention generally relates to a method for combining a first and a second video sequence into a combined video sequence, as well as a module, an apparatus and a computer-readable medium thereof.
  • Typically, a TV commercial is broadcast in connection with a popular event or show, and the viewers may switch channels or leave the TV during the TV commercials.
  • Advertising companies try to create more attractive TV commercials by means of better music, participating movie stars, etc., as well as by replacing their TV commercials more frequently.
  • However, even though the TV commercials are made more attractive and are replaced frequently, many viewers will still switch channels or leave their TV sets during the TV commercials, and hence the messages from the advertising companies will not reach the viewers.
  • an objective of the invention is to solve or at least reduce the problems discussed above.
  • More particularly, an objective is to present a method for generating a TV commercial by combining a first video sequence, corresponding to a recently broadcast sequence, e.g. a highlight of an original video sequence, with a second video sequence, herein also referred to as a TV commercial template, configured to be combined with the first video sequence.
  • In this way, a unique TV commercial including a highlight sequence, such as a touchdown in the Super Bowl, may be generated rapidly and broadcast in the TV commercial break.
  • An advantage of this is that there is a connection between the broadcast show and the TV commercial. Therefore, since highlights are included in the TV commercial, more viewers will stay tuned during the TV commercial break.
  • Another advantage is that the TV commercial will vary from time to time, which means that more viewers will stay tuned and see the TV commercials. Still another advantage is the use of templates: in order to generate a TV commercial including a highlight, the editing of such a TV commercial must be made rapidly, and this is made possible by combining the highlight with a prepared template.
  • the object is provided according to a first aspect of the invention by a method comprising receiving an original video sequence, wherein said original video sequence comprises a number of consecutive frames, receiving a set of mark-up signals, associating said mark-up signals to a set of mark-up frames, wherein said mark-up frames are frames of said received original video sequence, wherein a time stamp of each of said mark-up signals corresponds to a time stamp of each mark-up frame respectively, generating a set of first video sequences of said original video sequence, wherein each first video sequence comprises one of said mark-up frames, presenting at least one frame of a number of generated first video sequences, selecting one of said first video sequences, presenting at least one frame of a set of second video sequences, selecting one of said second video sequences, and combining said selected first video sequence with said selected second video sequence into a combined video sequence.
  • first video sequences may be generated by mark-up signals during the broadcasting of the original video sequence. This implies that the combination of the first and second video sequence may be made more rapidly, which in turn implies that TV commercials including a highlight of the original video sequence may be sent in close connection to the original video sequence.
  • the second video sequence may be predetermined.
  • the first video sequence may be presented by said mark-up frame.
  • An advantage of presenting said first video sequence by the associated mark-up frame is that the mark-up frame properly illustrates the content of the first video sequence.
  • a frame of said second video sequence may comprise an area configured for insertion of video data from said first video sequence.
  • An advantage of this is that how the first and second video sequences are to be combined can be prepared in advance, which means that the combination process can be made faster and more efficient.
  • In order to replace the replacement area of said second video sequence with video data from said first video sequence, a so-called matte technique may be used.
  • An example of such a technique is the so-called blue screen area technique.
  • a frame of said second video sequence may be configured to be superposed onto video data of a frame of said first video sequence.
  • said second video sequence may be divided into a number of layers, such as a foreground layer comprising predetermined video data and a background layer in which video data of said first video sequence is to be placed.
  • the background layer of the frame of said second video sequence may be replaced with video data from the frame of said first video sequence. Thereafter, the two layers of the frame of said second video sequence may be merged into a combined frame comprising one layer.
  • the combining of this first aspect may further comprise adjusting a duration, i.e. the length in time, of said first video sequence to correspond to the duration of said second video sequence, or a selected part of it.
  • the adjustment may be made by adjusting the frame rate of the first video sequence or by removing frames of the first video sequence, or a combination thereof.
  • the combining of this first aspect may further comprise adjusting a duration of said second video sequence to correspond to the duration of said first video sequence, or a selected part of it.
  • the duration, i.e. the length in time, of the second video sequence is adjusted to correspond to the duration of the first video sequence, or a selected part of it, which in turn means that the combination of the two sequences is made more easily.
  • the adjustment may be made by adjusting the frame rate of the second video sequence or by removing frames of the second video sequence, or a combination thereof.
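The duration adjustment described above can be sketched in a few lines of Python. This is an illustrative sketch only, not code from the patent: the function name is hypothetical and a plain list stands in for decoded frames. It resamples a frame list to a target length by dropping or repeating frames at evenly spaced positions, which approximates a frame-rate adjustment:

```python
def adjust_duration(frames, target_len):
    """Resample a frame list to target_len frames by dropping or
    repeating frames at evenly spaced positions (a simple stand-in
    for a frame-rate adjustment)."""
    if target_len <= 0 or not frames:
        return []
    step = len(frames) / target_len
    return [frames[min(int(i * step), len(frames) - 1)]
            for i in range(target_len)]

# A 7-frame highlight compressed to fit a 5-frame template slot:
highlight = [4, 5, 6, 7, 8, 9, 10]
print(adjust_duration(highlight, 5))  # → [4, 5, 6, 8, 9]
```

The same function both shortens and lengthens a sequence, so it can be applied to either the first or the second video sequence, as in the two variants described above.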
  • the combining may be performed within two hours from the generation of said first video sequence.
  • a module for combining a first and a second video sequence comprising a video data receiver for receiving an original video sequence, a mark-up signal receiver for receiving a mark-up signal, an associater configured to associate said mark-up signal to a mark-up frame of said video data, wherein a time stamp of said mark-up signal corresponds to a time stamp of said mark-up frame, a video sequence generator configured to generate a first video sequence comprising said mark-up frame, a memory configured to store said first video sequence, a memory configured to store said second video sequence, a first presentation frame transmitter configured to transmit a frame presenting said first video sequence, a second presentation frame transmitter configured to transmit a frame presenting said second video sequence, a first video selection signal receiver configured to receive a first video selection signal, a second video selection signal receiver configured to receive a second video selection signal, and a combiner configured to combine video data of said first video sequence and video data of said second video sequence into a combined video sequence.
  • first video sequences may be generated by mark-up signals during the broadcasting of the original video sequence. This implies that the combination of the first and second video sequence may be made more rapidly, which in turn implies that TV commercials including a highlight of the original video sequence may be sent in close connection to the original video sequence.
  • the frame presenting said first video sequence may comprise said mark-up frame.
  • the combiner may be configured to insert video data of a frame of said first video sequence in an area of a frame of said second video sequence.
  • the combiner may be configured to superpose video data of said second video sequence onto video data of said first video sequence.
  • the combiner may be configured to adjust a duration of said first video sequence to correspond to the duration of said second video sequence, or a selected part of it. In this second aspect of the invention the combiner may be configured to adjust a duration of said second video sequence to correspond to the duration of said first video sequence, or a selected part of it.
  • an apparatus comprising a receiver configured to receive an original video sequence from a communications network, a module according to any of claim 9-15 configured to receive said original video sequence, to transmit a first presentation frame and a second presentation frame, to receive a first video selection signal and a second video selection signal and to transmit a combined video sequence, a display configured to show said first presentation frame and said second presentation frame, an input device configured to transmit said first video selection signal and second video selection signal, and a transmitter configured to transmit said combined video sequence to a communications network.
  • a computer-readable medium having computer-executable components comprising instructions for receiving an original video sequence, wherein said original video sequence comprises a number of consecutive frames, receiving a set of mark-up signals, associating said mark-up signals to a set of mark-up frames, wherein said mark-up frames are frames of said received original video sequence, wherein a time stamp of each of said mark-up signals corresponds to a time stamp of each mark-up frame respectively, generating a set of first video sequences of said original video sequence, wherein each first video sequence comprises one of said mark-up frames, presenting at least one frame of a number of generated first video sequences, selecting one of said first video sequences, presenting at least one frame of a set of second video sequences, selecting one of said second video sequences, and combining said selected first video sequence with said selected second video sequence into a combined video sequence.
  • the first video sequence may be presented by said mark-up frame.
  • a frame of said second video sequence may comprise an area configured for insertion of video data from said first video sequence.
  • a frame of said second video sequence may be configured to be superposed onto video data of a frame of said first video sequence.
  • the combining may further comprise adjusting a duration of said first video sequence to correspond to the duration of said second video sequence, or a part of it.
  • the combining may further comprise adjusting a duration of said second video sequence to correspond to the duration of said first video sequence, or a part of it.
  • Fig 1 generally illustrates an example of an original video sequence.
  • Fig 2 generally illustrates the principle for combining a first and a second video sequence into a combined video sequence.
  • Fig 3 generally illustrates a way to combine the first and the second video sequence into a combined video sequence.
  • Fig 4 generally illustrates another way to combine the first and the second video sequence into a combined video sequence.
  • Fig 5 generally illustrates an example of generation of a TV commercial comprising video data from a recently broadcast live show, such as a football match.
  • Fig 6 illustrates an example of a first step of a graphical user interface (GUI), wherein the first step is used for inputting mark-up signals.
  • Fig 7 illustrates an example of a second step of the graphical user interface (GUI), wherein the second step is used for selecting a first and a second video sequence to be combined.
  • Fig 8 illustrates an example of a third step of the graphical user interface (GUI), wherein the third step is used for adjusting a duration of the first or the second video sequence to be combined.
  • Fig 9 illustrates an example of a fourth step of the graphical user interface (GUI), wherein the fourth step is used to show a preview of the combined video sequence.
  • Fig 10 illustrates a general method for combining a first and a second video sequence into a combined video sequence according to the present invention.
  • Fig 11 generally illustrates a module arranged to combine a first and a second video sequence into a combined video sequence according to the present invention.
  • Fig 12 generally illustrates an apparatus for combining a first and a second video sequence into a combined video sequence according to the present invention.
  • Fig 1 generally illustrates an example of an original video sequence comprising a number of consecutive frames, denoted 1 to 14.
  • Such an original video sequence may, for instance, comprise a live video recording of a football match.
  • Each of the frames comprises video data and audio data corresponding to a certain point in time.
  • When a special event, such as a goal, occurs, a mark-up signal is input and associated to the frame corresponding in time.
  • a special event has occurred in connection to a frame denoted 7, and correspondingly a mark-up signal has been associated to this frame 7.
  • a frame with an associated mark-up signal may be denoted as a mark-up frame.
  • a number of frames before the mark-up frame 7, denoted as pre-mark-up frames, the mark-up frame 7 itself and a number of frames after the mark-up frame 7, denoted as post-mark-up frames, may be utilised.
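The cut around the mark-up frame can be sketched as follows. This is a hypothetical illustration, not code from the patent; the function name, the default window sizes, and the use of integers as stand-ins for frames are assumptions:

```python
def extract_highlight(frames, mark_index, pre=3, post=3):
    """Cut a first video sequence out of the original sequence:
    `pre` pre-mark-up frames, the mark-up frame itself, and `post`
    post-mark-up frames, clamped to the sequence boundaries."""
    start = max(0, mark_index - pre)
    end = min(len(frames), mark_index + post + 1)
    return frames[start:end]

# Original sequence with frames denoted 1..14; a mark-up signal
# associated to frame 7 (index 6) yields frames 4..10, as in fig 1.
original = list(range(1, 15))
print(extract_highlight(original, 6))  # → [4, 5, 6, 7, 8, 9, 10]
```

Changing `mark_index` corresponds to the correction described next, where an incorrectly input mark-up frame is moved to another frame of the original sequence.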
  • If the mark-up signal has been input incorrectly, it may be possible to change the mark-up frame to another of the frames of said original video sequence.
  • the mark-up signal may be input too late, i.e. after the special event has occurred, and in this case it is advantageous to change the mark-up frame.
  • the mark-up signal may be input manually by an operator or automatically by an automatic detection system.
  • Such an automatic detection system may be a sound detection system configured to detect certain sounds, such as applause, or, in the case of a football match, the system may be connected to a goal detection system.
  • From the frames around the mark-up frame, a first video sequence comprising the special event of the original video sequence may be generated.
  • In the illustrated example, the frames denoted 4 to 10 are used to generate the first video sequence.
  • the second video sequence comprising frames denoted A to G, comprises a video sequence to be combined with another video sequence.
  • a number of the frames of the second video sequence may, for instance, comprise blue screen areas to be replaced by video data from the first video sequence. Other matte techniques, such as green screen, may be used as well. Moreover, a number of the frames of the second video sequence may be configured to be superposed onto frames of the first video sequence.
  • a combined video sequence comprising combined frames, denoted 4A, 5B, 6C, 7D, 8E, 9F and 10G, is generated.
  • audio data associated to the frames used to generate the first and second video sequences may also be considered when combining the first and second video sequence.
  • the applause of the audience may be comprised in the combined video sequence.
  • Fig 3 illustrates combination of the first video sequence, herein illustrated as a single frame 300, and the second video sequence, herein illustrated as a single frame 302.
  • the procedure illustrated in fig 3 may be repeated for each of a number of frames in the first and second video sequence.
  • An area 304 of the frame 302 of the second video sequence is configured to be replaced by video data from the frame 300 of the first video sequence.
  • Such a configuration may be made, for instance, by defining the coordinate data for the boundaries of the area 304, or by representing the area 304 in a specific colour, i.e. using a matte technique.
  • the frame 300 of the first video sequence and the frame 302 of the second video sequence are input to a combiner 306, such as a processor, wherein a frame 308 of the combined video sequence is generated.
  • the frame 300 of the first video sequence may be inserted in an area 310 of the combined frame 308, which corresponds to the area 304 of the frame 302.
  • only a part of the frame 300 of the first video sequence may be used instead of using the entire frame 300 of the first video sequence.
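The blue-screen replacement of area 304 can be illustrated with a minimal per-pixel sketch. This is an assumption-laden toy, not the patent's implementation: frames are nested lists of RGB tuples and the key colour must match exactly, whereas real matte keying works on decoded video with a tolerance around the key colour:

```python
BLUE = (0, 0, 255)  # key colour marking the replacement area 304

def matte_combine(template_frame, highlight_frame, key=BLUE):
    """Replace every key-coloured pixel of the template frame (second
    video sequence) with the pixel at the same position in the
    highlight frame (first video sequence)."""
    return [
        [hl_px if tpl_px == key else tpl_px
         for tpl_px, hl_px in zip(tpl_row, hl_row)]
        for tpl_row, hl_row in zip(template_frame, highlight_frame)
    ]

# 2x2 toy frames: the template's blue pixels are filled from the highlight.
template = [[BLUE, (1, 1, 1)], [(2, 2, 2), BLUE]]
highlight = [[(9, 9, 9), (8, 8, 8)], [(7, 7, 7), (6, 6, 6)]]
print(matte_combine(template, highlight))
```

Defining the area by coordinate data instead, as the text also mentions, would simply replace the colour test with a bounds test on the pixel position.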
  • the second video sequence may be a predetermined video sequence comprising a template of the TV commercial.
  • Fig 4 illustrates combination of the first and the second video sequence.
  • the first video sequence is illustrated as a single frame 400 and the second video sequence is illustrated as a single frame 402.
  • the frame 402 is in this case divided into a foreground layer and a background layer 404.
  • the video data comprised in the second video sequence may be comprised in the foreground layer, while the video data of the first video sequence can be inserted in the background layer 404.
  • In the figure, the three black figures illustrate the foreground layer, and the underlying area drawn with diagonal lines illustrates the background layer 404.
  • the frame 400 of the first video sequence and the frame 402 of the second video sequence can be input to a combiner 406, wherein a frame 408 of the combined video sequence can be generated. After such a combination, a combined frame comprising the foreground layer of the frame 402 and the frame 400 is generated.
  • the procedure illustrated in fig 4 may be repeated for each of a number of frames in the first and second video sequence.
  • The ways of combining the first and second frames illustrated in figs 3 and 4 may themselves be combined, i.e. video data from the frame 400 of the first video sequence may be inserted in an area of the background layer.
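The layer merge of fig 4 can be sketched in the same toy representation. Again a hypothetical illustration, not from the patent: the foreground layer uses `None` for transparent pixels, so merging places highlight video data wherever the foreground does not cover it:

```python
def merge_layers(foreground, background):
    """Merge a foreground layer (None marks transparent pixels) onto a
    background layer taken from the first video sequence, yielding a
    single-layer combined frame."""
    return [
        [bg_px if fg_px is None else fg_px
         for fg_px, bg_px in zip(fg_row, bg_row)]
        for fg_row, bg_row in zip(foreground, background)
    ]

# The foreground covers only the top-left pixel; the rest shows the highlight.
fg = [[(0, 0, 0), None], [None, None]]
bg = [[(9, 9, 9), (8, 8, 8)], [(7, 7, 7), (6, 6, 6)]]
print(merge_layers(fg, bg))  # → [[(0, 0, 0), (8, 8, 8)], [(7, 7, 7), (6, 6, 6)]]
```

A production compositor would use an alpha channel rather than a `None` marker, but the layering principle is the same.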
  • Fig 5 generally illustrates an example of generation of a TV commercial comprising video data from a recently broadcast live show, such as a football match.
  • an original video sequence can be generated frame by frame.
  • a mark-up signal can be input.
  • a first video sequence can be generated as illustrated in fig 1.
  • In the illustrated example, two mark-up signals, m and n, have been input, wherein m corresponds to a goal and n corresponds to a happy face of the scorer.
  • Based on m, a first video sequence 500 can be generated, and based on n, a first video sequence 502 can be generated.
  • The two first video sequences, 500 and 502, may be edited if the number of pre-mark-up frames or post-mark-up frames is too small or too large, or if the mark-up frame is incorrect.
  • the first video sequences may also be used for replays during the broadcasted TV show.
  • the first video sequences may be categorized into a number of categories, such as goals, tackles etc.
  • the first video sequences may be associated to a priority, e.g. a beautiful long-distance shot in the top corner of the goal may be given a high priority and a rough situation close to the goal ending up in a goal may be given a low priority.
  • The first video sequences 500, 502 can be stored in a memory 504 and a number of second video sequences can be stored in a memory 506. Alternatively, the first and second video sequences can be stored in one and the same memory. Shortly before a TV commercial break, a first video sequence, comprising recent video data from the broadcast show, and a second video sequence, comprising a number of frames configured to be combined with other video data as illustrated in figs 3 and 4, can be chosen. The selection may be made manually by an operator or automatically by a selection algorithm. An example of such a procedure is further illustrated in figs 6 to 9.
  • first video sequences having high priorities can be emphasized.
  • a second video sequence can comprise information about which category of first video sequences it is aimed to be combined with.
  • After having chosen the first and second video sequences, these can be input to a combiner 508, wherein a combined video sequence 510, such as a TV commercial with recent video data, is generated. Next, the combined video sequence may be broadcast.
  • Fig 6 illustrates an example of a first step of a graphical user interface (GUI), wherein the first step is used for inputting mark-up signals.
  • GUI graphical user interface
  • the first step of the GUI can be shown on a display 600.
  • the first step of the GUI can comprise a window 602 for showing the original video sequence, and a mark-up button 604 for inputting a mark-up signal.
  • When the mark-up button 604 is pressed, a mark-up signal is input. Thereafter, the input mark-up signal can be associated to the current frame of the original video sequence shown in the window 602. After the mark-up signal has been associated to the current frame, a first video sequence can be generated, as described above.
  • the GUI may be controlled by utilising a cursor.
  • an undo button may be available in order to remove a video sequence.
  • an application specific input device may be used in order to make the process more effective.
  • Physical buttons, such as keys on a keyboard, may be used to control the GUI.
  • Fig 7 illustrates an example of a second step of a graphical user interface (GUI), wherein the second step is used for selecting a first and a second video sequence to be combined.
  • GUI graphical user interface
  • the second step of the GUI can be shown on a display 700.
  • the second step of the GUI can comprise a set 702 of first video sequences, a set 704 of second video sequences and a confirmation button 706.
  • Physical buttons, such as keys on a keyboard, may be used to control the GUI.
  • a first video sequence comprised in the set 702 can be shown by the mark-up frame of this first video sequence.
  • a second video sequence comprised in the set 704 can be shown by an image illustrating the content of this second video sequence.
  • the selection of a first video sequence of the set 702 and a second video sequence of the set 704 may be made by using a cursor 708, or by using a keyboard.
  • a border 710 of the frame illustrating the first video sequence may be emphasized.
  • a border 712 of the frame illustrating the second video sequence can be emphasized.
  • Fig 8 illustrates an example of a third step of a graphical user interface (GUI), wherein the third step is used for adjusting a duration of the first or the second video sequence to be combined.
  • GUI graphical user interface
  • the third step of the GUI can be shown on a display 800.
  • the third step of the GUI can comprise a number of frames 802 illustrating the selected first video sequence, and a number of frames 804 illustrating the selected second video sequence.
  • the third step of the GUI may comprise a confirmation button 806, a first duration adjustment button 808, a second duration adjustment button 810 and a cursor 812.
  • Physical buttons, such as keys on a keyboard, may be used to control the GUI.
  • When the first duration adjustment button 808 is pressed, the frame rate of the first video sequence is automatically adjusted such that the duration of the selected first video sequence corresponds to the duration of the selected second video sequence.
  • Optionally, a number of frames of the first video sequence and/or a number of frames of the second video sequence may be deselected by using the cursor 812. Hence, these deselected frames are not considered when adjusting the duration of the first video sequence in accordance with the second video sequence.
  • Alternatively, the duration of the second video sequence may be adjusted in accordance with the first video sequence. This type of adjustment is achieved by pressing the second duration adjustment button 810.
  • Fig 9 illustrates an example of the fourth step of a graphical user interface (GUI), wherein the fourth step is used to show a preview of the combined video sequence.
  • GUI graphical user interface
  • the fourth step of the GUI can be shown on a display 900.
  • the fourth step of the GUI can comprise a window 902 for showing the combined video sequence, a button 904 for re-entering the third step of the GUI, and a confirmation button 906.
  • Physical buttons, such as keys on a keyboard, may be used to control the GUI. If the user of the GUI is pleased with the shown combined video sequence, this video sequence may be transmitted to a broadcasting terminal by pressing the confirmation button 906. However, if the user is not pleased with the combined video sequence, he may re-enter the third step by pressing the button 904. Although only one original video sequence is present in the example described above, there may be several original video sequences present. This may be the case when the show is recorded by a number of cameras simultaneously.
  • Fig 10 illustrates a general method for combining a first and a second video sequence into a combined video sequence according to the present invention.
  • an original video sequence can be received.
  • This original video sequence may, for instance, be a broadcast live show, such as a football match.
  • a set of mark-up signals can be received.
  • These signals may, for instance, be received via a GUI as described above.
  • the set of mark-up signals can be associated to a set of mark-up frames. This association may be made by comparing time stamps of the mark-up signals and time stamps of the mark-up frames. Further, this association may be made frame by frame as soon as a mark-up signal is received.
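The time-stamp comparison in this step can be sketched as a nearest-time-stamp lookup. A hypothetical sketch only (millisecond time stamps, a linear scan); the patent does not prescribe this particular matching rule beyond corresponding time stamps:

```python
def associate(mark_signals, frame_timestamps):
    """For each mark-up signal time stamp, find the index of the frame
    whose time stamp is closest: that frame becomes the mark-up frame."""
    return [
        min(range(len(frame_timestamps)),
            key=lambda i: abs(frame_timestamps[i] - t))
        for t in mark_signals
    ]

# Frames at 40 ms intervals; a mark-up signal at t = 250 ms maps to the
# frame at 240 ms (index 6).
frames_ts = [i * 40 for i in range(14)]
print(associate([250], frames_ts))  # → [6]
```

Doing this frame by frame as each mark-up signal arrives, as the text describes, amounts to keeping only the most recent frame time stamps in the comparison.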
  • a set of first video sequences can be generated, as illustrated in fig 1.
  • In a fifth step 1008, at least one frame of the generated set of first video sequences can be presented.
  • the first video sequences may be presented on a display.
  • one of the first video sequences can be selected.
  • at least one frame of the second video sequences can be presented.
  • the second video sequences may be presented on a display.
  • one of the second video sequences can be selected.
  • the selected first video sequence and the selected second video sequence can be combined into a combined video sequence.
  • Fig 11 illustrates a module 1100 configured to combine a first and a second video sequence into a combined video sequence.
  • the module may be realised as a software module or as a hardware module, or as a combination thereof, such as an FPGA circuit, an ASIC with pre-installed software etc.
  • the module 1100 can comprise a video data receiver 1102 configured to receive an original video sequence and a mark-up receiver 1104 configured to receive mark-up signals.
  • the received original video sequence and the received mark-up signals can be transmitted to an associater 1106, where the mark-up signals can be associated to frames of the original video sequence.
  • Such an association may be made by comparing time stamps of the frames of the received original video sequence and time stamps of the received mark-up signals.
  • the frames to which the mark-up signals are associated are referred to as mark-up frames.
  • the original video sequence with associated mark-up signals can be transmitted to a video sequence generator 1108 for generating first video sequences of said original video sequence.
  • the generated first video sequences can then be stored in a first video sequence memory 1110.
  • the generated first video sequences can be transmitted to a first presentation frame transmitter 1112.
  • This first presentation frame transmitter 1112 may output one or several presentation frames of the first video sequences. These presentation frames can be used to select which of the first video sequences is to be combined with a second video sequence.
  • a set of second video sequences can be stored in a second video sequence memory 1114.
  • the stored second video sequences can be transmitted to a second presentation frame transmitter 1116.
  • This second presentation frame transmitter 1116 can output one or several presentation frames of the second video sequences. These presentation frames can be used to select which of the second video sequences is to be combined with a first video sequence.
  • a first video selection signal receiver 1118 can be configured to receive a first video selection signal, where the first video selection signal can comprise information about which of the first video sequences is to be combined with a second video sequence.
  • a second video selection signal receiver 1120 can be configured to receive a second video selection signal, where the second video selection signal can comprise information of which of the second video sequences that is to be combined with the first video sequence.
  • the first video selection signal and the second video selection signal can be transmitted to a combiner 1122. Based on these signals, a first video sequence pointed out by said first video selection signal can be gathered from the first video sequence memory 1110, and a second video sequence pointed out by said second video selection signal can be gathered from the second video sequence memory 1114.
  • Fig 12 illustrates an apparatus 1200 for combining a first and a second video sequence into a combined video sequence.
  • the apparatus can comprise the module 1100, a receiver 1202 configured to receive an original video sequence from a communications network 1204, such as a broadcasting network, a display 1206 configured to show the presentation frame(s) of the first and second video sequences output from the module 1100, an input device 1208, such as a keyboard, configured to transmit the first video selection signal and the second video selection signal to the module 1100, and a transmitter 1210 configured to transmit the combined video sequence to the communications network 1204.
  • the transmitted combined video sequence may thereafter be transmitted to a number of receivers 1212a-1212d, such as TV apparatuses, mobile phones, personal computers and other devices suitable for displaying video sequences.
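By way of a non-limiting illustration, the time stamp comparison performed by the associater 1106 may be sketched as follows. This sketch is not part of the application; the names `Frame` and `associate_markups` are hypothetical.

```python
# Illustrative sketch (not from the application): associating mark-up
# signals to frames by comparing time stamps, as the associater 1106 does.
from dataclasses import dataclass


@dataclass
class Frame:
    index: int
    timestamp: float  # seconds since start of the original video sequence


def associate_markups(frames, markup_timestamps):
    """For each mark-up signal, return the index of the frame whose time
    stamp is closest to the signal's time stamp (the mark-up frame)."""
    return [
        min(frames, key=lambda f: abs(f.timestamp - ts)).index
        for ts in markup_timestamps
    ]


frames = [Frame(i, i / 25.0) for i in range(1, 15)]  # 25 fps, frames 1..14
print(associate_markups(frames, [7 / 25.0]))  # → [7]
```

In a live setting the association would typically be made frame by frame as each mark-up signal arrives, rather than over a stored list of frames.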

Abstract

The invention relates to a method for rapid combination of a first and a second video sequence into a combined video sequence, such as a TV commercial comprising a highlight sequence from a recently broadcasted show. In a first step, a number of first video sequences is generated from an original video sequence, such as a live show, by using mark-up signals. Next, one of the generated first video sequences is chosen to be combined with one of a number of second video sequences, wherein the second video sequences are configured to be combined with said first video sequences. Finally, the first and second video sequences are combined into the combined video sequence.

Description

A METHOD FOR COMBINING VIDEO SEQUENCES AND AN APPARATUS THEREOF
Technical field
The present invention generally relates to a method for combining a first and a second video sequence into a combined video sequence, as well as a module, an apparatus and a computer-readable medium thereof.
Background of the invention
Many companies offering consumer products advertise their products via TV commercials. In order to reach the target group for their products, it is important to have a TV commercial that is attractive to the target group, and, perhaps even more important, to broadcast the TV commercial in connection to a show or event that the target group finds interesting or amusing. Therefore, the cost of broadcasting a TV commercial in connection to an event having many viewers, such as the Super Bowl, is very high.
However, although the TV commercial is broadcasted in connection to a popular event or show, the viewers may switch channel or leave the TV during the TV commercials. In order to make the viewers stay at their TV apparatuses during the commercials, the advertising companies try to create more attractive TV commercials by means of better music, movie stars participating, etc., as well as replacing their TV commercials more frequently. Nevertheless, even though the TV commercials are made more attractive and are replaced frequently, many viewers will switch channel or leave the TV apparatuses during the TV commercials, and hence the messages from the advertising companies will not reach the viewers.
Summary
In view of the above, an objective of the invention is to solve or at least reduce the problems discussed above. In particular, an objective is to present a method for generating a TV commercial by combining a first video sequence, corresponding to a recently broadcasted sequence, e.g. a highlight of an original video sequence, with a second video sequence, herein also referred to as a TV commercial template, configured to be combined with the first video sequence. In this way a unique TV commercial including a highlight sequence, such as a touchdown in the Super Bowl, may be generated rapidly and broadcasted in the TV commercial break.
An advantage of this is that there is a connection between the broadcasted show and the TV commercial. Therefore, since highlights are included in the TV commercial, more viewers will stay tuned during the TV commercial break.
Another advantage is that the TV commercial will vary from time to time, which means that more viewers will stay tuned and see the TV commercials. Still another advantage is the use of templates. In order to generate a TV commercial including a highlight, the editing of such a TV commercial must be made rapidly. This is solved by providing template-based video sequence combination.
The object is achieved according to a first aspect of the invention by a method comprising receiving an original video sequence, wherein said original video sequence comprises a number of consecutive frames, receiving a set of mark-up signals, associating said mark-up signals to a set of mark-up frames, wherein said mark-up frames are frames of said received original video sequence, wherein a time stamp of each of said mark-up signals corresponds to a time stamp of each mark-up frame respectively, generating a set of first video sequences of said original video sequence, wherein each first video sequence comprises one of said mark-up frames, presenting at least one frame of a number of generated first video sequences, selecting one of said first video sequences, presenting at least one frame of a set of second video sequences, selecting one of said second video sequences, and combining said selected first video sequence with said selected second video sequence into a combined video sequence.
An advantage of this is that first video sequences may be generated by mark-up signals during the broadcasting of the original video sequence. This implies that the combination of the first and second video sequence may be made more rapidly, which in turn implies that TV commercials including a highlight of the original video sequence may be sent in close connection to the original video sequence. In this first aspect of the invention the second video sequence may be predetermined.
An advantage of this is that the second video sequence may be prepared to be combined with another video sequence, which implies that the combination of the first and second video sequences may be made rapidly.
In this first aspect of the invention the first video sequence may be presented by said mark-up frame.
An advantage of presenting said first video sequence by the associated mark-up frame is that the mark-up frame illustrates the content of the first video sequence properly.
In this first aspect of the invention, a frame of said second video sequence may comprise an area configured for insertion of video data from said first video sequence.
An advantage of this is that how the first and second video sequences are to be combined can be prepared in advance, which means that the combination process can be made more effective and faster.
In order to replace the replacement area of said second video sequence with video data from said first video sequence, a so-called matte technique may be used. An example of such a technique is the so-called blue screen technique.
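A matte replacement of this kind may, purely by way of illustration, be sketched as below. The sketch is not part of the application; it assumes one RGB tuple per pixel and a pure-blue key colour, and the names used are hypothetical.

```python
# Illustrative sketch (not from the application): a simple blue screen matte.
# Pixels of the second-sequence frame that match the key colour are replaced
# by the corresponding pixels of the first-sequence frame.
BLUE_KEY = (0, 0, 255)  # assumed key colour


def matte_combine(first_frame, second_frame, key=BLUE_KEY):
    """Replace key-coloured pixels of second_frame with first_frame pixels."""
    return [
        [src if dst == key else dst
         for src, dst in zip(src_row, dst_row)]
        for src_row, dst_row in zip(first_frame, second_frame)
    ]


first = [[(10, 20, 30), (40, 50, 60)]]
second = [[(0, 0, 255), (255, 255, 255)]]   # left pixel is the blue screen area
print(matte_combine(first, second))          # → [[(10, 20, 30), (255, 255, 255)]]
```

A practical system would of course operate on real image buffers and tolerate near-key colours, but the principle of replacing a designated area is the same.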
In this first aspect of the invention, a frame of said second video sequence may be configured to be superposed onto video data of a frame of said first video sequence. An advantage of this is that more sophisticated combined video sequences may be achieved.
In order to superpose said second video sequence onto said first video sequence, said second video sequence may be divided into a number of layers, such as a foreground layer comprising predetermined video data and a background layer in which video data of said first video sequence is to be placed. When combining a frame of said video sequence and a frame of said first video sequence, the background layer of the frame of said second video sequence may be replaced with video data from the frame of said first video sequence. Thereafter, the two layers of the frame of said second video sequence may be merged into a combined frame comprising one layer.
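The layer merge described above may be sketched as follows. This is an illustrative sketch, not part of the application; marking transparent foreground pixels with `None` is an assumption, as are the names.

```python
# Illustrative sketch (not from the application): merging the foreground
# layer of a second-sequence frame with a first-sequence frame placed in
# the background layer. None marks a transparent foreground pixel.
def merge_layers(foreground, first_frame):
    """Where the foreground layer is transparent, show the first-sequence
    frame; elsewhere, show the predetermined foreground video data."""
    return [
        [fg if fg is not None else bg
         for fg, bg in zip(fg_row, bg_row)]
        for fg_row, bg_row in zip(foreground, first_frame)
    ]


foreground = [[(0, 0, 0), None]]             # one opaque pixel, one transparent
background = [[(9, 9, 9), (7, 7, 7)]]        # frame of the first video sequence
print(merge_layers(foreground, background))  # → [[(0, 0, 0), (7, 7, 7)]]
```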
Optionally, it is possible to combine this approach with the replacement area approach described above. The combining of this first aspect may further comprise adjusting a duration of said first video sequence to correspond to the duration of said second video sequence, or a selected part of it.
An advantage of this is that the duration, i.e. the length in time, of the first video sequence is adjusted to correspond to the duration of the second video sequence, or a selected part of it, which in turn means that the combination of the two sequences is made more easily. The adjustment may be made by adjusting the frame rate of the first video sequence or by removing frames of the first video sequence, or a combination thereof. The combining of this first aspect may further comprise adjusting a duration of said second video sequence to correspond to the duration of said first video sequence, or a selected part of it.
An advantage of this is that the duration, i.e. the length in time, of the second video sequence is adjusted to correspond to the duration of the first video sequence, or a selected part of it, which in turn means that the combination of the two sequences is made more easily. The adjustment may be made by adjusting the frame rate of the second video sequence or by removing frames of the second video sequence, or a combination thereof. In this first aspect of the invention the combining may be performed within two hours from the generation of said first video sequence.
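The duration adjustment by removing (or repeating) frames may, as a non-limiting illustration, be sketched as below. The sketch is not part of the application and the function name is hypothetical; adjusting the frame rate instead would keep all frames and change only the playback timing.

```python
# Illustrative sketch (not from the application): adjusting the duration of
# one video sequence to a target number of frames by resampling, i.e. by
# removing (or repeating) frames at evenly spaced positions.
def adjust_duration(frames, target_length):
    """Resample frames to target_length by nearest-index selection."""
    if target_length <= 0 or not frames:
        return []
    step = len(frames) / target_length
    return [frames[min(int(i * step), len(frames) - 1)]
            for i in range(target_length)]


first = [4, 5, 6, 7, 8, 9, 10]       # seven frames of the first video sequence
print(adjust_duration(first, 5))     # → [4, 5, 6, 8, 9]
```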
The object is achieved according to a second aspect of the invention by a module for combining a first and a second video sequence comprising a video data receiver for receiving an original video sequence, a mark-up signal receiver for receiving a mark-up signal, an associater configured to associate said mark-up signal to a mark-up frame of said video data, wherein a time stamp of said mark-up signal corresponds to a time stamp of said mark-up frame, a video sequence generator configured to generate a first video sequence comprising said mark-up frame, a memory configured to store said first video sequence, a memory configured to store said second video sequence, a first presentation frame transmitter configured to transmit a frame presenting said first video sequence, a second presentation frame transmitter configured to transmit a frame presenting said second video sequence, a first video selection signal receiver configured to receive a first video selection signal, a second video selection signal receiver configured to receive a second video selection signal, and a combiner configured to combine video data of said first video sequence and video data of said second video sequence into a combined video sequence.
An advantage of this is that first video sequences may be generated by mark-up signals during the broadcasting of the original video sequence. This implies that the combination of the first and second video sequence may be made more rapidly, which in turn implies that TV commercials including a highlight of the original video sequence may be sent in close connection to the original video sequence.
The advantages of the first aspect are also applicable to this second aspect of the invention.
In this second aspect of the invention the frame presenting said first video sequence may comprise said mark-up frame.
In this second aspect of the invention the combiner may be configured to insert video data of a frame of said first video sequence in an area of a frame of said second video sequence.
In this second aspect of the invention the combiner may be configured to superpose video data of said second video sequence onto video data of said first video sequence.
In this second aspect of the invention the combiner may be configured to adjust a duration of said first video sequence to correspond to the duration of said second video sequence, or a selected part of it. In this second aspect of the invention the combiner may be configured to adjust a duration of said second video sequence to correspond to the duration of said first video sequence, or a selected part of it.
The object is achieved according to a third aspect of the invention by an apparatus comprising a receiver configured to receive an original video sequence from a communications network, a module according to any of claims 9 to 15 configured to receive said original video sequence, to transmit a first presentation frame and a second presentation frame, to receive a first video selection signal and a second video selection signal and to transmit a combined video sequence, a display configured to show said first presentation frame and said second presentation frame, an input device configured to transmit said first video selection signal and second video selection signal, and a transmitter configured to transmit said combined video sequence to a communications network. The advantages of the first aspect are also applicable to this third aspect of the invention.
The object is achieved according to a fourth aspect of the invention by a computer-readable medium having computer-executable components comprising instructions for receiving an original video sequence, wherein said original video sequence comprises a number of consecutive frames, receiving a set of mark-up signals, associating said mark-up signals to a set of mark-up frames, wherein said mark-up frames are frames of said received original video sequence, wherein a time stamp of each of said mark-up signals corresponds to a time stamp of each mark-up frame respectively, generating a set of first video sequences of said original video sequence, wherein each first video sequence comprises one of said mark-up frames, presenting at least one frame of a number of generated first video sequences, selecting one of said first video sequences, presenting at least one frame of a set of second video sequences, selecting one of said second video sequences, and combining said selected first video sequence with said selected second video sequence into a combined video sequence.
The advantages of the first aspect are also applicable to this fourth aspect of the invention.
In this fourth aspect of the invention the first video sequence may be presented by said mark-up frame.
In this fourth aspect of the invention, a frame of said second video sequence may comprise an area configured for insertion of video data from said first video sequence.
In this fourth aspect of the invention, a frame of said second video sequence may be configured to be superposed onto video data of a frame of said first video sequence. In this fourth aspect of the invention the combining may further comprise adjusting a duration of said first video sequence to correspond to the duration of said second video sequence, or a part of it.
In this fourth aspect of the invention the combining may further comprise adjusting a duration of said second video sequence to correspond to the duration of said first video sequence, or a part of it.
Although only TV commercials have been discussed, the general principle may be utilised by any medium broadcasting shows with commercial or information breaks. Other objectives, features and advantages of the present invention will appear from the following detailed disclosure, from the attached dependent claims as well as from the drawings.
Generally, all terms used in the claims are to be interpreted according to their ordinary meaning in the technical field, unless explicitly defined otherwise herein. All references to "a/an/the [element, device, component, means, step, etc]" are to be interpreted openly as referring to at least one instance of said element, device, component, means, step, etc., unless explicitly stated otherwise. The steps of any method disclosed herein do not have to be performed in the exact order disclosed, unless explicitly stated.
Brief description of the drawings
The above, as well as additional objects, features and advantages of the present invention, will be better understood through the following illustrative and non-limiting detailed description of preferred embodiments of the present invention, with reference to the appended drawings, where the same reference numerals will be used for similar elements, wherein:
Fig 1 generally illustrates an example of an original video sequence.
Fig 2 generally illustrates the principle for combining a first and a second video sequence into a combined video sequence.
Fig 3 generally illustrates a way to combine the first and the second video sequence into a combined video sequence.
Fig 4 generally illustrates another way to combine the first and the second video sequence into a combined video sequence.
Fig 5 generally illustrates an example of generation of a TV commercial comprising video data from a recently broadcasted live show, such as a football match.
Fig 6 illustrates an example of a first step of a graphical user interface (GUI), wherein the first step is used for inputting mark-up signals.
Fig 7 illustrates an example of a second step of the graphical user interface (GUI), wherein the second step is used for selecting a first and a second video sequence to be combined.
Fig 8 illustrates an example of a third step of the graphical user interface (GUI), wherein the third step is used for adjusting a duration of the first or the second video sequence to be combined.
Fig 9 illustrates an example of a fourth step of the graphical user interface (GUI), wherein the fourth step is used to show a preview of the combined video sequence.
Fig 10 illustrates a general method for combining a first and a second video sequence into a combined video sequence according to the present invention.
Fig 11 generally illustrates a module arranged to combine a first and a second video sequence into a combined video sequence according to the present invention.
Fig 12 generally illustrates an apparatus for combining a first and a second video sequence into a combined video sequence according to the present invention.
Detailed description of preferred embodiments
Fig 1 generally illustrates an example of an original video sequence comprising a number of consecutive frames, denoted 1 to 14. Such an original video sequence may, for instance, comprise a live video recording of a football match. Each of the frames comprises video data and audio data corresponding to a certain point of time.
When a special event, such as a goal, occurs in the original video sequence, a mark-up signal is input and associated to a time-corresponding frame. In the example illustrated in fig 1, a special event has occurred in connection to a frame denoted 7, and correspondingly a mark-up signal has been associated to this frame 7. A frame with an associated mark-up signal may be denoted as a mark-up frame.
In order to generate a video sequence comprising the special event corresponding to the mark-up signal, a number of frames before the mark-up frame 7, denoted as pre-mark-up frames, the mark-up frame 7 itself and a number of frames after the mark-up frame 7, denoted as post-mark-up frames, may be utilised. In the example illustrated in fig 1, three pre-mark-up frames 4, 5, 6 and three post-mark-up frames 8, 9, 10 are used. However, depending on, for instance, the frame rate, a different number of frames may be used. Further, it may be possible to adjust the number of pre-mark-up frames and post-mark-up frames from case to case.
If the mark-up frame has been input incorrectly, it may be possible to change the mark-up frame to another of the frames of said original video sequence. For instance, the mark-up signal may be input too late, i.e. after the special event has occurred, and in this case it is advantageous to change the mark-up frame.
The mark-up signal may be input manually by an operator or automatically by an automatic detection system. Such an automatic detection system may be a sound detection system configured to detect certain sounds, such as applause, or, in the case of a football match, the system may be connected to a goal detection system.
After having determined the mark-up frame, as well as pre-mark-up frames and post-mark-up frames, a first video sequence, comprising a special event of the original video sequence, may be generated. In the example illustrated in fig 1 , the frames denoted 4 to 10 are used to generate the first video sequence.
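As a non-limiting illustration, this generation step may be sketched as a slice around the mark-up frame. The sketch is not part of the application; the function name and the default of three pre- and post-mark-up frames are assumptions taken from the fig 1 example.

```python
# Illustrative sketch (not from the application): generating a first video
# sequence from a mark-up frame plus a configurable number of pre-mark-up
# and post-mark-up frames, clamped to the bounds of the original sequence.
def first_video_sequence(frames, markup_index, pre=3, post=3):
    """Return the slice of the original sequence around the mark-up frame."""
    start = max(0, markup_index - pre)
    end = min(len(frames), markup_index + post + 1)
    return frames[start:end]


original = list(range(1, 15))                             # frames denoted 1 to 14
print(first_video_sequence(original, original.index(7)))  # → [4, 5, 6, 7, 8, 9, 10]
```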
In fig 2 the general principle for combining the first and a second video sequence into a combined video sequence is illustrated.
The first video sequence, comprising the frames 4 to 10, comprises a special event, such as a goal, of the original video sequence and the second video sequence, comprising frames denoted A to G, comprises a video sequence to be combined with another video sequence.
A number of the frames of the second video sequence may, for instance, comprise blue screen areas to be replaced by video data from the first video sequence. Other matte techniques, such as green screen, may be used as well. Moreover, a number of the frames of the second video sequence may be configured to be superposed onto frames of the first video sequence.
By combining the first video sequence and the second video sequence, a combined video sequence comprising combined frames, denoted 4A, 5B, 6C, 7D, 8E, 9F and 10G, is generated.
Optionally, audio data associated to the frames used to generate the first and second video sequences may also be considered when combining the first and second video sequence. In this way, for instance, the applause of the audience may be comprised in the combined video sequence.
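The frame-by-frame pairing of fig 2 may, purely as an illustrative sketch outside the application, be expressed as follows, with the actual per-frame combination (matte replacement, superposition, etc.) passed in as an operation:

```python
# Illustrative sketch (not from the application): combining the first and
# second video sequences frame by frame, yielding combined frames such as
# 4A, 5B, ..., 10G. combine_frame is any per-frame combination operation.
def combine_sequences(first, second, combine_frame):
    """Pair up time-corresponding frames and combine each pair."""
    return [combine_frame(a, b) for a, b in zip(first, second)]


first = [4, 5, 6, 7, 8, 9, 10]           # frames of the first video sequence
second = list("ABCDEFG")                 # frames of the second video sequence
print(combine_sequences(first, second, lambda a, b: f"{a}{b}"))
# → ['4A', '5B', '6C', '7D', '8E', '9F', '10G']
```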
Fig 3 illustrates combination of the first video sequence, herein illustrated as a single frame 300, and the second video sequence, herein illustrated as a single frame 302. The procedure illustrated in fig 3 may be repeated for each of a number of frames in the first and second video sequence.
An area 304 of the frame 302 of the second video sequence is configured to be replaced by video data from the frame 300 of the first video sequence. Such a configuration may be made, for instance, by defining the coordinate data for the boundaries of the area 304, or by representing the area 304 in a specific colour, i.e. using a matte technique.
Further, the frame 300 of the first video sequence and the frame 302 of the second video sequence are input to a combiner 306, such as a processor, wherein a frame 308 of the combined video sequence is generated. After such a combination, the frame 300 of the first video sequence may be inserted in an area 310 of the combined frame 308, which corresponds to the area 304 of the frame 302. Alternatively, instead of using the entire frame 300 of the first video sequence, only a part of the frame 300 of the first video sequence may be used.
If the combined video sequence is intended to be broadcasted as a TV commercial, the second video sequence may be a predetermined video sequence comprising a template of the TV commercial.
Fig 4 illustrates combination of the first and the second video sequence. As in fig 3, the first video sequence is illustrated as a single frame 400 and the second video sequence is illustrated as a single frame 402. The frame 402 is in this case divided into a foreground layer and a background layer 404. The video data comprised in the second video sequence may be comprised in the foreground layer, while the video data of the first video sequence can be inserted in the background layer 404. In the example illustrated in fig 4, the three black figures illustrate the foreground layer and the underlying area drawn with diagonal lines illustrates the background layer 404.
The frame 400 of the first video sequence and the frame 402 of the second video sequence can be input to a combiner 406, wherein a frame 408 of the combined video sequence can be generated. After such a combination, a combined frame comprising the foreground layer of the frame 402 and the frame 400 is generated. Alternatively, instead of using the entire frame 400 of the first video sequence, only a part of the frame 400 may be used.
The procedure illustrated in fig 4 may be repeated for each of a number of frames in the first and second video sequence. The ways for combining the first and second frames illustrated in fig 3 and 4 may be combined, i.e. video data from the frame 400 of the first video sequence may be inserted in an area of the background layer.
Fig 5 generally illustrates an example of generation of a TV commercial comprising video data from a recently broadcasted live show, such as a football match.
As the show proceeds, an original video sequence can be generated frame by frame. When a special event occurs, such as a goal being scored, a mark-up signal can be input. Based on the mark-up signal, a first video sequence can be generated as illustrated in fig 1. In the example illustrated in fig 5, two mark-up signals, m and n, have been input, wherein m corresponds to a goal and n corresponds to a happy face of the scorer. Based on the mark-up signal m a first video sequence 500 can be generated and based on n a first video sequence 502 can be generated.
The two first video sequences, 500 and 502, may be edited if the number of pre-mark-up frames or post-mark-up frames is too small or too large, or if the mark-up frame is incorrect.
Optionally, it is possible to add extra mark-up signals afterwards, if an appropriate storage medium for the original video sequence is used. Optionally, the first video sequences may also be used for replays during the broadcasted TV show.
Optionally, the first video sequences may be categorized into a number of categories, such as goals, tackles etc. Moreover, the first video sequences may be associated to a priority, e.g. a beautiful long-distance shot in the top corner of the goal may be given a high priority and a rough situation close to the goal ending up in a goal may be given a low priority.
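Such categorization and prioritization may, as an illustrative sketch outside the application, be represented as below; the category names, the priority scale and all identifiers are assumptions.

```python
# Illustrative sketch (not from the application): representing categorized
# and prioritized first video sequences so that selection can be sped up.
from dataclasses import dataclass, field


@dataclass
class FirstVideoSequence:
    frames: list
    category: str = "goal"   # e.g. "goal", "tackle"
    priority: int = 0        # higher = emphasized in the selection step


def candidates_for_template(sequences, wanted_category):
    """Sequences matching the template's category, highest priority first."""
    return sorted(
        (s for s in sequences if s.category == wanted_category),
        key=lambda s: s.priority,
        reverse=True,
    )


clips = [
    FirstVideoSequence(frames=[4, 5, 6, 7], category="goal", priority=9),
    FirstVideoSequence(frames=[11, 12], category="tackle", priority=5),
]
print([c.priority for c in candidates_for_template(clips, "goal")])  # → [9]
```

A second video sequence (template) carrying its intended category could then query this list directly, which matches the selection speed-up discussed above.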
The first video sequences 500, 502 can be stored in a memory 504 and a number of second video sequences can be stored in a memory 506. Alternatively, the first and second video sequences can be stored in one and the same memory. Shortly before a TV commercial break, a first video sequence, comprising recent video data from the broadcasted show, and a second video sequence, comprising a number of frames configured to be combined with other video data as illustrated in fig 3 and 4, can be chosen. The selection may be made manually by an operator or automatically by a selection algorithm. An example of such a procedure is further illustrated in fig 6 to 9.
Optionally, in order to facilitate and speed up the selection process, first video sequences having high priorities can be emphasized. Moreover, a second video sequence can comprise information of which category of first video sequences it is aimed to be combined with.
After having chosen the first and second video sequence, these can be input to a combiner 508 wherein a combined video sequence 510, such as a TV commercial with recent video data, is generated. Next, the combined video sequence may be broadcasted.
Fig 6 illustrates an example of a first step of a graphical user interface (GUI), wherein the first step is used for inputting mark-up signals.
The first step of the GUI can be shown on a display 600. The first step of the GUI can comprise a window 602 for showing the original video sequence, and a mark-up button 604 for inputting a mark-up signal.
When the mark-up button 604 is pressed a mark-up signal is input. Thereafter, the input mark-up signal can be associated to the current frame of the original video sequence shown in the window 602. After the mark-up signal has been associated to the current frame, a first video sequence can be generated, as described above.
Optionally, the GUI may be controlled by utilising a cursor.
Optionally, a regret button may be available in order to remove a video sequence. Optionally, an application specific input device may be used in order to make the process more effective.
Optionally, as an alternative to having screen buttons, physical buttons, such as keys on a keyboard, may be used to control the GUI.
Fig 7 illustrates an example of a second step of a graphical user interface (GUI), wherein the second step is used for selecting a first and a second video sequence to be combined.
The second step of the GUI can be shown on a display 700. The second step of the GUI can comprise a set 702 of first video sequences, a set 704 of second video sequences and a confirmation button 706. Optionally, as an alternative to having screen buttons, physical buttons, such as keys on a keyboard, may be used to control the GUI. A first video sequence comprised in the set 702 can be shown by the mark-up frame of this first video sequence.
A second video sequence comprised in the set 704 can be shown by an image illustrating the content of this second video sequence. The selection of a first video sequence of the set 702 and a second video sequence of the set 704 may be made by using a cursor 708, or by using a keyboard. When a selection of a first video sequence has been made, a border 710 of the frame illustrating the first video sequence may be emphasized. In the same way, a border 712 of the frame illustrating the second video sequence can be emphasized.
Fig 8 illustrates an example of a third step of a graphical user interface (GUI), wherein the third step is used for adjusting a duration of the first or the second video sequence to be combined.
The third step of the GUI can be shown on a display 800. The third step of the GUI can comprise a number of frames 802 illustrating the selected first video sequence, and a number of frames 804 illustrating the selected second video sequence. Further, the third step of the GUI may comprise a confirmation button 806, a first duration adjustment button 808, a second duration adjustment button 810 and a cursor 812. Optionally, as an alternative to having screen buttons, physical buttons, such as keys on a keyboard, may be used to control the GUI.
When the first duration adjustment button 808 is pressed, the frame rate of the first video sequence is automatically adjusted such that the duration of the selected first video sequence corresponds to the duration of the selected second video sequence.
Optionally, a number of frames of the first video sequence and/or a number of frames of the second video sequence may be deselected by using the cursor 812. Hence, these deselected frames are not considered when adjusting the duration of the first video sequence in accordance with the second video sequence.
Instead of adjusting the duration of the first video sequence in accordance with the second video sequence, the duration of the second video sequence may be adjusted in accordance with the first video sequence. This type of adjustment is achieved by pressing the second duration adjustment button 810.
After having adjusted the duration the confirmation button 806 may be pressed, and a fourth step of the GUI is reached. Fig 9 illustrates an example of the fourth step of a graphical user interface (GUI), wherein the fourth step is used to show a preview of the combined video sequence.
The fourth step of the GUI can be shown on a display 900. The fourth step of the GUI can comprise a window 902 for showing the combined video sequence, a button 904 for re-entering the third step of the GUI, and a confirmation button 906.
Optionally, as an alternative to having screen buttons, physical buttons, such as keys on a keyboard, may be used to control the GUI. If the user of the GUI is pleased with the shown combined video sequence, this video sequence may be transmitted to a broadcasting terminal by pressing the confirmation button 906. However, if the user is not pleased with the combined video sequence, he or she may re-enter the third step by pressing the button 904. Although only one original video sequence is present in the example described above, there may be several original video sequences present. This may be the case when the show is recorded by a number of cameras simultaneously.
Fig 10 illustrates a general method for combining a first and a second video sequence into a combined video sequence according to the present invention.
In a first step 1000, an original video sequence can be received. This original video sequence may, for instance, be a live broadcast show, such as a football match. In a second step 1002, a set of mark-up signals can be received.
These signals may, for instance, be received via a GUI as described above.
In a third step 1004, the set of mark-up signals can be associated to a set of mark-up frames. This association may be made by comparing time stamps of the mark-up signals and time stamps of the mark-up frames. Further, this association may be made frame by frame as soon as a mark-up signal is received.
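The timestamp comparison in the third step could, for instance, be realised as a nearest-timestamp search. The sketch below is only illustrative: the function name is assumed, frame time stamps are assumed to be sorted in ascending order, and frames are identified by index.

```python
import bisect

def associate(frame_timestamps, signal_timestamps):
    """Map each mark-up signal to the index of the frame whose time
    stamp is closest to the signal's time stamp."""
    markup_frames = []
    for t in signal_timestamps:
        i = bisect.bisect_left(frame_timestamps, t)
        if i == 0:
            markup_frames.append(0)
        elif i == len(frame_timestamps):
            markup_frames.append(len(frame_timestamps) - 1)
        else:
            # Compare the neighbour on each side and keep the closer one.
            before, after = frame_timestamps[i - 1], frame_timestamps[i]
            markup_frames.append(i if after - t < t - before else i - 1)
    return markup_frames

# 25 fps video: frames at 0.00, 0.04, 0.08, ... seconds.
frames = [n * 0.04 for n in range(250)]
print(associate(frames, [1.01, 5.29]))  # → [25, 132]
```

Because the search only needs the time stamps seen so far, the same logic also supports the frame-by-frame association mentioned above, applied as each mark-up signal arrives.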
In a fourth step 1006, a set of first video sequences can be generated, as illustrated in fig 1.
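Generating the set of first video sequences can be pictured as cutting a window of frames around each mark-up frame, so that each generated sequence comprises one mark-up frame. The function name and the window sizes below are arbitrary assumptions for illustration:

```python
def generate_first_sequences(original, markup_indices, before=50, after=75):
    """For each mark-up frame index, cut a sub-sequence of the original
    video containing that frame, clamped to the sequence bounds."""
    sequences = []
    for idx in markup_indices:
        start = max(0, idx - before)
        stop = min(len(original), idx + after + 1)
        sequences.append(original[start:stop])
    return sequences

original = list(range(1000))            # frame indices stand in for frames
clips = generate_first_sequences(original, [30, 500])
print(len(clips[0]), len(clips[1]))     # → 106 126
```

Note how the first clip is shorter: its mark-up frame lies near the start of the original sequence, so the leading part of the window is clipped.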
In a fifth step 1008, at least one frame of the generated set of first video sequences can be presented. The first video sequences may be presented on a display.
In a sixth step 1010, one of the first video sequences can be selected. In a seventh step 1012, at least one frame of the second video sequences can be presented. The second video sequences may be presented on a display.
In an eighth step 1014, one of the second video sequences can be selected.
In a ninth step 1016, the selected first video sequence and the selected second video sequence can be combined into a combined video sequence.
Fig 11 illustrates a module 1100 configured to combine a first and a second video sequence into a combined video sequence. The module may be realised as a software module or as a hardware module, or as a combination thereof, such as an FPGA circuit, an ASIC with pre-installed software, etc.
The module 1100 can comprise a video data receiver 1102 configured to receive an original video sequence and a mark-up receiver 1104 configured to receive mark-up signals. The received original video sequence and the received mark-up signals can be transmitted to an associater 1106, where the mark-up signals can be associated to frames of the original video sequence. Such an association may be made by comparing time stamps of the frames of the received original video sequence and time stamps of the received mark-up signals. The frames to which the mark-up signals are associated are referred to as mark-up frames.
Next, the original video sequence with associated mark-up signals can be transmitted to a video sequence generator 1108 for generating first video sequences of said original video sequence. The generated first video sequences can then be stored in a first video sequence memory 1110.
The generated first video sequences can be transmitted to a first presentation frame transmitter 1112. This first presentation frame transmitter 1112 may output one or several presentation frames of the first video sequences. These presentation frames can be used to select which of the first video sequences is to be combined with a second video sequence.
A set of second video sequences can be stored in a second video sequence memory 1114.
The stored second video sequences can be transmitted to a second presentation frame transmitter 1116. This second presentation frame transmitter 1116 can output one or several presentation frames of the second video sequences. These presentation frames can be used to select which of the second video sequences is to be combined with a first video sequence.
A first video selection signal receiver 1118 can be configured to receive a first video selection signal, where the first video selection signal can comprise information indicating which of the first video sequences is to be combined with a second video sequence.
Further, a second video selection signal receiver 1120 can be configured to receive a second video selection signal, where the second video selection signal can comprise information indicating which of the second video sequences is to be combined with the first video sequence.
The first video selection signal and the second video selection signal can be transmitted to a combiner 1122. Based on these signals, a first video sequence pointed out by said first video selection signal can be gathered from the first video sequence memory 1110, and a second video sequence pointed out by said second video selection signal can be gathered from the second video sequence memory 1114.
The gathered first video sequence and the gathered second video sequence can then be combined into a combined video sequence. Finally, the combined video sequence can be output from the module 1100.

Fig 12 illustrates an apparatus 1200 for combining a first and a second video sequence into a combined video sequence. The apparatus can comprise the module 1100, a receiver 1202 configured to receive an original video sequence from a communications network 1204, such as a broadcasting network, a display 1206 configured to show the presentation frame(s) of the first and second video sequences output from the module 1100, an input device 1208, such as a keyboard, configured to transmit the first video selection signal and the second video selection signal to the module 1100, and a transmitter 1210 configured to transmit the combined video sequence to the communications network 1204. The transmitted combined video sequence may thereafter be transmitted to a number of receivers 1212a-1212d, such as TV apparatuses, mobile phones, personal computers and other devices suitable for displaying video sequences.
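As an illustration of the combiner's insertion variant described above, where video data of the first sequence is written into an area of each frame of the second sequence, a simple picture-in-picture overwrite could look like the sketch below. The representation of frames as 2-D lists of pixel values is an assumption made purely for this example:

```python
def combine(first_seq, second_seq, top=0, left=0):
    """Overwrite a rectangular area of each second-sequence frame with
    the corresponding first-sequence frame. The two sequences are
    assumed to have equal length after duration adjustment."""
    combined = []
    for fg, bg_frame in zip(first_seq, second_seq):
        frame = [row[:] for row in bg_frame]   # copy background rows
        for r, fg_row in enumerate(fg):
            frame[top + r][left:left + len(fg_row)] = fg_row
        combined.append(frame)
    return combined

background = [[[0] * 8 for _ in range(6)]]    # one 6x8 all-zero frame
foreground = [[[1] * 4 for _ in range(3)]]    # one 3x4 all-one frame
out = combine(foreground, background, top=1, left=2)
print(out[0][1])                              # → [0, 0, 1, 1, 1, 1, 0, 0]
```

The superposition variant, where video data of the second sequence is laid over the first, would swap the roles of the two sequences and skip transparent pixels instead of overwriting unconditionally.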
The invention has mainly been described above with reference to a few embodiments. However, as is readily appreciated by a person skilled in the art, other embodiments than the ones disclosed above are equally possible within the scope of the invention, as defined by the appended patent claims.

Claims

1. A method comprising receiving an original video sequence, wherein said original video sequence comprises a number of consecutive frames, receiving a set of mark-up signals, associating said mark-up signals to a set of mark-up frames, wherein said mark-up frames are frames of said received original video sequence, wherein a time stamp of each of said mark-up signals corresponds to a time stamp of each mark-up frame respectively, generating a set of first video sequences of said original video sequence, wherein each first video sequence comprises one of said mark-up frames, presenting at least one frame of a number of generated first video sequences, selecting one of said first video sequences, presenting at least one frame of a set of second video sequences, selecting one of said second video sequences, and combining said selected first video sequence with said selected second video sequence into a combined video sequence.
2. The method according to claim 1, wherein said second video sequence is predetermined.
3. The method according to any of the preceding claims, wherein said first video sequence is presented by said mark-up frame.
4. The method according to any of the preceding claims, wherein a frame of said second video sequence comprises an area configured for insertion of video data from said first video sequence.
5. The method according to any of the preceding claims, wherein a frame of said second video sequence is configured to be superposed onto video data of a frame of said first video sequence.
6. The method according to any of the preceding claims, wherein said combining further comprises adjusting a duration of said first video sequence to correspond to the duration of said second video sequence, or a selected part of it.
7. The method according to any of claims 1-5, wherein said combining further comprises adjusting a duration of said second video sequence to correspond to the duration of said first video sequence, or a selected part of it.
8. The method according to any of the preceding claims, wherein said combining is performed within two hours from the generation of said first video sequence.
9. A module for combining a first and a second video sequence, comprising a video data receiver for receiving an original video sequence, a mark-up signal receiver for receiving a mark-up signal, an associater configured to associate said mark-up signal to a mark-up frame of said video data, wherein a time stamp of said mark-up signal corresponds to a time stamp of said mark-up frame, a video sequence generator configured to generate a first video sequence comprising said mark-up frame, a memory configured to store said first video sequence, a memory configured to store said second video sequence, a first presentation frame transmitter configured to transmit a frame presenting said first video sequence, a second presentation frame transmitter configured to transmit a frame presenting said second video sequence, a first video selection signal receiver configured to receive a first video selection signal, a second video selection signal receiver configured to receive a second video selection signal, and a combiner configured to combine video data of said first video sequence and video data of said second video sequence into a combined video sequence.
10. The module according to claim 9, wherein said frame presenting said first video sequence comprises said mark-up frame.
11. The module according to any of claims 9-10, wherein said combiner is configured to insert video data of a frame of said first video sequence in an area of a frame of said second video sequence.
12. The module according to any of claims 9-10, wherein said combiner is configured to superpose video data of said second video sequence onto video data of said first video sequence.
13. The module according to any of claims 9-12, wherein said combiner is configured to adjust a duration of said first video sequence to correspond to the duration of said second video sequence, or a selected part of it.
14. The module according to any of claims 9-12, wherein said combiner is configured to adjust a duration of said second video sequence to correspond to the duration of said first video sequence, or a selected part of it.
15. An apparatus comprising a receiver configured to receive an original video sequence from a communications network, a module according to any of claims 9-14 configured to receive said original video sequence, to transmit a first presentation frame and a second presentation frame, to receive a first video selection signal and a second video selection signal and to transmit a combined video sequence, a display configured to show said first presentation frame and said second presentation frame, an input device configured to transmit said first video selection signal and second video selection signal, and a transmitter configured to transmit said combined video sequence to a communications network.
16. A computer-readable medium having computer-executable components comprising instructions for receiving an original video sequence, wherein said original video sequence comprises a number of consecutive frames, receiving a set of mark-up signals, associating said mark-up signals to a set of mark-up frames, wherein said mark-up frames are frames of said received original video sequence, wherein a time stamp of each of said mark-up signals corresponds to a time stamp of each mark-up frame respectively, generating a set of first video sequences of said original video sequence, wherein each first video sequence comprises one of said mark-up frames, presenting at least one frame of a number of generated first video sequences, selecting one of said first video sequences, presenting at least one frame of a set of second video sequences, selecting one of said second video sequences, and combining said selected first video sequence with said selected second video sequence into a combined video sequence.
17. The computer-readable medium according to claim 16, wherein said first video sequence is presented by said mark-up frame.
18. The computer-readable medium according to any of claims 16-17, wherein a frame of said second video sequence comprises an area configured for insertion of video data from said first video sequence.
19. The computer-readable medium according to any of claims 16-17, wherein a frame of said second video sequence is configured to be superposed onto video data of a frame of said first video sequence.
20. The computer-readable medium according to any of claims 16-19, wherein said combining further comprises adjusting a duration of said first video sequence to correspond to the duration of said second video sequence, or a part of it.
21. The computer-readable medium according to any of claims 16-19, wherein said combining further comprises adjusting a duration of said second video sequence to correspond to the duration of said first video sequence, or a part of it.
PCT/SE2007/001013 2006-11-22 2007-11-16 A method for combining video sequences and an apparatus thereof Ceased WO2008063112A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP07835211A EP2100440A4 (en) 2006-11-22 2007-11-16 A method for combining video sequences and an apparatus thereof

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US86047806P 2006-11-22 2006-11-22
SE0602478-0 2006-11-22
US60/860478 2006-11-22
SE0602478A SE0602478L (en) 2006-11-22 2006-11-22 A method of combining video sequences and an apparatus thereof

Publications (1)

Publication Number Publication Date
WO2008063112A1 true WO2008063112A1 (en) 2008-05-29

Family

ID=39429956

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SE2007/001013 Ceased WO2008063112A1 (en) 2006-11-22 2007-11-16 A method for combining video sequences and an apparatus thereof

Country Status (2)

Country Link
EP (1) EP2100440A4 (en)
WO (1) WO2008063112A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5818542A (en) * 1996-04-10 1998-10-06 Discreet Logic, Inc. Processing image data
US6144391A (en) * 1992-03-13 2000-11-07 Quantel Limited Electronic video processing system
US20030184679A1 (en) * 2002-03-29 2003-10-02 Meehan Joseph Patrick Method, apparatus, and program for providing slow motion advertisements in video information
US20040100581A1 (en) * 2002-11-27 2004-05-27 Princeton Video Image, Inc. System and method for inserting live video into pre-produced video
US20040183949A1 (en) * 2003-02-05 2004-09-23 Stefan Lundberg Method and apparatus for combining video signals to one comprehensive video signal
US20050001852A1 (en) * 2003-07-03 2005-01-06 Dengler John D. System and method for inserting content into an image sequence

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1993021636A1 (en) * 1992-04-10 1993-10-28 Avid Technology, Inc. A method and apparatus for representing and editing multimedia compositions
US6426778B1 (en) * 1998-04-03 2002-07-30 Avid Technology, Inc. System and method for providing interactive components in motion video
US6473094B1 (en) * 1999-08-06 2002-10-29 Avid Technology, Inc. Method and system for editing digital information using a comparison buffer


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2100440A4 *

Also Published As

Publication number Publication date
EP2100440A1 (en) 2009-09-16
EP2100440A4 (en) 2010-06-02

Similar Documents

Publication Publication Date Title
US20230239539A1 (en) Content-aware progress bar
US10812856B2 (en) Dynamic advertisement insertion
CN100477757C (en) Digital broadcast receiving apparatus and control method thereof
US8453169B2 (en) Video output device and video output method
US8358907B2 (en) Display control apparatus, display control method, and program
JP4616274B2 (en) Still image content creation device with caption, still image content creation program with caption, and still image content creation system with caption
US20150248918A1 (en) Systems and methods for displaying a user selected object as marked based on its context in a program
US20090307721A1 (en) Providing content related to an item in an interactive data scroll
KR100848597B1 (en) Display signal control apparatus, and display signal control method
US12238373B2 (en) Apparatus, systems and methods for accessing information based on an image presented on a display
KR20080089912A (en) Video data display method and mobile terminal using same
US20230133692A1 (en) Automatic video augmentation
CN106792095A (en) The method and system of intelligent television advertisement insertion
JP4192476B2 (en) Video conversion apparatus and video conversion method
TW200824451A (en) Method and related system capable of notifying and buffering predetermined events in a program
JP2010044776A (en) Method for modifying user interface of consumer electronic apparatus, corresponding apparatus, signal, and data carrier
WO2009110491A1 (en) Content display apparatus, content display method, program, and recording medium
WO2008063112A1 (en) A method for combining video sequences and an apparatus thereof
JP4367535B2 (en) Subtitled video playback device and program
JP2008098793A (en) Receiver
KR100735188 Method for displaying an EPG of a digital TV
JP2011061670A (en) Display apparatus, method and program for displaying summary content
JP2008042333A (en) Video playback method and apparatus
JPH11225296A (en) Video display control device
JP2005333551A (en) TV receiver

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07835211

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2007835211

Country of ref document: EP