TW201739264A - Method and system for automatically embedding interactive elements into multimedia content - Google Patents
- Publication number
- TW201739264A (application TW105113478A)
- Authority
- TW
- Taiwan
- Prior art keywords
- multimedia content
- interactive
- video
- audio
- embedding
- Prior art date
Landscapes
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
Description
The present invention relates to a method and system for embedding interactive elements into multimedia content, and more particularly to a method and system that embeds interactive elements automatically based on the multimedia content and its content features.
When watching television or other audio-visual content in the traditional way, the viewer only receives the content provided by the broadcaster or content provider in one direction. When viewers see a segment that interests them, for example a scenic spot, a person, clothing, or an object whose details they would like to know, they must spend effort searching on their own; there is no effective way to obtain the related information.
In the prior art, if a content provider wants viewers to receive specific information, a person watches the content manually and uses software tools to insert that information by hand at the positions, such as particular time segments, where additional information should appear. When the content is later played back, the player detects the inserted information and presents the related message.
This disclosure describes a method and system for embedding interactive elements into multimedia content. By detecting features of the content, the system automatically determines the time codes, each corresponding to a time segment, at which specific trigger information should be embedded, and then outputs new multimedia content with one or more pieces of trigger information embedded. When the content is subsequently played, the related information can be presented at the scheduled time codes according to the trigger information.
According to one embodiment, the method for automatically embedding interactive elements into multimedia content proceeds as follows: a piece of multimedia content is received and parsed to extract its video and audio features, and those features are compared against trigger-feature information to obtain one or more time codes of the content that match. The trigger-feature information may record the video features of individual frames, as well as audio features, used for the comparison. An interactive element is embedded at each matching time code, and the interactive event associated with the element is configured, for example opening a web page on a terminal device, launching software, displaying a message, or starting an interactive screen. Once the interactive elements are embedded, the system outputs a new piece of multimedia content that includes the elements and their associated interactive events.
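The flow just described can be sketched as a small pipeline: parse features, match trigger features to time codes, embed elements, and output the new content. This is only an illustrative sketch under assumed data shapes, not the patented implementation; the per-frame "feature" here is a trivial placeholder for real video/audio analysis, and all names are hypothetical.

```python
# Illustrative sketch of the described flow. The feature extraction is a
# placeholder (a checksum over a frame label); all names are hypothetical.

def parse_features(frames):
    """Derive a simple per-frame 'feature' (stand-in for real analysis)."""
    return [sum(map(ord, f)) % 1000 for f in frames]

def match_time_codes(features, trigger_features, fps=30):
    """Return time codes (seconds) whose frame feature matches a trigger."""
    return [i / fps for i, feat in enumerate(features) if feat in trigger_features]

def embed_elements(content, time_codes, event):
    """Attach an interactive element (marker plus its event) at each time code."""
    return {**content, "elements": [{"time_code": t, "event": event} for t in time_codes]}

frames = ["intro", "scenic_spot", "credits"]      # stand-ins for decoded frames
triggers = {sum(map(ord, "scenic_spot")) % 1000}  # feature of the frame to trigger on
codes = match_time_codes(parse_features(frames), triggers, fps=1)
new_content = embed_elements({"title": "demo"}, codes,
                             {"type": "open_url", "url": "https://example.com/attraction"})
print(new_content["elements"])  # one element, embedded at time code 1.0
```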
In one embodiment, the new multimedia content is delivered to an audio-visual content service platform that provides multimedia content services, from which terminal devices can access content that includes the interactive elements and their associated events. When the content is played on a terminal device, the playback software detects the trigger features, activates the interactive element at each time code that matches a configured trigger feature, and executes the corresponding interactive event.
In one embodiment of the system for automatically embedding interactive elements into multimedia content, the system may be implemented on a computer system whose functional units, realized in software and/or hardware, include: a parsing unit that parses the multimedia content to extract its video and audio features; a comparison unit that compares those features against trigger-feature information to obtain one or more matching time codes of the content; an editing unit that embeds an interactive element at each matching time code and configures the interactive event associated with it; and a content-forming unit that produces the new multimedia content containing at least one interactive element and its associated interactive event.
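The decomposition into the four functional units can be sketched as follows. This is a structural sketch only; the class names, the length-based placeholder feature, and the event table are all hypothetical choices for illustration.

```python
# Sketch of the four functional units named in this embodiment (parsing,
# comparison, editing, content-forming), wired together. All names are
# hypothetical; the per-frame "feature" is just the frame label's length.

class ParsingUnit:
    def parse(self, content):
        return [len(frame) for frame in content["frames"]]

class ComparisonUnit:
    def __init__(self, trigger_features):
        self.trigger_features = trigger_features
    def match(self, features):
        return [i for i, f in enumerate(features) if f in self.trigger_features]

class EditingUnit:
    def __init__(self, events):
        self.events = events  # trigger feature -> interactive event
    def embed(self, features, time_codes):
        return [{"time_code": t, "event": self.events[features[t]]} for t in time_codes]

class ContentFormingUnit:
    def form(self, content, elements):
        return {**content, "elements": elements}

content = {"frames": ["ab", "scenery", "abc"]}
parser, comparer = ParsingUnit(), ComparisonUnit({7})
editor = EditingUnit({7: {"type": "show_message", "text": "About this scenery"}})
features = parser.parse(content)
codes = comparer.match(features)
new_content = ContentFormingUnit().form(content, editor.embed(features, codes))
```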
The system may further include a memory unit that records the video and/or audio features used for comparison, together with the corresponding interactive events.
For a fuller understanding of the techniques, methods, and effects by which the present invention achieves its stated objectives, refer to the detailed description and drawings below, from which the objects, features, and characteristics of the invention can be understood in depth. The accompanying drawings, however, are provided for reference and illustration only and are not intended to limit the invention.
11: Video library
12: Audio-visual content service platform
10: System for automatically embedding interactive elements in audio-visual content
101, 102, 103: Terminal devices
201: Parsing unit
202: Comparison unit
203: Editing unit
204: Content-forming unit
205: Database
30: Database
60: Display screen
601: Pop-up screen
602: Mobile-device screen
Steps S301 to S317: flow for automatically embedding interactive elements in multimedia content
Steps S401 to S413: flow for automatically embedding interactive elements in multimedia content
Steps S501 to S507: flow for automatically embedding interactive elements in multimedia content
FIG. 1 is a schematic diagram of the system of the present invention for automatically embedding interactive elements into multimedia content;
FIG. 2 shows an embodiment of the functional blocks of the system;
FIG. 3 is the first flowchart of an embodiment of the method for automatically embedding interactive elements into multimedia content;
FIG. 4 is the second flowchart of an embodiment of the method;
FIG. 5 is the third flowchart of an embodiment of the method;
FIG. 6 shows a system embodiment diagram of the present invention.
So that audio-visual content can be processed automatically and have specific events embedded in it, this disclosure proposes a method and system for automatically embedding interactive elements into audio-visual content. The system is preferably implemented on a computer system. Through this system, multimedia content fed into it can automatically be embedded, as needed, with interactive elements, a kind of marker, at the segments that qualify for embedding, such as particular pictures (video), images, audio, or time points. As a result, when the content is played, each time code carrying an embedded interactive element triggers one or more associated interactive events. In embodiments, an interactive event may display a QR code that leads to a link to a web page (URL); it may also display information about an object (a product), sales information, or advertisements, or play another video or audio clip, or emit a wireless signal, among other possibilities. Configuring an interactive event may further include setting a trigger mode, that is, how the event is presented: directly on the display device, as an audio signal played through a peripheral of the display device, or executed through an external device.
FIG. 1 is a schematic diagram of the system of the present invention for automatically embedding interactive elements into multimedia content.
In this schematic, the system 10 for automatically embedding interactive elements in audio-visual content receives multimedia content from the video library 11. After processing, the system automatically determines, by comparison, the time codes at which to embed interactive elements; it can embed the interactive elements and the interactive events they are to trigger into the multimedia content and finally form a new piece of multimedia content, which can then be output to the audio-visual content service platform 12. That platform serves as a digital content platform accessed by terminal devices 101, 102, 103, so that when users play the new multimedia content, the playback program can execute the corresponding interactive event whenever it detects an interactive element, for example opening a web page on the terminal device, launching software, displaying a message, or starting an interactive screen.
FIG. 2 shows an embodiment of the functional blocks of the system of the present invention for automatically embedding interactive elements into multimedia content.
In this example, the system is implemented on a computer system. Its functional blocks are functional units, realized in software and/or hardware, that each perform specific tasks. One end of the system 10 for automatically embedding interactive elements in audio-visual content connects to the video library 11 and receives the multimedia content input from it; the content source is not limited to the video library 11 and may also be, for example, audio-visual content received as a live stream or input from a particular computer device. The other end of the system 10 connects to the audio-visual content service platform 12, to which it transmits the newly formed multimedia content, for example over a network, so that end users can watch it and obtain additional information from the interactive events generated during playback.
According to an embodiment, the functions performed by the system 10 may be implemented in software on a computer device, or in software combined with hardware. These include a parsing unit 201 for parsing a piece of multimedia content received by the system 10 to extract its video and audio features. Parsing may, for example, decompose the multimedia content into per-frame information, including picture and audio information, from which image or audio features can be obtained.
The system may include a database 205, a storage medium that records system information; it may also be a memory unit. It stores the trigger-feature information, likewise video and/or audio features, against which the features parsed from the multimedia content are compared. Each piece of trigger-feature information is associated with a particular interactive event, and the trigger-feature information and interactive events can be represented as a lookup table stored in the database 205 (or memory unit).
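The lookup table pairing each trigger feature with its interactive event might look like the following. This is a sketch with a plain list standing in for the database rows; the field names and example features are assumptions, not the patent's actual schema.

```python
# Sketch of the trigger-feature lookup table described above: each row
# pairs the feature used for matching (video, audio, or a time point)
# with its interactive event. Field names are hypothetical.

TRIGGER_TABLE = [
    {"kind": "video", "feature": "landmark_frame_signature",
     "event": {"type": "open_url", "url": "https://example.com/attraction"}},
    {"kind": "audio", "feature": "song_clip_fingerprint",
     "event": {"type": "show_message", "text": "Buy this song"}},
    {"kind": "time", "feature": 600,  # seconds watched
     "event": {"type": "show_ad"}},
]

def lookup_event(kind, feature):
    """Return the interactive event for a matched trigger feature, if any."""
    for row in TRIGGER_TABLE:
        if row["kind"] == kind and row["feature"] == feature:
            return row["event"]
    return None
```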
Next, a comparison unit 202 compares the parsed video and audio features against the various trigger-feature information configured in the system, yielding one or more time codes of the multimedia content that match. Trigger-feature information specifies, according to need, the picture (video) or audio signal, or the specific time point, that should trigger a particular interactive event. The system further includes an editing unit 203. Each match against the trigger-feature information corresponds to an interactive event; this correspondence can be preconfigured by the system and recorded in the lookup table described above. The matching time segment is set as a time code, at which the editing unit 203 embeds an interactive element and configures the interactive event associated with it. The interactive element is a marker; in form it may be a piece of signal information, a picture, a sound, a piece of text, a timestamp, or a digital watermark.
For example, suppose the trigger-feature information is a picture of a scenic spot. The system automatically parses the multimedia content into frames and, after frame-by-frame or segment-by-segment comparison, finds the image segment that matches the features of that picture and obtains the time code of the corresponding time segment. An interactive element can then be embedded by editing and configured so that, when the element is detected, attraction information, travel suggestions, related advertising content, and so on are produced. Taking audio as an example, if the trigger-feature information is a song clip, the system automatically parses the features of the audio-visual segments in the content; after audio comparison it finds the segment matching the song clip, likewise obtains that segment's time code, embeds an interactive element there, and configures it to produce information about the related music, purchase information, related advertisements, and so on. In addition, trigger-feature information can be set as a time point, so that once the user has watched the content for a given length of time, the embedded interactive element starts an interactive event, such as popping up an advertising message.
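The segment matching in these examples can be sketched as sliding a known clip's feature sequence over the content's feature sequence to find the time code where they line up. A real system would compare perceptual hashes or audio fingerprints with some tolerance; this exact-match version, with hypothetical names, only illustrates the idea.

```python
# Sketch of segment matching: find the time code (start index) of the
# first segment of the content whose feature sequence equals the clip's.
# Exact match is used for illustration; real matching would be fuzzy.

def find_segment(content_features, clip_features):
    n, m = len(content_features), len(clip_features)
    for start in range(n - m + 1):
        if content_features[start:start + m] == clip_features:
            return start  # time code of the matching segment
    return None

audio = [3, 1, 4, 1, 5, 9, 2, 6]  # per-interval audio features of the content
song = [1, 5, 9]                  # features of the song clip to trigger on
print(find_segment(audio, song))  # → 3
```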
Once the interactive elements have been embedded and the interactive events configured according to the trigger-feature information as described above, the content-forming unit 204 rewrites or re-encodes the multimedia content, forming a new piece of multimedia content that includes at least one interactive element and its associated interactive event.
In the method flow for automatically embedding interactive elements into multimedia content, in the embodiment shown in FIG. 3, the system first receives audio-visual content from the video library or another specific source (step S301). It then parses the content (step S303), which may include decomposing the video into the image signal of each frame or segment, or the sound signals or frequencies of the audio. The video and audio features of each video or audio portion can then be obtained (step S305).
The video and audio features are compared, frame by frame or segment by segment, against the trigger features configured in the system, that is, against the pictures (video), audio signals, or specific time points preconfigured according to need to trigger particular interactive events (step S307), and the system determines whether a trigger feature is matched (step S309). In one embodiment of the comparison, as shown in the flow, the features are compared against the specific trigger-feature information recorded in the database 30; when a match is found, a marker is recorded and the time code at that moment is noted. The correspondence between each trigger feature and the interactive event it is to trigger can be recorded in the database 30 or in a specific memory device in the form of a lookup table.
The comparison step can be carried out in several ways. It can be performed in real time while the multimedia content is played, with parsing, comparison, and editing on the fly, so that each time a matching video or audio portion (possibly including time information) is found, an interactive element associated with the specific interactive event is embedded immediately. Alternatively, the system first parses the entire multimedia content and records the time codes of the matching segments; once the whole content has been parsed and all segments matching the trigger features have been found, it then embeds the interactive elements into the multimedia content at each recorded time code.
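The two strategies just described can be sketched side by side: embedding as soon as each match appears (streaming), or collecting every matching time code first and embedding in a second pass (batch). Both produce the same set of embedded time codes here; the names are hypothetical.

```python
# Sketch of the two comparison strategies described above. `embed` is a
# callback standing in for the actual embedding of an interactive element.

def embed_streaming(features, triggers, embed):
    for t, feat in enumerate(features):
        if feat in triggers:
            embed(t)  # embed immediately when a match appears

def embed_batch(features, triggers, embed):
    matches = [t for t, feat in enumerate(features) if feat in triggers]
    for t in matches:  # embed only after the full parse completes
        embed(t)

streamed, batched = [], []
embed_streaming([5, 7, 5, 7], {7}, streamed.append)
embed_batch([5, 7, 5, 7], {7}, batched.append)
print(streamed, batched)  # → [1, 3] [1, 3]
```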
In the comparison of step S309, if real-time parsing and comparison have not yet found a matching audio-visual segment, steps S303 onward are repeated. When a segment matching the trigger-feature information is found (Yes), the matching time segment (time code) is recorded, and in step S311 the system automatically embeds an interactive element in that segment. At this point the associated interactive event is configured according to the matched trigger-feature information; in one embodiment, the event associated with the interactive element is to open a web page on a terminal device, launch software, display a message, or start an interactive screen (for example: voting, a questionnaire, audience feedback), as in step S313. In step S313, when configuring the associated interactive event, the database 30 can be accessed again to set the interactive event corresponding to the trigger feature matched at that point.
The flow then checks whether embedding of interactive elements in the entire multimedia content is complete (step S315). If not, steps S303 onward continue to be repeated; if so, the new multimedia content can be output (step S317).
In the flowchart of the method embodiment shown in FIG. 4, this example illustrates the case where the finally formed multimedia content is output to an audio-visual service platform.
In this example, audio-visual content is input from a specific source (step S401) and parsed automatically by software or algorithmic means (step S403) to obtain the features of each frame, or of each audio segment, in the content (step S405). Here the system can apply specific audio-visual processing techniques to obtain the image features of each frame or segment, or to obtain each audio feature; these techniques are well known in the relevant technical fields and are not described further here.
The system then compares the trigger-feature information, configured in advance according to specific needs, with the parsed video and audio features (step S407), obtaining the matching segments and one or more time codes of the multimedia content that match the trigger-feature information. The specific needs here mean that the video and audio features requiring embedded interactive elements can be configured in advance according to the nature of the multimedia content. For example, if the content concerns ecology education, the trigger features (video features, audio features) requiring embedded interactive elements could be scenes in which various organisms appear (plants, animals), or the sounds particular creatures make (different kinds of birdsong, animal calls, and so on). When a comparison yields a match against the trigger-feature information, an interactive element for the trigger event is embedded in that segment (covering some portion of the content) (step S409). In another embodiment, configuring the interactive element may further include setting a trigger mode. Finally, the new audio-visual content is formed (step S411) and output to the audio-visual service platform (step S413).
Setting the trigger mode includes specifying, on the system side, how a particular interactive event is triggered. When the event associated with an interactive element opens a web page, launches software, displays a message, or starts an interactive screen for interacting with the audience, the system can decide where on the terminal device's display the related message appears, its size, the visual effect used to present it, the volume at which audio is played, and so on, without being limited to these options. In a multi-screen environment it can even decide whether the event should be synchronized onto a particular mobile device.
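A trigger-mode record covering the presentation options listed above might be modeled as follows. The field names and defaults are assumptions made for illustration; the patent does not prescribe a schema.

```python
# Sketch of a trigger-mode record: where and how large the message
# appears, the visual effect, audio volume, and whether to mirror the
# event onto a paired mobile device. Field names are hypothetical.

from dataclasses import dataclass

@dataclass
class TriggerMode:
    position: tuple = (0, 0)      # (x, y) on the terminal display
    size: tuple = (320, 180)      # width, height of the message area
    effect: str = "fade_in"       # visual effect used to present it
    volume: float = 1.0           # 0.0 to 1.0 for audio playback
    sync_to_mobile: bool = False  # mirror on a second screen

mode = TriggerMode(position=(10, 10), sync_to_mobile=True)
```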
When a user plays audio-visual content from a specific audio-visual service platform with playback software on a terminal device, the playback software continuously reads the audio-visual information during playback and checks whether there is an interactive element; if there is, it triggers an interactive event. A related embodiment is the flowchart of the method of the present invention for automatically embedding interactive elements into multimedia content shown in FIG. 5.
In this embodiment, the terminal device plays the audio-visual content (step S501) and continuously watches for interactive elements. When an embedded interactive element is detected from the video and audio features (step S503), the interactive event is triggered (step S505). The event can then be executed according to the specific trigger mode, including displaying a message directly on the terminal device's display (step S507).
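The playback-side behavior of steps S501 to S507 can be sketched as a loop that fires a callback whenever the current time code carries an embedded element. This is an illustrative sketch with hypothetical names; real players would detect elements from the decoded stream rather than a dictionary.

```python
# Sketch of steps S501-S507: walk through the content during playback
# and trigger the associated event whenever the current time code has
# an embedded interactive element. Names are hypothetical.

def play(content, on_event):
    elements = {e["time_code"]: e["event"] for e in content.get("elements", [])}
    for time_code in range(content["duration"]):
        # ... render the frame for this time code ...
        if time_code in elements:                      # element detected (S503)
            on_event(time_code, elements[time_code])   # trigger event (S505)

shown = []
content = {"duration": 5,
           "elements": [{"time_code": 2, "event": {"type": "show_message"}}]}
play(content, lambda t, ev: shown.append((t, ev["type"])))
print(shown)  # → [(2, 'show_message')]
```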
FIG. 6 shows a system embodiment diagram of the present invention for automatically embedding interactive elements into multimedia content.
This example shows the display screen 60 of a terminal device. While a multimedia picture with embedded interactive elements is being watched, the playback program continuously checks for the elements. When one is detected, the interactive event is displayed or played according to the configured trigger mode; in this example a pop-up screen 601 can display the related message, such as specific information, pictures, videos, or advertising content. Another possible trigger mode lets the user trigger display of the interactive event on a mobile-device screen 602.
Other interactive events include presenting a message that links to a web page, such as a QR code, which viewers can scan with a mobile device to be directed to a specific URL for further information; still others can pop up advertisements, product information, other related audio-visual content, and so on. Through such interactive events, the system side or the multimedia content provider can also interact further with the audience, for example by raising specific topics for voting or soliciting feedback.
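Event payloads of the kinds listed here, a QR-code link and an audience poll, might be represented as simple records. A real player would render the QR image from the URL; here the event only carries the data, and all field names are hypothetical.

```python
# Sketch of interactive-event payloads of the kinds listed above: a
# QR-code link the viewer scans to reach a URL, and a poll for audience
# feedback. Field names are hypothetical.

def make_qr_event(url):
    return {"type": "qr_code", "url": url}

def make_poll_event(question, options):
    return {"type": "poll", "question": question, "options": list(options)}

events = [
    make_qr_event("https://example.com/product"),
    make_poll_event("Did you enjoy this segment?", ["Yes", "No"]),
]
```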
In summary, the present invention proposes a method and system that automatically embed interactive elements according to preconfigured trigger-feature information, allowing a system operator to embed specific messages in multimedia content automatically according to needs defined in advance. This replaces the traditional one-way viewing model and the labor and time spent manually inserting simple information. The invention further provides a more diverse interaction model, in which interactive elements that can trigger specific events are embedded automatically at the audio-visual segments that match the requirements, so that viewers can conveniently obtain additional information related to the content, achieving in particular interactive and advertising effects.
The foregoing describes preferred, feasible embodiments of the present invention and does not thereby limit the scope of its claims; accordingly, all equivalent structural changes made using the description and drawings of the present invention are likewise included within the scope of the present invention.
11: Video library
10: System for automatically embedding interactive elements in audio-visual content
201: Parsing unit
202: Comparison unit
203: Editing unit
204: Content-forming unit
205: Database
12: Audio-visual content service platform
Claims (10)
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW105113478A TW201739264A (en) | 2016-04-29 | 2016-04-29 | Method and system for automatically embedding interactive elements into multimedia content |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| TW201739264A true TW201739264A (en) | 2017-11-01 |
Family
ID=61022809
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| TW105113478A TW201739264A (en) | 2016-04-29 | 2016-04-29 | Method and system for automatically embedding interactive elements into multimedia content |
Country Status (1)
| Country | Link |
|---|---|
| TW (1) | TW201739264A (en) |