
HK1130136A - Method and system for creating and applying dynamic media specification creator and applicator


Info

Publication number
HK1130136A
Authority
HK
Hong Kong
Prior art keywords
asset
master
frame
optimized
media asset
Application number
HK09108059.2A
Other languages
Chinese (zh)
Inventor
Michael George Folgner
Ryan Brice Cunningham
Original Assignee
Yahoo! Inc.
Application filed by Yahoo! Inc.
Publication of HK1130136A


Description

Method and system for creating and applying dynamic media specification creator and applicator
Copyright notice
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
RELATED APPLICATIONS
This application claims priority from U.S. Provisional Application No. 60/758,664, filed January 13, 2006, and U.S. Provisional Application No. 60/790,569, filed April 10, 2006, both of which are incorporated herein by reference.
Background
On the current Internet, there are many different types of media assets in the form of digital files. A digital file may contain data representing one or more types of content, including, but not limited to, audio, images, and video. For example, media assets may include file formats such as MPEG-1 Audio Layer 3 ("MP3") for audio, Joint Photographic Experts Group ("JPEG") for images, Moving Picture Experts Group ("MPEG-2" and "MPEG-4") for video, Adobe Flash for animation, and executable files.
These media assets are currently created and edited using applications that execute locally on a dedicated computer. For example, in the case of digital video, popular applications for creating and editing media assets include Apple's iMovie and Final Cut Pro and Microsoft's Movie Maker. After creating and editing a media asset, one or more files may be sent to a computer (e.g., a server) located on a distributed network such as the Internet. The server may host files for viewing by different users. Examples of companies operating such servers are YouTube (http://youtube.com) and Google Video (http://video.google.com).
Currently, users must create and/or edit media assets on their client computers before sending the media assets to the server. Many users are therefore unable to edit a media asset from another client, for example if the user's client computer does not contain the appropriate application or media asset for editing. Additionally, editing applications are typically designed for professional or high-end consumer markets. Such applications do not meet the needs of ordinary consumers who lack special purpose computers with appreciable processing power and/or storage capacity.
Furthermore, ordinary consumers generally do not possess the transmission bandwidth necessary to deliver, share, or access media assets that may be widely dispersed across a network. Increasingly, many media assets are stored on computers connected to the Internet. For example, services such as Getty Images sell media assets (e.g., images) stored on computers connected to the Internet. Thus, when a user requests a media asset for manipulation or editing, the asset is typically delivered in its entirety via a network. Especially in the case of digital video, such transfers may consume significant processing and transmission resources.
Disclosure of Invention
Against this background, systems and methods have been developed for manipulating media assets in a networked computing environment where processing power, bandwidth, and/or storage capacity may be limited. More specifically, systems and methods have been developed by which low-resolution media assets, optimized for transmission over low-bandwidth networks and for editing and manipulation in environments with limited processing power and storage capacity, can be created, and by which high-resolution media assets can be generated for playback.
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a method for editing a low-resolution media asset to generate a high-resolution edited media asset. The method includes receiving a request from a requestor to edit a first high-resolution media asset. The method also includes sending a low-resolution media asset to the requestor, the low-resolution media asset being based on the first high-resolution media asset. The method includes receiving editing instructions associated with a low-resolution media asset from a requestor. The method also includes generating a second high-resolution media asset based on the first high-resolution media asset and editing instructions associated with the low-resolution media asset.
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a computer readable medium encoding or containing computer executable instructions for performing a method for editing a low-resolution media asset to generate a high-resolution edited media asset. The computer-readable medium includes instructions for receiving a request from a requestor to edit a first high-resolution media asset. The computer-readable medium further includes instructions for sending a low-resolution media asset to the requestor, the low-resolution media asset being based on the first high-resolution media asset. The computer-readable medium includes instructions for receiving editing instructions associated with a low-resolution media asset from a requestor. The computer-readable medium further includes instructions for generating a second high-resolution media asset based on the first high-resolution media asset and the edit instruction associated with the low-resolution media asset.
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a system. The system includes a high-resolution media asset library. The system also includes a low-resolution media asset generator that generates a low-resolution media asset from a high-resolution media asset contained in the high-resolution media asset library. The system includes a high-resolution media asset editor that applies edits to a high-resolution media asset based on edits made to an associated low-resolution media asset.
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a method. The method includes receiving a request to generate a video asset, the video asset identifying a starting frame and an ending frame in a keyframe master asset. The method also includes generating a first portion of the video asset, the first portion containing one or more keyframes associated with the starting frame, the keyframes obtained from a keyframe master asset. The method includes generating a second portion of the video asset, the second portion containing sets of keyframes and optimized frames, the optimized frames obtained from an optimized master asset associated with the keyframe master asset.
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a method. The method includes receiving a request to generate a video asset, the video asset identifying a starting frame and an ending frame in a master asset. The method also includes generating a first portion of the video asset containing one or more keyframes associated with the starting frame, the keyframes obtained from a keyframe master asset corresponding to the master asset. The method includes generating a second portion of the video asset, the second portion containing sets of optimized frames and keyframes, the optimized frames obtained from an optimized master asset corresponding to the master asset.
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a method. The method includes receiving a request to generate a video asset, the video asset identifying a starting frame and an ending frame in an optimized master asset. The method also includes generating a keyframe master asset based on the optimized master asset, the keyframe master asset including one or more keyframes corresponding to the starting frame. The method includes generating a first portion of the video asset, the first portion including at least a start frame identified in the optimized master asset. The method also includes generating a second portion of the video asset, the second portion including a set of keyframes and optimized frames, the optimized frames obtained from an optimized master asset.
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a computer-readable medium encoding or containing computer-executable instructions for performing a method. The computer-readable medium includes instructions for receiving a request to generate a video asset, the video asset identifying a starting frame and an ending frame in a key frame master asset. The computer-readable medium further includes instructions for generating a first portion of the video asset, the first portion containing one or more keyframes associated with the starting frame, the keyframes obtained from a keyframe master asset. The computer-readable medium includes instructions for generating a second portion of the video asset, the second portion containing sets of keyframes and optimized frames, the optimized frames obtained from an optimized master asset associated with the keyframe master asset.
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a computer-readable medium encoding or containing computer-executable instructions for performing a method. The computer-readable medium includes instructions for receiving a request to generate a video asset, the video asset identifying a starting frame and an ending frame in a master asset. The computer-readable medium further includes instructions for generating a first portion of the video asset, the first portion containing one or more keyframes associated with the starting frame, the keyframes obtained from a keyframe master asset corresponding to the master asset. The computer-readable medium includes instructions for generating a second portion of the video asset, the second portion containing sets of keyframes and optimized frames, the optimized frames obtained from an optimized master asset corresponding to the master asset.
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a computer-readable medium encoding or containing computer-executable instructions for performing a method. The computer-readable medium includes instructions for receiving a request to generate a video asset, the video asset identifying a starting frame and an ending frame in an optimized master asset. The computer-readable medium further includes instructions for generating a keyframe master asset based on the optimized master asset, the keyframe master asset including one or more keyframes corresponding to the starting frame. The computer-readable medium includes instructions for generating a first portion of a video asset, the first portion including at least a starting frame identified in an optimized master asset. The computer-readable medium further includes instructions for generating a second portion of the video asset, the second portion including sets of the keyframes and optimized frames, the optimized frames obtained from the optimized master asset.
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a system. The system includes a master asset library storing at least one high resolution master asset. The system also includes a specification applicator that stores at least one edit specification for applying edits to at least one high-resolution master asset. The system also includes a master asset editor that applies at least one edit specification to at least one high-resolution master asset. The system also includes an edit asset generator that generates a low-resolution master asset corresponding to the high-resolution master asset.
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a method. The method includes editing a low-resolution media asset, the low-resolution media asset corresponding to a master high-resolution media asset. The method also includes generating an edit specification based on the edits to the low-resolution media asset. The method includes applying an edit specification to a master high-resolution media asset to create an edited high-resolution media asset.
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a computer-readable medium having stored thereon a data structure. The computer-readable medium includes a first data field including data identifying a high-resolution media asset. The computer-readable medium also includes a second data field that includes data describing one or more edits made to a low-resolution media asset associated with the high-resolution media asset.
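By way of illustration only, and not as the claimed format, the data structure described above could be sketched as follows in Python; the field names (master_asset_id, edits, and the individual edit operations) are hypothetical placeholders.

    from dataclasses import dataclass, field
    from typing import Any, Dict, List

    @dataclass
    class EditSpecification:
        # First data field: identifies the high-resolution media asset.
        master_asset_id: str
        # Second data field: edits made to the associated low-resolution media asset.
        edits: List[Dict[str, Any]] = field(default_factory=list)

    # Example instance: trim the proxy and reduce the audio volume; the same
    # description is later applied to the identified high-resolution master.
    spec = EditSpecification(
        master_asset_id="master-0001",
        edits=[
            {"op": "trim", "start_frame": 120, "end_frame": 840},
            {"op": "set_volume", "value": 0.6},
        ],
    )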
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a method for identifying edit information for a media asset. The method includes editing a low-resolution media asset containing at least a first portion corresponding to a first high-resolution master media asset and a second portion corresponding to a second high-resolution master media asset. The method also includes receiving a request to generate a high-resolution edited media asset, the request identifying a first high-resolution master media asset and a second high-resolution master media asset. The method includes generating a high-resolution edited media asset. The method also includes associating edit information identifying the first high-resolution master media asset and the second high-resolution master media asset with the high-resolution edited media asset.
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a computer-readable medium encoding or containing computer-executable instructions for performing a method for identifying edit information of a media asset. The method includes editing a low-resolution media asset containing at least a first portion corresponding to a first high-resolution master media asset and a second portion corresponding to a second high-resolution master media asset. The method also includes receiving a request to generate a high-resolution edited media asset, the request identifying a first high-resolution master media asset and a second high-resolution master media asset. The method includes generating a high-resolution edited media asset. The method also includes associating edit information identifying the first high-resolution master media asset and the second high-resolution master media asset with the high-resolution edited media asset.
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a method for presenting a media asset. The method includes receiving a command to present an aggregate media asset defined by an edit specification, the edit specification identifying at least a first media asset associated with at least one edit instruction. The method also includes retrieving the edit specification. The method includes retrieving a first media asset. The method also includes presenting, on a media asset presentation device, a first media asset of the aggregated media asset in accordance with the at least one editing instruction.
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a computer-readable medium encoding or containing computer-executable instructions for performing a method for rendering a media asset. The method includes receiving a command to present an aggregate media asset defined by an edit specification, the edit specification identifying at least a first media asset associated with at least one edit instruction. The method also includes retrieving the edit specification. The method includes retrieving a first media asset. The method also includes presenting, on a media asset presentation device, a first media asset of the aggregated media asset in accordance with the at least one editing instruction.
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a method for editing an aggregate media asset. The method includes receiving, in a playback session, a stream from a remote computing device corresponding to an aggregate media asset, the aggregate media asset comprising at least one component media asset. The method also includes presenting the aggregate media asset on an image presentation device. The method includes receiving a user command to edit an edit specification associated with an aggregate media asset. The method includes initiating an editing session for editing an edit specification associated with the aggregate media asset.
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a computer-readable medium encoding or containing computer-executable instructions for performing a method for editing an aggregate media asset. The method includes receiving, in a playback session, a stream from a remote computing device corresponding to an aggregate media asset, the aggregate media asset comprising at least one component media asset. The method also includes presenting the aggregate media asset on an image presentation device. The method includes receiving a user command to edit an edit specification associated with an aggregate media asset. The method includes initiating an editing session for editing an edit specification associated with the aggregate media asset.
In one example (which example is intended to be illustrative and not restrictive), the present invention may be considered a method for storing an aggregate media asset. The method includes storing a plurality of component media assets. The method also includes storing a first aggregate edit specification that includes at least one command for rendering the plurality of component media assets to generate a first aggregate media asset.
These and various other features and advantages will be apparent from a reading of the following detailed description and a review of the associated drawings. Additional features are set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the described embodiments. The benefits and features will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.
Drawings
The following drawings, which form a part of the present application, illustrate embodiments of the systems and methods described below and are not intended to limit the scope of the invention, which should be determined based on the claims, in any way.
FIG. 1 illustrates an embodiment of a system for manipulating media assets in a networked computing environment.
FIG. 2 illustrates an embodiment of a system for manipulating media assets in a networked computing environment.
Fig. 3 illustrates an embodiment of a method for editing a low-resolution media asset to generate a high-resolution edited media asset.
Fig. 4 illustrates an embodiment of a method for generating a media asset.
Fig. 5 illustrates an embodiment of a method for generating a media asset.
Fig. 6 illustrates an embodiment of a method for generating a media asset.
FIG. 7 illustrates an embodiment of a method for recording edits to media content.
FIG. 8 illustrates an embodiment of a method for identifying edit information for a media asset.
FIG. 9 illustrates an embodiment of a method for presenting a media asset.
Fig. 10 illustrates an embodiment of a method for storing an aggregate media asset.
Fig. 11 illustrates an embodiment of a method for editing an aggregate media asset.
Detailed Description
Fig. 1 illustrates an embodiment of a system 100 for generating a media asset. In one embodiment, system 100 includes a master asset library 102. In one embodiment, the master asset library 102 may be a logical grouping of data, including but not limited to high-resolution and low-resolution media assets. In another embodiment, master asset library 102 may be a physical grouping of data, including but not limited to high resolution and low resolution media assets. In one embodiment, master asset library 102 may comprise one or more databases and reside on one or more servers. In one embodiment, the master asset library 102 may include a plurality of libraries, including public libraries, private libraries, and shared libraries. In one embodiment, master asset library 102 may be organized as a searchable library. In another embodiment, one or more servers comprising master asset library 102 may include connections to one or more storage devices for storing digital files.
For the purposes of this disclosure, the drawings associated with this disclosure, and the appended claims, the term "file" generally refers to a collection of information that is stored as a unit and that may be retrieved, modified, stored, deleted, transmitted, or the like. Storage devices may include, but are not limited to, volatile memory (e.g., RAM, DRAM), non-volatile memory (e.g., ROM, EPROM, flash memory), and devices such as hard disk drives and optical drives. The storage device may redundantly store information. The storage devices may also be connected in parallel, in series, or in some other connection configuration. As described in this embodiment, one or more resources may reside within master asset library 102.
For purposes of this disclosure, the figures associated with this disclosure, and the appended claims, a "resource" or "asset" refers to a logical collection of content that may be included within one or more files. For example, an asset may comprise a single file (e.g., an MPEG video file) containing image (e.g., a still image of video), audio, and video information. As another example, an asset may include a collection of files (e.g., JPEG image files) that may be collectively used to render an animation or video. As another example, an asset may include an executable file (e.g., an executable vector graphics file, such as an SWF file or an FLA file). The master asset library 102 may include many types of assets including, but not limited to, video, images, animations, text, executables, and audio. In one embodiment, master asset library 102 may include one or more high-resolution master assets. For the remainder of this disclosure, the "master asset" will be described as a digital file containing video content. However, those skilled in the art will recognize that a master asset is not limited to containing video information; as previously described, a master asset may contain any type of information including, but not limited to, images, audio, text, executable files, and/or animations.
In one embodiment, media assets may be stored in the master asset library 102 in order to preserve their quality. For example, in the case of a media asset comprising video information, two important aspects of video quality are spatial resolution and temporal resolution. Spatial resolution generally describes the sharpness, or absence of blur, in the displayed image, while temporal resolution generally describes the smoothness of motion. Motion video, like film, is made up of a number of frames per second that represent motion in a scene. Generally, the first step in digitizing video is to divide each frame into a large number of picture elements, or simply pixels or pels. The larger the number of pixels, the higher the spatial resolution. Similarly, the more frames per second, the higher the temporal resolution.
In one embodiment, media assets may be stored in the master asset library 102 as master assets that are not directly manipulated. For example, a media asset may be saved in its original form in the master asset library 102, yet still be used to create a copy or derivative of the media asset (e.g., a low-resolution asset). In one embodiment, media assets may also be stored in the master asset library 102 with corresponding or associated assets. In one embodiment, a media asset stored in master asset library 102 may be stored as multiple versions of the same media asset. For example, the multiple versions of the media asset stored in the master asset library 102 may include an all-keyframe version, which does not exploit similarities between frames for compression purposes, as well as an optimized version, which does exploit inter-frame similarities. In one embodiment, the original media asset may represent an all-keyframe version. In another embodiment, the original media asset may originally be in, or be stored as, an optimized version. Those skilled in the art will recognize that media assets may take many forms within master asset library 102 that are within the scope of this disclosure.
In one embodiment, system 100 also includes an edit asset generator 104. In one embodiment, the edit asset generator 104 may include transcoding hardware and/or software that can convert a media asset from one format to another. For example, a transcoder may be used to convert MPEG files into QuickTime files. As another example, a transcoder may be used to convert JPEG files into bitmap (e.g., BMP) files. As another example, the transcoder may normalize the media asset format to the Flash video (.FLV) format. In one embodiment, the transcoder may create multiple versions of the original media asset. For example, upon receiving an original media asset, a transcoder may convert the original media asset into a high-resolution version and a low-resolution version. As another example, a transcoder may convert an original media asset into one or more files. In one embodiment, the transcoder may reside on a remote computing device. In another embodiment, the transcoder may reside on one or more connected computers. In one embodiment, the edit asset generator 104 may also include hardware and/or software for transferring and/or uploading media assets to one or more computers. In another embodiment, the edit asset generator 104 may include or be connected to hardware and/or software for capturing media assets from external sources such as a digital camera.
In one embodiment, the edit asset generator 104 may generate a low-resolution version of a high-resolution media asset stored in the master asset library 102. In another embodiment, the edit asset generator 104 may transmit a low-resolution version of a media asset, for example, by converting a media asset stored in the master asset library 102 in real time and transmitting it as a stream to a remote computing device. In another embodiment, the edit asset generator 104 may generate a low-quality version of another media asset (e.g., a master asset) that conserves processing, bandwidth, and storage resources while still providing enough data to enable a user to apply edits to it.
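As a minimal sketch of how such a low-resolution version might be produced, assuming the widely available ffmpeg command-line tool and hypothetical file names (this is not the implementation required by the disclosure):

    import subprocess

    def make_low_res_proxy(master_path: str, proxy_path: str) -> None:
        # Reduce spatial resolution, temporal resolution, and bit rate so the
        # proxy can be streamed to and edited on a low-bandwidth client.
        subprocess.run(
            [
                "ffmpeg", "-i", master_path,
                "-vf", "scale=320:-2",   # smaller picture size
                "-r", "15",              # fewer frames per second
                "-b:v", "300k",          # lower video bit rate
                proxy_path,
            ],
            check=True,
        )

    # Example (placeholder paths): make_low_res_proxy("master-0001.mov", "proxy-0001.flv")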
In one embodiment, system 100 may also include a specification applicator 106. In one embodiment, the specification applicator 106 may include one or more file or edit specifications that include instructions for editing and modifying media assets (e.g., high-resolution media assets). In one embodiment, the specification applicator 106 may include one or more edit specifications that include instructions to modify a high-resolution media asset based on edits made to a corresponding or associated low-resolution media asset. In one embodiment, specification applicator 106 may store multiple edit specifications in one or more libraries.
In one embodiment, system 100 also includes a master asset editor 108 that can apply one or more edit specifications to a media asset. For example, master asset editor 108 may apply an edit specification stored at specification applicator 106 to a first high-resolution media asset to create another high-resolution media asset, such as a second high-resolution media asset. In one embodiment, master asset editor 108 may apply an edit specification to a media asset in real time. For example, master asset editor 108 may modify a media asset while the media asset is being sent to another location. In another embodiment, master asset editor 108 may apply an edit specification to a media asset in non-real time. For example, master asset editor 108 may apply an edit specification to a media asset as part of a scheduled process. In one embodiment, master asset editor 108 may be used to minimize the need to transfer large media assets over a network. For example, by storing edits in an edit specification, master asset editor 108 allows small data files to be transmitted over the network so that manipulations directed from a remote computing device can be applied to higher-quality assets stored on one or more local computers (e.g., the computers that make up the master asset library).
In another embodiment, the master asset editor 108 may be responsive to a command from a remote computing device (e.g., clicking a "remix" button at the remote computing device may command the master asset editor 108 to apply an edit specification to the high-resolution media asset). For example, master asset editor 108 may dynamically or interactively apply an edit specification to a media asset upon receipt of a user command issued from a remote computing device. In one embodiment, master asset editor 108 may dynamically apply an edit specification to the high-resolution media asset to generate an edited high-resolution media asset for playback. In another embodiment, the application of an edit specification may be split between a remote computing device and one or more computers connected via a network (e.g., Internet 114). For example, splitting the application of the edit specification in this way may minimize the size of the edited high-resolution asset before it is transmitted to the remote computing device for playback. In another embodiment, master asset editor 108 may apply an edit specification on a remote computing device, for example, to take advantage of vector-based processing that can be performed efficiently on the remote computing device at playback time.
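For concreteness, the following non-authoritative sketch treats a master asset as a simple sequence of frames and applies a trim and a reorder recorded against the low-resolution proxy; the operation names are hypothetical, and a real master asset editor would operate on encoded media rather than Python lists.

    def apply_edit_spec(master_frames, edits):
        # Work on a copy so the master asset itself is never modified.
        frames = list(master_frames)
        for edit in edits:
            if edit["op"] == "trim":
                frames = frames[edit["start_frame"]:edit["end_frame"]]
            elif edit["op"] == "reorder":
                frames = [frames[i] for i in edit["order"]]
        return frames  # frames of the second, edited high-resolution asset

    # The edit specification is a small data file, yet it drives the edit of a
    # large high-resolution asset stored alongside the master asset library.
    edited = apply_edit_spec(range(1000), [
        {"op": "trim", "start_frame": 120, "end_frame": 840},
    ])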
In one embodiment, the system 100 also includes an editor 110, which may reside on a remote computing device 112 connected to one or more networked computers (e.g., via the Internet 114). In one embodiment, editor 110 may include software. For example, the editor 110 may be a stand-alone program. As another example, the editor 110 may include one or more instructions executable by another program, such as an Internet browser (e.g., Microsoft Internet Explorer). In one embodiment, the editor 110 may be designed with a user interface similar to other media editing programs. In one embodiment, the editor 110 may include a connection to the master asset library 102, the edit asset generator 104, the specification applicator 106, and/or the master asset editor 108. In one embodiment, the editor 110 may include a pre-constructed or "default" edit specification that may be applied to a media asset by the remote computing device. In one embodiment, the editor 110 may include a player program for displaying a media asset and/or applying one or more instructions from an edit specification upon playback of the media asset. In another embodiment, the editor 110 may be connected to a player program (e.g., a separate editor may be connected to a browser).
Fig. 2 illustrates an embodiment of a system 200 for generating a media asset. In one embodiment, the system 200 includes a high-resolution media asset library 202. In one embodiment, the high-resolution media asset library 202 may be a shared library, a public library, and/or a private library. In one embodiment, the high-resolution media asset library 202 may include at least one video file. In another embodiment, the high-resolution media asset library 202 may include at least one audio file. In another embodiment, the high-resolution media asset library 202 may include at least one reference to a media asset residing on the remote computing device 212. In one embodiment, the high-resolution media asset library 202 may reside on multiple computing devices.
In one embodiment, the system 200 also includes a low-resolution media asset generator 204 that generates a low-resolution media asset from a high-resolution media asset contained in the high-resolution media asset library 202. For example, as described above, the low-resolution media asset generator 204 may convert a high-resolution media asset to a low-resolution media asset.
In one embodiment, system 200 also includes a low-resolution media asset editor 208 that transmits edits made to an associated low-resolution media asset to one or more computers via a network, such as the Internet 214. In another embodiment, the low-resolution media asset editor 208 may reside on a computing device remote from the high-resolution media asset editor 206, such as on the remote computing device 212. In another embodiment, low-resolution media asset editor 208 may utilize a browser. For example, the low-resolution media asset editor 208 may store the low-resolution media asset in a cache of the browser.
In one embodiment, system 200 may also include an image rendering device 210 that displays the associated low-resolution media asset. In one embodiment, the image rendering device 210 resides on a computing device 212 remote from the high-resolution media asset editor 206. In another embodiment, image rendering device 210 may utilize a browser.
In one embodiment, the system 200 also includes a high-resolution media asset editor 206, the high-resolution media asset editor 206 applying edits to the high-resolution media asset based on edits made to the associated low-resolution media asset.
Fig. 3 illustrates an embodiment of a method 300 for editing a low-resolution media asset to generate a high-resolution edited media asset. In method 300, a request to edit a first high-resolution media asset is received from a requestor in a requesting operation 302. In one embodiment, the first high-resolution media asset may comprise a plurality of files, and receiving the request to edit the first high-resolution media asset in the requesting operation 302 may further comprise receiving a request to edit at least one of the plurality of files. In another embodiment, the requesting operation 302 may further include receiving a request to edit at least one high-resolution audio or video file.
In the method 300, a low-resolution media asset based on the first high-resolution media asset is transmitted to the requestor in a sending operation 304. In one embodiment, the sending operation 304 may comprise sending at least one low-resolution audio or video file. In another embodiment, the sending operation 304 may further comprise converting at least one high-resolution audio or video file associated with the first high-resolution media asset from a first file format to at least one low-resolution audio or video file, respectively, having a second file format. For example, a high-resolution uncompressed audio file (e.g., a WAV file) may be converted to a compressed audio file (e.g., an MP3 file). As another example, a compressed file with a lower compression ratio may be converted to a file of the same format encoded at a higher compression ratio.
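As a hedged example of the conversion just described, assuming an ffmpeg build with MP3 support and placeholder file names, an uncompressed master audio file could be re-encoded as a low-bit-rate proxy:

    import subprocess

    # Convert a high-resolution WAV master into a compressed, low-bit-rate MP3
    # proxy suitable for transmission to the requestor.
    subprocess.run(
        ["ffmpeg", "-i", "master-0001.wav", "-b:a", "64k", "proxy-0001.mp3"],
        check=True,
    )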
The method 300 then includes receiving editing instructions associated with the low-resolution media asset from the requestor in a receiving operation 306. In one embodiment, receiving operation 306 may further comprise receiving an instruction to modify a video presentation property of at least one high-resolution video file. For example, modification of the video presentation property may include receiving an instruction to modify an image aspect ratio, a spatial resolution value, a temporal resolution value, a bit rate value, or a compression value. In another embodiment, receiving operation 306 may further comprise receiving an instruction to modify a timeline (e.g., frame order) of the at least one high resolution video file.
The method 300 further includes generating a second high-resolution media asset based on the first high-resolution media asset and editing instructions associated with the low-resolution media asset in a generating operation 308. In one embodiment of the generating operation 308, the edit specification is applied to at least one high-resolution audio or video file comprising the first high-resolution media asset. In another embodiment, the generating operation 308 further comprises the steps of: generating a copy of at least one high-resolution audio or video file associated with a first high-resolution media asset; applying editing instructions to at least one high resolution audio or video file, respectively; and saving the copy as the second high-resolution media asset.
In another embodiment of method 300, at least a portion of the second high-resolution media asset may be transmitted to a remote computing device. In another embodiment of the method 300, at least a portion of the second high-resolution media asset may be displayed by an image rendering device. For example, the image rendering device may take the form of a browser that resides at a remote computing device.
Fig. 4 illustrates an embodiment of a method 400 for generating a media asset. In method 400, a request to generate a video asset identifying a starting frame and an ending frame in a key frame master asset is received in a receiving operation 402. For example, the request of receiving operation 402 may identify a first portion and/or a second portion of a video asset.
In generate first portion operation 404, the method 400 then includes generating a first portion of the video asset, where the first portion contains one or more keyframes associated with the starting frame and the keyframes are obtained from a keyframe master asset. For example, if the key frame master asset comprises an uncompressed video file, one or more frames of the uncompressed video file may comprise a key frame associated with a starting frame of the media asset.
In generate second portion operation 406, the method 400 further includes generating a second portion of the video asset, where the second portion contains a set of keyframes and optimized frames, and the optimized frames are obtained from an optimized master asset associated with the keyframe master asset. For example, if the optimized master asset includes a compressed video file, the compressed set of frames may be combined in the video asset with one or more uncompressed frames from the uncompressed video file.
In another embodiment of method 400, a pool of master assets may be maintained, and a keyframe master asset and an optimized master asset corresponding to at least one master asset in the pool may be generated. In another embodiment of method 400, the request may identify a starting keyframe or an ending keyframe in the keyframe master asset that corresponds to the starting frame or ending frame, respectively.
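To make the interplay between the two masters concrete, here is a simplified, hypothetical sketch of operations 404 and 406: the head of the requested clip is taken from the all-keyframe master, so playback can begin exactly at the starting frame, and the remainder reuses the optimized master, whose frames already interleave keyframes with dependent (optimized) frames.

    def assemble_clip(keyframe_master, optimized_master, start, end, head_len=15):
        # keyframe_master:  every frame independently decodable (all keyframes)
        # optimized_master: compressed frames that may depend on earlier frames
        boundary = min(start + head_len, end)
        first_portion = keyframe_master[start:boundary]    # keyframes for the start
        second_portion = optimized_master[boundary:end]    # keyframes + optimized frames
        return first_portion + second_portion

    # Frame counts and head length are illustrative only.
    clip = assemble_clip(list(range(1000)), list(range(1000)), start=250, end=700)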
Fig. 5 illustrates an embodiment of a method 500 for generating a media asset. In method 500, a request to generate a video asset identifying a starting frame and an ending frame in a master asset is received in a receiving operation 502. For example, the request of receiving operation 502 may identify a first portion and/or a second portion of a video asset.
In generate first portion operation 504, the method 500 then includes generating a first portion of the video asset, where the first portion contains one or more keyframes associated with the starting frame, the keyframes being obtained from a keyframe master asset corresponding to the master asset.
In generate second portion operation 506, the method 500 then includes generating a second portion of the video asset, where the second portion contains a set of optimized frames and keyframes, the optimized frames being obtained from an optimized master asset corresponding to the master asset. For example, if the optimized master asset comprises a compressed video file, the compressed set of frames may be combined in the video asset with one or more uncompressed frames from the keyframe master asset.
In another embodiment of method 500, a pool of master assets may be maintained, and a keyframe master asset and an optimized master asset corresponding to at least one master asset in the pool may be generated. In another embodiment of method 500, the request may identify a starting keyframe or an ending keyframe in the keyframe master asset that corresponds to the starting frame or ending frame, respectively.
Fig. 6 illustrates an embodiment of a method 600 for generating a media asset. In method 600, a request to generate a video asset is received in receiving operation 602, where the video asset identifies a starting frame and an ending frame in an optimized master asset. For example, the request of receiving operation 602 may identify a first portion and/or a second portion of a video asset.
The method 600 then includes generating a keyframe master asset, including one or more keyframes corresponding to the starting frame, based on the optimized master asset in a generate keyframe operation 604. In a generate first portion operation 606, the method 600 further includes generating a first portion of the video asset, where the first portion includes at least a starting frame identified in the optimized master asset. In generate second portion operation 608, the method 600 then further includes generating a second portion of the video asset, where the second portion includes a set of optimized frames and keyframes, and the optimized frames are obtained from an optimized master asset.
In another embodiment of method 600, a pool of master assets may be maintained, and a keyframe master asset and an optimized master asset corresponding to at least one master asset in the pool may be generated. In another embodiment of method 600, the request may identify a starting keyframe or an ending keyframe in the keyframe master asset that corresponds to the starting frame or ending frame, respectively.
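One plausible way to derive an all-keyframe master from an optimized master, again assuming ffmpeg and hypothetical file names rather than the disclosed implementation, is to re-encode the optimized file with a group-of-pictures size of one so that every frame becomes a keyframe:

    import subprocess

    # Re-encode the optimized (inter-frame compressed) master so that every
    # frame is a keyframe and can serve as a starting point for a video asset.
    subprocess.run(
        ["ffmpeg", "-i", "optimized-master-0001.mp4", "-g", "1",
         "keyframe-master-0001.mp4"],
        check=True,
    )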
Fig. 7 illustrates an embodiment of a method 700 for recording edits to media content. In method 700, a low-resolution media asset corresponding to a master high-resolution media asset is edited in an editing operation 702. In one embodiment, editing includes modifying an image of a low-resolution media asset corresponding to a master high-resolution media asset. For example, when an image includes pixel data, the pixels can be manipulated to appear in different colors or different intensities. In another embodiment, editing includes modifying a duration of a low-resolution media asset corresponding to a duration of a master high-resolution media asset. For example, modifying the duration may include shortening the low-resolution media asset and the high-resolution media asset corresponding to the low-resolution media asset.
In another embodiment, where the master high-resolution media asset and the low-resolution media asset comprise at least one or more frames of video information, editing comprises modifying a transition property of the at least one or more frames of video information of the low-resolution media asset corresponding to the master high-resolution media asset. For example, a transition such as a fade-in or fade-out transition may replace an image of one frame with an image of another frame. In another embodiment, editing includes modifying a volume value of an audio component of a low-resolution media asset corresponding to a master high-resolution media asset. For example, a media asset that includes video information may include an audio track that is played louder or softer depending on whether a larger or smaller volume value is selected.
In another embodiment, where the master high-resolution media asset and the low-resolution media asset comprise at least two or more frames of sequential video information, the editing comprises modifying the order of the at least two or more frames of sequential video information of the low-resolution media asset corresponding to the master high-resolution media asset. For example, the second frame may be ordered before the first frame of the media asset comprising video information.
In another embodiment, editing includes modifying one or more uniform resource locators (e.g., URLs) associated with a low-resolution media asset corresponding to a master high-resolution media asset. In another embodiment, editing includes modifying a playback rate (e.g., 30 frames per second) of a low-resolution media asset corresponding to a master high-resolution media asset. In another embodiment, editing includes modifying a resolution (e.g., temporal or spatial resolution) of a low-resolution media asset corresponding to a master high-resolution media asset. In one embodiment, the editing may occur on a remote computing device. For example, the edit specification itself may be created on a remote computing device. Similarly, for example, the edited high-resolution media asset may be sent to a remote computing device for presentation on an image presentation device, such as a browser.
The method 700 then includes generating an edit specification based on the edits to the low-resolution media asset in a generating operation 704. The method 700 further includes applying the edit specification to the master high-resolution media asset to create an edited high-resolution media asset in an applying operation 706. In one embodiment, method 700 further comprises presenting the edited high-resolution media asset on an image presentation device. For example, rendering the edited high-resolution media asset itself may include applying a media asset filter to the edited high-resolution media asset. As another example, applying the media asset filter may include overlaying the edited high-resolution media asset with an animation. For another example, applying the media asset filter may also include changing a display attribute of the edited high-resolution media asset. Changing the display properties may include, but is not limited to, changing the video presentation properties. In this example, applying the media asset filter may include changing a video effect, a title, a frame rate, a trick play effect (e.g., the media asset filter may change fast forward, pause, slow motion, and/or rewind operations), and/or a composite display (e.g., simultaneously displaying at least a portion of two different media assets, such as in the case of picture-in-picture and/or green screen composite). In another embodiment, method 700 may also include storing the edit specification. For example, the edit specification may be stored at a remote computing device or at one or more computers connected via a network (e.g., via the internet).
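The sketch below, with hypothetical field names, shows how operations 702 through 706 might fit together in practice: edits recorded against the low-resolution proxy are serialized into a small edit specification, that specification (rather than any video data) travels over the network, and the server later applies it to the master, for example with a routine like the apply_edit_spec() sketch above.

    import json

    # Operation 704: generate an edit specification from edits made to the proxy.
    edit_spec = {
        "master_asset_id": "master-0001",
        "edits": [
            {"op": "trim", "start_frame": 120, "end_frame": 840},
            {"op": "transition", "at_frame": 300, "type": "fade", "duration_frames": 30},
            {"op": "set_volume", "value": 0.5},
        ],
    }
    payload = json.dumps(edit_spec)   # a few hundred bytes instead of a large video file

    # Operation 706 (server side): deserialize and apply the specification to the
    # master high-resolution asset, saving the result as the edited asset.
    received_spec = json.loads(payload)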
Fig. 8 illustrates an embodiment of a method 800 for identifying edit information for a media asset. In method 800, a low-resolution media asset is edited in an editing operation 802, wherein the low-resolution media asset contains at least a first portion corresponding to a first high-resolution master media asset and a second portion corresponding to a second high-resolution master media asset. In one embodiment, the editing operation 802 further comprises storing at least some of the editing information as metadata with the high-resolution edited media asset. In another embodiment, the editing operation 802 may occur on a remote computing device.
In receiving operation 804, method 800 then comprises receiving a request to generate a high-resolution edited media asset, wherein the request identifies a first high-resolution master media asset and a second high-resolution master media asset. The method 800 then includes generating a high-resolution edited media asset in a generating operation 806. The method 800 further includes associating edit information identifying the first high-resolution master media asset and the second high-resolution master media asset with the high-resolution edited media asset in an associating operation 808.
In one embodiment, method 800 further includes retrieving the first high-resolution master media asset or the second high-resolution master media asset. In another embodiment, method 800 further comprises assembling the retrieved first high-resolution media asset and the retrieved second high-resolution media asset into a high-resolution edited media asset.
FIG. 9 illustrates an embodiment of a method 900 for presenting a media asset. In method 900, a command to present an aggregate media asset defined by an edit specification is received in receiving operation 902, where the edit specification identifies at least a first media asset associated with at least one edit instruction. In one embodiment, receiving operation 902 comprises an end user command. In another embodiment, receiving operation 902 may comprise a command issued by a computing device, such as a remote computing device. In another embodiment, receiving operation 902 may include a series of commands that together represent a command to render an aggregate media asset defined by an edit specification.
In edit specification retrieval operation 904, an edit specification is retrieved. In one embodiment, the retrieving operation 904 may comprise retrieving the edit specification from memory or some other storage device. In another embodiment, the retrieving operation 904 may comprise retrieving an edit specification from a remote computing device. In another embodiment, retrieving the edit specification in retrieving operation 904 may include retrieving several edit specifications that collectively make up a single related edit specification. For example, several edit specifications may be associated with different media assets (e.g., the acts of a drama may each include a media asset), which together make up a single related edit specification (e.g., for an entire drama, including each act of the drama). In one embodiment, the edit specification can identify a second media asset associated with a second edit instruction that can be retrieved and presented on the media asset presentation device.
In a media asset retrieval operation 906, a first media asset is retrieved. In one embodiment, retrieving operation 906 may comprise retrieving the first media asset from a remote computing device. In another embodiment, retrieving operation 906 may comprise retrieving the first media asset from memory or some other storage device. In another embodiment, the retrieving operation 906 may comprise retrieving a portion of the first media asset (e.g., a header or a first portion of a file). In another embodiment of retrieving operation 906, the first media asset may comprise a plurality of sub-portions. Continuing with the example set forth in retrieving operation 904, a first media asset in the form of a video (e.g., a drama with multiple scenes) may include multiple media asset portions (e.g., multiple scenes represented as different media assets). In this example, the edit specification may contain information linking multiple different media assets together or into a single related media asset.
In a rendering operation 908, a first media asset of the aggregated media assets is rendered on the media asset rendering device according to the at least one editing instruction. In one embodiment, the editing instructions may identify or point to the second media asset. In one embodiment, a media asset rendering device may include a display for video information and a speaker for audio information. In embodiments where a second media asset is present, the second media asset may include information similar to the first media asset (e.g., both the first and second media assets may contain audio or video information) or different from the first media asset (e.g., the second media asset may contain audio information, such as commentary for a movie, while the first media asset may contain video information, such as movie images and voice). In another embodiment, the rendering operations 908 may also include editing instructions that modify a transition attribute for transitioning from the first media asset to the second media asset, overlay effects and/or titles on the assets, combine two assets (e.g., a combination resulting from editing instructions for picture-in-picture and/or green screen capabilities), modify a frame rate and/or presentation rate of at least a portion of the media asset, modify a duration of the first media asset, modify a display attribute of the first media asset, or modify an audio attribute of the first media asset.
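As a non-authoritative sketch of rendering operation 908 (all structure and names are assumed, not taken from the disclosure), a player might walk the retrieved edit specification, retrieve each identified media asset, and hand it to the presentation device together with its editing instruction:

    def present_aggregate(edit_spec, fetch_asset, device):
        # fetch_asset: callable that retrieves a media asset by identifier
        # device:      presentation device exposing a play(asset, **options) method
        for entry in edit_spec["components"]:
            asset = fetch_asset(entry["asset_id"])              # media asset retrieval
            device.play(
                asset,
                transition=entry.get("transition_in"),          # e.g., crossfade into this asset
                duration_frames=entry.get("duration_frames"),   # editing instruction: play length
            )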
Fig. 10 illustrates an embodiment of a method 1000 for storing an aggregate media asset. In the method 1000, a plurality of component media assets are stored in a storing operation 1002. For example, by way of illustration and not limitation, storing operation 1002 may comprise caching at least a portion of a plurality of component media assets in a memory. As another example, one or more component media assets may be cached in a memory cache reserved for a program, such as an Internet browser.
In a storing operation 1004, a first aggregate edit specification is stored, wherein the first aggregate edit specification includes at least one command for rendering a plurality of component media assets to generate a first aggregate media asset. For example, an aggregate media asset may include one or more component media assets that contain video information. In this example, the component videos may be ordered so that they may be presented in some order as an aggregate video (e.g., a video montage). In one embodiment, the storing operation 1004 includes storing at least one command for sequentially displaying a first portion of the plurality of component media assets. For example, a display command may modify the playback duration of a component media asset that includes video information. In another embodiment of the storing operation 1004, at least one command to render an effect corresponding to at least one of the plurality of component media assets may be stored. For example, the storing operation 1004 may include one or more effects that command a transition between component media assets. In another embodiment of the storing operation 1004, a second aggregate edit specification can be stored, the second aggregate edit specification including at least one command for rendering the plurality of component media assets to generate a second aggregate media asset.
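For illustration only, with hypothetical identifiers and command names, a first aggregate edit specification of the kind described in storing operation 1004 might order three component media assets and attach rendering commands such as durations and transition effects; a second aggregate edit specification could reference the same components in a different order to define a different aggregate media asset.

    aggregate_spec = {
        "aggregate_id": "montage-0001",
        "components": [
            {"asset_id": "clip-A", "duration_frames": 300},
            {"asset_id": "clip-B", "duration_frames": 450,
             "transition_in": {"type": "crossfade", "duration_frames": 30}},
            {"asset_id": "clip-C", "duration_frames": 150,
             "transition_in": {"type": "wipe", "duration_frames": 15}},
        ],
    }

    # The component assets are presented in list order to form the aggregate
    # media asset, e.g. by the present_aggregate() sketch above.
    playback_order = [c["asset_id"] for c in aggregate_spec["components"]]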
Fig. 11 illustrates an embodiment of a method for editing an aggregate media asset. In method 1100, in a receiving operation 1102, a stream corresponding to an aggregate media asset is received from a remote computing device in a playback session, the aggregate media asset including at least one component media asset. For example, a playback session may include a user environment that allows playback of a media asset. As another example, a playback session may include one or more programs that may display one or more files. Continuing with this example, the playback session may include an internet browser capable of receiving the streaming aggregate media asset. In this example, the aggregate media asset may comprise one or more component media assets that reside on the remote computing device. The one or more component media assets can be streamed to achieve bandwidth and processing efficiency on the local computing device.
In a rendering operation 1104, the aggregate media asset is rendered on an image rendering device. For example, when the aggregate media asset includes video information, its pixel information may be displayed. In a receiving operation 1106, a user command to edit an edit specification associated with the aggregate media asset is received. As previously described, the edit specification can take many forms, including but not limited to one or more files containing metadata and other information associated with the component media assets that make up the aggregate media asset.
In an initiating operation 1108, an editing session is initiated for editing the edit specification associated with the aggregate media asset. In one embodiment, the initiating operation 1108 includes displaying information corresponding to the edit specification associated with the aggregate media asset. For example, the editing session may allow a user to adjust the duration of a particular component media asset. In another embodiment, the method 1100 further includes modifying the edit specification associated with the aggregate media asset, thereby altering the aggregate media asset. Continuing with the previous example, once a component media asset is edited in the editing session, the edits to that component media asset are reflected in the aggregate media asset.
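A minimal sketch, assuming the JSON-style specification format used in the earlier example, of how an editing session might modify the edit specification and thereby alter the aggregate media asset; the helper names set_duration and total_duration are hypothetical.

import copy

# Illustrative editing session: the user adjusts the duration of one component
# in the edit specification, and the aggregate asset is re-derived from the
# modified specification rather than from re-encoded media.
def total_duration(spec: dict) -> float:
    return sum(cmd.get("duration_s", 0.0) for cmd in spec["commands"] if cmd["op"] == "show")

def set_duration(spec: dict, asset_id: str, duration_s: float) -> dict:
    edited = copy.deepcopy(spec)  # leave the original specification untouched
    for cmd in edited["commands"]:
        if cmd["op"] == "show" and cmd["asset"] == asset_id:
            cmd["duration_s"] = duration_s
    return edited

if __name__ == "__main__":
    spec = {"commands": [{"op": "show", "asset": "beach.mpg", "duration_s": 5.0},
                         {"op": "show", "asset": "hike.mpg", "duration_s": 7.5}]}
    edited = set_duration(spec, "beach.mpg", 3.0)
    print(total_duration(spec), "->", total_duration(edited))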
Those skilled in the art will recognize that the method and system of the present invention may be embodied in many forms and thus are not limited by the foregoing exemplary embodiments and examples. In other words, the functional elements may be performed by a single component or by multiple components, in various combinations of hardware, software, and firmware, and individual functions may be distributed among software applications at the client level or the server level, or both. In this regard, any number of the features of the different embodiments described herein may be combined into single or multiple embodiments, and alternate embodiments having fewer than or more than all of the features described herein are possible. The functionality may also be distributed, in whole or in part, among multiple components, in manners now known or to become known. Thus, many software/hardware/firmware combinations are possible in implementing the functions, features, interfaces and preferences described herein. Additionally, the scope of the present disclosure covers conventionally known manners for carrying out the described features, functions and interfaces, as well as those variations and modifications that may be made to the hardware or software or firmware components described herein as would be apparent to those skilled in the art now and hereafter.
While various embodiments have been described for purposes of this disclosure, various changes and modifications may be made which are well within the scope of the invention. For example, the edit specification may also include instructions for layering multiple audio tracks together or splicing different audio samples together. For another example, online reconfiguration of a three-dimensional gaming environment (e.g., editing of a 3D gaming environment) may be accomplished using methods and systems directed to generating a low-resolution media asset description corresponding to a high-resolution media asset. As another example, the methods and systems described herein may allow for interactive reconfiguration of Internet web pages.
Numerous other changes may be made that will be apparent to those skilled in the art and that are encompassed within the spirit of the invention disclosed herein and defined by the claims.

Claims (46)

1. A method, comprising:
receiving a request to generate a video asset, the request identifying a starting frame and an ending frame in a keyframe master asset;
generating a first portion of the video asset, the first portion containing one or more keyframes associated with the starting frame, the keyframes obtained from the keyframe master asset; and
generating a second portion of the video asset, the second portion containing an optimized frame and the one or more keyframes, the optimized frame obtained from an optimized master asset associated with the keyframe master asset.
2. The method of claim 1, further comprising:
maintaining a library of master assets.
3. The method of claim 2, further comprising:
generating, for at least one master asset in the library, a corresponding keyframe master asset and a corresponding optimized master asset.
4. The method of claim 1, further comprising:
based on the request, identifying a starting keyframe of the keyframe master asset that corresponds to the starting frame.
5. The method of claim 1, further comprising:
based on the request, identifying an ending keyframe in the keyframe master asset that corresponds to the ending frame.
6. The method of claim 1, wherein the request further identifies a first portion of the video asset.
7. The method of claim 1, wherein the request further identifies a second portion of the video asset.
8. A method, comprising:
receiving a request to generate a video asset, the request identifying a starting frame and an ending frame in a master asset;
generating a first portion of the video asset, the first portion containing one or more keyframes associated with the starting frame, the keyframes obtained from a keyframe master asset corresponding to the master asset; and
generating a second portion of the video asset, the second portion containing an optimized frame and the one or more keyframes, the optimized frame obtained from an optimized master asset corresponding to the master asset.
9. The method of claim 8, further comprising:
maintaining a library of master assets.
10. The method of claim 9, further comprising:
generating, for at least one master asset in the library, a corresponding keyframe master asset and a corresponding optimized master asset.
11. The method of claim 8, further comprising:
based on the request, identifying a starting keyframe of the keyframe master asset that corresponds to the starting frame.
12. The method of claim 8, further comprising:
based on the request, identifying an ending keyframe in the keyframe master asset that corresponds to the ending frame.
13. The method of claim 8, wherein the request further identifies a first portion of the video asset.
14. The method of claim 8, wherein the request further identifies a second portion of the video asset.
15. A method, comprising:
receiving a request to generate a video asset, the request identifying a starting frame and an ending frame in an optimized master asset;
generating a keyframe master asset based on the optimized master asset, the keyframe master asset comprising one or more keyframes corresponding to the starting frame;
generating a first portion of the video asset, the first portion including at least the starting frame identified in the optimized master asset; and
generating a second portion of the video asset, the second portion comprising an optimized frame and the one or more keyframes, the optimized frame obtained from the optimized master asset.
16. The method of claim 15, further comprising:
maintaining a library of master assets.
17. The method of claim 16, further comprising:
generating, for at least one master asset in the library, a corresponding keyframe master asset and a corresponding optimized master asset.
18. The method of claim 15, further comprising:
based on the request, identifying a starting keyframe of the keyframe master asset that corresponds to the starting frame.
19. The method of claim 15, further comprising:
based on the request, identifying an ending keyframe in the keyframe master asset that corresponds to the ending frame.
20. The method of claim 15, wherein the request further identifies a first portion of the video asset.
21. The method of claim 15, wherein the request further identifies a second portion of the video asset.
22. A computer-readable medium encoded with or containing computer-executable instructions for performing a method comprising:
receiving a request to generate a video asset, the request identifying a starting frame and an ending frame in a keyframe master asset;
generating a first portion of the video asset, the first portion containing one or more keyframes associated with the starting frame, the keyframes obtained from the keyframe master asset; and
generating a second portion of the video asset, the second portion containing an optimized frame and the one or more keyframes, the optimized frame obtained from an optimized master asset associated with the keyframe master asset.
23. The computer-readable medium of claim 22, further comprising instructions for:
maintaining a library of master assets.
24. The computer-readable medium of claim 23, further comprising instructions for:
generating, for at least one master asset in the library, a corresponding keyframe master asset and a corresponding optimized master asset.
25. The computer-readable medium of claim 22, further comprising instructions for:
based on the request, identifying a starting keyframe of the keyframe master asset that corresponds to the starting frame.
26. The computer-readable medium of claim 22, further comprising instructions for:
based on the request, identifying an ending keyframe in the keyframe master asset that corresponds to the ending frame.
27. The computer-readable medium of claim 22, wherein the request further identifies a first portion of the video asset.
28. The computer-readable medium of claim 22, wherein the request further identifies a second portion of the video asset.
29. A computer-readable medium encoded with or containing computer-executable instructions for performing a method comprising:
receiving a request to generate a video asset, the request identifying a starting frame and an ending frame in a master asset;
generating a first portion of the video asset, the first portion containing one or more keyframes associated with the starting frame, the keyframes obtained from a keyframe master asset corresponding to the master asset; and
generating a second portion of the video asset, the second portion containing an optimized frame and the one or more keyframes, the optimized frame obtained from an optimized master asset corresponding to the master asset.
30. The computer-readable medium of claim 29, further comprising instructions for:
maintaining a library of master assets.
31. The computer-readable medium of claim 30, further comprising instructions for:
generating, for at least one master asset in the library, a corresponding keyframe master asset and a corresponding optimized master asset.
32. The computer-readable medium of claim 29, further comprising instructions for:
based on the request, identifying a starting keyframe of the keyframe master asset that corresponds to the starting frame.
33. The computer-readable medium of claim 29, further comprising instructions for:
based on the request, identifying an ending keyframe in the keyframe master asset that corresponds to the ending frame.
34. The computer-readable medium of claim 29, wherein the request further identifies a first portion of the video asset.
35. The computer-readable medium of claim 29, wherein the request further identifies a second portion of the video asset.
36. A computer-readable medium encoded with or containing computer-executable instructions for performing a method comprising:
receiving a request to generate a video asset, the request identifying a starting frame and an ending frame in an optimized master asset;
generating a keyframe master asset based on the optimized master asset, the keyframe master asset comprising one or more keyframes corresponding to the starting frame;
generating a first portion of the video asset, the first portion including at least the starting frame identified in the optimized master asset; and
generating a second portion of the video asset, the second portion comprising an optimized frame and the one or more keyframes, the optimized frame obtained from the optimized master asset.
37. The computer-readable medium of claim 36, further comprising instructions for:
maintaining a library of master assets.
38. The computer-readable medium of claim 37, further comprising instructions for:
generating, for at least one master asset in the library, a corresponding keyframe master asset and a corresponding optimized master asset.
39. The computer-readable medium of claim 36, further comprising instructions for:
based on the request, identifying a starting keyframe of the keyframe master asset that corresponds to the starting frame.
40. The computer-readable medium of claim 36, further comprising instructions for:
based on the request, identifying an ending keyframe in the keyframe master asset that corresponds to the ending frame.
41. The computer-readable medium of claim 36, wherein the request further identifies a first portion of the video asset.
42. The computer-readable medium of claim 36, wherein the request further identifies a second portion of the video asset.
43. A system, comprising:
a master asset library storing at least one high resolution master asset;
a specification applicator storing at least one edit specification for applying edits to the at least one high-resolution master asset;
a master asset editor that applies the at least one edit specification to the at least one high-resolution master asset; and
an edit asset generator that generates a low resolution master asset corresponding to the high resolution master asset.
44. The system of claim 43, further comprising:
an editor associated with the remote computing device that generates the at least one edit specification in response to one or more instructions from a user.
45. The system of claim 43, wherein the master asset library stores keyframe master assets and optimized master assets corresponding to the at least one high resolution master asset.
46. The system of claim 43, wherein the master asset library stores the low resolution master asset.
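Purely as a non-limiting illustration of the two-part generation recited in the claims above, the following Python sketch assembles a video asset from a starting frame and an ending frame, drawing frames before the next keyframe boundary from a keyframe master asset and the remainder from the corresponding optimized master asset; the keyframe interval, frame numbering, and boundary rule are assumptions made only for this sketch.

# Simplified illustration: frames before the next keyframe boundary come from
# the keyframe master asset (every frame independently decodable), and the
# remaining frames come from the optimized master asset.
KEYFRAME_INTERVAL = 10  # assume the optimized master has a keyframe every 10 frames

def generate_video_asset(start_frame: int, end_frame: int) -> dict:
    next_boundary = ((start_frame + KEYFRAME_INTERVAL - 1)
                     // KEYFRAME_INTERVAL) * KEYFRAME_INTERVAL
    boundary = min(next_boundary, end_frame + 1)
    first_portion = [("keyframe_master", f) for f in range(start_frame, boundary)]
    second_portion = [("optimized_master", f) for f in range(boundary, end_frame + 1)]
    return {"first_portion": first_portion, "second_portion": second_portion}

if __name__ == "__main__":
    asset = generate_video_asset(start_frame=23, end_frame=45)
    print(len(asset["first_portion"]), "frames from the keyframe master,")
    print(len(asset["second_portion"]), "frames from the optimized master")

Because the second portion can reuse the more heavily compressed optimized frames, only the short run of frames between the requested starting frame and the next keyframe boundary needs to come from the larger keyframe master asset.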
HK09108059.2A 2006-01-13 2007-01-12 Method and system for creating and applying dynamic media specification creator and applicator HK1130136A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US60/758,664 2006-01-13
US60/790,569 2006-04-10
US11/622,938 2007-01-12

Publications (1)

Publication Number Publication Date
HK1130136A (en) 2009-12-18

Similar Documents

Publication Publication Date Title
US8411758B2 (en) Method and system for online remixing of digital multimedia
US8868465B2 (en) Method and system for publishing media content
KR100976887B1 (en) Method and system for creating and applying dynamic media specification builder and applicator
KR100991583B1 (en) A method, computer readable storage medium and system for combining editorial information with media content
US20080016245A1 (en) Client side editing application for optimizing editing of media assets originating from client and server
KR100987862B1 (en) Method and system for recording edits in media content
CN101395918A (en) Methods and systems for creating and applying dynamic media specification creators and applicators
HK1130136A (en) Method and system for creating and applying dynamic media specification creator and applicator
HK1130137A (en) Method and system for combining edit information with media content
HK1130138A (en) Method and system for recording edits to media content
HK1130099A (en) Method and system for online remixing of digital multimedia