Detailed Description
Embodiments of the present application will be described in detail below with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be executed in a service device or a similar computing device. Taking operation on the service device as an example, fig. 1 is a hardware structure block diagram of a service device for a media resource loading method according to an embodiment of the present application. As shown in fig. 1, the service device may include one or more processors 102 (only one is shown in fig. 1; the processor 102 may include, but is not limited to, a microprocessor (MPU), a programmable logic device (FPGA), or other processing means) and a memory 104 for storing data, and the service device may further include a transmission device 106 for communication functions and an input/output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and is not intended to limit the structure of the service device described above. For example, the service device may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store a computer program, for example, a software program of application software and a module, such as a computer program corresponding to a loading method of a media resource in an embodiment of the present application, and the processor 102 executes the computer program stored in the memory 104, thereby performing various functional applications and data processing, that is, implementing the above-mentioned method. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located with respect to the processor 102, which may be connected to the service device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. The specific example of the network described above may include a wireless network provided by a communication provider of a service device. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as a NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
Fig. 2 is a flowchart (one) of a method for loading media resources according to an embodiment of the present application, wherein the media resources include rich media resources and multimedia resources. As shown in fig. 2, the flow includes the following steps:
step S202, generating a first track file corresponding to a rich media resource and a second track file of a multimedia resource, and packaging the first track file and the second track file into a target container to generate a media file, wherein the rich media resource is a resource which is overlapped on an interface corresponding to the multimedia resource;
It should be noted that the multimedia resources include but are not limited to video and audio, and the rich media resources include but are not limited to barrages, stickers, subtitles and special effects.
Step S202 can ensure separation and modularization of resources by abstracting and modeling the multimedia resources and rich media resources into independent track files, thereby facilitating subsequent unified encapsulation and flexible control. The track files are then combined into the media file by a specific packaging technique (e.g., using container formats such as MP4 or WebM). This unified packaging approach enables different types of resources to be transmitted as a whole, reduces the number of client requests, simplifies resource management and synchronization, and improves transmission efficiency.
Step S204, generating a resource list file according to the control logic of the first track file;
The resource manifest file (typically presented in JSON, XML or YAML format) in step S204 is used to describe the meta information of all tracks enclosed in the media file and the file of control rules. The method comprises key information such as the ID, the type, the coding format, the priority, the display condition (such as the equipment type, the user identity and the playing scene), the rendering sequence, the dependency relationship and the like of the track. The generation of the manifest file is based on the control logic of the rich media resource track file, and provides a detailed resource management guide for the client, so that the client can dynamically decide which resources are loaded, when and how to render, thereby realizing intelligent scheduling and display of the resources.
It should be noted that the resource list file may further include control logic of the multimedia resource, where the control logic of the multimedia resource includes, but is not limited to, one-key skip, starting audio display when video playback reaches a certain time point, and masking a target object or a target voiceprint.
Step S206, the media file and the resource list file are sent to the client, so that the client loads the multimedia resource according to the media file, and loads the rich media resource according to the media file and the resource list file.
The media file and the resource manifest file are sent to the client, and can be efficiently distributed through a standard CDN (content distribution network), or can be delivered through a self-developed distribution channel. The unified distribution mechanism ensures the consistency of the resources and control information received by the client.
On the client side, the client first parses the resource manifest file to determine which tracks (multimedia resources and rich media resources) to load based on the description and control logic therein. The client will then de-encapsulate and decode the corresponding resources from the media file according to the requirements of these tracks. Through the unified rendering interface, different types of track resources can be effectively overlapped on the video playing picture, so that the richness and the diversification of visual contents are realized. In addition, the resource manifest file enables the client to flexibly adjust the loading and rendering policies of the resources according to device characteristics, user preferences, and network conditions, thereby optimizing the playback experience.
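The client-side selection logic described above can be sketched as follows. This is a minimal illustration only; the manifest field names ("id", "type", "priority", "conditions") and condition semantics are assumptions for the example, not a normative format.

```python
import json

# Illustrative manifest: one main video track and one device-conditioned danmaku track.
manifest_json = """
{
  "tracks": [
    {"id": "track_main_video", "type": "video", "priority": 1, "conditions": {}},
    {"id": "track_danmaku", "type": "danmaku", "priority": 3,
     "conditions": {"device": "mobile"}}
  ]
}
"""

def select_tracks(manifest, device):
    """Return the tracks whose conditions match the client, ordered by priority."""
    chosen = []
    for track in manifest["tracks"]:
        cond = track.get("conditions", {})
        # A track with a device condition is loaded only on that device type.
        if "device" in cond and cond["device"] != device:
            continue
        chosen.append(track)
    # Smaller priority value means higher priority, so sort ascending.
    return sorted(chosen, key=lambda t: t["priority"])

manifest = json.loads(manifest_json)
pc_tracks = select_tracks(manifest, "pc")          # danmaku track is skipped
mobile_tracks = select_tracks(manifest, "mobile")  # both tracks load
```

In this sketch, a PC client skips the mobile-only danmaku track while a mobile client loads both, mirroring the dynamic loading decision the manifest enables.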
Through the above steps, a first track file corresponding to the rich media resource and a second track file of the multimedia resource are generated, and the first track file and the second track file are packaged into the target container to generate the media file, wherein the rich media resource is a resource superimposed on the interface corresponding to the multimedia resource; the resource list file is generated according to the control logic of the first track file; and the media file and the resource list file are sent to the client, so that the client loads the multimedia resource according to the media file and loads the rich media resource according to the media file and the resource list file.
Alternatively, the generation of the first track file corresponding to the rich media resource in the step S202 may be implemented by determining attribute information of each sub-resource in the rich media resource, generating a track event of each sub-resource according to the attribute information of each sub-resource, and generating the first track file of the rich media resource according to the display time of each sub-resource and the track event of each sub-resource.
In an embodiment of the present application, detailed attribute information of each sub-resource (e.g., single bullet screen, single piece of artwork, single-line caption, etc.) is extracted or defined, wherein the attribute information includes, but is not limited to, type of resource (e.g., image, text, interactive instruction, etc.), size, location, presentation time, duration, special effect parameters (e.g., zoom, rotation, transparency, etc.). A clear description framework is built for each sub-resource by determining attribute information.
The attribute information of each sub-resource is then translated into one track event or a series of track events. The track event is a data structure describing the specific behavior of the sub-resource on the time axis, and contains key information such as the presentation time point, duration, and position parameters of the sub-resource. For example, for a bullet screen resource, the track event may include the bullet screen text, display time, duration, speed, display location, etc. The event data will be used to construct a track structure for the entire rich media resource, ensuring that the resource is presented correctly during playback.
And finally, summarizing the track events of all the sub-resources into a structured first track file, wherein the first track file is in a specific format (such as JSON, webVTT, custom binary format and the like) so as to be packaged into a multimedia container. The structure of the track file needs to contain the display time, type identification, metadata and other control information for each track event so that the client can understand when to load, how to decode and render each sub-resource, thereby ensuring synchronization of the rich media resource with the main video stream.
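The attribute-to-event translation and aggregation described above can be sketched as follows. The field names ("display_time", "payload", etc.) are illustrative assumptions, not a normative track file format.

```python
import json

def make_track_event(sub_resource):
    """Translate one sub-resource's attributes into a track event."""
    return {
        "start_time": sub_resource["display_time"],
        "duration": sub_resource.get("duration", 5.0),  # assumed default duration
        "type": sub_resource["type"],
        "payload": sub_resource["content"],
        "position": sub_resource.get("position", "top"),
    }

# Illustrative sub-resources: one bullet screen line and one sticker.
sub_resources = [
    {"type": "danmaku", "content": "hello", "display_time": 12.0, "duration": 4.0},
    {"type": "sticker", "content": "smile.png", "display_time": 3.0},
]

# Events are ordered by display time so the client can render sequentially.
events = sorted((make_track_event(s) for s in sub_resources),
                key=lambda e: e["start_time"])

# The aggregated, structured first track file (JSON variant).
first_track_file = json.dumps({"track_type": "overlay", "events": events})
```

The resulting JSON document plays the role of the first track file: display time, type identification, and control information per event, ready to be encapsulated into the container.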
Alternatively, processing the rich media asset into a standardized track file may be divided into the following steps:
Step 1, preparing input resources;
1. Main video file: ensure the video file is available, for example stored in .mp4 or .webm format.
2. Audio file: confirm that the audio file exists and that its encoding format is AAC or Opus.
3. Expression map images: collect all the expression map images to be processed; the image resources may be in formats such as png or webp.
4. Bullet screen text script: prepare a JSON-format file containing the bullet screen content and its display timestamps.
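The bullet screen script of item 4 might, for example, take the following shape; the file name and field names are illustrative assumptions.

```python
import json

# Illustrative danmaku script: each entry pairs a comment with its timestamp.
danmaku_script = [
    {"timestamp": 10.2, "text": "first comment"},
    {"timestamp": 25.7, "text": "second comment"},
]

# Persist the script as a JSON file, as required by the preparation step.
with open("danmaku_script.json", "w", encoding="utf-8") as f:
    json.dump(danmaku_script, f, ensure_ascii=False, indent=2)

# The processing tool later reads it back the same way.
with open("danmaku_script.json", encoding="utf-8") as f:
    loaded = json.load(f)
```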
Step 2, processing by using a tool script;
1. Configure the ffmpeg and Python environments: ensure that ffmpeg is installed and configured and that a working Python environment is available.
2. Write a Python script: create a Python script that reads the expression maps and the barrage script, then processes the resources with ffmpeg so that they conform to the standard track file format. The script is responsible for matching each expression map to its display time in the video and generating the corresponding track description.
Step 3, creating a track description;
1. Extract expression map information: use the Python script to extract necessary information, such as picture file names, position data, and display times, from the expression map resources.
2. Associate timestamps: the Python script reads the timestamp information in the barrage script and associates it with the expression map information.
3. Generate track description data: construct the track description of the processed information in a standardized format (such as JSON), where the track description comprises the following fields:
track_id: the unique identifier of the track.
type: the resource type, e.g., image-overlay.
start_time: the time at which the resource starts to display.
duration: how long the resource is displayed.
image_src: the file name or path of the image resource.
position: the display position of the image on the video picture.
scale: the scaling factor of the image.
For example:
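A track description using exactly the fields listed above might look as follows; the concrete values are illustrative.

```python
import json

# One illustrative track description for an expression map overlay.
track_description = {
    "track_id": "overlay_001",
    "type": "image-overlay",
    "start_time": 12.5,                 # seconds into the video
    "duration": 3.0,                    # seconds on screen
    "image_src": "smile.png",
    "position": {"x": 0.8, "y": 0.1},   # normalized picture coordinates (assumed)
    "scale": 0.5,
}

# Serialized form, as it would appear in the standardized JSON output.
serialized = json.dumps(track_description)
```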
Step 4, integrating track data;
1. Serialize the track events: concatenate all expression map and bullet screen events in chronological order to form a JSON sequence, so that multiple track events can be described in one file.
2. Output a track file: write the generated track description sequence into a standard track file, such as overlay_track.json. This file contains a standardized description of all the expression stickers and barrages so that the subsequent packaging process can identify and process them.
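The integration of step 4 can be sketched as follows; the event contents are illustrative.

```python
import json

# Illustrative events from the two rich media sources.
sticker_events = [{"track_id": "s1", "type": "image-overlay", "start_time": 8.0}]
danmaku_events = [{"track_id": "d1", "type": "danmaku", "start_time": 2.5},
                  {"track_id": "d2", "type": "danmaku", "start_time": 30.0}]

# Merge and order all events chronologically into one JSON sequence.
all_events = sorted(sticker_events + danmaku_events,
                    key=lambda e: e["start_time"])

# Write the standard track file consumed by the packaging step.
with open("overlay_track.json", "w", encoding="utf-8") as f:
    json.dump(all_events, f, indent=2)

with open("overlay_track.json", encoding="utf-8") as f:
    reloaded = json.load(f)
```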
Through the steps, originally scattered rich media resources, such as expression maps and barrages, are converted into a structured track file format which can be uniformly managed, and standardized management and efficient distribution of the resources are realized.
Alternatively, "encapsulating the first track file and the second track file into a target container to generate a media file" in the above step S202 may be implemented by converting the first track file into a first data file in a target format, converting the second track file into a second data file in a target format, and writing the first data file and the second data file into target areas of the target container, respectively, and executing a target encapsulation command to encapsulate the target containers written with the first data file and the second data file into the media file.
The first track file is a track description file of rich media resources, such as a bullet screen, subtitle, or special effect track. The rich media resource is converted into a target format (such as a custom box structure conforming to the ISO Base Media File Format standard) so that it can be correctly identified and decoded by existing media containers (such as MP4 and WebM). Format conversion includes, but is not limited to, reconstructing the first track file (JSON, XML, or binary data) into the format of a particular region (e.g., a moov, trak, or mdia box) in the container file.
The second track file is the basic video and audio track file and further processing is required during the encapsulation process to ensure compatibility with the track data of the rich media asset.
After the format conversion is completed, the first and second data files will be written to designated areas, or boxes, of the target container (e.g., MP4), respectively. The designated area depends on the specification of the specific container format; for example, the second data file (audio/video data) may be written into a moof (Movie Fragment) box, and the first data file (rich media track data) into an MP4 user extension box (a udta or uuid box).
The packaging command is executed by a media packaging tool (such as ffmpeg or GPAC MP4Box), which packages the processed video, audio, and additional rich media resource track data into a single media file. This process includes, but is not limited to, writing all track data and metadata into the corresponding boxes of the media container, ensuring consistency of the timeline and integrity of the decoding information, and finally generating the media file (e.g., unified_asset.mp4).
The format conversion in the embodiment of the application ensures that rich media resources can be supported by various devices and clients, and avoids resource loading failure or abnormal playing caused by incompatibility of formats. By writing the first and second data files into different areas of the target container respectively and then executing the packaging command uniformly, all resources can be integrated into a single media file efficiently, and the request times and the data processing complexity of the client are reduced.
Optionally, the process of uniformly packaging the main video, the main audio and the plurality of track files into one resolvable multi-track container comprises the following steps:
Step 1, preparing input data;
1. Video/audio resources: video and audio files, such as video in .mp4 or .webm format and AAC- or Opus-encoded audio.
It should be noted that the main audio and main video are already in the form of tracks in practice, which are the first and most basic components of a multitrack container.
2. JSON-format track file: a JSON file describing additional resources (such as maps and barrages), for example overlay_track.json; ensure that all necessary attribute information is contained in the file.
3. Time axis information (time anchor points): collect or generate time point information synchronized with the video content, used to locate when the track data is displayed or applied.
Step 2, track data conversion;
1. Convert the track to binary data: convert the JSON-format track file into binary data using a suitable tool (such as ffmpeg combined with a Python script) so that it fits the storage format of the multimedia container.
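The conversion of step 2 can be sketched as follows. The length-prefixed framing used here is an illustrative assumption for the example, not a container-format requirement.

```python
import json
import struct

def track_to_binary(track):
    """Serialize a JSON track to UTF-8 bytes with a 4-byte big-endian length prefix."""
    payload = json.dumps(track).encode("utf-8")
    return struct.pack(">I", len(payload)) + payload

def binary_to_track(blob):
    """Inverse operation: read the length prefix, then decode the JSON payload."""
    (length,) = struct.unpack(">I", blob[:4])
    return json.loads(blob[4:4 + length].decode("utf-8"))

track = {"id": "overlay", "events": [{"time": 1.0, "text": "hi"}]}
blob = track_to_binary(track)
round_tripped = binary_to_track(blob)
```

The round trip demonstrates that the binary form carries the full JSON track data, which is what a subsequent box-writing step would embed in the container.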
Step 3, packaging the multi-track container;
1. Select a tool chain: decide whether to use GPAC MP4Box or FFmpeg + bmffmux for encapsulation. GPAC MP4Box is suitable for fast, simple packaging, while FFmpeg + bmffmux provides greater customization capability and flexibility.
2. Package: the main video and audio files are packaged as part of a multi-track container using the selected tool.
For example, when ffmpeg is used, the -map parameter is used to map the video and audio streams into the output container.
Step 4, writing an extension box;
Write the user extension box: write the converted track data into an extension box of the MP4 container for storage, for example a udta or uuid box.
Step 5, mapping the track type;
Define track types: define the type and purpose of each track in the container, for example:
Video: Track 1 (type: vide);
Audio: Track 2 (type: soun);
Overlay: Track 3 (type: meta/json);
Step 6, executing the packaging command;
The packaging command is as follows:
ffmpeg -i video.mp4 -i overlay_track.json \
  -map 0:v -map 0:a -map 1 \
  -c copy -metadata:s:2 handler="overlay-json" \
  -movflags use_metadata_tags \
  output_with_tracks.mp4
1. Specify input files: the -i parameter is used to take the main video, main audio, and track file (e.g., overlay_track.json) as input.
2. Map streams: the -map parameter specifies which streams to extract from which inputs; e.g., -map 0:v -map 0:a -map 1 maps the video, audio, and the second input (the track file) to the output container.
3. Copy encoding: using -c copy avoids re-encoding the video and audio, maintaining the original quality.
4. Add metadata: -metadata:s:2 handler="overlay-json" adds track-handling metadata to the corresponding stream, where handler lets the client identify the track type, and s:2 denotes the stream with index 2 (the third stream, i.e., the overlay track).
5. Use movflags: -movflags use_metadata_tags ensures that metadata tags are written correctly.
6. Output the encapsulated file: after the command executes, a single encapsulated media file containing all tracks (video, audio, maps, bullet screens, etc.) is produced, for example named output_with_tracks.mp4.
Through the steps, various types of resource data can be integrated into a unified multi-track container, so that analysis and efficient playing of a client side are facilitated, and synchronization and cross-platform consistency of resources are ensured.
Optionally, the embodiment of the application provides a technical scheme for dynamically updating rich media resources (such as barrages, pictures, special effects and the like), which comprises the steps of determining the updating state of the rich media resources, generating a third track file according to the updated rich media resources when the updating state of the rich media resources indicates that the rich media resources are updated, and packaging the third track file into a target container to generate the updated media file.
Whether a rich media resource in the multimedia content needs to be updated is monitored and detected; the determination of the update status may be accomplished in a variety of ways, including but not limited to version control, resource hash value checking, and update timestamp comparison. When an update to the rich media resource is detected, the new resource is processed to generate or update the corresponding track file (i.e., the third track file). The generated third track file contains the updated sub-resource attribute information, track events, display times, and other data, ensuring that the resource is current and accurate.
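The hash-checking variant of update detection can be sketched as follows; the function names are illustrative.

```python
import hashlib

def resource_hash(data: bytes) -> str:
    """Fingerprint the resource bytes with SHA-256."""
    return hashlib.sha256(data).hexdigest()

def needs_update(old_data: bytes, new_data: bytes) -> bool:
    """The rich media resource is considered updated when its hash changes."""
    return resource_hash(old_data) != resource_hash(new_data)

# Same bytes: no update needed; changed bytes: trigger third-track generation.
unchanged = needs_update(b"danmaku-v1", b"danmaku-v1")
changed = needs_update(b"danmaku-v1", b"danmaku-v2")
```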
And finally, uniformly packaging the updated third track file and other tracks (such as video and audio tracks) in the original media file to form the updated media file. The updated media file not only contains video and audio, but also contains the latest rich media resource track, thereby realizing seamless upgrading of the whole resource.
Optionally, taking a newly added bullet screen track file as an example, the new track expansion steps are as follows:
Step 1, constructing a new bullet screen track and converting the format;
1. Modeling a time axis event of a new bullet screen style;
The newly designed bullet screen styles are converted into event sequences on the time axis using, for example, ffmpeg in combination with Python processing scripts, or dedicated media editing software. Each event contains information such as the display time, text content, and presentation style of the bullet screen, and can be expressed in WebVTT (a text format for subtitles and metadata) or a custom JSON-Event format.
JSON-Event is a flexible data exchange format that can describe various attributes of each bullet Event. For example, the following is a simple example of a JSON-Event:
{"time": 32.5, "text": "this point turns too fast"},
{"time": 45.0, "text": "who understands", "style": "fade-in"}.
Each JSON object represents a bullet event, and includes a time point (in seconds) at which the event occurs, text content of the bullet, and a specific display style (such as emphasis effect or fade-in effect).
Step 2, track encapsulation and media file updating;
1. Packaging the track file into an existing MP4 container;
The track file is encapsulated with the existing media file (main video + audio tracks) using an ffmpeg encapsulation command, forming an updated media file.
An example ffmpeg encapsulation command:
ffmpeg -i main.mp4 -i new_danmaku_track.json \
  -map 0 -map 1 \
  -c copy -metadata:s:1 handler_name="danmaku_beta" \
  -movflags use_metadata_tags \
  output_with_danmaku.mp4
-i main.mp4 and -i new_danmaku_track.json specify the main video file and the JSON description file of the new bullet screen track as inputs, respectively.
The -map 0 and -map 1 parameters tell ffmpeg which streams to select from the first and second inputs for output; here, all streams of main.mp4 are kept and new_danmaku_track.json is added as an extra track.
-c copy means the video and audio encoding is kept unchanged; the streams are copied directly to the output file, avoiding quality and performance loss due to re-encoding.
-metadata:s:1 handler_name="danmaku_beta" adds a metadata tag to the newly added bullet screen track, indicating that it is a bullet screen track with a specific handler name (handler_name) for identification and control when the client parses the file.
-movflags use_metadata_tags ensures that ffmpeg uses metadata tags to correctly create the metadata portion of the output file; this is particularly important for multi-track encapsulation, as it helps clients know how to process each track.
output_with_danmaku.mp4 is the final packaged media file, containing the original video and audio information as well as the newly added bullet screen track data.
Through the steps, the dynamic updating of the rich media resources is realized without the integral updating of the App, so that the flexibility and the efficiency of content updating are greatly improved, and the continuity and the improvement of user experience are ensured.
Optionally, after the third track file is packaged into the target container to generate the updated media file, generating an updated resource list file according to meta information and control logic of the third track file, sending the updated media file and the updated resource list file to a client so that the client loads the multimedia resource according to the updated media file and loads the updated rich media resource according to the updated media file and the updated resource list file.
The third track file contains newly added or updated rich media resource track data, including meta information such as the type, display time, duration, position, special effect parameters, priority, dependency relationship and the like of the resource. The resource manifest file is responsible for describing the meta information and control logic of all tracks, so that after updating the third track file, a resource manifest file of the latest resource status is generated, i.e. the updated resource manifest file.
The updated media file contains all multimedia resources and the newly added third track file, and the updated resource manifest file is a comprehensive description and control instruction for all available tracks. These two files need to be sent to the client together to ensure that the client can play according to the latest resource data and control logic.
After receiving the updated media file and resource manifest file, the client first parses the resource manifest file to see which resource tracks are available, and their control logic. Then, the client loads corresponding track data according to the indication of the manifest file. And for the updated rich media resource, the client loads and renders according to the display time and the control logic calculated in real time, so as to ensure accurate synchronization with the video content.
According to the embodiment of the application, the third track file and the resource list file in the media container are synchronously updated, so that the instant update and dynamic control of the resources are realized, the strong and flexible content management capability is provided for an online video platform, and the user experience and the resource distribution efficiency are obviously improved.
Optionally, a new track is added to the resource control list through the following steps:
Step 1, defining meta information and control information of a new track;
Basic properties of the new track and its control logic are determined. Specifically, the definition includes the following key fields, for example (concrete values are illustrative):
{"id": "track_danmaku_beta",
"type": "danmaku",
"codec": "JSON-Event",
"priority": 3,
"conditions": {"user_group": "beta", "feature_flag": "enable_danmaku_v2"},
"render_layer": "danmaku-layer",
"dependencies": ["track_main_video"]}.
id: "track_danmaku_beta", the unique identifier of the new track, used to distinguish and reference different resource tracks.
type: "danmaku", identifies the type of resource carried by the track, here the barrage.
codec: "JSON-Event", describes the coding format or data structure of the resource; here the barrage data is stored in JSON-Event form.
priority: 3, defines the priority of the track; the smaller the value, the higher the priority, affecting the loading order and display level of the resource.
conditions: sets the conditions for track enablement, e.g., open only to "beta" test users and requiring the "enable_danmaku_v2" feature flag to be on.
render_layer: determines the display layer of the resource in the playback picture, ensuring correct superposition of the resource.
dependencies: ["track_main_video"], lists the other tracks that must be loaded before this track renders, ensuring that the main video track is loaded and ready before the bullet screen track.
Step 2, integrating new track information into a resource control list;
The manifest is typically a structured file, such as JSON format, that describes the meta-information and control logic of all tracks in the media container. The process of adding new track information in the manifest includes:
1. Open the current resource control manifest file:
The currently stored resource control list is read, which may be a JSON-format file containing the description information of all known tracks.
2. Adding a new track description to the manifest:
The newly defined track information, described in full above as "track_danmaku_beta", is added in the appropriate place in the manifest (e.g., the tracks array). Ensure the structural integrity of the manifest and correct field formats so that the client can parse it successfully.
3. Save the updated resource control list:
Store the modified manifest file to ensure that the newly added track information persists and is available for subsequent distribution and client reading.
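The three steps above can be sketched as follows; the manifest layout follows the fields described in the text, and concrete values are illustrative.

```python
import json

# Step 1 equivalent: the currently stored manifest (illustrative contents).
manifest = {"tracks": [{"id": "track_main_video", "type": "video"}]}

# The newly defined track, using the key fields described earlier.
new_track = {
    "id": "track_danmaku_beta",
    "type": "danmaku",
    "codec": "JSON-Event",
    "priority": 3,
    "dependencies": ["track_main_video"],
}

# Step 2: append the new track description to the tracks array.
manifest["tracks"].append(new_track)

# Step 3: persist the updated manifest for distribution and client reading.
with open("manifest.json", "w", encoding="utf-8") as f:
    json.dump(manifest, f, indent=2)

with open("manifest.json", encoding="utf-8") as f:
    saved = json.load(f)
```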
Through the above steps, the resource control list is updated to contain the newly added barrage track information, which provides a basis for further distribution and for clients to dynamically select and load resources. The client can intelligently decide whether to load the track_danmaku_beta track according to the updated manifest file, and how to display it according to its control logic, thereby adding a playback function or content without an App upgrade.
Optionally, generating a resource manifest file according to the control logic of the first track file comprises obtaining meta information of the first track file and writing the meta information into a first field in the resource manifest file, and determining the control logic according to the resource type of the rich media resource and writing the control logic into a second field in the resource manifest file.
Meta information includes, but is not limited to, track ID, resource type, codec type, priority, display time range, etc. This portion of information is critical to the client's identification and processing of particular resources. Meta-information is extracted from the first track file (e.g., packaged bullet screen, special effects track file), which can be parsed out and prepared to be written to the first field of the resource manifest file using a specialized media file parsing library (e.g., FFmpeg, gst-Parse, etc.) or custom parsing script.
The first field generally refers to the portion that directly describes a track resource's basic information, such as each item in the tracks array of the manifest. Different types of rich media resources (e.g., barrages, special effects, subtitles) may have different control logic; for example, barrages may need to take into account user identity, playback speed adjustment, and display density control, while special effects may involve device capability and version compatibility decisions. The control logic determination process derives the corresponding enabling rules, display priorities, rendering order, and dependency relationships from the characteristics of the resource type and the platform's service policies. The second field may contain conditional expressions, priority settings, dependency declarations, etc. By writing the control logic information into the second field, the client obtains a resource processing guide along with the media container file, enabling intelligent resource scheduling and rendering control.
Alternatively, the following is a main structure and field description of the manifest file, which may be packaged in JSON, YAML or binary metadata formats:
Track ID: the unique identifier of the track, used for internal scheduling and reference, so that the client can accurately identify and control each track.
Type: used for classifying and managing different kinds of resources, such as audio/video, subtitles, bullet screens, overlays (stickers), and special effects, to enable differentiated processing and rendering.
Codec: the decoding format or resource packaging sub-format, which defines how the resource is decoded and rendered and ensures a match with the client's decoding capability.
Priority: the rendering priority; the smaller the value, the higher the priority. It controls the loading order of resources and resolves resource conflicts.
Conditions: control rules, including device type, client version, user tags, feature switches, and the like, used to decide under what conditions a particular track is loaded and shown.
Layer: the rendering layer of the resource on the playback interface, which ensures correct stacking of resources, e.g., above the main picture, the subtitle layer, or the special-effect layer.
Dependencies: the tracks that must be loaded, according to the dependency relationships, before this resource renders, to enable coordinated display among resources.
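The field descriptions above can be assembled into a concrete manifest, here built as a Python dict and serialized to JSON. The key names and values are illustrative assumptions, not a normative schema from the application.

```python
import json

# Illustrative manifest covering the fields described above; key names
# ("track_id", "layer", etc.) and all values are assumptions.
manifest = {
    "version": 1,
    "tracks": [
        {
            "track_id": "track_video_main",
            "type": "video",
            "codec": "h264",
            "priority": 0,   # smaller value = higher priority
            "layer": 0,      # bottom layer: main picture
            "conditions": {},
            "dependencies": [],
        },
        {
            "track_id": "track_danmaku_beta",
            "type": "danmaku",
            "codec": "json-events",
            "priority": 20,
            "layer": 2,      # rendered above the subtitle layer
            "conditions": {"min_version": "2.0", "feature_flag": "danmaku_beta"},
            "dependencies": ["track_video_main"],
        },
    ],
}
print(json.dumps(manifest, indent=2))
```

The same structure could equally be emitted as YAML or a binary metadata blob, as the text notes.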
The resource manifest file not only describes static information about the resources but also supports dynamic control of resource behavior. The scope of the control logic includes:
1. Load behavior control:
The appropriate tracks are selected for loading based on the client device type (mobile, TV, PC).
A fallback policy is executed automatically according to the client's minimum version requirement; a low-version client may skip incompatible tracks to preserve basic playback.
Experimental or special-purpose tracks are enabled through a feature switch (feature_flag), supporting staged (gray) release and A/B testing of features.
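The three load-behavior rules above can be sketched as a single predicate over a track's conditions field. The condition key names and the version format are illustrative assumptions.

```python
# Hedged sketch of load behavior control: decide whether a track should
# be loaded on a given client. All field names are illustrative.
def should_load(track: dict, client: dict) -> bool:
    cond = track.get("conditions", {})
    # Device-type gating (mobile / tv / pc).
    devices = cond.get("device_type")
    if devices and client["device"] not in devices:
        return False
    # Minimum-version fallback: old clients skip incompatible tracks
    # but keep basic playback.
    min_ver = cond.get("min_version")
    if min_ver and tuple(map(int, client["version"].split("."))) < \
            tuple(map(int, min_ver.split("."))):
        return False
    # Feature flag gating for experimental tracks (gray release / A/B tests).
    flag = cond.get("feature_flag")
    if flag and flag not in client.get("enabled_flags", set()):
        return False
    return True

track = {"conditions": {"min_version": "2.0", "feature_flag": "danmaku_beta"}}
old_client = {"device": "mobile", "version": "1.8", "enabled_flags": {"danmaku_beta"}}
new_client = {"device": "mobile", "version": "2.1", "enabled_flags": {"danmaku_beta"}}
print(should_load(track, old_client), should_load(track, new_client))  # False True
```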
2. Rendering priority control:
The priority of overlay (sticker) resources is set lower than that of subtitles and the main video, to avoid covering important information.
The priority of the bullet-screen track is adjusted according to the user's subscription status; for example, a VIP user may enjoy higher-definition or more timely bullet-screen display.
3. Time range and trigger mechanism:
The playback time range of a track is limited, preventing the resource from interfering with the viewing experience at unexpected time points.
A condition-triggered mechanism is set, e.g., a specific track is loaded automatically when playback reaches a certain time point, enabling precise resource scheduling.
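The time-range limit above can be sketched as a small predicate. The window values and the default "always active" range are illustrative assumptions.

```python
# Sketch of the time-range limit: a track is only considered active
# inside its declared display window; tracks without a declared
# window are treated as always active (an assumption for this sketch).
def active_at(track: dict, t: float) -> bool:
    start, end = track.get("time_range", (0.0, float("inf")))
    return start <= t <= end

ad_overlay = {"time_range": (30.0, 45.0)}  # hypothetical overlay track
print(active_at(ad_overlay, 10.0), active_at(ad_overlay, 40.0))  # False True
```

A condition trigger can then be implemented by polling `active_at` against the current playback position and loading the track the first time it returns True.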
4. Multi-track mutual exclusion and combining strategy:
In some cases, mutually exclusive relationships between tracks are defined to prevent resource conflicts; for example, an advanced filter and certain interactive instructions may not be available at the same time.
Track loading and rendering are scheduled intelligently through a dependency tree or combination scenarios, enabling complex resource combinations and layered display.
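The mutual-exclusion check and dependency-tree scheduling above can be sketched together: reject conflicting combinations, then visit dependencies depth-first so that prerequisites load before dependents. Track names and the data shapes are illustrative assumptions.

```python
# Sketch: resolve a load order from dependency declarations and reject
# mutually exclusive track combinations. All names are illustrative.
def resolve_load_order(tracks: dict, wanted: list, exclusive: set) -> list:
    # Reject mutually exclusive pairs up front.
    for a, b in exclusive:
        if a in wanted and b in wanted:
            raise ValueError(f"tracks {a} and {b} are mutually exclusive")
    order, seen = [], set()

    def visit(tid):
        if tid in seen:
            return
        seen.add(tid)
        for dep in tracks[tid].get("dependencies", []):
            visit(dep)  # dependencies load before dependents
        order.append(tid)

    for tid in wanted:
        visit(tid)
    return order

tracks = {
    "video": {},
    "danmaku": {"dependencies": ["video"]},
    "filter_pro": {"dependencies": ["video"]},
}
print(resolve_load_order(tracks, ["danmaku", "filter_pro"],
                         {("filter_pro", "interactive")}))
# the shared dependency "video" is scheduled first
```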
Through this integrated resource manifest structure and control logic, unified packaging, dynamic scheduling, and cross-platform consistent playback of streaming media resources are achieved, greatly improving resource management efficiency and user experience.
In this embodiment, a method for loading media resources is further provided, and the method is applied to a client, and fig. 3 is a flowchart (second) of a method for loading media resources according to an embodiment of the present application, as shown in fig. 3, where the flowchart includes the following steps:
Step S302, receiving a media file and a resource list file sent by a service device, wherein a cloud platform generates a first track file corresponding to a rich media resource and a second track file of the multimedia resource, packages the first track file and the second track file into a target container to generate the media file, and generates the resource list file according to control logic of the first track file, wherein the rich media resource is a resource which is overlapped on an interface corresponding to the multimedia resource;
Step S304, loading the multimedia resource according to the media file, and loading the rich media resource according to the media file and the resource list file.
In one exemplary embodiment, loading the multimedia resource according to the media file and loading the rich media resource according to the media file and the resource manifest file includes: parsing the resource manifest file to determine the control logic of the first track file; parsing the media file to obtain the first track file and the second track file; decoding the first track file and the second track file to obtain the multimedia resource and the rich media resource; loading the multimedia resource to the client through a target interface; and loading the rich media resource to the client according to the control logic.
In an exemplary embodiment, after the multimedia resource is loaded according to the media file and the rich media resource is loaded according to the media file and the resource manifest file, the method further comprises: when an updated media file and an updated resource manifest file are received, parsing the updated media file to obtain the first track file, the second track file, and a third track file, wherein the third track file is the track file corresponding to the updated rich media resource; judging whether the client meets a preset condition for rendering the third track file; when the client meets the preset condition, loading the multimedia resource according to the updated media file and loading the updated rich media resource according to the updated media file and the updated resource manifest file; and when the client does not meet the preset condition, loading the multimedia resource according to the updated media file and loading the original rich media resource, or skipping the third track file.
In the embodiment of the application, the client reads the resource manifest on startup, parses it, and judges whether track_danmaku_beta should be loaded. If the user meets the conditions, the track is loaded and rendered; otherwise, the old bullet-screen track is loaded by default, or the track is skipped.
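The startup decision just described can be sketched as follows. The manifest layout, the legacy track name "track_danmaku", and the flag value are illustrative assumptions.

```python
# Sketch of the client-side decision: on startup, scan the manifest for
# track_danmaku_beta; if its conditions fail, fall back to the legacy
# bullet-screen track or skip. Names and fields are illustrative.
def pick_danmaku_track(manifest, client_flags):
    for track in manifest["tracks"]:
        if track["track_id"] == "track_danmaku_beta":
            flag = track.get("conditions", {}).get("feature_flag")
            if flag is None or flag in client_flags:
                return "track_danmaku_beta"
    # Fall back to the legacy track if present, else skip entirely.
    ids = {t["track_id"] for t in manifest["tracks"]}
    return "track_danmaku" if "track_danmaku" in ids else None

manifest = {"tracks": [
    {"track_id": "track_danmaku", "conditions": {}},
    {"track_id": "track_danmaku_beta", "conditions": {"feature_flag": "beta"}},
]}
print(pick_danmaku_track(manifest, {"beta"}))  # track_danmaku_beta
print(pick_danmaku_track(manifest, set()))     # track_danmaku
```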
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by means of software plus the necessary general hardware platform, but of course also by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising instructions for causing a terminal device (which may be a mobile phone, a computer, a server, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiment also provides a device for loading media resources, which is used for implementing the foregoing embodiments and preferred embodiments, and is not described in detail. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. While the means described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
Fig. 4 is a block diagram (a) of a media resource loading apparatus according to an embodiment of the present application, which is applied to a service device, as shown in fig. 4, and includes:
A first generating module 42, configured to generate a first track file corresponding to a rich media resource and a second track file of a multimedia resource, and encapsulate the first track file and the second track file into a target container to generate a media file, where the rich media resource is a resource that is superimposed on an interface corresponding to the multimedia resource;
a second generating module 44, configured to generate a resource manifest file according to the control logic of the first track file;
a sending module 46, configured to send the media file and the resource manifest file to a client, so that the client loads the multimedia resource according to the media file, and loads the rich media resource according to the media file and the resource manifest file.
According to the device, a first track file corresponding to the rich media resource and a second track file of the multimedia resource are generated, the first track file and the second track file are packaged into a target container to generate the media file, wherein the rich media resource is a resource which is overlapped on an interface corresponding to the multimedia resource, a resource list file is generated according to control logic of the first track file, the media file and the resource list file are sent to a client side, so that the client side loads the multimedia resource according to the media file, and loads the rich media resource according to the media file and the resource list file.
In an exemplary embodiment, the first generating module 42 is configured to determine attribute information of each sub-resource in the rich media resource, generate a track event of each sub-resource according to the attribute information of each sub-resource, and generate a first track file of the rich media resource according to the display time of each sub-resource and the track event of each sub-resource.
In an exemplary embodiment, the first generating module 42 is configured to convert the first track file into a first data file in a target format, convert the second track file into a second data file in a target format, and write the first data file and the second data file into target areas of the target container, respectively, and execute a target packaging command to package the target containers written into the first data file and the second data file into the media file.
In an exemplary embodiment, a first generating module 42 is configured to determine an update status of the rich media resource, generate a third track file according to the updated rich media resource if the update status of the rich media resource indicates that the rich media resource is updated, and package the third track file into a target container to generate an updated media file.
In an exemplary embodiment, the second generating module 44 is configured to generate an updated resource list file according to meta information and control logic of the third track file, send the updated media file and the updated resource list file to a client, so that the client loads the multimedia resource according to the updated media file, and load the updated rich media resource according to the updated media file and the updated resource list file.
In an exemplary embodiment, the second generating module 44 is configured to obtain meta information of the first track file and write the meta information into a first field in the resource manifest file, and determine the control logic according to the resource type of the rich media resource, and write the control logic into a second field in the resource manifest file.
Fig. 5 is a block diagram (b) of a media resource loading device according to an embodiment of the present application, applied to a client, as shown in fig. 5, the device includes:
The receiving module 52 is configured to receive a media file and a resource list file sent by a service device, where the cloud platform generates a first track file corresponding to a rich media resource and a second track file of a multimedia resource, encapsulates the first track file and the second track file into a target container, so as to generate a media file, and generates a resource list file according to control logic of the first track file, where the rich media resource is a resource that is superimposed on an interface corresponding to the multimedia resource, and the multimedia resource at least includes one of a video resource and an audio resource;
a loading module 54, configured to load the multimedia resource according to the media file, and load the rich media resource according to the media file and the resource manifest file.
In an exemplary embodiment, the loading module 54 is configured to: when an updated media file and an updated resource manifest file are received, parse the updated media file to obtain the first track file, the second track file, and a third track file, where the third track file is the track file corresponding to the updated rich media resource; judge whether the client meets a preset condition for rendering the third track file; when the client meets the preset condition, load the multimedia resource according to the updated media file and load the updated rich media resource according to the updated media file and the updated resource manifest file; and when the client does not meet the preset condition, load the multimedia resource according to the updated media file and load the original rich media resource, or skip the third track file.
In an exemplary embodiment, the loading module 54 is configured to parse the resource manifest file to determine control logic of the first track file, parse the media file to obtain the first track file and the second track file, decode the first track file and the second track file to obtain the multimedia resource and the rich media resource, load the multimedia resource to the client through a target interface, and load the rich media resource to the client according to the control logic.
It should be noted that each of the above modules may be implemented by software or hardware. For the latter, implementations include, but are not limited to: all of the above modules are located in the same processor, or the above modules are located in different processors in any combination.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
In an exemplary embodiment, the computer readable storage medium may include, but is not limited to, a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or any other medium in which a computer program can be stored.
An embodiment of the application also provides an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
In an exemplary embodiment, the electronic device may further include a transmission device connected to the processor, and an input/output device connected to the processor.
Embodiments of the application also provide a computer program product comprising a computer program which, when executed by a processor, implements the steps of any of the method embodiments described above.
Embodiments of the present application also provide another computer program product comprising a non-volatile computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of any of the method embodiments described above.
Embodiments of the present application also provide a computer program comprising computer instructions stored on a computer readable storage medium, a processor of a computer device reading the computer instructions from the computer readable storage medium, the processor executing the computer instructions to cause the computer device to perform the steps of any of the method embodiments described above.
Specific examples in this embodiment may refer to the examples described in the foregoing embodiments and the exemplary implementation, and this embodiment is not described herein.
It will be appreciated by those skilled in the art that the modules or steps of the application described above may be implemented by a general-purpose computing device; they may be centralized on a single computing device or distributed across a network of computing devices; they may be implemented in program code executable by computing devices, so that they may be stored in a storage device and executed by the computing devices; in some cases, the steps shown or described may be performed in an order different from that described herein; alternatively, they may be fabricated separately as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The above description is only of the preferred embodiments of the present application and is not intended to limit the present application, but various modifications and variations can be made to the present application by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the principle of the present application should be included in the protection scope of the present application.