HK1140329B - Seeking and synchronization using global scene time - Google Patents
- Publication number
- HK1140329B (application HK10106404.5A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- media stream
- rich media
- synchronization
- timeline
- seeking
- Prior art date
Description
Technical Field
The present invention generally relates to methods and apparatus for enabling seeking in a global timeline of a rich media stream, and for enabling transport-level, timestamp-based synchronization in a rich media stream.
Background
Scalable Vector Graphics (SVG) is an XML-based language for representing static and dynamic vector graphics. Because SVG is vector based, content is not made for one particular screen resolution but can be scaled easily. SVG is standardized by the World Wide Web Consortium (W3C). The mobile profile of SVG version 1.1 was adopted in 3GPP Release 5 and is supported today by approximately one billion mobile handsets.
SVG Tiny 1.2 is a more powerful version of SVG designed specifically for mobile devices, described in more detail in "Scalable Vector Graphics (SVG) Tiny 1.2 Specification" (W3C Candidate Recommendation, 10 August 2006). This specification is currently a W3C Candidate Recommendation and has been adopted in 3GPP Release 6. It adds support for various new multimedia features, including full control of audio and video, together with the Micro DOM (uDOM) and scripting.
In addition to being a media type for vector graphics, SVG can also be used as a scene description language, where scenes can be composed temporally as well as spatially. In fact, SVG Tiny 1.2 is the basic scene description format for the 3GPP work item on Dynamic and Interactive Multimedia Scenes (DIMS) and for the OMA work item on Rich Media Environments (RME). More information on the current standardization of DIMS can be found in 3GPP TS 26.142 v7.1.0 (2007-09): "3rd Generation Partnership Project; Technical Specification Group Services and System Aspects; Dynamic and Interactive Multimedia Scenes".
Fig. 1a relates to plain SVG according to the prior art, defined as one large document carrying an SVG scene, here represented by a number (0-k) of SVG elements E. The entire scene typically has to be downloaded completely before it can be rendered. Thus, in plain SVG there is only a single timeline starting from 0, so that global and local seeking coincide.
DIMS (RME) content, in contrast to plain SVG content, can be divided into base scenes and updates of those scenes. The format of these updates is LASeR commands.
An example of DIMS content according to the prior art is illustrated by fig. 1 b. The sequence of updates and base scenes can be streamed using real-time transport protocol (RTP) or stored in a track of a 3GP file. The rendered SVG document is composed of a number of elements starting from a base scene S, which will typically be updated with smaller scene updates U.
Each unit in a DIMS stream has a media time. The media time is typically calculated as an offset from the first unit, using the transport-level timestamps. In this document it is also referred to as global time 100, since it persists for the entire DIMS stream. SVG also has an internal document time 101. The internal document time is reset to zero for each new SVG document 102, 103 in the stream and is therefore also referred to as local time within the respective document. The global timeline, which may typically have a rate of 1 Hz, will most likely not have the same rate as the local timeline.
Redundant scenes are redundant Random Access Points (RAPs) and are handled differently from non-redundant scenes, because at tune-in they are used to replace a scene and multiple updates. The document time should then start at the same value as for other users who did not tune in at the redundant scene. The scene time must therefore be advanced from the initial time 0 to the tune-in time.
Currently, there is no definition of when to advance the scene time in DIMS. LASeR proposes to advance the scene time after the scene has been fully loaded. The MORE proposal is unspecific in this area, but the alternative proposed under the MORE flag shifts the scene time forward at the initial loading of the document.
The prior art solutions do not allow seeking from markers in the global time of a DIMS stream, i.e. it is not possible to create "seek" buttons or seek instructions, as is possible with plain SVG, which has only one single timeline.
One problem that arises when the setCurrentTime method, defined in the SVG uDOM, is used to adjust the time of an SVG document is that the SVG document time changes while the media time, or global time, remains unchanged, creating a mismatch between the new document time and the media time. Resynchronization in this case is performed in the same way as any other synchronization, e.g. after interruptions in the transmission of the media. It is not defined whether one of the elements should be paused or the others sought forward. Resynchronization may thus result in the scene time simply returning to its value prior to the adjustment, so that the timelines are again synchronized but the time change is undone.
Another problem associated with prior art solutions is the inability to seek across document boundaries. A DIMS stream may, and likely will, contain multiple non-redundant scenes, i.e., SVG documents. Each such document has a separate timeline that begins at time instance zero.
Still another problem with the known technique is that it is not possible to choose the global time as the basis for synchronization, i.e. to force other timelines to synchronize with the global time. This may also be referred to as defining the global time as syncMaster. It is consequently impossible to create a stream whose playback is defined to be based entirely on the transport-level timestamps.
Disclosure of Invention
It is an object of the present invention to address at least the problems outlined above. More specifically, it is an object of the present invention to provide a mechanism that allows performing seeking in a rich media stream based on a global timeline. Furthermore, it is also an object of the present invention to provide a mechanism that allows synchronization in a rich media stream based on a global timeline.
According to one aspect, the invention relates to a method for performing seeking in a rich media stream provided from a multimedia server to at least one multimedia client, wherein seeking is performed in an encoder of the multimedia server. A seeking instruction is inserted into the media stream at time instance X, wherein the seeking instruction includes an offset time instance Y, X and Y being arbitrary time values measured at the global timeline rate. The media stream is then encoded and transmitted to one or more multimedia clients.
According to one embodiment, the seeking instruction may be a server-initiated instruction inserted directly into the media stream by the multimedia server.
According to another embodiment, the seeking instruction may instead be a user-initiated instruction, inserted into a scene of the media stream. The user-initiated instruction may be inserted through a secondary stream that can be related to an event.
According to another aspect, a method of enabling seeking in a rich media stream, performed at a decoder of a multimedia client, is described. According to this aspect, a seeking instruction is received at time instance X, wherein the seeking instruction includes an offset time instance Y. Seeking according to the received seeking instruction is then performed at the decoder.
The seeking step may comprise seeking over the entire media stream in the global timeline of the media stream, wherein seeking is accomplished by adding the offset Y to the current time instance X, moving the global timeline and one or more internal document timelines towards time instance X + Y in a synchronized manner.
The seeking step may further include finding the last Random Access Point (RAP) of the media stream occurring before the desired seek time instance X + Y, decoding the RAP, and creating a scene with a document time. After these steps, the media stream may be decoded from the RAP, moving the document time towards the desired seek time instance.
Time instances X and Y may be calculated from the transport-level timestamps of the rich media stream.
By normalizing the transport level timestamps to the rate of the global timeline, a conversion from the internal document timeline to the global timeline may be performed.
The rate of the global timeline may be independent of the transport chosen for the media stream. Alternatively, the rate of the global timeline may be predefined.
Additionally, the rate of the global timeline may be explicitly sent to the multimedia client.
The rich media stream may contain two or more documents, and in such cases the suggested seeking step allows seeking across the document boundaries.
The rich media stream may be any one of a DIMS/RME stream or a LASeR stream.
The seeking instruction may be any of a command, a DOM method, or an attribute.
According to another aspect, a method for an encoder allowing synchronization in a rich media stream is described, wherein a syncMasterGlobal attribute is set in case a persistent global timeline is to be used as synchronization master, and a syncMaster attribute is set in case one or more document timelines are to be used as synchronization master. The set attribute is then transmitted to one or more receiving entities.
The attributes may be transmitted to the one or more receiving entities by insertion into an SVG element of the rich media stream. Alternatively, they may instead be provided to the one or more receiving entities via signaling external to the rich media stream.
According to yet another aspect, a method for a decoder performing synchronization in a rich media stream is described, wherein an attribute having the purpose of signaling the current synchronization master is received at the decoder.
The attribute is then used for synchronization in the rich media stream: the global timeline is used as synchronization master in case the received attribute is a syncMasterGlobal attribute and is set, while one or more document timelines are used as synchronization master in case the received attribute is a syncMaster attribute and is set, or in case the received attribute is not set.
The global timeline may be based on the transport level timestamps.
According to one embodiment, the syncMasterGlobal attribute may have priority over the syncMaster attribute in case both attributes have been set.
The claimed invention also relates to a multimedia client and a multimedia server adapted to perform the above method.
Drawings
The invention will be described in more detail below by way of exemplary embodiments and with reference to the accompanying drawings, in which:
figure 1a is a basic overview of a generic SVG stream according to the prior art.
Figure 1b is a basic overview of a DIMS stream according to the prior art.
FIG. 2 is a basic overview showing the timing of a global timeline relative to a local document timeline.
Figure 3 shows global seeking performed at the decoder according to one embodiment.
FIG. 4a shows how an internal document timeline can be synchronized with a global timeline according to one embodiment.
FIG. 4b shows how the global timeline can in turn be synchronized with the internal document timeline.
Figure 5 shows an exemplary multimedia server according to one embodiment.
Figure 6 shows an exemplary multimedia client according to one embodiment.
Figure 7 is a block diagram illustrating a method of seeking in a rich media stream performed by an encoder.
Figure 8 is a block diagram illustrating the method of seeking in a rich media stream performed by a decoder.
Fig. 9 is a block diagram illustrating a method for a decoder allowing synchronization in a rich media stream.
Fig. 10 is a block diagram illustrating a method of a decoder for performing synchronization in a rich media stream.
Detailed Description
Briefly, the present invention enables seeking in a global timeline of a rich media stream, such as a DIMS stream, particularly when the stream contains both a global timeline and a local timeline. The present invention enables seeking across document boundaries in a rich media stream containing multiple documents each having a separate timeline.
According to one embodiment, seeking is performed in a global timeline, or transport timeline, of a rich media stream, i.e., in a timeline that spans all document timelines, allowing seeking both across and within document boundaries. This also allows simultaneous seeking in both the document and transport timelines, eliminating the need to rely on a possibly ill-defined synchronization module to move the other timeline.
According to the proposed embodiment, the seeking command/instruction takes a certain offset and seeks by that amount from the point of activation in the rich media stream, not merely within a single document of the stream. Such a seek in the rich media stream, i.e. in the global time of the rich media, may result in a different rich media document becoming active.
Furthermore, the present invention allows synchronization using the global timeline as the basis for synchronization, allowing the content creator to choose between synchronization based on the internal document timeline or the global timeline. This can be done by introducing a new attribute called syncMasterGlobal on rich media documents.
By setting the syncMaster attribute, for example on the SVG element, the content creator can choose to synchronize based on the internal document time, while by instead setting the syncMasterGlobal attribute, synchronization can be based on the global time.
FIG. 2 generally illustrates the concept of a local timeline versus a global timeline. As described above, it is currently not possible to seek in a stream using the document timeline 200, because the same document time may occur multiple times in the stream. This can be seen in the enlarged areas 201 and 202, where, for example, document time 0 appears both in document 2 (Doc.2) and document 3 (Doc.3). In fact, all documents start at document time 0 in both DIMS/RME and LASeR. In fig. 2, document time 2 in Doc.3 corresponds to global time 95 on the global timeline 203.
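The relationship between the two timelines in fig. 2 can be sketched in code. The class, its method names, and the document durations below are purely illustrative, and equal timeline rates are assumed for simplicity:

```python
# Sketch of the two DIMS timelines: a persistent global time and a per-document
# local time that resets to zero at each new (non-redundant) scene.
# All names and durations here are illustrative, not from the specification.

class TimelineTracker:
    def __init__(self) -> None:
        self.global_time = 0.0   # persists over the whole stream
        self.doc_time = 0.0      # resets at each new document

    def advance(self, dt: float) -> None:
        # Normal playback: both timelines move at the same pace here
        # (equal rates are assumed purely for illustration).
        self.global_time += dt
        self.doc_time += dt

    def new_document(self) -> None:
        self.doc_time = 0.0      # local time restarts; global time does not

t = TimelineTracker()
t.advance(93.0)        # Doc.1 and Doc.2 play out (durations made up)
t.new_document()       # Doc.3 begins
t.advance(2.0)
assert t.doc_time == 2.0        # document time 2 in Doc.3 ...
assert t.global_time == 95.0    # ... corresponds to global time 95
```

This also shows why the document timeline alone cannot address the stream: `doc_time == 0.0` recurs at every `new_document()` call, while `global_time` never repeats.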
According to one embodiment, commands are defined to seek in the global time of a rich media stream, such as a DIMS stream. The seek is performed on the entire stream, not just on the document timeline of the current document. Such a seek results in a synchronized seek of both the media timeline (i.e., the global timeline) and the internal document timelines in the rich media stream. According to the described embodiment, the command takes a certain offset and seeks by that amount from the point of activation in the rich media stream, rather than just in a single document within the stream, and a different rich media document may become active.
The global timeline in DIMS/RME or LASeR is computed from the transport-level timestamps, e.g. from timestamps in 3GPP files, Simple Aggregation Format (SAF) or RTP timestamps, or LASeR ticks. The global timeline has a rate that is independent of the chosen transport. Typically the rate of the global timeline is set to 1 Hz, but any predefined rate can be used.
The conversion from the media timeline to the global timeline is performed simply by normalizing the transport-level timestamps to the global timeline rate. Just as for RTP timestamps, the global timeline does not have to start at 0, since it is the relative timing that is of importance.
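As a sketch of this normalization, assuming, for illustration only, a 90 kHz RTP transport clock and a 1 Hz global timeline (the clock rate and timestamp values are not taken from any specification):

```python
# Sketch: converting transport-level timestamps to the global timeline by
# normalizing to the global timeline rate. The 90 kHz RTP clock and the
# timestamp values below are assumptions for illustration.

RTP_CLOCK_HZ = 90_000   # assumed transport clock rate
GLOBAL_RATE_HZ = 1      # typical global timeline rate (1 Hz)

def to_global_time(rtp_ts: int, first_rtp_ts: int) -> float:
    # Only relative timing matters: like RTP timestamps, the global
    # timeline need not start at zero, so we work with offsets.
    return (rtp_ts - first_rtp_ts) * GLOBAL_RATE_HZ / RTP_CLOCK_HZ

first = 123_450_000                                        # arbitrary start
assert to_global_time(first, first) == 0.0
assert to_global_time(first + 90_000 * 95, first) == 95.0  # 95 s of stream time
```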
The command "GlobalSeek", used when seeking in a rich media session, may have the following syntax:
<GlobalSeek seekOffset="seekOffset"/>
where "seekOffset" is an arbitrary signed time value, measured at the global timeline rate.
GlobalSeek produces a seek of "seekOffset" in the global timeline. The target global time is obtained by adding "seekOffset" to the current global time. Since the rich media stream may contain multiple documents, this seek may result in a change of rich media document. The document will also be sought to the local time corresponding to the target global time.
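The resolution of a GlobalSeek can be sketched as follows; the document start times and the function name are hypothetical:

```python
# Sketch: resolving a GlobalSeek. The target global time is the current global
# time plus seekOffset; since the stream may contain several documents, the
# seek can land in a different document, at the matching local time.
# The document layout below is invented for illustration.

doc_starts = {"Doc.1": 0.0, "Doc.2": 40.0, "Doc.3": 93.0}  # global start times

def resolve_global_seek(current_global: float, seek_offset: float):
    target = current_global + seek_offset
    # The document active at the target time is the one with the latest
    # global start time not after the target.
    doc = max((d for d in doc_starts if doc_starts[d] <= target),
              key=lambda d: doc_starts[d])
    return doc, target - doc_starts[doc]

assert resolve_global_seek(50.0, 45.0) == ("Doc.3", 2.0)   # crosses into Doc.3
assert resolve_global_seek(95.0, -60.0) == ("Doc.1", 35.0) # negative offset
```

The first example reproduces fig. 2: seeking to global time 95 lands in Doc.3 at local time 2.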
Seeking can be conceptually viewed as a function in which the global timeline and the document timeline move forward in a synchronized manner as in normal playback, but at a faster rate and without presenting the media stream. Seeking backwards in time, i.e. with a negative seekOffset, can be performed in a similar way, by starting from zero again and moving forward.
The seek in the global timeline will result in a synchronized seek in the document timeline of the relevant document. However, the actual result of a global seek depends on the underlying document seeking semantics. For example, SVG appears to have a relaxed definition of seeking, where certain events need not be fired during the seek interval.
An exemplary implementation of global seeking at a decoder according to one embodiment will now be described with reference to fig. 3. In fig. 3, a decoder, e.g., a DIMS decoder, receives a command in a rich media stream, e.g., a DIMS stream contained in a 3GP file, at time instance X, where seekOffset may be set to time instance Y as follows:
<GlobalSeek seekOffset="Y"/>
wherein X and Y are arbitrary time values. The decoder finds the nearest Random Access Point (RAP), i.e. the nearest or last base scene element 300 before time instance X + Y. The RAP is decoded and a scene with a document time is created. The rich media stream is then decoded, i.e. the media stream units are decoded while advancing the document time towards time instance X + Y as fast as possible. At the same time, scripts are run as needed. As described above, the synchronized seek is a concurrent seek in the global timeline 301 and the document timeline 302. The scene can then be displayed and normal decoding can continue.
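In outline, the decoder-side steps above might look like this; the unit list, the class, and the RAP flags are invented for illustration:

```python
# Sketch of the decoder-side global seek: find the last random access point
# (RAP) at or before the target X + Y, decode it into a fresh scene, then
# decode forward as fast as possible without presenting. A negative offset is
# handled the same way: restart from a RAP and move forward.
# The stream contents below are hypothetical.

from dataclasses import dataclass

@dataclass
class Unit:
    global_time: float
    is_rap: bool          # True for a base scene (RAP), False for an update

def global_seek(units: list[Unit], x: float, y: float) -> list[Unit]:
    target = x + y
    # Last RAP at or before the target seek time.
    rap_index = max(i for i, u in enumerate(units)
                    if u.is_rap and u.global_time <= target)
    # Decode the RAP and every following unit up to the target; in a real
    # decoder, scripts run as needed and nothing is presented until done.
    return [u for u in units[rap_index:] if u.global_time <= target]

stream = [Unit(0, True), Unit(10, False), Unit(40, True),
          Unit(60, False), Unit(93, True), Unit(94, False)]
decoded = global_seek(stream, 30.0, 65.0)          # seek to global time 95
assert decoded[0].is_rap and decoded[0].global_time == 93
assert [u.global_time for u in decoded] == [93, 94]
```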
Considering the proposed global seeking method from the encoder perspective instead: when seeking in a rich media stream, such as a DIMS stream, is desired, the GlobalSeek command/instruction can simply be inserted, either server-initiated or user-initiated. A server-initiated command may be inserted directly into the media stream, while a user-initiated command may be inserted into the relevant scene, for example through a secondary stream related to the event.
The rate of the global timeline may be explicitly sent to the client instead of being predefined. Alternatively, global seeking can use absolute time, which can be sent to the client.
As an alternative to defining the global seek in XML, it may be defined as an update or command in the LASeR binary, or in any other textual or non-textual representation of the rich media scene. In addition, the seek may be defined as, for example, a DOM method rather than a command. Another alternative to implementing the seek as a command is to implement it as an attribute. The seek would then have an implicit execution time indicating, for example, the beginning or end of the document.
The described invention also allows synchronization using the global timeline as the synchronization basis, or syncMaster.
As shown in fig. 4a, the underlying internal document timeline 302 is synchronized with the global timeline 301 by setting the global timeline as synchronization master 300a. Alternatively, as shown in fig. 4b, the internal document timeline 302 is used as synchronization master 300b, i.e. by setting the syncMaster attribute.
These alternative synchronization options can be implemented by introducing a new attribute, for example called syncMasterGlobal, on the rich media document. The introduction of this attribute allows the content creator to choose synchronization based either on the internal document time, by setting the syncMaster attribute on an element of the stream, typically an SVG element, or on the global timeline, or transport timeline, by setting the new syncMasterGlobal attribute on a media stream element.
The syncMasterGlobal attribute may be implemented in, for example, DIMS/RME or LASeR. This new attribute is typically a boolean added to a media stream element (e.g., the SVG element), with a default value of "false". When "true", the global timeline, i.e. the transport-level timestamps of the stream, serves as synchronization master for the other elements in the time container, in which case the internal document timelines are forced to synchronize with the global timeline. If both the syncMasterGlobal attribute and the syncMaster attribute are set to true, the former has higher priority than the latter. In other respects, the same rules apply to syncMasterGlobal as to syncMaster.
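The default values and the priority rule between the two attributes can be sketched as follows; the function name and the returned strings are illustrative only:

```python
# Sketch: choosing the synchronization master from the two boolean attributes.
# Defaults ("false") and the priority of syncMasterGlobal over syncMaster
# follow the text above; the names below are illustrative.

def sync_master(sync_master_global: bool = False,
                sync_master: bool = False) -> str:
    if sync_master_global:          # takes priority when both are set
        return "global timeline"    # transport-level timestamps drive sync
    if sync_master:
        return "document timeline"
    return "document timeline"      # neither attribute set: document time

assert sync_master(True, True) == "global timeline"    # priority rule
assert sync_master(False, True) == "document timeline"
assert sync_master() == "document timeline"            # both default to false
```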
Alternatively, a single attribute may be specified that signals which of the two timelines, global or local, is to be treated as syncMaster. In this way, some source outside the DIMS stream can be used to signal which timeline is to become syncMaster.
A multimedia server adapted to allow seeking in a rich media stream according to one embodiment will now be described with reference to fig. 5.
Fig. 5 shows a multimedia server 500 comprising an encoder 501 adapted to provide a media stream to one or more clients, typically multimedia clients. The encoder, which receives the media stream from a media source 502, comprises an encoding unit 503 that encodes the stream after an insertion unit 504 has inserted the seeking instruction into the media stream. As described above, the seeking instruction inserted at time instance X includes an offset time instance Y. The encoder also comprises a transmitter 505 for transmitting the media stream to one or more terminating multimedia clients 600.
According to an alternative embodiment, the encoder is adapted to allow synchronization in the rich media stream based on the global timeline as an alternative to synchronization based on one or more document timelines. Such an encoder may be provided by adapting the insertion unit 504 of fig. 5 to introduce the new syncMasterGlobal attribute described earlier herein. By adapting the insertion unit 504 to set syncMasterGlobal or syncMaster, the global timeline of the rich media stream or one or more document timelines, respectively, can be chosen as syncMaster.
A multimedia client adapted to allow seeking in a rich media stream according to one embodiment is depicted in fig. 6, wherein the multimedia server 500 provides the media stream to the multimedia client 600. The multimedia client 600 comprises a decoder 601 that receives a media stream comprising a seeking instruction at a receiver 602. As described above, the seeking instruction received at time instance X includes an offset time instance Y. A seeking unit 603 is adapted to perform the seek at the decoding unit 604 of the decoder 601 according to the seeking instruction. Once the seek has been performed, the rich media stream is decoded and provided to the media player 605 of the multimedia client 600.
According to an alternative embodiment, the decoder may comprise a synchronization unit 606 adapted to perform synchronization in the rich media stream received via the receiver 602. The synchronization unit 606 is adapted to identify whether the syncMasterGlobal attribute has been set, indicating that the global timeline is to be used as synchronization master, or whether syncMaster has been set, i.e. whether one or more document timelines are to be used as synchronization masters.
According to yet another alternative embodiment, the seeking command may be combined with the synchronization. With the global timeline set as syncMasterGlobal, everything else will be synchronized with it. By setting the global timeline as synchronization master, it becomes possible to seek only in the global timeline, leaving the synchronization module to sort out the rest: the synchronization module will simply note that the local timeline is out of sync and move it to the correct position.
The method of seeking in a rich media stream to be performed in an encoder according to the above embodiments may be described according to the block diagram of fig. 7.
In a first step 700, a seeking instruction is inserted into the rich media stream. The rich media stream comprising the instruction is encoded in a next step 701, and in a final step 702 the rich media stream is transmitted to one or more media clients.
The method of seeking in a rich media stream according to the above embodiments may be described with reference to the block diagram of fig. 8, where the method is in turn performed in a decoder.
In a first step 800 of fig. 8, a seeking instruction is received by the decoder, and in a final step 801, seeking is performed according to the received instruction.
A method for an encoder that allows synchronization in a rich media stream using any of the above attributes is illustrated with reference to the block diagram of fig. 9.
In a first step 900 of fig. 9, it is determined which timeline is to be used as synchronization master by setting a synchronization attribute, i.e. syncMasterGlobal or syncMaster. In a next step 901, the synchronization attribute is transmitted to one or more receiving entities.
The method of the decoder for performing synchronization in a rich media stream as described above is illustrated with reference to the block diagram of fig. 10, wherein the synchronization attribute is received in a first step 1000, and wherein synchronization according to the received attribute is performed in the rich media stream in a final step 1001.
In summary, the proposed seeking in a global transport-level timeline allows seeking to be triggered directly from the content of a rich media stream. The seek may be performed across as well as within document boundaries. In addition, the proposed seeking mechanism allows simultaneous seeking in both the document and transport timelines, eliminating the need to rely on a possibly ill-defined synchronization module to move the other timeline.
While the present invention has been described with reference to specific exemplary embodiments, the description is primarily intended to illustrate the inventive concept and should not be taken as limiting the scope of the invention. Although concepts such as SVG, DIMS, RME, SAF, LASeR, uDOM and MORE are used in describing the above embodiments, any other similarly suitable standards, protocols and network elements may be used substantially as described herein. The invention is primarily defined by the appended independent claims.
Claims (24)
1. A method of enabling seeking in a rich media stream provided from a multimedia server to at least one multimedia client, wherein said multimedia server comprises an encoder, said method comprising the following steps performed at said encoder:
- inserting (700) a seeking instruction in the media stream comprising a global timeline and at least two documents each having a separate local timeline, the seeking instruction being an instruction to seek in the global timeline by an offset time instance Y added to a current time instance X, where X and Y are arbitrary time values,
- encoding (701) the rich media stream, and
-transmitting (702) the media stream to the at least one multimedia client, thereby allowing positioning within and across document boundaries at the at least one multimedia client.
2. The method of claim 1, wherein the seeking instruction is a server-initiated instruction, inserted directly into the media stream by the multimedia server.
3. The method of claim 1, wherein the seeking instruction is a user-initiated instruction inserted into a scene of the media stream.
4. A method according to claim 3, wherein the seeking instruction is inserted through a secondary stream, the secondary stream relating to a certain event.
5. A method of enabling seeking in a rich media stream provided from a multimedia server to a multimedia client, wherein the multimedia client comprises a decoder, the method comprising the following steps performed at the decoder:
- receiving (800) a seeking instruction in the media stream comprising a global timeline and at least two documents each having a separate local timeline, the seeking instruction instructing the decoder to seek in the global timeline by an offset time instance Y added to a current time instance X, where X and Y are arbitrary time values, and
- performing (801) seeking according to the received seeking instruction, thereby allowing seeking within and across document boundaries.
6. The method of claim 5, wherein the seeking step comprises moving the global timeline and one or more internal document timelines towards time instance X + Y in a synchronized manner by adding the offset time instance Y to the current time instance X, thereby seeking in the global timeline over the entire media stream.
7. The method of claim 5, wherein the seeking step further comprises the steps of:
- finding the last random access point of said media stream occurring before the desired seek time instance X + Y,
-decoding the random access point and creating a scene with a document time, and
- decoding the media stream from the random access point, moving the document time towards the seek time.
8. The method of any of claims 1-7, wherein time instances X and Y are calculated from transport-level timestamps of the rich media stream.
9. The method of claim 8, wherein a conversion from an internal document timeline to the global timeline is performed by normalizing the transport-level timestamps to the rate of the global timeline.
10. The method of any of claims 1-7, wherein a rate of the global timeline is independent of a transmission chosen for the media stream.
11. The method of any of claims 1-7, wherein the rate of the global timeline is a predetermined rate.
12. The method according to any of claims 1-7, wherein the rate of the global timeline is explicitly sent to the multimedia client.
13. The method of any of claims 1-7, wherein the rich media stream is any of a DIMS/RME stream or a LASeR stream.
14. The method of any of claims 1-7, wherein the seeking instruction is any of: a command, a DOM method, or an attribute.
15. A method for an encoder allowing synchronization in a rich media stream, the method comprising the steps of:
-setting (900) a first synchronization master attribute on an element of the rich media stream in case a continuous global timeline is to be used as a synchronization master, or setting a second synchronization master attribute on an element of the rich media stream in case one or more document timelines are to be used as a synchronization master, and
transmitting (901) the attributes to at least one receiving entity, thereby allowing the receiving entity to synchronize using a synchronization basis for the rich media stream based on the received attributes.
16. The method of claim 15, wherein the element is an SVG element of the rich media stream.
17. The method of claim 15, wherein the attribute is provided to the at least one receiving entity via signaling external to the rich media stream.
18. A method for a decoder performing synchronization in a rich media stream, the method comprising the steps of:
-receiving (1000) a synchronization master attribute in an element of the rich media stream, the attribute signaling a current synchronization master;
-synchronizing (1001) in the rich media stream, using a global timeline as synchronization master in case the received attribute is a first synchronization master attribute and is set, or using one or more document timelines as synchronization master in case the received attribute is a second synchronization master attribute and is set, or in case no attribute is set.
19. The method of any of claims 15-18, wherein the global timeline is based on transport level timestamps.
20. The method according to any of claims 15-18, wherein the first synchronization master attribute has a higher priority than the second synchronization master attribute if both attributes are set.
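The selection rule of claims 18-20 can be sketched as follows. The attribute names mirror those used later in claims 23-24 (`syncMasterGlobal` for the first attribute, `syncMaster` for the second); modeling an element as a plain dictionary is an assumption for illustration.

```python
def choose_sync_master(element):
    """Return which timeline is the synchronization master for an element.

    - syncMasterGlobal set -> the continuous global timeline wins, even
      if syncMaster is also set (claim 20: higher priority).
    - syncMaster set       -> the document timeline(s).
    - neither set          -> default to the document timeline(s).
    """
    if element.get("syncMasterGlobal") == "true":
        return "global-timeline"
    if element.get("syncMaster") == "true":
        return "document-timeline"
    return "document-timeline"

# Both attributes set: the first (global) attribute takes priority.
print(choose_sync_master({"syncMasterGlobal": "true", "syncMaster": "true"}))
```

Defaulting to the document timelines when no attribute is set preserves ordinary SVG timing behavior for content that does not opt in to global-timeline synchronization.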
21. A multimedia server (500) comprising an encoder (501) for enabling seeking in a rich media stream provided to at least one multimedia client (600), the encoder comprising:
-an insertion unit (504) for inserting a seeking instruction into the media stream comprising a global timeline and at least two documents each having a separate local timeline, the seeking instruction being an instruction to seek in the global timeline at an offset time instance Y from a current time instance X, where X and Y are arbitrary time values,
-an encoding unit (503) for encoding the rich media stream, and
-a transmitter (505) for transmitting the rich media stream to the at least one multimedia client, thereby allowing seeking within and across document boundaries at the at least one multimedia client.
22. A multimedia client (600) comprising a decoder (601) for enabling seeking in a rich media stream provided from a multimedia server (500), the multimedia client comprising:
-a receiver (602) for receiving seeking instructions in the media stream comprising a global timeline and at least two documents each having a separate local timeline, the seeking instructions instructing the decoder to seek in the global timeline at an offset time instance Y from a current time instance X, where X and Y are arbitrary time values,
-a seeking unit (603) for performing seeking in accordance with the received seeking instructions, and
-a decoding unit (604) for decoding the rich media stream, thereby allowing seeking within and across document boundaries.
23. A multimedia server (500) comprising an encoder (501) for enabling selection of a synchronization base in a rich media stream, the encoder comprising:
-an insertion unit (504) for setting a syncMasterGlobal attribute in case synchronization is to be based on a global timeline, or for setting a syncMaster attribute in case synchronization is to be based on one or more document timelines, and
-a transmitter (505) for transmitting the attribute to at least one receiving entity, thereby allowing the at least one receiving entity to choose a synchronization basis for the rich media stream based on the set attribute.
24. A multimedia client (600) comprising a decoder (601) for selecting a synchronization basis in a rich media stream, the decoder comprising:
-a receiver (602) for receiving an attribute in an element of the rich media stream, the attribute signaling a current synchronization master, and for providing the rich media stream to a decoding unit (604), and
-a synchronization unit (606) for performing synchronization in the rich media stream, wherein the synchronization unit is adapted to use a global timeline as synchronization master in case the received attribute is a first attribute and is set, or to use one or more document timelines as synchronization master in case the received attribute is a second attribute and is set, or in case no attribute is set.
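Claims 8-9 and 19 tie the global timeline to transport level timestamps, normalized to the global timeline's rate. A minimal sketch of that conversion is shown below; the 90 kHz transport clock (typical for RTP video) and the 1 kHz global rate are illustrative assumptions, and per claim 12 the global rate may instead be signaled explicitly to the client.

```python
def to_global_time(transport_timestamp, transport_clock_rate=90_000,
                   global_rate=1_000):
    """Normalize a transport level timestamp to global-timeline ticks.

    The transport timestamp is rescaled from the transport clock rate to
    the rate of the global timeline, so the resulting timeline is
    independent of the transport chosen for the media stream (claim 10).
    """
    return transport_timestamp * global_rate // transport_clock_rate

# One second of media (90 000 ticks at 90 kHz) maps to 1 000 ticks
# on a 1 kHz global timeline.
print(to_global_time(90_000))  # -> 1000
```

Because every document in the stream is normalized against the same transport clock, positions X and X + Y remain comparable across document boundaries.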
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US90562707P | 2007-03-08 | 2007-03-08 | |
| US60/905,627 | 2007-03-08 | ||
| PCT/SE2007/001176 WO2008108694A1 (en) | 2007-03-08 | 2007-12-28 | Seeking and synchronization using global scene time |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| HK1140329A1 (en) | 2010-10-08 |
| HK1140329B (en) | 2014-06-13 |