HK1244380A1 - Information processing device and information processing method - Google Patents
- Publication number
- HK1244380A1 (application HK18103656.9A)
- Authority
- HK
- Hong Kong
- Prior art keywords
- image
- adaptation group
- information processing
- group
- adaptation
- Prior art date
Description
Technical Field
The present disclosure relates to an information processing apparatus and an information processing method, and more particularly to an information processing apparatus and an information processing method capable of setting an adaptation set ("AdaptationSet") that does not include a representation ("Representation").
Background
In recent years, OTT-V (Over-The-Top Video) has become the mainstream of streaming services on the Internet. One technique that has come into widespread use as a basic technique of OTT-V is MPEG-DASH (Moving Picture Experts Group - Dynamic Adaptive Streaming over HTTP (Hypertext Transfer Protocol)) (see, for example, non-patent document 1).
According to MPEG-DASH, a distribution server provides encoded streams having different bit rates of one moving image content, and a playback terminal requests the encoded stream having the optimal bit rate, thereby realizing adaptive stream distribution.
The SRD (Spatial Relationship Description) extension of MPEG-DASH defines an SRD indicating the position on a screen of each of one or more independently encoded regions into which an image of moving image content is divided (see, for example, non-patent documents 2 and 3). The SRD realizes a spatially adaptive ROI (Region of Interest) function, which selectively acquires the encoded stream of an image of a desired region, by reusing the bit rate adaptation mechanism for selectively acquiring an encoded stream having a desired bit rate.
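By way of illustration, the value attribute of an SRD descriptor is a comma-separated list of integers. The following is a minimal sketch, assuming the common seven-field form (source_id, object_x, object_y, object_width, object_height, total_width, total_height); the helper names are illustrative, not part of any standard API.

```python
from typing import NamedTuple

class SRD(NamedTuple):
    """Spatial Relationship Description fields (assumed 7-field form)."""
    source_id: int
    object_x: int
    object_y: int
    object_width: int
    object_height: int
    total_width: int
    total_height: int

def parse_srd(value: str) -> SRD:
    """Parse the value attribute of an SRD SupplementalProperty."""
    fields = [int(f) for f in value.split(",")]
    if len(fields) != 7:
        raise ValueError(f"expected 7 SRD fields, got {len(fields)}")
    return SRD(*fields)

# Example: top-left 960x540 region of a 1920x1080 screen.
srd = parse_srd("1,0,0,960,540,1920,1080")
```

A client could use the parsed object_x/object_y offsets to position each decoded region on the screen.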
If the image of the moving image content is a mosaic image composed of thumbnail images (divided images) of moving images of a plurality of broadcast programs, it is conceivable to indicate the positions of the thumbnail images on the screen with the SRD.
However, the SRD describes the position of each thumbnail image on the screen as being identical to its position on the mosaic image carried by the encoded stream. Therefore, if the position of a thumbnail image on the screen differs from its position on the mosaic image carried by the encoded stream, the position of that thumbnail image on the screen cannot be described with the SRD.
Therefore, it is desirable that the positions of the respective thumbnail images on the screen can be described reliably so that they can be recognized. Further, in the case where encoded streams of the mosaic image exist at a plurality of bit rates, if the positions of the respective thumbnail images on the screen are described for each bit rate, the description becomes redundant. It is also desirable to prevent such redundancy.
Reference list
Non-patent document
Non-patent document 1
MPEG-DASH (Dynamic Adaptive Streaming over HTTP) (URL: http://mpeg.chiariglione.org/standards/mpeg-dash/media-presentation-description-and-segment-formats/text-isoiec-23009-12012-dam-1)
Non-patent document 2
“Text of ISO/IEC 23009-1:2014 FDAM 2 Spatial Relationship Description, Generalized URL parameters and other extensions,” N15217, MPEG 111, Geneva, February 2015
Non-patent document 3
“WD of ISO/IEC 23009-3 2nd edition AMD 1 DASH Implementation Guidelines,” N14629, MPEG 109, Sapporo, July 2014
Disclosure of Invention
Technical problem
However, no provision has been made in the MPD (Media Presentation Description) file for setting an adaptation set that does not include a representation.
The present disclosure has been made in view of the above circumstances, and aims to make it possible to set an adaptation set that does not include a representation.
Solution to the problem
An information processing apparatus according to a first aspect of the present disclosure is an information processing apparatus including a setting portion that sets a first adaptation set including a plurality of representations each corresponding to an encoded stream having a predetermined bit rate and a second adaptation set not including a representation.
An information processing method according to a first aspect of the present disclosure corresponds to an information processing apparatus according to the first aspect of the present disclosure.
According to the first aspect of the present disclosure, a first adaptation set including a plurality of representations each corresponding to an encoded stream having a predetermined bit rate and a second adaptation set not including a representation are set.
An information processing apparatus according to a second aspect of the present disclosure is an information processing apparatus including a player that plays back an encoded stream having a predetermined bit rate on the basis of a first adaptation set including a plurality of representations each corresponding to such an encoded stream and a second adaptation set not including a representation.
According to the second aspect of the present disclosure, an encoded stream having a predetermined bit rate is played back on the basis of a first adaptation set including a plurality of representations each corresponding to such an encoded stream and a second adaptation set not including a representation.
The information processing apparatus according to each of the first and second aspects can be implemented by causing a computer to execute a program.
The program to be executed by the computer in order to implement the information processing apparatus according to each of the first and second aspects can be provided by being transmitted via a transmission medium or by being recorded on a recording medium.
Advantageous effects of the invention
According to the first aspect of the present disclosure, information can be set. In particular, an adaptation set that does not include a representation can be set.
According to the second aspect of the present disclosure, an encoded stream can be played back. In particular, an encoded stream can be played back on the basis of an adaptation set that does not include a representation.
The above-described advantages are not necessarily limiting in nature, and any of the advantages described in this disclosure may be applicable.
Drawings
Fig. 1 is a block diagram depicting a configuration example of a first embodiment of an information processing system to which the present disclosure is applied.
Fig. 2 is a block diagram depicting a configuration example of the file generation device shown in fig. 1.
Fig. 3 is a diagram depicting an example of a mosaic image.
Fig. 4 is a diagram depicting an example of a segment structure of an image file.
Fig. 5 is a diagram depicting an example of an sgpd box and a leva box.
Fig. 6 is a diagram depicting a first example of an MPD file.
Fig. 7 is a diagram depicting a second example of an MPD file.
Fig. 8 is a diagram depicting an example of a screen on which thumbnail images are placed.
Fig. 9 is a flowchart showing a file generation process of the file generation apparatus shown in fig. 2.
Fig. 10 is a diagram depicting a third example of an MPD file.
Fig. 11 is a diagram depicting a third example of an MPD file.
Fig. 12 is a diagram depicting a fourth example of an MPD file.
Fig. 13 is a diagram depicting a fourth example of an MPD file.
Fig. 14 is a diagram depicting a fifth example of an MPD file.
Fig. 15 is a diagram depicting a fifth example of an MPD file.
Fig. 16 is a diagram depicting a sixth example of an MPD file.
Fig. 17 is a diagram depicting a sixth example of an MPD file.
Fig. 18 is a diagram depicting a seventh example of an MPD file.
Fig. 19 is a diagram depicting a seventh example of an MPD file.
Fig. 20 is a block diagram depicting a configuration example of a stream player implemented by the moving image playback terminal shown in fig. 1.
Fig. 21 is a diagram showing an outline of playback processing of the stream player shown in fig. 20.
Fig. 22 is a flowchart showing a playback process of the stream player shown in fig. 20.
Fig. 23 is a block diagram depicting a configuration example of hardware of a computer.
Detailed Description
Modes for carrying out the present disclosure (hereinafter referred to as "embodiments") will be described below. The description will be given in the following order:
1. First embodiment: information processing system (figs. 1 to 22)
2. Second embodiment: computer (fig. 23)
< first embodiment >
(configuration example of the first embodiment of information processing System)
Fig. 1 is a block diagram depicting a configuration example of a first embodiment of an information processing system to which the present disclosure is applied.
The information processing system 210 shown in fig. 1 includes a file generation device 211, a Web server 12 connected to the file generation device 211, and a moving image playback terminal 14, and the Web server 12 and the moving image playback terminal 14 are connected to each other via the Internet 13.
In the information processing system 210, the Web server 12 distributes an encoded stream of a mosaic image, which is the image of moving image content, to the moving image playback terminal 14 by a method conforming to MPEG-DASH.
The file generation device 211 of the information processing system 210 encodes the mosaic image at a plurality of encoding rates (bit rates) to generate encoded streams. The file generation device 211 converts the encoded stream of each encoding rate into files in units of time called "segments," each ranging from several seconds to about ten seconds, thereby generating image files. The file generation device 211 uploads the generated image files to the Web server 12.
The file generation device 211 (setting unit) also generates an MPD file (management file) for managing an image file and the like. The file generation device 211 uploads the MPD file to the Web server 12.
The Web server 12 stores the image file and the MPD file uploaded from the file generation device 211. In response to a request from the moving image playback terminal 14, the Web server 12 transmits an image file, an MPD file, or the like, which has been stored therein, to the moving image playback terminal 14.
The moving image playback terminal 14 executes software 21 for controlling streaming data (hereinafter referred to as "control software"), moving image playback software 22, client software 23 for HTTP (Hypertext Transfer Protocol) access (hereinafter referred to as "access software"), and the like.
The control software 21 is software for controlling data streamed from the Web server 12. Specifically, the control software 21 causes the moving image playback terminal 14 to acquire the MPD file from the Web server 12.
Based on the MPD file, the control software 21 instructs the access software 23 to transmit a request for transmitting an encoded stream to be played, which is designated by the moving image playback software 22.
The moving image playback software 22 is software for playing an encoded stream acquired from the Web server 12. Specifically, the moving image playback software 22 instructs the control software 21 on the encoded stream to be played. Further, when the moving image playback software 22 receives a notification that the reception of a stream from the access software 23 has started, the moving image playback software 22 decodes the encoded stream received by the moving image playback terminal 14 into image data. The moving image playback software 22 combines the decoded image data and outputs the combined image data as necessary.
The access software 23 is software for controlling communication with the Web server 12 through the internet 13 using HTTP. Specifically, in response to an instruction from the control software 21, the access software 23 controls the moving image playback terminal 14 to transmit a request for transmitting an encoded stream to be played, which is included in an image file. The access software 23 also controls the moving image playback terminal 14 to start receiving the encoded stream transmitted from the Web server 12 in response to the request, and provides notification to the moving image playback software 22 that the reception of the stream has started.
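The division of roles described above (the control software choosing which encoded stream to request on the basis of the MPD, and the access software fetching it) can be sketched as follows. This is a hypothetical illustration of bitrate-adaptive selection only; the dictionaries and the function name are assumptions, not an API of any real player.

```python
# Hypothetical sketch: the control software picks, from the representations
# listed in the MPD, the one whose bandwidth best fits the available bit rate.

def select_representation(representations, target_bitrate):
    """Pick the highest-bandwidth representation not exceeding the target,
    falling back to the lowest-bandwidth one when none fits."""
    candidates = [r for r in representations if r["bandwidth"] <= target_bitrate]
    chosen = (max(candidates, key=lambda r: r["bandwidth"]) if candidates
              else min(representations, key=lambda r: r["bandwidth"]))
    return chosen["url"]

# Representations as they might be parsed out of an MPD file.
reps = [
    {"bandwidth": 1_000_000, "url": "stream1.mp4"},
    {"bandwidth": 2_000_000, "url": "stream2.mp4"},
    {"bandwidth": 4_000_000, "url": "stream3.mp4"},
]
```

The access software would then issue the HTTP request for the returned URL and notify the playback software when reception starts.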
(configuration example of File Generation device)
Fig. 2 is a block diagram depicting a configuration example of the file generation device 211 shown in fig. 1.
The file generating device 211 shown in fig. 2 includes an encoding processor 231, an image file generator 232, an MPD generator 233, and a server upload processor 234.
The encoding processor 231 of the file generation device 211 encodes the mosaic image, which is the image of the moving image content, at a plurality of encoding rates, thereby generating encoded streams. The encoding processor 231 supplies the encoded stream of each encoding rate to the image file generator 232.
The image file generator 232 converts the encoded stream of each encoding rate supplied from the encoding processor 231 into files on a segment-by-segment basis, thereby generating image files. The image file generator 232 supplies the generated image files to the MPD generator 233.
The MPD generator 233 determines a URL (uniform resource locator) or the like of the Web server 12 for storing the image file supplied from the image file generator 232. Then, the MPD generator 233 generates an MPD file including the URL of the image file and the like. The MPD generator 233 provides the generated MPD file and image file to the server upload processor 234.
The server upload processor 234 uploads the image file and the MPD file supplied from the MPD generator 233 to the Web server 12 shown in fig. 1.
(example of mosaic image)
Fig. 3 is a diagram depicting an example of a mosaic image.
In the example shown in fig. 3, the mosaic image 250 includes an upper left thumbnail image 251, an upper right thumbnail image 252, a lower left thumbnail image 253, and a lower right thumbnail image 254. The mosaic image 250 has a 2K resolution (1920 pixels × 1080 pixels), and each of the thumbnail images 251 to 254 has a resolution of 960 pixels × 540 pixels.
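The 2 × 2 arrangement above can be computed directly from the mosaic dimensions; the following sketch (with an illustrative helper name) derives the upper-left offset and size of each thumbnail image.

```python
def tile_offsets(total_w, total_h, cols, rows):
    """Upper-left offsets of each thumbnail in a cols x rows mosaic,
    in raster order (left to right, top to bottom), plus the tile size."""
    tw, th = total_w // cols, total_h // rows
    offsets = [(c * tw, r * th) for r in range(rows) for c in range(cols)]
    return offsets, (tw, th)

# The 1920x1080 mosaic image divided into four 960x540 thumbnails.
offsets, size = tile_offsets(1920, 1080, 2, 2)
# offsets == [(0, 0), (960, 0), (0, 540), (960, 540)], size == (960, 540)
```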
(Example of the segment structure of the image file of the mosaic image)
Fig. 4 is a diagram depicting an example of the segment structure of the image file of the mosaic image 250 shown in fig. 3.
As shown in fig. 4, in the image file of the mosaic image 250, the initial segment includes an ftyp box and a moov box. The moov box includes an stbl box and an mvex box placed therein. The stbl box includes an sgpd box and the like placed therein, and the mvex box includes a leva box and the like placed therein.
The media segment includes one or more sub-segments including a sidx box, a ssix box, and pairs of moof boxes and mdat boxes. The sidx box has position information placed therein, the position information indicating the position of each sub-segment in the image file. The ssix box includes position information of encoded streams of respective levels (levels) placed in the mdat box.
One sub-segment is set for each desired length of time. Mdat boxes have coded streams placed therein together for a desired length of time, and moof boxes have management information of those coded streams placed therein.
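The box structure above follows the ISO Base Media File Format convention in which each box starts with a 32-bit size and a 4-character type. A minimal sketch of walking that structure (handling only the common 32-bit size form, as an assumption) is:

```python
import struct

def iter_boxes(data, offset=0, end=None):
    """Yield (box_type, payload) for top-level ISO BMFF boxes in data.
    Only the common 32-bit size form is handled in this sketch."""
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, btype = struct.unpack_from(">I4s", data, offset)
        if size < 8:
            break  # size==1 (64-bit size) and size==0 (to EOF) not handled here
        yield btype.decode("ascii"), data[offset + 8 : offset + size]
        offset += size

# A synthetic stream: an empty "ftyp"-like box followed by an empty "moov".
blob = struct.pack(">I4s", 8, b"ftyp") + struct.pack(">I4s", 8, b"moov")
types = [t for t, _ in iter_boxes(blob)]
# types == ["ftyp", "moov"]
```

A real parser would recurse into container boxes such as moov to reach the stbl/sgpd and mvex/leva boxes described above.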
(examples of sgpd and leva boxes)
Fig. 5 is a diagram depicting an example of the sgpd box and the leva box shown in fig. 4.
As shown in fig. 5, tile region group entries (TileRegionGroupEntry) indicating the positions, on the mosaic image 250, of the thumbnail images 251 to 254 constituting the mosaic image 250 are described successively in the sgpd box.
In the example shown in fig. 5, the first tile region group entry corresponds to the thumbnail image 251 and is (1, 0, 0, 960, 540). The second tile region group entry corresponds to the thumbnail image 252 and is (2, 960, 0, 960, 540). The third tile region group entry corresponds to the thumbnail image 253 and is (3, 0, 540, 960, 540). The fourth tile region group entry corresponds to the thumbnail image 254 and is (4, 960, 540, 960, 540). The tile region group entry is standardized for the HEVC tile track (Tile Track) of the HEVC (High Efficiency Video Coding) file format.
The information of the level corresponding to each tile region group entry is described successively in the leva box, starting from the information of the level corresponding to the first tile region group entry. The level of the thumbnail image 251 is set to 1, the level of the thumbnail image 252 is set to 2, the level of the thumbnail image 253 is set to 3, and the level of the thumbnail image 254 is set to 4. The level is used as an index when a part of the encoded stream is specified from the MPD file.
An assignment_type indicating whether the object of each level is an encoded stream placed on a plurality of tracks is described in the leva box as information of each level. In the example shown in fig. 4, the encoded stream of the mosaic image 250 is placed on one track. Therefore, assignment_type is set to 0, which indicates that the object of the level is not an encoded stream placed on a plurality of tracks.
The type of the tile region group entry corresponding to the level is also described in the leva box as information of each level. In the example shown in fig. 4, "trif", representing the type of the tile region group entry described in the sgpd box, is described as information of each level. The details of the leva box are described, for example, in ISO/IEC 14496-12 ISO Base Media File Format, 4th edition, July 2012.
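As an illustration of the sgpd/leva relationship above, the tile region group entries and the level assignment can be modeled as follows. The tuples reproduce the values given in the text (groupID, horizontal offset, vertical offset, width, height); the lookup helper is hypothetical.

```python
# Tile region group entries from the sgpd box as described in fig. 5.
TILE_ENTRIES = [
    (1, 0,   0,   960, 540),  # thumbnail image 251, level 1
    (2, 960, 0,   960, 540),  # thumbnail image 252, level 2
    (3, 0,   540, 960, 540),  # thumbnail image 253, level 3
    (4, 960, 540, 960, 540),  # thumbnail image 254, level 4
]

def level_for_region(x, y):
    """Level (index into the leva box) of the tile covering point (x, y).
    In this example the level number equals the entry's groupID."""
    for group_id, tx, ty, tw, th in TILE_ENTRIES:
        if tx <= x < tx + tw and ty <= y < ty + th:
            return group_id
    raise ValueError("point outside the mosaic image")
```

A client can use the returned level to request only the byte ranges of the encoded stream belonging to the desired thumbnail image.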
(first example of MPD File)
Fig. 6 is a diagram depicting a first example of an MPD file corresponding to the image file of the mosaic image 250 generated by the file generation device 211 shown in fig. 1.
In the example shown in fig. 6, it is assumed that the encoded stream of the mosaic image has one bit rate. The same applies to fig. 7 to be described later.
As shown in fig. 6, in the MPD file, an "AdaptationSet" is described for the encoded stream. A "Representation" is described in the "AdaptationSet", and the URL "stream.mp4" of the image file of the encoded stream of the mosaic image 250 is described in the "Representation". Since levels are set for the encoded stream of the mosaic image 250, a "SubRepresentation" can be described for each level in the "Representation".
Therefore, in the "SubRepresentation" of level "1", <SupplementalProperty schemeIdUri="urn:mpeg:dash:srd:2014" value="1,0,0,960,540,1920,1080"/>, which represents the SRD of the thumbnail image 251, is described. The SRD of the thumbnail image 251 is thus set so as to match the position of the thumbnail image 251 on the mosaic image 250 indicated by the tile region group entry corresponding to level "1".
In the "SubRepresentation" of level "2", <SupplementalProperty schemeIdUri="urn:mpeg:dash:srd:2014" value="1,960,0,960,540,1920,1080"/>, which represents the SRD of the thumbnail image 252, is described. The SRD of the thumbnail image 252 is thus set so as to match the position of the thumbnail image 252 on the mosaic image 250 indicated by the tile region group entry corresponding to level "2".
In the "SubRepresentation" of level "3", <SupplementalProperty schemeIdUri="urn:mpeg:dash:srd:2014" value="1,0,540,960,540,1920,1080"/>, which represents the SRD of the thumbnail image 253, is described. The SRD of the thumbnail image 253 is thus set so as to match the position of the thumbnail image 253 on the mosaic image 250 indicated by the tile region group entry corresponding to level "3".
In the "SubRepresentation" of level "4", <SupplementalProperty schemeIdUri="urn:mpeg:dash:srd:2014" value="1,960,540,960,540,1920,1080"/>, which represents the SRD of the thumbnail image 254, is described. The SRD of the thumbnail image 254 is thus set so as to match the position of the thumbnail image 254 on the mosaic image 250 indicated by the tile region group entry corresponding to level "4".
As described above, in the MPD file shown in fig. 6, the horizontal and vertical sizes of the mosaic image 250 indicated by the tile region group entries are the same as the horizontal and vertical sizes of the screen indicated by the SRDs. The horizontal and vertical coordinates on the mosaic image 250 indicated by the tile region group entry corresponding to each level are the same as the horizontal and vertical positions on the screen indicated by the SRD corresponding to that level. Therefore, when the MPD file shown in fig. 6 is generated, the screen on which the thumbnail images 251 to 254 decoded on the basis of the SRDs are placed is identical to the mosaic image 250.
The URL of the moving image corresponding to each of the thumbnail images 251 to 254 is also described in the "SubRepresentation" of the corresponding level. Specifically, the URL "http://example.com/a_service/my.mpd" of the moving image corresponding to the thumbnail image 251 is described in the "SubRepresentation" of level "1". The URL "http://example.com/b_service/my.mpd" of the moving image corresponding to the thumbnail image 252 is described in the "SubRepresentation" of level "2".
The URL "http://example.com/c_service/my.mpd" of the moving image corresponding to the thumbnail image 253 is described in the "SubRepresentation" of level "3". The URL "http://example.com/d_service/my.mpd" of the moving image corresponding to the thumbnail image 254 is described in the "SubRepresentation" of level "4".
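The per-level descriptors above can be generated mechanically. The following sketch builds one "SubRepresentation" element with its SRD SupplementalProperty using the standard library; the helper function itself is an illustrative assumption, only the element and attribute names follow the text.

```python
import xml.etree.ElementTree as ET

SRD_SCHEME = "urn:mpeg:dash:srd:2014"

def make_sub_representation(level, x, y, w, h, total_w=1920, total_h=1080):
    """Build a SubRepresentation element carrying the SRD of one thumbnail."""
    sub = ET.Element("SubRepresentation", level=str(level))
    ET.SubElement(sub, "SupplementalProperty",
                  schemeIdUri=SRD_SCHEME,
                  value=f"1,{x},{y},{w},{h},{total_w},{total_h}")
    return sub

# The SubRepresentation of level "2" (thumbnail image 252) from fig. 6.
sub = make_sub_representation(2, 960, 0, 960, 540)
value = sub.find("SupplementalProperty").get("value")
# value == "1,960,0,960,540,1920,1080"
```

Serializing four such elements under one "Representation" reproduces the structure of the MPD file in fig. 6.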
(second example of MPD File)
Fig. 7 is a diagram depicting a second example of an MPD file corresponding to the image file of the mosaic image 250 generated by the file generation device 211 shown in fig. 1.
The MPD file shown in fig. 7 differs from the MPD file shown in fig. 6 only in the SRDs described in the "SubRepresentation" of each level.
Specifically, in the MPD file shown in fig. 7, in the "SubRepresentation" of level "3", <SupplementalProperty schemeIdUri="urn:mpeg:dash:srd:2014" value="1,0,0,960,540,1920,1080"/>, which represents the SRD of the thumbnail image 253, is described.
In the "SubRepresentation" of level "4", <SupplementalProperty schemeIdUri="urn:mpeg:dash:srd:2014" value="1,960,0,960,540,1920,1080"/>, which represents the SRD of the thumbnail image 254, is described.
In the "SubRepresentation" of level "1", <SupplementalProperty schemeIdUri="urn:mpeg:dash:srd:2014" value="1,0,540,960,540,1920,1080"/>, which represents the SRD of the thumbnail image 251, is described.
In the "SubRepresentation" of level "2", <SupplementalProperty schemeIdUri="urn:mpeg:dash:srd:2014" value="1,960,540,960,540,1920,1080"/>, which represents the SRD of the thumbnail image 252, is described.
As described above, in the MPD file shown in fig. 7, as in the MPD file shown in fig. 6, the horizontal and vertical sizes of the mosaic image 250 indicated by the tile region group entries are the same as the horizontal and vertical sizes of the screen indicated by the SRDs.
However, the horizontal and vertical coordinates on the mosaic image 250 indicated by the tile region group entry corresponding to each level are different from the horizontal and vertical positions on the screen indicated by the SRD corresponding to that level. Therefore, when the MPD file shown in fig. 7 is generated, the screen on which the thumbnail images 251 to 254 decoded on the basis of the SRDs are placed is different from the mosaic image 250.
(example of screen on which thumbnail image is placed)
Fig. 8 is a diagram depicting an example of the screen on which the thumbnail images 251 to 254 decoded on the basis of the SRDs described in the MPD file shown in fig. 7 are placed.
The SRD of the thumbnail image 251 described in the MPD file shown in fig. 7 indicates that the coordinates of the upper left corner of the thumbnail image 251 on the screen 270 of 1920 pixels × 1080 pixels are (0, 540). Accordingly, as shown in fig. 8, the thumbnail image 251 is placed in the lower left area of the screen 270.
The SRD of the thumbnail image 252 indicates that the coordinates of the upper left corner of the thumbnail image 252 on the screen 270 are (960, 540). Accordingly, as shown in fig. 8, the thumbnail image 252 is placed in the lower right area of the screen 270.
The SRD of the thumbnail image 253 indicates that the coordinates of the upper left corner of the thumbnail image 253 on the screen 270 are (0, 0). Accordingly, as shown in fig. 8, the thumbnail image 253 is placed in the upper left area of the screen 270.
The SRD of the thumbnail image 254 indicates that the coordinates of the upper left corner of the thumbnail image 254 on the screen 270 are (960, 0). Accordingly, as shown in fig. 8, the thumbnail image 254 is placed in the upper right area of the screen 270.
As described above, with the MPD file shown in fig. 7, the layout in which the thumbnail images 251 to 254 are displayed on the screen 270 can be made different from their layout in the mosaic image 250 at the time of encoding.
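The remapping of figs. 7 and 8 can be summarized as two lookup tables: each level's position in the mosaic image 250 (from its tile region group entry) and its position on the screen 270 (from its SRD). The positions reproduce the values in the text; the helper name is illustrative.

```python
# Per-level upper-left positions: in the mosaic image 250 (encoding-time
# layout) and on the screen 270 (display-time layout per the fig. 7 SRDs).
MOSAIC_POS = {1: (0, 0), 2: (960, 0), 3: (0, 540), 4: (960, 540)}
SCREEN_POS = {1: (0, 540), 2: (960, 540), 3: (0, 0), 4: (960, 0)}

def placement(level):
    """(position in mosaic image 250, position on screen 270) for a level."""
    return MOSAIC_POS[level], SCREEN_POS[level]

# Thumbnail image 253 (level 3) moves from the lower left of the mosaic
# image to the upper left of the screen.
```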
(description of processing of File Generation apparatus)
Fig. 9 is a flowchart of the file generation processing of the file generation device 211 shown in fig. 2.
In step S191 shown in fig. 9, the encoding processor 231 encodes the mosaic image, which is the image of the moving image content, at a plurality of encoding rates, thereby generating encoded streams. The encoding processor 231 supplies the encoded stream of each encoding rate to the image file generator 232.
In step S192, the image file generator 232 converts the encoded stream of each encoding rate supplied from the encoding processor 231 into files on a segment-by-segment basis, thereby generating image files. The image file generator 232 supplies the generated image files to the MPD generator 233.
In step S193, the MPD generator 233 generates an MPD file including the URL of the image file and the like. The MPD generator 233 provides the generated MPD file and image file to the server upload processor 234.
In step S194, the server upload processor 234 uploads the image file and the MPD file supplied from the MPD generator 233 to the Web server 12. The process now ends.
In the MPD files shown in figs. 6 and 7 described above, it is assumed that the encoded stream of the mosaic image has one bit rate. However, in the case where there are a plurality of bit rates, the "Representation" shown in fig. 6 or fig. 7 is described for each bit rate. Specifically, "Representations" in which the same SRDs, URLs of moving images, and the like are set are described in the MPD file as many times as there are bit rates. The description therefore becomes redundant.
(third example of MPD File)
Figs. 10 and 11 are diagrams depicting a third example of an MPD file corresponding to the image file of the mosaic image 250.
In the examples shown in figs. 10 and 11, the encoded stream of the mosaic image 250 has four bit rates. The same applies to figs. 12 to 19 to be described later.
In the MPD files shown in figs. 10 and 11, the SRDs of the thumbnail images 251 to 254 and the URLs of the corresponding moving images are set as shared information shared among the encoded streams of the mosaic image 250 having the four bit rates.
Specifically, in the MPD files shown in figs. 10 and 11, an "AdaptationSet" corresponding to the encoded streams of the four bit rates and "AdaptationSets" for sharing information among the encoded streams of the four bit rates (hereinafter referred to as shared "AdaptationSets") are described.
As shown in fig. 10, in the "AdaptationSet" corresponding to the encoded streams of the four bit rates, <Role schemeIdUri="urn:mpeg:dash:role:2014" value="multiple"/> is described, which indicates that this "AdaptationSet" is the "AdaptationSet" corresponding to the encoded streams of the mosaic image 250 that share information. A "Representation" of the encoded stream of each bit rate is also described in the "AdaptationSet". The URL of the image file of the encoded stream of the corresponding bit rate is described in each "Representation".
In the example shown in fig. 10, the URL "stream1.mp4" of the image file of the encoded stream of the first bit rate is described in the "Representation" for that bit rate. Similarly, the URLs "stream2.mp4", "stream3.mp4", and "stream4.mp4" of the image files of the encoded streams of the second to fourth bit rates are described in the "Representations" for those bit rates.
As shown in figs. 10 and 11, a shared "AdaptationSet" is set for each thumbnail image. In each shared "AdaptationSet", <Role schemeIdUri="urn:mpeg:dash:role:2014" value="multiple_element"/> is described, which indicates that the "AdaptationSet" is a shared "AdaptationSet" corresponding to a thumbnail image.
In the shared "AdaptationSet", information (hereinafter referred to as adaptation set identification information) identifying the "AdaptationSet" corresponding to the encoded streams that share the information described in the shared "AdaptationSet", that is, information identifying the "AdaptationSet" in which the information described in the shared "AdaptationSet" would inherently be set, is described.
In the examples shown in figs. 10 and 11, the adaptation set identification information is <EssentialProperty schemeIdUri="urn:mpeg:dash:adaptationset-index:2015" value="1"/>, which indicates the ID (identifier) "1" assigned to the "AdaptationSet" corresponding to the encoded streams of the four bit rates described above.
Information indicating the level set for the corresponding thumbnail image is also described in the shared "AdaptationSet" as information identifying the thumbnail image (hereinafter referred to as thumbnail image identification information). The shared "AdaptationSet" also includes the SRD of the corresponding thumbnail image and the URL of the entity of the file from which the corresponding moving image is acquired.
Specifically, in the examples shown in fig. 10 and 11, the first sharing "adaptation group" corresponds to the thumbnail image 251.
Thus, as shown in fig. 10, in the first shared "adaptation group", there is described <EssentialProperty schemeIdUri="urn:mpeg:dash:SubRepresentation-index:2015" value="1"/>, which indicates that the level set for the thumbnail image 251 is 1. Also described in the first shared "adaptation group" are <EssentialProperty schemeIdUri="urn:mpeg:dash:srd:2014" value="1,0,0,960,540,1920,1080"/>, which indicates the SRD corresponding to the thumbnail image 251, and the URL "http://example.com/a_service/my.mpd" of the moving image corresponding to the thumbnail image 251.
As shown in fig. 11, in the second shared "adaptation group", there is described <EssentialProperty schemeIdUri="urn:mpeg:dash:SubRepresentation-index:2015" value="2"/>, which indicates that the level set for the thumbnail image 252 is 2. Also described in the second shared "adaptation group" are <EssentialProperty schemeIdUri="urn:mpeg:dash:srd:2014" value="1,960,0,960,540,1920,1080"/>, which indicates the SRD corresponding to the thumbnail image 252, and the URL "http://example.com/b_service/my.mpd" of the moving image corresponding to the thumbnail image 252.
In the third shared "adaptation group", there is described <EssentialProperty schemeIdUri="urn:mpeg:dash:SubRepresentation-index:2015" value="3"/>, which indicates that the level set for the thumbnail image 253 is 3. Also described in the third shared "adaptation group" are <EssentialProperty schemeIdUri="urn:mpeg:dash:srd:2014" value="1,0,540,960,540,1920,1080"/>, which indicates the SRD corresponding to the thumbnail image 253, and the URL "http://example.com/c_service/my.mpd" of the moving image corresponding to the thumbnail image 253.
In the fourth shared "adaptation group", there is described <EssentialProperty schemeIdUri="urn:mpeg:dash:SubRepresentation-index:2015" value="4"/>, which indicates that the level set for the thumbnail image 254 is 4. Also described in the fourth shared "adaptation group" are <EssentialProperty schemeIdUri="urn:mpeg:dash:srd:2014" value="1,960,540,960,540,1920,1080"/>, which indicates the SRD corresponding to the thumbnail image 254, and the URL "http://example.com/d_service/my.mpd" of the moving image corresponding to the thumbnail image 254.
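Each SRD value above is a comma-separated list giving the source identifier, the position and size of the divided image, and the total size of the screen (for the thumbnail image 254: source 1, position (960, 540), size 960 × 540, on a 1920 × 1080 screen). As an illustrative sketch (the class and function names below are our labels, not terms from this document), a playback terminal might parse such a value as follows:

```python
from typing import NamedTuple

class SRD(NamedTuple):
    """Parsed fields of an SRD descriptor value, in parameter order."""
    source_id: int
    object_x: int
    object_y: int
    object_width: int
    object_height: int
    total_width: int
    total_height: int

def parse_srd(value: str) -> SRD:
    """Parse the comma-separated value attribute of an SRD descriptor."""
    return SRD(*(int(field) for field in value.split(",")))

# SRD of thumbnail image 254 (bottom-right quarter of a 1920x1080 screen).
srd = parse_srd("1,960,540,960,540,1920,1080")
print(srd.object_x, srd.object_y)  # 960 540
```

With the SRD parsed this way, the display controller can place each divided image at (object_x, object_y) on a total_width × total_height screen.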
In the MPD files shown in fig. 10 and 11, as described above, since the SRDs and the URLs of the moving images of the thumbnail images 251 to 254 are set as shared information, there is no redundancy in the description. When the moving image playback terminal 14 acquires the MPD file illustrated in fig. 10 and 11, it recognizes from value="multiple" that shared information exists, and extracts the shared information from the shared "adaptation group" that describes the ID of the "adaptation group" in which value="multiple" is set.
The adaptation group identification information refers to information identifying an "adaptation group" in which information described in the shared "adaptation group" is inherently set. The thumbnail image identification information refers to information identifying a "sub-presentation" (SubRepresentation) in which information described in the shared "adaptation group" is inherently set.
Therefore, it may be defined that the name of the element in which the information described in the shared "adaptation group" is inherently set, together with a value identifying that element, is described as <EssentialProperty schemeIdUri="urn:mpeg:dash:AdaptationSet-reference:2015" value="element_name,value"/>, so that the adaptation group identification information and the thumbnail image identification information can be described by a common description method.
In this case, the adaptation group identification information is described as <EssentialProperty schemeIdUri="urn:mpeg:dash:AdaptationSet-reference:2015" value="AdaptationSet,id"/>, and the thumbnail image identification information is described as <EssentialProperty schemeIdUri="urn:mpeg:dash:AdaptationSet-reference:2015" value="SubRepresentation,level"/>.
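Under this common description method, the value attribute always carries the name of the referenced element followed by the attribute that identifies it. A minimal parsing sketch, assuming descriptor values of the form "AdaptationSet,id" and "SubRepresentation,level" (the function name is hypothetical):

```python
def parse_reference(value: str):
    """Split an AdaptationSet-reference descriptor value into the
    referenced element name and the identifying attribute."""
    element_name, ident = value.split(",", 1)
    return element_name.strip(), ident.strip()

# Adaptation group identification information and thumbnail image
# identification information differ only in the value they carry.
assert parse_reference("AdaptationSet,id") == ("AdaptationSet", "id")
assert parse_reference("SubRepresentation,level") == ("SubRepresentation", "level")
```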
(fourth example of MPD File)
Fig. 12 and 13 are diagrams depicting a fourth example of an MPD file corresponding to an image file of the stitched image 250.
The configuration of the MPD file shown in fig. 12 and 13 mainly differs from that shown in fig. 10 and 11 in that the SRDs of the respective thumbnail images 251 to 254 are described in the "adaptation group" corresponding to the encoded streams of four bit rates, rather than in the shared "adaptation group".
Specifically, in the MPD file shown in fig. 12 and 13, as shown in fig. 12, an "adaptation group" corresponding to encoded streams of four bit rates has four "content components (ContentComponent)" which describe shared information shared by encoded streams corresponding to all "presentations" in the "adaptation group".
Each "content component" corresponds to one of the levels set for the thumbnail images 251 to 254. A corresponding SRD, which is shared information, is described in each "content component", and also thumbnail image identification information is described.
As shown in fig. 13, the shared "adaptation group" has no SRD and thumbnail image identification information, and instead of the adaptation group identification information, the shared "adaptation group" has information (hereinafter referred to as content component identification information) identifying "content components" of the "adaptation group" corresponding to an encoded stream that shares information described in the "adaptation group".
In the example shown in fig. 12 and 13, the content component identification information described in the shared "adaptation group" corresponding to the thumbnail image 251 is <EssentialProperty schemeIdUri="urn:mpeg:dash:AdaptationSet-index:2015" value="1,srd1"/>, which denotes the ID "1" assigned to the "adaptation group" corresponding to the encoded streams of four bit rates and the ID "srd1" assigned to the "content component" in which the thumbnail image identification information of the thumbnail image 251 is described.
The content component identification information described in the shared "adaptation group" corresponding to the thumbnail image 252 is <EssentialProperty schemeIdUri="urn:mpeg:dash:AdaptationSet-index:2015" value="1,srd2"/>, which denotes the ID "srd2" assigned to the "content component" in which the thumbnail image identification information of the thumbnail image 252 is described.
The content component identification information described in the shared "adaptation group" corresponding to the thumbnail image 253 is <EssentialProperty schemeIdUri="urn:mpeg:dash:AdaptationSet-index:2015" value="1,srd3"/>, which denotes the ID "srd3" assigned to the "content component" in which the thumbnail image identification information of the thumbnail image 253 is described.
The content component identification information described in the shared "adaptation group" corresponding to the thumbnail image 254 is <EssentialProperty schemeIdUri="urn:mpeg:dash:AdaptationSet-index:2015" value="1,srd4"/>, which denotes the ID "srd4" assigned to the "content component" in which the thumbnail image identification information of the thumbnail image 254 is described.
In the MPD files shown in fig. 12 and 13, as described above, since the SRDs and URLs of the moving images of the thumbnail images 251 to 254 are set as shared information, there is no redundancy in description.
When the moving image playback terminal 14 acquires the MPD file shown in fig. 12 and 13, it recognizes from value="multiple" that shared information exists, and extracts the shared information from the shared "adaptation group" that describes the ID of the "adaptation group" in which value="multiple" is set. The moving image playback terminal 14 also extracts shared information from the "content components" described in the "adaptation group" in which value="multiple" is set.
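The extraction step can be sketched against a simplified, hypothetical MPD fragment (the element layout below is an assumption in the spirit of figs. 12 and 13, which are not reproduced in this document):

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment: the SRD of each thumbnail image is carried by a
# ContentComponent of the AdaptationSet holding the four-bit-rate streams.
MPD_FRAGMENT = """
<AdaptationSet id="1" value="multiple">
  <ContentComponent id="srd1">
    <EssentialProperty schemeIdUri="urn:mpeg:dash:srd:2014"
                       value="1,0,0,960,540,1920,1080"/>
  </ContentComponent>
  <ContentComponent id="srd2">
    <EssentialProperty schemeIdUri="urn:mpeg:dash:srd:2014"
                       value="1,960,0,960,540,1920,1080"/>
  </ContentComponent>
</AdaptationSet>
"""

def extract_shared_srds(fragment: str) -> dict:
    """Collect the SRD value of every ContentComponent, keyed by its ID."""
    root = ET.fromstring(fragment)
    srds = {}
    for component in root.findall("ContentComponent"):
        prop = component.find("EssentialProperty")
        if prop is not None and prop.get("schemeIdUri") == "urn:mpeg:dash:srd:2014":
            srds[component.get("id")] = prop.get("value")
    return srds

print(extract_shared_srds(MPD_FRAGMENT)["srd2"])  # 1,960,0,960,540,1920,1080
```

A shared "adaptation group" carrying value="1,srd2" would then be resolved by looking up "srd2" in the returned dictionary.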
In the MPD files shown in fig. 12 and 13, the URL of each moving image is described in the shared "adaptation group", while the SRD is not. Therefore, the MPD files are compatible with an MPD file that describes the URLs of the moving images in the shared "adaptation group".
(fifth example of MPD File)
Fig. 14 and 15 are diagrams depicting a fifth example of an MPD file corresponding to an image file of the stitched image 250.
The configuration of the MPD file shown in fig. 14 and 15 mainly differs from the configuration of the MPD file shown in fig. 12 and 13 in that: the URLs of the moving images corresponding to the respective thumbnail images 251 to 254 are described in the "adaptation group" of the encoded stream corresponding to the four bit rates, not in the shared "adaptation group".
Specifically, in the MPD files shown in fig. 14 and 15, as shown in fig. 14, the URLs of the respective moving images are described in the "content components" set in the "adaptation groups" corresponding to the encoded streams of the four bit rates. The shared "adaptation group" is not described.
In the MPD files shown in fig. 14 and 15, as described above, since the SRDs and URLs of the moving images of the thumbnail images 251 to 254 are set as shared information, there is no redundancy in description.
When the moving image playback terminal 14 acquires the MPD file illustrated in fig. 14 and 15, it recognizes from value="multiple" that shared information exists, and extracts the shared information from the "content components" described in the "adaptation group" in which value="multiple" is set.
In the above description, the thumbnail image identification information, the adaptation group identification information, and the content component identification information are described using an Essential Property (Essential Property). However, they may be described as elements of an "adaptation group".
In this case, the MPD file shown in fig. 10 and 11 becomes the MPD file shown in fig. 16 and 17. Specifically, as shown in fig. 16 and 17, the adaptation group identification information is described as the "association ID (associationId)" of the shared "adaptation group", and the thumbnail image identification information is described as the "association level (associationLevel)" of the shared "adaptation group".
Further, the MPD file shown in fig. 12 and 13 becomes the MPD file shown in fig. 18 and 19. Specifically, as shown in fig. 18 and 19, in the content component identification information, information identifying an "adaptation group" corresponding to an encoded stream sharing information described in the "adaptation group" is described as an "association ID" of the shared "adaptation group". Information identifying the "content component" of an "adaptation group" corresponding to an encoded stream sharing the information described in the "adaptation group" is described as the "association level" of the shared "adaptation group".
An "association type (associationType)" indicating the type of the "adaptation group" may be described as an element of the "adaptation group". In this case, information indicating that the "adaptation group" is the "adaptation group" corresponding to the encoded stream of the stitched image 250 that shares information, and information indicating that the "adaptation group" is the shared "adaptation group" corresponding to each thumbnail image are described as the association type. For example, if the "adaptation group" is an "adaptation group" corresponding to the encoded stream of the stitched image 250 that shares information, the association type is set to "subs".
In the MPD files shown in fig. 6, 7, and 10 to 19, there is no dependency between the levels at the time of decoding and display. However, if there is a dependency relationship between the levels, <EssentialProperty schemeIdUri="urn:mpeg:dash:SubRepresentation-dependency:2015" value="level"/> may be described. For example, in the MPD files shown in fig. 12 and 13, if level 2 depends on level 1, <EssentialProperty schemeIdUri="urn:mpeg:dash:SubRepresentation-dependency:2015" value="1"/> is described in the "content component" assigned the ID "srd2" corresponding to level 2.
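When such level dependencies are described, the playback terminal must decode the depended-on levels before the target level. A small hypothetical helper resolving the decode order from a dependency map:

```python
def decode_order(level: str, deps: dict) -> list:
    """Return the levels to decode, dependencies first, for a target level.
    deps maps a level to the list of levels it depends on."""
    order = []

    def visit(lv: str) -> None:
        for dependency in deps.get(lv, []):
            visit(dependency)
        if lv not in order:
            order.append(lv)

    visit(level)
    return order

# Level 2 depends on level 1, as in the fig. 12/13 example above.
print(decode_order("2", {"2": ["1"]}))  # ['1', '2']
```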
(function configuration example of moving image playback terminal)
Fig. 20 is a block diagram depicting a configuration example of a stream player implemented by the moving image playback terminal 14 shown in fig. 1 when it executes the control software 21, the moving image playback software 22, and the access software 23.
The stream player 290 shown in fig. 20 includes an MPD acquirer 291, an MPD processor 292, an image file acquirer 293, a decoder 294, a display controller 295, an acceptor 296, and a moving image acquirer 297.
The MPD acquirer 291 of the stream player 290 acquires an MPD file from the Web server 12 and supplies the MPD file to the MPD processor 292.
The MPD processor 292 extracts information such as URLs of image files of segments to be played back from the MPD file provided by the MPD acquirer 291, and provides the extracted information to the image file acquirer 293. The MPD processor 292 also supplies an MPD file to the moving image acquirer 297. The MPD processor 292 extracts SRDs of the divided images of the segmented stitched image to be played back from the MPD file, and supplies the extracted SRDs to the display controller 295.
The image file acquirer 293 requests the Web server 12 for an encoded stream of an image file specified by a URL supplied from the MPD processor 292, and acquires the encoded stream. The image file acquirer 293 supplies the acquired encoded stream to the decoder 294.
The decoder 294 decodes the encoded stream supplied from the image file acquirer 293. The decoder 294 supplies the stitched image obtained as a result of the decoding process to the display controller 295.
The display controller 295 (distributor) places the split image of the stitched image supplied from the decoder 294 on a picture based on the SRD supplied from the MPD processor 292. The display controller 295 superimposes a cursor or the like on a screen on which the divided images are placed, and supplies the divided images with the superimposed cursor to a display device (not shown) that displays them.
In response to an instruction to enlarge a given area of the screen supplied from the acceptor 296, the display controller 295 enlarges the size of a portion of the screen on which the stitched image is placed, including only the thumbnail images contained in the area, up to the size of the screen. The display controller 295 superimposes a cursor or the like on a given thumbnail image in the screen on which the enlarged partial stitched image is placed, and supplies the thumbnail image with the superimposed cursor to a display device (not shown) that displays them.
The display controller 295 supplies the moving image corresponding to one of the displayed thumbnail images supplied from the moving image acquirer 297 to a display device (not shown) that displays the supplied moving image.
The acceptor 296 accepts an instruction or the like from the user, and supplies the instruction to the moving image acquirer 297 or the display controller 295.
In response to an instruction about a given location provided from the acceptor 296, the moving image acquirer 297 acquires the URL of a moving image corresponding to the location from the MPD file provided by the MPD processor 292. The moving image acquirer 297 acquires a moving image from the Web server 12 or the like based on the acquired URL, and supplies the acquired moving image to the display controller 295.
(outline of playback processing)
Fig. 21 is a diagram illustrating an outline of playback processing by the stream player 290.
As shown in the left part of fig. 21, the display controller 295 places a cursor 312 on a given thumbnail image 311 of the 4 × 4 thumbnail images 311 constituting the stitched image 310 placed in the screen, and controls a display device (not shown) to display the given thumbnail image 311.
At this time, the user gives an instruction to enlarge a desired area while viewing the screen of the stitched image 310 on which the cursor 312 is superimposed. In the example shown in fig. 21, the user gives an instruction to enlarge the area of the 2 × 2 thumbnail image 311 in the upper right area of the screen on which the stitched image 310 is placed.
In response to the enlargement instruction, the display controller 295 enlarges the size of the partial stitched image 313 of the screen on which the stitched image 310 is placed, which is made up of only the 2 × 2 thumbnail images 311 in the upper right area, to the size of the screen. Then, as shown in the center portion of fig. 21, the display controller 295 superimposes the cursor 314 on a given thumbnail image 311 in the screen on which the enlarged partial stitched image 313 is placed, and controls a display device (not shown) to display the thumbnail image 311.
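Enlarging the partial stitched image implies a coordinate mapping: a position later indicated on the enlarged screen must be translated back to the corresponding position on the original stitched image. A sketch of that mapping (the function name and integer arithmetic are illustrative assumptions):

```python
def to_stitched_coords(px: int, py: int, region: tuple, screen: tuple) -> tuple:
    """Map a position (px, py) on the enlarged partial stitched image back
    to the corresponding position on the original stitched image.
    region = (x, y, w, h) of the enlarged area within the stitched image;
    screen = (W, H) of the display."""
    rx, ry, rw, rh = region
    W, H = screen
    return rx + px * rw // W, ry + py * rh // H

# Upper-right 2x2 area of a 1920x1080 screen, enlarged to the full screen:
print(to_stitched_coords(1440, 270, (960, 0, 960, 540), (1920, 1080)))  # (1680, 135)
```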
At this time, the user moves the cursor 314 to the desired thumbnail image 311, and performs an action such as double-clicking thereon, thereby indicating the position of the cursor 314. In the example shown in fig. 21, the user indicates the position of the upper right thumbnail image 311.
In response to an instruction of the user, the moving image acquirer 297 acquires, from the MPD file, the URL of the moving image corresponding to the SRD indicating the position on the screen of the stitched image 310 (corresponding to the position on the indicated partial stitched image 313) as the URL of the moving image corresponding to the indicated position. Then, based on the acquired URL, the moving image acquirer 297 acquires the moving image 315 from the Web server 12 or the like and supplies the acquired moving image 315 to the display controller 295. As shown in the right part of fig. 21, the display controller 295 controls a display device (not shown) to display the moving image 315.
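The URL lookup in the last step can be sketched as a hit test of the indicated position against the SRD rectangles. The table below reuses the SRD geometry and URLs of the 2 × 2 example from figs. 10 and 11; a 4 × 4 stitched image as in fig. 21 would simply have sixteen entries:

```python
from typing import Optional

# Each entry pairs a thumbnail's SRD rectangle (x, y, w, h) with the URL
# of the corresponding moving image, as described in figs. 10 and 11.
THUMBNAILS = [
    ((0,   0,   960, 540), "http://example.com/a_service/my.mpd"),
    ((960, 0,   960, 540), "http://example.com/b_service/my.mpd"),
    ((0,   540, 960, 540), "http://example.com/c_service/my.mpd"),
    ((960, 540, 960, 540), "http://example.com/d_service/my.mpd"),
]

def url_at(px: int, py: int) -> Optional[str]:
    """Return the URL of the moving image whose SRD rectangle contains
    the indicated screen position, or None if no thumbnail is there."""
    for (x, y, w, h), url in THUMBNAILS:
        if x <= px < x + w and y <= py < y + h:
            return url
    return None

print(url_at(1200, 700))  # http://example.com/d_service/my.mpd
```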
(description of processing of moving image playback terminal)
Fig. 22 is a flowchart of the playback process of the stream player 290 shown in fig. 20.
In step S211 shown in fig. 22, the MPD acquirer 291 of the stream player 290 acquires an MPD file from the Web server 12, and supplies the acquired MPD file to the MPD processor 292.
In step S212, the MPD processor 292 extracts information such as the URL of an image file of a segment to be played back from the MPD file provided by the MPD acquirer 291, and provides the extracted information to the image file acquirer 293. The MPD processor 292 also supplies an MPD file to the moving image acquirer 297. The MPD processor 292 extracts SRDs of the divided images of the segmented stitched image to be played back from the MPD file, and supplies the extracted SRDs to the display controller 295.
In step S213, the image file acquirer 293 requests the Web server 12 for an encoded stream of the image file specified by the URL supplied from the MPD processor 292, and acquires the encoded stream. The image file acquirer 293 supplies the acquired encoded stream to the decoder 294.
In step S214, the decoder 294 decodes the encoded stream supplied from the image file acquirer 293. The decoder 294 supplies the stitched image obtained as a result of the decoding process to the display controller 295.
In step S215, the display controller 295 places the divided images of the stitched image from the decoder 294 on the screen based on the SRD from the MPD processor 292, superimposes a cursor or the like on the screen, and supplies the divided images with the superimposed cursor to a display device (not shown) that displays them.
In step S216, the acceptor 296 determines whether it has accepted an instruction from the user to enlarge a given region of the screen. If the acceptor 296 determines that it has not accepted the instruction to enlarge the given region of the screen in step S216, the acceptor 296 waits until it accepts the instruction to enlarge the given region of the screen.
If the acceptor 296 determines that it has accepted the instruction to enlarge a given region of the screen in step S216, the acceptor 296 supplies an enlargement instruction to the display controller 295. In step S217, in response to the enlargement instruction supplied from the acceptor 296, the display controller 295 enlarges the size of the part of the screen on which the stitched image is placed, which includes only the thumbnail image included in the area indicated to be enlarged, up to the size of the screen.
In step S218, the display controller 295 superimposes a cursor or the like on a given thumbnail image in the screen on which the enlarged partial stitched image is placed, and supplies the thumbnail image with the superimposed cursor to a display device (not shown) that displays them. At this time, the user moves the cursor to a desired thumbnail image and performs an action such as double-clicking thereon, thereby indicating the position of the cursor on the screen.
In step S219, the acceptor 296 determines whether it has accepted an instruction of a position on the screen from the user. If the acceptor 296 determines that it has not accepted the instruction of the position on the screen in step S219, the acceptor 296 waits until it accepts the instruction of the position on the screen.
If the acceptor 296 determines in step S219 that it has accepted the instruction of the position on the screen, the acceptor 296 supplies the instruction to the moving image acquirer 297. In step S220, in response to the instruction from the acceptor 296, the moving image acquirer 297 acquires the URL of the moving image corresponding to the instructed position from the MPD file supplied from the MPD processor 292.
In step S221, the moving image acquirer 297 acquires a moving image from the Web server 12 or the like based on the acquired URL and supplies the acquired moving image to the display controller 295.
In step S222, the display controller 295 supplies the moving image supplied from the moving image acquirer 297 to a display device (not shown) that displays it. The process is now ended.
In the first embodiment, the partial stitched image is displayed after the stitched image has been displayed, and a position on the partial stitched image is indicated by the user. However, the partial stitched image may not be displayed, and a position on the stitched image may be directly indicated by the user.
In the above description, the URL of the moving image (original version) corresponding to the thumbnail image is described in association with the SRD of each thumbnail image in the MPD file. However, the information described in association with the SRD of each thumbnail image is not limited to such a URL.
For example, information of an image file superimposed when a thumbnail image is displayed in a picture-in-picture mode may be described in association with the SRD of each thumbnail image. In this case, for example, <SupplementalProperty schemeIdUri="urn:mpeg:dash:srd-composition:2014" value="source_url"/> is described as the information of the image file superimposed when the thumbnail image is displayed in the picture-in-picture mode. "source_url" represents information for identifying an adaptation group, in an external MPD file or the internal MPD file, that manages the image file superimposed on the thumbnail image.
Alternatively, information indicating the type (meaning) of the thumbnail image may be described in association with the SRD of each thumbnail image. In this case, for example, <SupplementalProperty schemeIdUri="urn:mpeg:dash:srd-role:2014" value="pinp"/> is described as information indicating that the thumbnail image is displayed in the picture-in-picture mode. The moving image playback terminal can then present to the user characters or icons of images whose thumbnail images are displayed in the picture-in-picture mode.
Further, all URLs of moving images corresponding to thumbnail images, information of image files superimposed when the thumbnail images are displayed in the picture-in-picture mode, and information indicating the types of the thumbnail images may be described in association with the SRDs of the respective thumbnail images. In this case, characters or icons indicating images in which thumbnail images are displayed in the picture-in-picture mode are displayed with respect to the respective thumbnail images, and given images are displayed in the picture-in-picture mode such that they are superimposed on the respective thumbnail images. When the user indicates a position on the stitched image, a moving image corresponding to the thumbnail image displayed at the indicated position is played back.
The URL of the moving image corresponding to the thumbnail image, the information of the image file superimposed while the thumbnail image is displayed in the picture-in-picture mode, and the information indicating the type of the thumbnail image may be described in another attribute instead of the SRD or may be described as a flag in an attribute of the extended SRD.
< second embodiment >
(description of computer to which the present disclosure applies)
The above-described processing sequence may be hardware-implemented or software-implemented. If the processing sequence is software implemented, a software program is installed in the computer. The computer may be a computer incorporated in dedicated hardware or a general-purpose personal computer capable of executing various functions by installing various programs.
Fig. 23 is a block diagram depicting a configuration example of hardware of a computer that executes the above-described processing sequence based on a program.
The computer 900 includes a CPU (central processing unit) 901, a ROM (read only memory) 902, and a RAM (random access memory) 903 connected to each other through a bus 904.
An input/output interface 905 is connected to the bus 904. An input unit 906, an output unit 907, a storage unit 908, a communication unit 909, and a driver 910 are connected to the input/output interface 905.
The input unit 906 includes a keyboard, a mouse, and a microphone. The output unit 907 includes a display and a speaker. The storage unit 908 includes a hard disk and a nonvolatile memory. The communication unit 909 includes a network interface. The drive 910 drives a removable medium 911 such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory.
In the computer 900 thus configured, the CPU 901 loads a program stored in the storage unit 908 into the RAM 903 through the input/output interface 905 and the bus 904, for example, and executes the program to carry out the above-described processing sequence.
For example, a program run by the computer 900 (CPU 901) may be recorded on a removable medium 911 such as a package medium and provided therefrom. The program may also be provided through a wired or wireless transmission medium such as a local area network, the internet, or digital satellite broadcasting.
In the computer 900, when a removable medium 911 is inserted into the drive 910, a program can be installed in the storage unit 908 through the input/output interface 905. The program may also be received by the communication unit 909 through a wired or wireless transmission medium and installed in the storage unit 908. Alternatively, the program may be installed in advance in the ROM 902 or the storage unit 908.
The program executed by the computer 900 may be a program executed in chronological order in the sequence described above in the present specification, or may be a program executed in parallel with each other or executed at a necessary timing when called.
In this specification, the term "system" means a collection of components (devices, modules (parts), etc.), and it does not matter whether all components exist in the same housing. Therefore, a plurality of devices accommodated in separate housings and connected through a network and a single device having a plurality of modules accommodated in one housing can be referred to as a system.
The advantages mentioned above in this description are only illustrative and not restrictive, and other advantages are not excluded.
Embodiments of the present disclosure are not limited to the above-described embodiments, and various changes may be made without departing from the scope of the present disclosure.
The present disclosure may be presented in the following configurations:
(1) an information processing apparatus comprising:
a setting section that sets a first adaptation group (AdaptationSet) including a plurality of presentations (Representations) corresponding to encoded streams having a predetermined bit rate, and a second adaptation group not including a presentation.
(2) The information processing apparatus according to (1), wherein the setting section adds information indicating that the first adaptation group and the second adaptation group are related to each other to the second adaptation group.
(3) The information processing apparatus according to (1) or (2), wherein the setting part sets the first adaptation group and the second adaptation group in a management file that manages files of the encoded streams.
(4) The information processing apparatus according to any one of (1) to (3), wherein the encoded streams corresponding to the plurality of presentations contained in the first adaptation group are encoded streams of one image having different bit rates, and
the setting section adds each position on the screen of the plurality of divided images constituting the image to the second adaptation group.
(5) The information processing apparatus according to (4), wherein the setting section adds, to the second adaptation group, an entity from which the file corresponding to each divided image is acquired.
(6) The information processing apparatus according to (4), wherein the setting section adds a position on the screen of each divided image to information of respectively different second adaptation groups.
(7) The information processing apparatus according to (6), wherein the setting section adds an entity from which the file corresponding to each of the divided images is acquired to the second adaptation group of the positions on the screen containing the divided image.
(8) The information processing apparatus according to (6) or (7), wherein the setting section adds information identifying the divided images corresponding to the second adaptation group.
(9) The information processing apparatus according to any one of (1) to (8), wherein the setting section adds information indicating the first adaptation group to the first adaptation group, and adds information indicating the second adaptation group to the second adaptation group.
(10) The information processing apparatus according to any one of (1) to (3), wherein the encoded streams corresponding to the plurality of presentations contained in the first adaptation group are encoded streams of one image having different bit rates, and
the setting section adds each position on the screen of the plurality of divided images constituting the image to the first adaptation group as information of a content component.
(11) The information processing apparatus according to (10), wherein the setting section adds, to the first adaptation group, information from which an entity of the file corresponding to the divided image is acquired as the content component.
(12) An information processing method comprising:
a setting step in which the information processing apparatus sets a first adaptation group including a plurality of presentations corresponding to encoded streams having a predetermined bit rate and a second adaptation group not including a presentation.
(13) An information processing apparatus comprising:
a player that plays back an encoded stream having a predetermined bit rate based on a first adaptation group including a plurality of presentations corresponding to the encoded stream and a second adaptation group not including the presentations.
(14) An information processing method comprising:
a playing step in which the information processing device plays back an encoded stream having a predetermined bit rate based on a first adaptation group including a plurality of presentations corresponding to the encoded stream and a second adaptation group not including the presentation.
List of reference numerals
14 moving image playback terminal, 211 file generating device, 250 mosaic image, 251 to 254 thumbnail image, 270 screen, 295 display controller.
Claims (14)
1. An information processing apparatus comprising:
a setting section that sets a first adaptation group (AdaptationSet) including a plurality of presentations (Representations) corresponding to encoded streams having a predetermined bit rate, and a second adaptation group not including a presentation.
2. The information processing apparatus according to claim 1, wherein the setting section adds information indicating that the first adaptation group and the second adaptation group are related to each other to the second adaptation group.
3. The information processing apparatus according to claim 1, wherein the setting section sets the first adaptation group and the second adaptation group in a management file that manages files of the encoded streams.
4. The information processing apparatus according to claim 1, wherein the encoded streams corresponding to the plurality of presentations contained in the first adaptation group are encoded streams of one image having different bit rates, and
the setting section adds each position on the screen of the plurality of divided images constituting the image to the second fitting group.
5. The information processing apparatus according to claim 4, wherein the setting section adds, to the second adaptation group, information indicating the entity from which the file corresponding to each divided image is acquired.
6. The information processing apparatus according to claim 4, wherein the setting section adds the position on the screen of each divided image to a respectively different second adaptation group.
7. The information processing apparatus according to claim 6, wherein the setting section adds information indicating the entity from which the file corresponding to each divided image is acquired to the second adaptation group containing the position of that divided image on the screen.
8. The information processing apparatus according to claim 6, wherein the setting section adds, to each second adaptation group, information identifying the divided image corresponding to that second adaptation group.
9. The information processing apparatus according to claim 1, wherein the setting section adds information indicating the first adaptation group to the first adaptation group, and adds information indicating the second adaptation group to the second adaptation group.
10. The information processing apparatus according to claim 1, wherein the encoded streams corresponding to the plurality of representations contained in the first adaptation group are encoded streams of one image having different bit rates, and
the setting section adds, to the first adaptation group, the position on the screen of each of a plurality of divided images constituting the image as information on a content component.
11. The information processing apparatus according to claim 10, wherein the setting section adds, to the first adaptation group, information indicating the entity from which the file corresponding to each divided image is acquired, as information on a content component.
12. An information processing method comprising:
a setting step in which the information processing apparatus sets a first adaptation group including a plurality of representations corresponding to an encoded stream having a predetermined bit rate and a second adaptation group not including a representation.
13. An information processing apparatus comprising:
a player that plays back an encoded stream having a predetermined bit rate based on a first adaptation group including a plurality of representations corresponding to the encoded stream and a second adaptation group not including a representation.
14. An information processing method comprising:
a playback step in which the information processing apparatus plays back an encoded stream having a predetermined bit rate based on a first adaptation group including a plurality of representations corresponding to the encoded stream and a second adaptation group not including a representation.
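Claims 13 and 14 describe playback based on the two adaptation groups. The following is a minimal player-side sketch under the same assumptions as above: standard MPD element names, the SRD scheme `urn:mpeg:dash:srd:2014`, and an invented sample MPD; the bandwidth-selection policy (highest representation that fits the available bit rate) is a common heuristic, not a method stated in the patent.

```python
# Hypothetical playback-side sketch: parse an MPD whose first AdaptationSet
# contains Representations at several bit rates and whose second,
# Representation-free AdaptationSet carries SRD position metadata, then
# choose the highest-bandwidth Representation that fits the link.
import xml.etree.ElementTree as ET

MPD_XML = """
<MPD><Period>
  <AdaptationSet id="1">
    <Representation id="low"  bandwidth="1000000"/>
    <Representation id="high" bandwidth="4000000"/>
  </AdaptationSet>
  <AdaptationSet id="2">
    <SupplementalProperty schemeIdUri="urn:mpeg:dash:srd:2014"
                          value="1,0,0,960,540,1920,1080"/>
  </AdaptationSet>
</Period></MPD>
"""

def select_representation(mpd_text, available_bps):
    period = ET.fromstring(mpd_text).find("Period")
    first, second = period.findall("AdaptationSet")

    # Pick the highest-bandwidth Representation that still fits the
    # available bit rate (fall back to the lowest one).
    reps = sorted(first.findall("Representation"),
                  key=lambda r: int(r.get("bandwidth")))
    chosen = reps[0]
    for rep in reps:
        if int(rep.get("bandwidth")) <= available_bps:
            chosen = rep

    # The second adaptation group contributes only position metadata.
    srd = second.find("SupplementalProperty").get("value")
    return chosen.get("id"), srd

rep_id, srd = select_representation(MPD_XML, available_bps=2_000_000)
print(rep_id)  # low
print(srd)     # 1,0,0,960,540,1920,1080
```

At 2 Mbit/s only the 1 Mbit/s representation fits, so `low` is chosen; the SRD value still tells the player where on the screen the divided image belongs, independently of which bit rate was selected.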
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| JP2015-119362 | 2015-06-12 | ||
| JP2015-125915 | 2015-06-23 |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| HK1244380A1 true HK1244380A1 (en) | 2018-08-03 |