
US20140341300A1 - Agile decoder - Google Patents

Agile decoder

Info

Publication number
US20140341300A1
US20140341300A1 US14/449,894 US201414449894A
Authority
US
United States
Prior art keywords
streams
decoder
decoding
stages
formats
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/449,894
Inventor
Michael Anthony DeLuca
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
GVBB Holdings SARL
Original Assignee
GVBB Holdings SARL
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by GVBB Holdings SARL filed Critical GVBB Holdings SARL
Priority to US14/449,894
Publication of US20140341300A1
Assigned to THOMSON LICENSING. Assignment of assignors interest; assignor: DELUCA, MICHAEL ANTHONY
Assigned to GVBB HOLDINGS S.A.R.L. Assignment of assignors interest; assignor: THOMSON LICENSING
Legal status: Abandoned

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/10: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N 19/102: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N 19/12: Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H04N 19/00533
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/42: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00: Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/44: Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder



Abstract

A decoder arrangement (10) includes a processor (12) programmed to decode multiple streams (11 1-11 n), including multiple streams of different formats. In terms of functionality, the decoder arrangement includes a routing stage (13) that routes each stream to a different one of a plurality of decoder stages (14 1-14 n), each capable of decoding a stream of a particular format to yield an uncompressed stream at its output. Each of a plurality of buffer stages (16 1-16 n) stores a successive frame of the uncompressed stream output by an associated decoder stage. An output stage scales the frames stored by the buffer stages to a common size for input to a display device (22).

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims priority under 35 U.S.C. 119(e) to U.S. Provisional Patent Application Ser. No. 60/653,538, filed Feb. 16, 2005, the teachings of which are incorporated herein.
  • TECHNICAL FIELD
  • This invention relates to a decoder for decoding compressed streams.
  • BACKGROUND ART
  • Traditionally, video decoders have been hardware-based. In other words, typical video decoders take the form of discrete, stand-alone devices dedicated for this purpose. With hardware decoder implementations, decoding streams of different formats usually requires a different chipset for each stream type and relatively lengthy firmware downloads. Multiple chipsets can prove very costly, and making firmware changes in real time becomes impractical.
  • Thus a need exists for a decoder arrangement that overcomes the disadvantages of the prior art.
  • BRIEF SUMMARY OF THE INVENTION
  • Briefly, in accordance with a preferred embodiment of the present principles, there is provided a decoder for decoding at least first and second compressed streams. The decoder comprises first and second decoder stages, each capable of decoding a separate one of the at least first and second streams to yield first and second uncompressed streams, respectively. A routing stage routes the at least first and second streams to the at least first and second decoder stages, respectively. At least first and second buffer stages each store a separate one of the first and second uncompressed streams output by the first and second decoder stages. An output stage combines the uncompressed streams stored by the at least first and second buffer stages.
  • In practice, the decoder arrangement of the present principles comprises a processor programmed with software to perform the function of each of the decoder stages. Software decoders allow for a high degree of customization and afford greater flexibility and control over the decoding process. Source video material used in non-linear editors (NLEs) can come from different sources and can undergo compression in different formats, such as MPEG2, DV, and JPG2K. The material also can have different presentation sizes, for instance, standard-definition NTSC vs. high-definition 1080i. The decoder arrangement of the present principles can decode video streams regardless of their native compression format or image size by making use of decoder stages capable of decoding different formats. The decoder arrangement of the present principles can decompress frames of different formats and compose the uncompressed frames onto a common “canvas.” Further, with the availability of relatively inexpensive multiprocessor systems, the decoder arrangement of the present principles can execute on multiple parallel threads so that processing throughput can scale with the number and speed of the available processors.
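To make the staged arrangement of the summary more concrete, the following sketch models the functional blocks as C++ interfaces: decoder stages that each yield uncompressed frames, buffer stages that each hold one such frame, and an output stage that combines the buffered frames. This is a minimal illustration only; all type and member names (DecoderStage, BufferStage, OutputStage, and so on) are assumptions, not terms defined by the patent.

```cpp
// Minimal structural sketch of the staged decoder arrangement described above.
// All identifiers are hypothetical; the patent defines these stages only functionally.
#include <cstdint>
#include <memory>
#include <vector>

struct CompressedStream { std::vector<uint8_t> bytes; };          // one of the input streams
struct Frame { int width = 0; int height = 0; std::vector<uint8_t> pixels; };

// One decoder stage per supported compression format.
class DecoderStage {
public:
    virtual ~DecoderStage() = default;
    virtual Frame decode(const CompressedStream& stream) = 0;      // yields an uncompressed frame
};

// One buffer stage per decoder stage; holds the most recently decoded frame.
class BufferStage {
public:
    void store(Frame f) { frame_ = std::move(f); }
    const Frame& load() const { return frame_; }
private:
    Frame frame_;
};

// The output stage combines the frames held by all buffer stages onto a common canvas.
class OutputStage {
public:
    virtual ~OutputStage() = default;
    virtual Frame combine(const std::vector<const BufferStage*>& buffers) = 0;
};

// The overall arrangement: each compressed stream is paired with a decoder stage and a buffer
// stage, and the output stage composes whatever the buffers currently hold.
struct DecoderArrangement {
    std::vector<std::unique_ptr<DecoderStage>> decoders;
    std::vector<BufferStage> buffers;
    std::unique_ptr<OutputStage> output;
};
```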
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 depicts a block schematic diagram of a decoder arrangement in accordance with a preferred embodiment of the present principles.
  • DETAILED DESCRIPTION
  • FIG. 1 depicts a block schematic drawing of a decoder arrangement 10 in accordance with a preferred embodiment of the present principles for uncompressing a plurality of compressed video streams 11 1-11 n, where n is an integer greater than zero. In practice, one or more of the video streams 11 1-11 n can have a different format from the others. For example, one of the streams can take the form of an MPEG 2 stream, while another stream could take the form of a DV25 or DV50 stream. The particular format of each stream remains unimportant, as long as the particular format of the stream is known a priori.
  • In practice, the decoder 10 of the present principles comprises a programmed processor 12, such as a microprocessor, microcomputer or the like, which operates in stages as described in greater detail below. When programmed to operate as a software decoder, the processor 12 will possess an input routing stage 13 that routes the compressed video streams 11 1-11 n to an appropriate one of a plurality of decoder stages 14 1-14 n depending on the compression format of the incoming stream. Each of the decoder stages 14 1-14 n operates to de-compress an incoming stream of a particular format. Typical compression formats include MPEG 2, DV25, and DV50 for standard definition (SD) video, and MPEG 2, H.264/MPEG-4 AVC and DV 100 for high definition (HD) video. Thus, depending on the composition of the compressed video streams 11 1-11 n, one or more of the decoder stages 14 1-14 n will have the capability of de-compressing video in one of the MPEG 2, DV25, and DV50 SD formats or in one of the MPEG 2, H.264/MPEG-4 AVC and DV 100 HD formats.
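The routing decision itself reduces to a lookup keyed by the a priori known compression format. The sketch below illustrates one way such an input routing stage could dispatch a compressed frame to the matching decoder stage; the FormatId values and all other identifiers are assumptions for illustration.

```cpp
// Hypothetical format-keyed routing of compressed streams 11_1..11_n to decoder stages 14_1..14_n.
#include <cstdint>
#include <functional>
#include <map>
#include <stdexcept>
#include <vector>

enum class FormatId { Mpeg2Sd, Dv25, Dv50, Mpeg2Hd, H264Avc, Dv100 };

struct CompressedFrame { FormatId format; std::vector<uint8_t> payload; };
struct RawFrame { int width = 0; int height = 0; std::vector<uint8_t> pixels; };

using DecoderStageFn = std::function<RawFrame(const CompressedFrame&)>;

class InputRoutingStage {
public:
    // Register one decoder stage per compression format expected in the input streams.
    void registerStage(FormatId format, DecoderStageFn stage) {
        stages_[format] = std::move(stage);
    }

    // Route a compressed frame to the decoder stage that matches its (known) format.
    RawFrame routeAndDecode(const CompressedFrame& in) const {
        const auto it = stages_.find(in.format);
        if (it == stages_.end())
            throw std::runtime_error("no decoder stage registered for this format");
        return it->second(in);
    }

private:
    std::map<FormatId, DecoderStageFn> stages_;
};
```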
  • Upon receipt of an incoming one of the streams 11 1-11 n of a specific compression format, a decoder stage associated with that format will decode the stream to yield successive uncompressed frames. Each uncompressed frame output by a decoder stage undergoes storage in a corresponding one of presentation buffer stages 18 1-18 n, respectively, sized to receive the uncompressed frame. In practice, each of the presentation buffer stages 18 1-18 n holds a frame of a particular size which depends on the decompression format of its associated decoding stage. Typically, each of the presentation buffer stages 18 1-18 n will have one of the following standard sizes:
  • NTSC (720×480)
  • PAL (720×576)
  • 1080i (1920×1080)
  • 720p (1280×720)
  • Uncompressed SD frames will typically conform to either the NTSC or PAL format. Uncompressed HD frames, depending on their size, will undergo storage in a buffer stage capable of accommodating 1080i or 720p HD frames. If a decoded frame image does not conform to the size of a presentation buffer stage, as can occur with some MPEG 2 video frames, the frame undergoes a clipping or cropping operation. For example, a 720×512 MPEG 2 frame will have 32 lines clipped before being placed in the presentation buffer stage.
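The clipping step can be sketched as a simple crop into a fixed-size presentation buffer. The example below assumes an 8-bit, single-plane frame stored row by row; the clipToBuffer name and the choice to drop trailing lines are illustrative assumptions rather than details taken from the patent.

```cpp
// Hypothetical clipping of a decoded frame into a fixed-size presentation buffer.
#include <algorithm>
#include <cstdint>
#include <cstring>
#include <vector>

struct Plane { int width; int height; std::vector<uint8_t> samples; };

// Crop a decoded image so it fits a presentation buffer of bufW x bufH samples.
// Per the text above, a 720x512 MPEG 2 frame loses 32 lines before entering a 720x480 NTSC buffer.
Plane clipToBuffer(const Plane& in, int bufW, int bufH) {
    Plane out{std::min(in.width, bufW), std::min(in.height, bufH), {}};
    out.samples.resize(static_cast<size_t>(out.width) * out.height);
    for (int row = 0; row < out.height; ++row)
        std::memcpy(&out.samples[static_cast<size_t>(row) * out.width],
                    &in.samples[static_cast<size_t>(row) * in.width],
                    static_cast<size_t>(out.width));
    return out;
}

int main() {
    Plane mpeg2{720, 512, std::vector<uint8_t>(720 * 512, 0)};
    Plane ntsc = clipToBuffer(mpeg2, 720, 480);   // 32 lines clipped, as in the example above
    return ntsc.height == 480 ? 0 : 1;
}
```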
  • A scaler 20 scales the frames read from each of the presentation buffer stages 18 1-18 n and also performs the required color space conversion associated with a video display renderer 22. For example, if the video display renderer 22 has an input frame size of 1920×1080 pixels associated with 1080i HD frames, the scaler 20 will scale all frames to that size. Any frames smaller than 1920×1080 pixels will undergo up-conversion by the scaler 20. Conversely, if the video display renderer 22 has an input frame size of 720×480 pixels, then the scaler 20 will down-convert larger size frames.
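As a rough illustration of the scaler's behaviour, the sketch below resamples any buffered frame to the renderer's input size with nearest-neighbour interpolation, so smaller frames are up-converted and larger ones down-converted; colour-space conversion is omitted, and the function and type names are assumptions.

```cpp
// Hypothetical nearest-neighbour scaler: every presentation-buffer frame is resized to the
// renderer's input size (e.g. 1920x1080 for a 1080i display, 720x480 for an NTSC display).
#include <cstdint>
#include <vector>

struct GrayFrame { int width; int height; std::vector<uint8_t> samples; };

GrayFrame scaleToRenderer(const GrayFrame& in, int outW, int outH) {
    GrayFrame out{outW, outH, std::vector<uint8_t>(static_cast<size_t>(outW) * outH)};
    for (int y = 0; y < outH; ++y) {
        const int srcY = y * in.height / outH;          // nearest source row
        for (int x = 0; x < outW; ++x) {
            const int srcX = x * in.width / outW;       // nearest source column
            out.samples[static_cast<size_t>(y) * outW + x] =
                in.samples[static_cast<size_t>(srcY) * in.width + srcX];
        }
    }
    return out;   // frames smaller than the target are up-converted, larger ones down-converted
}
```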
  • The decoder arrangement 10 of FIG. 1 improves efficiency by affording temporal parallelism. With software-based decoder arrangements, reading the compressed video stream can involve time spent on disk access. Additionally, writing frames of the uncompressed output stream typically will involve time spent performing bus transfers to display hardware. Waiting for such input/output (I/O) tasks to complete does not constitute an efficient use of processing cycles. The decoder arrangement 10 of the present principles makes use of the presentation buffers 18 1-18 n to decouple the decoding stages 14 1-14 n from the scaler 20 that performs the scaling and color space conversion. After a frame (N) undergoes decoding and storage in a presentation buffer, a decoder stage can decode the next frame (N+1) as the scaler 20 formats the now uncompressed frame (N) for output. A lock semaphore protects each presentation buffer stage to prevent one thread from overrunning the other.
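The temporal parallelism described above is essentially a producer/consumer hand-off: a decoder thread fills a presentation buffer with frame N+1 while the scaler thread drains frame N, with a lock ensuring neither thread overruns the other. The sketch below uses a std::mutex and condition variables in place of the lock semaphore named in the text; all identifiers are assumptions.

```cpp
// Hypothetical presentation buffer shared between a decoder thread (producer) and the
// scaler thread (consumer). The lock prevents either thread from overrunning the other,
// so frame N+1 can be decoded while frame N is being formatted for output.
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <optional>
#include <thread>
#include <vector>

struct Frame { int number = 0; std::vector<uint8_t> pixels; };

class PresentationBuffer {
public:
    void store(Frame f) {                       // called by the decoder stage
        std::unique_lock<std::mutex> lock(m_);
        notFull_.wait(lock, [&] { return !slot_.has_value(); });
        slot_ = std::move(f);
        notEmpty_.notify_one();
    }
    Frame take() {                              // called by the scaler
        std::unique_lock<std::mutex> lock(m_);
        notEmpty_.wait(lock, [&] { return slot_.has_value(); });
        Frame f = std::move(*slot_);
        slot_.reset();
        notFull_.notify_one();
        return f;
    }
private:
    std::mutex m_;
    std::condition_variable notEmpty_, notFull_;
    std::optional<Frame> slot_;
};

int main() {
    PresentationBuffer buffer;
    std::thread decoder([&] {                  // decodes frames 0..9 into the buffer
        for (int n = 0; n < 10; ++n) buffer.store(Frame{n, std::vector<uint8_t>(16, 0)});
    });
    std::thread scaler([&] {                   // formats each frame as soon as it is available
        for (int n = 0; n < 10; ++n) (void)buffer.take();
    });
    decoder.join();
    scaler.join();
    return 0;
}
```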
  • As discussed above, the decoder arrangement 10 makes use of a processor 12 for executing a software-based program for decoding a plurality of streams 11 1-11 n, typically, although not necessarily, in serial fashion. Although FIG. 1 depicts the processor 12 as being dedicated to the decoding task, the processor could perform other operations in addition to decoding. Although not explicitly depicted in FIG. 1, the processor 12, when networked with other processors in a large system, could detect the number of other processors available. If another processor besides processor 12 becomes available, then spatial parallelism becomes automatically enabled. Most video compression algorithms process an individual frame using a series of smaller sub-picture areas called macroblocks. The total number of macroblocks can be divided among the available processors by creating multiple threads of execution. If the number of blocks for each processor remains less than the total number of blocks requiring decoding, then the time required to decode each frame decreases. When using this method, each thread of execution is synchronized at the end of each frame to prevent image tearing.
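The spatial parallelism described above can be sketched as dividing a frame's macroblock rows among worker threads and joining them at the frame boundary. In the sketch below the per-macroblock work is a stand-in (it simply fills samples), and the row-interleaved work split, thread-count detection, and all names are assumptions.

```cpp
// Hypothetical spatial parallelism: macroblock rows of one frame are divided among the
// available processors, and all worker threads are joined at the frame boundary so the
// composed image never tears.
#include <algorithm>
#include <cstdint>
#include <thread>
#include <vector>

constexpr int kMacroblock = 16;   // common macroblock edge in samples

struct Frame { int width; int height; std::vector<uint8_t> samples; };

// Stand-in for real per-macroblock decoding work.
static void decodeMacroblock(Frame& f, int mbX, int mbY) {
    for (int y = 0; y < kMacroblock; ++y)
        for (int x = 0; x < kMacroblock; ++x)
            f.samples[static_cast<size_t>(mbY * kMacroblock + y) * f.width +
                      (mbX * kMacroblock + x)] = 128;
}

void decodeFrameInParallel(Frame& f) {
    const int mbCols = f.width / kMacroblock;
    const int mbRows = f.height / kMacroblock;
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w)
        pool.emplace_back([&, w] {
            // Each thread takes every workers-th macroblock row; rows never overlap.
            for (int mbY = static_cast<int>(w); mbY < mbRows; mbY += static_cast<int>(workers))
                for (int mbX = 0; mbX < mbCols; ++mbX) decodeMacroblock(f, mbX, mbY);
        });
    for (std::thread& t : pool) t.join();   // synchronize at the end of the frame
}

int main() {
    Frame f{1920, 1080, std::vector<uint8_t>(1920u * 1080u, 0)};
    decodeFrameInParallel(f);
    return 0;
}
```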
  • The software-based decoding arrangement 10 affords several advantages as compared to stand-alone hardware decoders. For example, the decoder arrangement 10 can expand easily to support new compression standards as they become available, since all that becomes necessary is a software update. In addition, individual 3rd-party CODEC components can be replaced if faster or better substitutes are found. Further, the decoder arrangement 10 can switch compression types on a video frame boundary, typically every 33 ms at NTSC rates and every 40 ms at PAL rates, to provide seamless decoding operations for all supported compression formats.
  • The foregoing describes a decoder arrangement for decoding video streams of different formats.

Claims (21)

1-16. (canceled)
17. A decoder for decoding at least first and second streams, comprising:
at least first and second decoder stages, each capable of simultaneously decoding a separate one of the at least first and second streams to yield at least first and second uncompressed streams;
a routing stage for routing a separate one of the at least first and second streams to the at least first and second decoding stages, respectively; and
at least first and second buffer stages each storing a frame of a separate one of the first and second uncompressed streams, respectively, wherein a lock semaphore prevents each of the at least first and second buffer stages from overrunning the other buffer stage.
18. The decoder of claim 17, further comprising:
a scaler for scaling the frames from the first and second buffer stages to a common size.
19. The decoder of claim 18, wherein frames smaller than the common size are upconverted and frames larger than the common size are downconverted.
20. The decoder of claim 18, wherein each of the at least first and second presentation buffer stages decouples an associated one of the at least first and second decoder stages from the scaler to permit each decoder stage to decode independently, thereby operating in temporal parallelism.
21. The decoder of claim 18, wherein the scaler performs color space conversion.
22. The decoder of claim 17, wherein each of the at least first and second streams is encoded in first and second formats, respectively, which formats are known before decoding commences, and wherein the at least first and second decoder stages have first and second decoding formats, respectively, each matching the first and second formats of the at least first and second streams.
23. The decoder of claim 17, wherein the first encoding format comprises one of the MPEG 2, DV25, and DV50 formats for standard definition (SD) video, and the MPEG 2, H.264/MPEG-4 AVC and DV 100 formats for high definition video.
24. The decoder of claim 17, wherein the second encoding format comprises one of the MPEG 2, DV25, and DV50 formats for standard definition (SD) video, and the MPEG 2, H.264/MPEG-4 AVC and DV 100 formats for high definition video.
25. The decoder of claim 17, wherein the at least first and second buffer stages have different sizes.
26. The decoder of claim 25, wherein the first and second presentation buffer stages have a size of one of 720×480 pixels, 720×576 pixels, 1920×1080 pixels or 1280×720 pixels.
27. A method for decoding at least first and second streams, comprising:
routing a separate one of the at least first and second streams for decoding;
decoding a separate one of the at least first and second streams to yield at least first and second uncompressed streams, respectively; and
storing, in at least first and second buffer stages, a frame of a separate one of the at least first and second uncompressed streams, wherein a lock semaphore prevents each of the at least first and second buffer stages from overrunning the other buffer stage.
28. The method of claim 27, wherein the decoding of the first and second streams to yield the at least first and second uncompressed streams, respectively, occurs simultaneously.
29. The method of claim 27, wherein each of the at least first and second streams is encoded in first and second formats, respectively, known before decoding commences, and the decoding further comprises decoding the at least first and second streams using first and second decoding formats.
30. The method of claim 27, further comprising:
scaling the stored frames of the first and second uncompressed streams to a common size.
31. The method of claim 30, wherein frames smaller than the common size are upconverted and frames larger than the common size are downconverted.
32. The method of claim 27, wherein the first encoding format comprises one of the MPEG 2, DV25, and DV50 encoding formats for standard definition (SD) video, and the MPEG 2, H.264/MPEG-4 AVC and DV 100 encoding formats for high definition video.
33. The method of claim 27, wherein the second encoding format comprises one of the MPEG 2, DV25, and DV50 encoding formats for standard definition (SD) video, and the MPEG 2, H.264/MPEG-4 AVC and DV 100 encoding formats for high definition video.
34. The method of claim 27, further comprising:
decoupling decoding of the at least first and second streams from scaling of the stored streams.
35. The method of claim 27, further comprising:
performing color conversion on the stored frames.
36. The method of claim 27, further comprising:
decoding multiple streams in spatial parallelism.
US14/449,894 2005-02-16 2014-08-01 Agile decoder Abandoned US20140341300A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US14/449,894 US20140341300A1 (en) 2005-02-16 2014-08-01 Agile decoder

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US65353805P 2005-02-16 2005-02-16
PCT/US2006/003520 WO2006088644A1 (en) 2005-02-16 2006-02-01 Agile decoder
US88398407A 2007-08-08 2007-08-08
US14/449,894 US20140341300A1 (en) 2005-02-16 2014-08-01 Agile decoder

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US11/883,984 Continuation US8831109B2 (en) 2005-02-16 2006-02-01 Agile decoder
PCT/US2006/003520 Continuation WO2006088644A1 (en) 2005-02-16 2006-02-01 Agile decoder

Publications (1)

Publication Number Publication Date
US20140341300A1 true US20140341300A1 (en) 2014-11-20

Family

ID=36228686

Family Applications (2)

Application Number Title Priority Date Filing Date
US11/883,984 Expired - Fee Related US8831109B2 (en) 2005-02-16 2006-02-01 Agile decoder
US14/449,894 Abandoned US20140341300A1 (en) 2005-02-16 2014-08-01 Agile decoder

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/883,984 Expired - Fee Related US8831109B2 (en) 2005-02-16 2006-02-01 Agile decoder

Country Status (7)

Country Link
US (2) US8831109B2 (en)
EP (1) EP1851968A1 (en)
JP (1) JP2008538457A (en)
KR (1) KR20070105999A (en)
CN (1) CN101120592A (en)
CA (1) CA2597536A1 (en)
WO (1) WO2006088644A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4901389B2 (en) * 2006-09-20 2012-03-21 株式会社東芝 Video server and material output method
JP4901390B2 (en) * 2006-09-20 2012-03-21 株式会社東芝 Video server and material output method
KR100960147B1 (en) * 2007-11-23 2010-05-27 한국전자통신연구원 Motion Compensation Method of Motion Compensator
US8462841B2 (en) * 2007-12-31 2013-06-11 Netlogic Microsystems, Inc. System, method and device to encode and decode video data having multiple video data formats
CN101242538B (en) * 2008-03-18 2010-06-02 华为技术有限公司 A code stream decoding method and device
US8392942B2 (en) * 2008-10-02 2013-03-05 Sony Corporation Multi-coded content substitution
US10015285B2 (en) * 2013-03-14 2018-07-03 Huawei Technologies Co., Ltd. System and method for multi-stream compression and decompression
JP2015171020A (en) * 2014-03-07 2015-09-28 日本電気株式会社 Receiving device and receiving method
CN105096367B (en) * 2014-04-30 2018-07-13 广州市动景计算机科技有限公司 Method and device for optimizing Canvas rendering performance
CN104768051B (en) * 2015-03-06 2017-12-15 深圳市九洲电器有限公司 The adaptive method for switching and system of odd encoder formatted program stream
CN107368430B (en) * 2017-07-12 2020-02-18 青岛海信移动通信技术股份有限公司 Method and device for reducing video memory


Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4570217A (en) * 1982-03-29 1986-02-11 Allen Bruce S Man machine interface
US6879266B1 (en) * 1997-08-08 2005-04-12 Quickshift, Inc. Memory module including scalable embedded parallel data compression and decompression engines
US7215376B2 (en) * 1997-10-06 2007-05-08 Silicon Image, Inc. Digital video system and methods for providing same
JP4427827B2 (en) 1998-07-15 2010-03-10 ソニー株式会社 Data processing method, data processing apparatus, and recording medium
US6636222B1 (en) * 1999-11-09 2003-10-21 Broadcom Corporation Video and graphics system with an MPEG video decoder for concurrent multi-row decoding
US6246720B1 (en) * 1999-10-21 2001-06-12 Sony Corporation Of Japan Flexible software-based decoding system with decoupled decoding timing and output timing
JP2002354475A (en) 2001-05-28 2002-12-06 Matsushita Electric Ind Co Ltd Image decoding processing apparatus and image decoding processing method
JP2003152546A (en) 2001-11-15 2003-05-23 Matsushita Electric Ind Co Ltd Multi-format stream decoding device and multi-format stream sending device
JP3828010B2 (en) * 2001-12-21 2006-09-27 株式会社日立国際電気 Image receiving system
US7167108B2 (en) * 2002-12-04 2007-01-23 Koninklijke Philips Electronics N.V. Method and apparatus for selecting particular decoder based on bitstream format detection
US7966642B2 (en) * 2003-09-15 2011-06-21 Nair Ajith N Resource-adaptive management of video storage
KR100547146B1 (en) 2003-10-06 2006-01-26 삼성전자주식회사 Image processing apparatus and method
KR100619053B1 (en) * 2003-11-10 2006-08-31 삼성전자주식회사 Information storage medium recording subtitles and processing apparatus thereof
US7400359B1 (en) * 2004-01-07 2008-07-15 Anchor Bay Technologies, Inc. Video stream routing and format conversion unit with audio delay
JP4737991B2 (en) * 2005-01-04 2011-08-03 株式会社東芝 Playback device

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5179665A (en) * 1987-06-24 1993-01-12 Westinghouse Electric Corp. Microprocessor information exchange with updating of messages by asynchronous processors using assigned and/or available buffers in dual port memory
US6205187B1 (en) * 1997-12-12 2001-03-20 General Dynamics Government Systems Corporation Programmable signal decoder
US7133408B1 (en) * 2000-10-13 2006-11-07 Sun Microsystems, Inc. Shared decoder
US7055018B1 (en) * 2001-12-31 2006-05-30 Apple Computer, Inc. Apparatus for parallel vector table look-up
US20070041444A1 (en) * 2004-02-27 2007-02-22 Gutierrez Novelo Manuel R Stereoscopic 3D-video image digital decoding system and method

Also Published As

Publication number Publication date
US20090123081A1 (en) 2009-05-14
CN101120592A (en) 2008-02-06
US8831109B2 (en) 2014-09-09
KR20070105999A (en) 2007-10-31
EP1851968A1 (en) 2007-11-07
WO2006088644A1 (en) 2006-08-24
JP2008538457A (en) 2008-10-23
CA2597536A1 (en) 2006-08-24


Legal Events

Date Code Title Description
AS Assignment

Owner name: THOMSON LICENSING, FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DELUCA, MICHAEL ANTHONY;REEL/FRAME:039479/0774

Effective date: 20060202

Owner name: GVBB HOLDINGS S.A.R.L., LUXEMBOURG

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:THOMSON LICENSING;REEL/FRAME:039479/0788

Effective date: 20101231

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION