
US20160041993A1 - Apparatus and methods for lightweight transcoding - Google Patents


Info

Publication number
US20160041993A1
US20160041993A1 (application US14/452,359)
Authority
US
United States
Prior art keywords
content
data
format
codec
transcoding
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14/452,359
Inventor
Stephen Maynard
Trever Hallock
Nicholas Nielsen
Ernest Biancarelli
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Time Warner Cable Enterprises LLC
Original Assignee
Time Warner Cable Enterprises LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Time Warner Cable Enterprises LLC filed Critical Time Warner Cable Enterprises LLC
Priority to US14/452,359 (published as US20160041993A1)
Assigned to TIME WARNER CABLE ENTERPRISES LLC reassignment TIME WARNER CABLE ENTERPRISES LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: NIELSEN, NICHOLAS, BIANCARELLI, ERNEST, HALLOCK, TREVER, MAYNARD, STEPHEN
Publication of US20160041993A1
Assigned to BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT reassignment BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BRIGHT HOUSE NETWORKS, LLC, CHARTER COMMUNICATIONS OPERATING, LLC, TIME WARNER CABLE ENTERPRISES LLC
Assigned to TIME WARNER CABLE ENTERPRISES LLC reassignment TIME WARNER CABLE ENTERPRISES LLC CHANGE OF ADDRESS Assignors: TIME WARNER CABLE ENTERPRISES LLC
Assigned to THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. reassignment THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A. SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: TIME WARNER CABLE ENTERPRISES LLC and numerous affiliated Time Warner Cable, Charter Communications, Bright House Networks, Bresnan, and Insight entities (see document for the full list of assignors)
Assigned to WELLS FARGO TRUST COMPANY, N.A. reassignment WELLS FARGO TRUST COMPANY, N.A. SECURITY INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BRIGHT HOUSE NETWORKS, LLC, CHARTER COMMUNICATIONS OPERATING, LLC, TIME WARNER CABLE ENTERPRISES LLC, TIME WARNER CABLE INTERNET LLC
Priority to US16/538,714 (published as US20200034332A1)
Legal status: Abandoned

Classifications

    • G06F17/30076
    • G06F16/116 Details of conversion of file system types or formats
    • G06F16/1794 Details of file format conversion
    • G06F17/147 Discrete orthonormal transforms, e.g. discrete cosine transform, discrete sine transform, and variations therefrom, e.g. modified discrete cosine transform, integer transforms approximating the discrete cosine transform
    • H04N19/23 Video object coding with coding of regions that are present throughout a whole video segment, e.g. sprites, background or mosaic
    • H04N19/40 Video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H04N19/42 Implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/436 Parallelised computational arrangements

Definitions

  • the present disclosure relates generally to the field of data transcoding. More particularly, the present disclosure is related, in one exemplary aspect, to apparatus and methods for lightweight data transcoding.
  • H.264/MPEG-4 AVC is a block-oriented, motion-compensation-based video compression standard, commonly used in, e.g., Blu-ray™ disc players, streaming Internet sources, web software, and various HDTV broadcasts over terrestrial (ATSC, ISDB-T, DVB-T or DVB-T2), cable (DVB-C), and satellite (DVB-S and DVB-S2) media.
  • H.264 is commonly used for lossy compression applications, and provides inter alia the benefit of good quality video at substantially reduced bitrate over prior codecs.
  • H.265 significantly increases the data compression ratio compared to H.264/MPEG-4 AVC at the same level of video quality.
  • HEVC may alternatively be used to provide substantially improved video quality at the same bit rate.
  • Various products are available which transcode data, including those which are able to transcode between any of H.261, H.262, H.264, and/or H.265 (or others).
  • Such products require specialized hardware, are CPU intensive, and/or are comparatively expensive.
  • Moreover, those which utilize software solutions for transcoding are slow, and cannot offer near-live or “on the fly” transcoding.
  • the present disclosure addresses the foregoing needs by disclosing, inter alia, apparatus and methods for lightweight data transcoding.
  • a method of transcoding media data is disclosed.
  • the media data is encoded according to a first format
  • the method includes: (i) performing, using a decoding apparatus, a partial decoding of a portion of the media data to produce intermediate data and undecoded data; (ii) performing at least one transcoding process on the intermediate data to produce transcoded data; and (iii) combining the transcoded data and the undecoded data into a data structure which can then be decoded and rendered by a decoding apparatus according to a second format.
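The three-step method above can be sketched conceptually as follows. This is a minimal illustration only; all class and function names are hypothetical stand-ins, not the patent's actual implementation or any real codec API, and the byte transformations merely mark where entropy decoding, retransformation, and repackaging would occur.

```python
# Conceptual sketch of the three-step lightweight transcoding pipeline:
# (i) partial decode, (ii) transcode the intermediate data, (iii) recombine.
# All names are hypothetical illustrations, not a real codec API.

from dataclasses import dataclass


@dataclass
class MediaData:
    decodable: bytes    # portion that will be partially decoded
    passthrough: bytes  # portion left undecoded and reused as-is


def partial_decode(media: MediaData):
    """Step (i): decode only the portion needed, leaving the rest untouched."""
    intermediate = media.decodable[::-1]  # stand-in for e.g. entropy decoding
    return intermediate, media.passthrough


def transcode(intermediate: bytes) -> bytes:
    """Step (ii): transform the intermediate data toward the second format."""
    return intermediate[::-1]  # stand-in for the actual retransformation


def repackage(transcoded: bytes, undecoded: bytes) -> bytes:
    """Step (iii): combine transcoded and undecoded data into one structure
    that a second-format decoder can consume."""
    return transcoded + undecoded


media = MediaData(decodable=b"headers+coeffs", passthrough=b"payload")
intermediate, undecoded = partial_decode(media)
output = repackage(transcode(intermediate), undecoded)
```

The key property the sketch preserves is that `passthrough` never round-trips through a full decode/encode cycle, which is what makes the approach "lightweight."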
  • a method of providing content compatible with a second codec from content encoded with a first codec includes: (i) decoding only a portion of the content encoded with the first codec to produce a decoded content portion and a plurality of undecoded portions; and (ii) processing at least part of the decoded content portion, and combining the processed at least part and the plurality of undecoded portions so as to produce the content compatible with the second codec.
  • an apparatus configured to decode content in a first format and encode said content in a second, different format in near-real time.
  • the apparatus includes: data processor apparatus; and storage apparatus in data communication with the data processor apparatus and having at least one computer program disposed thereon, the at least one program being configured to, when executed on the processor apparatus: decode only a portion of the content encoded with the first codec to produce a decoded content portion and a plurality of undecoded portions; process at least part of the decoded content portion to produce a processed portion; and combine the processed portion and the plurality of undecoded portions so as to produce the content compatible with the second codec.
  • a computer-readable storage apparatus, in one embodiment, includes a non-transitory storage medium with at least one program stored thereon.
  • the at least one program is configured to, when executed, decode only a portion of the content encoded with the first codec to produce a decoded content portion and a plurality of undecoded portions; process at least part of the decoded content portion to produce a processed portion; and combine the processed portion and the plurality of undecoded portions so as to produce the content compatible with the second codec.
  • a computer-readable apparatus comprising a storage medium is disclosed.
  • the storage medium is, in one embodiment, configured to store a plurality of data, the plurality of data comprising media data that has a portion that has been transcoded between a first and second encoding format, and a portion which has not been transcoded from the first format to the second format.
  • the plurality of data can be used by a processing apparatus in communication with the computer-readable apparatus to render the media data compliant with the second format.
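The stored data layout described above (one portion already transcoded to the second format, one still in the first) might be organized as follows. This is a hedged sketch with hypothetical names; the patent does not specify a concrete record structure.

```python
# Sketch of media stored as a mix of transcoded and untranscoded portions,
# plus the minimal metadata a processing apparatus would need to render it
# compliant with the second format. Names are illustrative assumptions.

from dataclasses import dataclass, field


@dataclass
class PartiallyTranscodedMedia:
    first_format: str    # e.g. "H.262"
    second_format: str   # e.g. "H.264"
    transcoded: bytes    # already converted to second_format
    untranscoded: bytes  # still carried in first_format
    offsets: list = field(default_factory=list)  # where each portion belongs


def render_compliant(rec: PartiallyTranscodedMedia) -> bytes:
    """A processing apparatus combines both portions so the result can be
    consumed as second-format media; untranscoded parts are mapped as-is."""
    return rec.transcoded + rec.untranscoded


rec = PartiallyTranscodedMedia("H.262", "H.264", b"\x01", b"\x02", [0, 1])
```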
  • a method of providing data encoded according to a first format using apparatus having a configuration not supporting such first format, but supporting a second format includes: processing a portion of data encoded in the second format relating to a plurality of corresponding features between the first format and the second format, the processing configured to encode the portion according to the first format; and combining the encoded portion and at least one other portion of the data encoded in the second format, the combined encoded portion and at least one other portion being decodable by an apparatus supporting the first format.
  • a lightweight transcoder apparatus is disclosed.
  • the apparatus is configured to decode content in a first format and encode the content in a second, different format, and the apparatus is not capable of decoding content rendered in the second format.
  • the apparatus includes data processor apparatus and storage apparatus in data communication with the data processor apparatus and having at least one computer program disposed thereon.
  • the at least one program is configured to, when executed on the processor apparatus: decode only a portion of the content encoded with the first codec to produce a decoded content portion and a plurality of undecoded portions; process at least part of the decoded content portion to produce a processed portion; and combine the processed portion and the plurality of undecoded portions so as to produce the content compatible with the second codec.
  • the at least one program is configured to encode only portions of the decoded content into the second format, such that the resultant media is compatible with the second format, but not fully functional with respect thereto (i.e., the resultant media is capable of being decoded and rendered by a device configured to utilize the second format, but the decoded and rendered media is not identical (e.g., is lesser in at least one quality or performance aspect) to a version of the media which was completely encoded using the second format).
  • FIG. 1 is a functional block diagram illustrating an exemplary hybrid fiber network configuration useful with various aspects of the present disclosure.
  • FIG. 1A is a functional block diagram illustrating one exemplary embodiment of a packetized content delivery network architecture useful with various aspects of the present disclosure.
  • FIG. 2 is a functional block diagram illustrating one exemplary embodiment of a network architecture for providing lightweight transcoding according to the present disclosure.
  • FIG. 3 is a diagram illustrating an exemplary H.261/H.262 to H.264 lightweight transcoding scheme according to the present disclosure.
  • FIG. 4 is a diagram illustrating an exemplary embodiment of the simplified approach for modifying H.262 picture blocks to H.264 blocks according to the present disclosure.
  • FIG. 5 is a diagram illustrating an exemplary H.261/H.262 to H.265 lightweight transcoding scheme according to the present disclosure.
  • FIG. 6 is a logical flow diagram illustrating an exemplary method for performing lightweight transcoding according to the present disclosure.
  • FIG. 6a is a logical flow diagram illustrating an exemplary method for providing lightweight transcoding according to the present disclosure.
  • FIG. 6b is a logical flow diagram illustrating an exemplary method for stream processing useful in lightweight transcoding according to the present disclosure.
  • FIG. 6c is a logical flow diagram illustrating an exemplary method for entropy decoding useful in lightweight transcoding according to the present disclosure.
  • FIG. 6d is a logical flow diagram illustrating an exemplary method for matrix retransformation useful in lightweight transcoding according to the present disclosure.
  • FIG. 6e is a logical flow diagram illustrating an exemplary method for repackaging useful in lightweight transcoding according to the present disclosure.
  • FIG. 7 is a functional block diagram illustrating an exemplary process for partial data decoding to a disk according to the present disclosure.
  • FIG. 8 is a functional block diagram illustrating an exemplary process for lightweight data transcoding for delivery to a rendering device according to the present disclosure.
  • FIG. 9 is a functional block diagram illustrating one embodiment of a lightweight transcoding apparatus according to the present disclosure.
  • the term “application” refers generally to a unit of executable software that implements a certain functionality or theme.
  • the themes of applications vary broadly across any number of disciplines and functions (such as on-demand content management, e-commerce transactions, brokerage transactions, home entertainment, calculator etc.), and one application may have more than one theme.
  • the unit of executable software generally runs in a predetermined environment; for example, the unit could comprise a downloadable Java Xlet™ that runs within the JavaTV™ environment.
  • codec refers to a video, audio, or other data coding and/or decoding algorithm, process or apparatus including, without limitation, those of the MPEG (e.g., MPEG-1, MPEG-2, MPEG-4/H.264, H.265, etc.), Real (RealVideo, etc.), AC-3 (audio), DiVX, XViD/ViDX, Windows Media Video (e.g., WMV 7, 8, 9, 10, or 11), ATI Video codec, or VC-1 (SMPTE standard 421M) families.
  • client device and “user device” include, but are not limited to, set top boxes (e.g., DSTBs), personal computers (PCs), and minicomputers, whether desktop, laptop, or otherwise, and mobile devices such as handheld computers, tablets, “phablets”, PDAs, personal media devices (PMDs), and smartphones.
  • As used herein, the term “computer program” or “software application” is meant to include any sequence or human or machine cognizable steps which perform a function.
  • Such program may be rendered in virtually any programming language or environment including, for example and without limitation, C/C++, Fortran, COBOL, PASCAL, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), and the like, as well as object-oriented environments such as the Common Object Request Broker Architecture (CORBA), JavaTM (including J2ME, Java Beans, etc.), Binary Runtime Environment (e.g., BREW), and the like.
  • Customer Premises Equipment (CPE) includes without limitation set-top boxes (e.g., DSTBs or IPTV devices), televisions, cable modems (CMs), embedded multimedia terminal adapters (eMTAs), whether stand-alone or integrated with other devices, Digital Video Recorders (DVRs), gateway storage devices (Furnace), and ITV Personal Computers.
  • display means any type of device adapted to display information, including without limitation CRTs, LCDs, TFTs, plasma displays, LEDs, OLEDs, incandescent and fluorescent devices. Display devices may also include less dynamic devices such as, for example, printers, e-ink devices, and the like.
  • Internet and “internet” are used interchangeably to refer to inter-networks including, without limitation, the Internet.
  • memory or “storage” includes any type of integrated circuit or other storage device adapted for storing digital data including, without limitation, ROM, PROM, EEPROM, DRAM, SDRAM, DDR/2 SDRAM, EDO/FPMS, RLDRAM, SRAM, “flash” memory (e.g., NAND/NOR), and PSRAM.
  • microprocessor and “digital processor” are meant generally to include all types of digital processing devices including, without limitation, digital signal processors (DSPs), reduced instruction set computers (RISC), general-purpose (CISC) processors, microprocessors, gate arrays (e.g., FPGAs), PLDs, reconfigurable compute fabrics (RCFs), array processors, and application-specific integrated circuits (ASICs).
  • MSO and “multiple systems operator” refer without limitation to a cable, satellite, or terrestrial network provider having infrastructure required to deliver services including programming and data over those mediums.
  • network and “bearer network” refer generally to any type of telecommunications or data network including, without limitation, hybrid fiber coax (HFC) networks, satellite networks, telco networks, and data networks (including MANs, WANs, LANs, WLANs, internets, and intranets).
  • Such networks or portions thereof may utilize any one or more different topologies (e.g., ring, bus, star, loop, etc.), transmission media (e.g., wired/RF cable, RF wireless, millimeter wave, optical, etc.) and/or communications or networking protocols (e.g., SONET, DOCSIS, IEEE Std. 802.3, ATM, X.25, Frame Relay, 3GPP, 3GPP2, LTE/LTE-A, WAP, SIP, UDP, FTP, RTP/RTCP, H.323, etc.).
  • the term “network interface” refers to any signal or data interface with a component or network including, without limitation, those of the Firewire (e.g., FW400, FW800, etc.), USB (e.g., USB2, USB 3.0), Ethernet (e.g., 10/100, 10/100/1000 (Gigabit Ethernet), 10-Gig-E, etc.), MoCA, Coaxsys (e.g., TVnetTM), radio frequency tuner (e.g., in-band or OOB, cable modem, etc.), Wi-Fi (e.g., 802.11a,b,g,n), WiMAX (802.16), PAN (802.15), cellular (e.g., LTE/LTE-A, 3GPP, 3GPP2, UMTS), or IrDA families.
  • server refers without limitation to any computerized component, system or entity regardless of form which is adapted to provide data, files, applications, content, media, or other services to one or more other devices or entities on a computer network.
  • user interface refers to, without limitation, any visual, graphical, tactile, audible, sensory, or other means of providing information to and/or receiving information from a user or other entity.
  • Wi-Fi refers to, without limitation, any of the variants of IEEE-Std. 802.11 or related standards including inter alia 802.11 a/b/g/n/v.
  • wireless means any wireless signal, data, communication, or other interface including without limitation Wi-Fi, Bluetooth, 3G (3GPP/3GPP2), HSDPA/HSUPA, TDMA, CDMA (e.g., IS-95A, WCDMA, etc.), FHSS, DSSS, GSM, PAN/802.15, WiMAX (802.16), 802.20, NFC (e.g., ISO 14443A/B), narrowband/FDMA, OFDM, PCS/DCS, LTE/LTE-A/TD-LTE, analog cellular, Zigbee, CDPD, satellite systems, millimeter wave or microwave systems, acoustic, and infrared (i.e., IrDA).
  • the present disclosure provides apparatus and methods for “lightweight” data transcoding.
  • a minimal function transcoder for the purposes of, e.g., supporting multiple devices in the home network that require varying video formats, resolutions, or bit-rates, is disclosed.
  • the transcoding functionality may for example be downloaded or otherwise provided (such as via provisioning at the time of install) to an existing device within the home network or at a network edge.
  • the transcoder uses an intermediate set of audio/video data resulting from a partial decode of an input stream (e.g., enough to obtain the required data to transform or rearrange the previously encoded information) that is temporarily stored until all transformation operations have completed.
  • the partially decoded data is re-encoded and output in any format, resolution, and/or bitrate desired.
  • premises networked devices are registered to the lightweight transcoder.
  • the lightweight transcoder may transcode content requested by the registered devices based on any of a number of different events/criteria, such as e.g., (i) upon detection that the registered device is “in use”; (ii) at the time of original content playback or broadcast; and/or (iii) at a time prior to a previously scheduled time of intended use.
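The three trigger criteria above can be sketched as a simple policy function. This is a minimal sketch; the `Trigger` and `should_transcode` names are illustrative assumptions, not part of the disclosure.

```python
from enum import Enum, auto

class Trigger(Enum):
    DEVICE_IN_USE = auto()    # criterion (i): registered device detected in use
    LIVE_BROADCAST = auto()   # criterion (ii): time of original playback/broadcast
    SCHEDULED = auto()        # criterion (iii): ahead of a scheduled time of use

def should_transcode(event, device_registered, now=0.0,
                     scheduled_at=None, lead_time=0.0):
    """Decide whether the lightweight transcoder should begin work."""
    if not device_registered:
        return False
    if event in (Trigger.DEVICE_IN_USE, Trigger.LIVE_BROADCAST):
        return True
    if event is Trigger.SCHEDULED and scheduled_at is not None:
        # start early enough (lead_time) to finish before the scheduled use
        return now >= scheduled_at - lead_time
    return False
```

Only registered devices trigger transcoding; the scheduled case starts once the current time enters the lead-time window before the intended use.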
  • the lightweight transcoder apparatus merely “ignores” some of the features of the more advanced/complex content formats.
  • the resultant output of the minimal or lightweight transcoder is a “just-in-time” or “near real-time” transcoded content for use in a premises network comprising non-legacy (such as IP-enabled) client devices with the more advanced codecs.
  • the exemplary embodiment of the disclosed lightweight transcoder apparatus utilizes extant software-based processing capability to “fit” data of a first format into a second format in a time-efficient manner (e.g., in near-real time). In doing so, the lightweight transcoder surrenders traditional goals of obtaining the best compression or highest quality output in an effort to simply create the desired format content stream with an acceptable level of quality/compression, and using non-application specific hardware (e.g., ASICs particularly configured to decode/transcode).
  • present disclosure may be readily adapted to other types of IP-based delivery (e.g., IP-based content multicasts, etc.) as well.
  • FIG. 1 illustrates a typical content delivery network configuration useful for delivery of encoded content according to the present disclosure.
  • the various components of the network 100 include: (i) one or more data and application origination points 102 ; (ii) one or more content sources 103 ; (iii) one or more application distribution servers 104 ; (iv) one or more VOD servers 105 ; and (v) customer premises equipment (CPE) 106 .
  • the distribution server(s) 104 , VOD servers 105 and CPE(s) 106 are connected via a bearer (e.g., HFC) network 101 .
  • a simple architecture comprising one of each of the aforementioned components 102 , 104 , 105 , 106 is shown in FIG. 1 for simplicity, although it will be recognized that comparable architectures with multiple origination points, distribution servers, VOD servers, and/or CPE devices (as well as different network topologies) may be utilized consistent with the disclosure.
  • the data/application origination point 102 comprises any medium that allows data and/or applications (such as a VOD-based or “Watch TV” application) to be transferred to a distribution server 104 .
  • This can include, for example, a third party data source, application vendor website, CD-ROM, external network interface, mass storage device (e.g., RAID system), etc.
  • Such transference may be automatic, initiated upon the occurrence of one or more specified events (such as the receipt of a request packet or ACK), performed manually, or accomplished in any number of other modes readily recognized by those of ordinary skill.
  • the application distribution server 104 comprises a computer system where such applications can enter the network system. Distribution servers are well known in the networking arts, and accordingly not described further herein.
  • the VOD server 105 comprises a computer system where on-demand content can be received from one or more of the aforementioned data sources 102 and enter the network system. These servers may generate the content locally, or alternatively act as a gateway or intermediary from a distant source.
  • the CPE 106 includes any equipment in the “customers' premises” (or other locations, whether local or remote to the distribution server 104 ) that can be accessed by a distribution server 104 .
  • Content (e.g., audio, video, data, files, etc.) is provided to the client devices 106 in a given data format (e.g., MPEG-2, MPEG-4, etc.).
  • the CPE 106 may use the out-of-band (OOB) or DOCSIS channels and associated protocols.
  • the OCAP 1.0, 2.0, 3.0 (and subsequent) specification provides for exemplary networking protocols both downstream and upstream, although the present disclosure is in no way limited to these approaches.
  • FIG. 1A illustrates one exemplary implementation of such a network, in the context of a 3GPP IMS (IP Multimedia Subsystem) network with common control plane and service delivery platform (SDP), as described in co-owned U.S. patent application Ser. No. 12/764,746 filed Apr.
  • a substantially session-based and packetized content delivery approach (e.g., using the well known Internet Protocol) which allows for temporal, device, and location flexibility in the delivery of the content, and transportability/migration of user sessions, as well as service/content personalization (e.g., on a per-session/user basis) and blending (integration) is provided.
  • This approach uses a common or unified delivery architecture in providing what were heretofore heterogeneous services supplied by substantially different, and often vendor-specific, networks.
  • the foregoing improved apparatus and methods provide for enhanced content access, reproduction, and distribution control (via e.g., a DRM-based approach and other security and content control measures), as well as quality-of-service (QoS) guarantees which maintain high media quality and user experience, especially when compared to prior art “Internet TV” paradigms.
  • the network comprises both “managed” and “unmanaged” (or off-network) services, so that a network operator can utilize both its own and external infrastructure to provide content delivery to its subscribers in various locations and use cases.
  • network services are sent “over the top” of other provider's infrastructure, thereby making the service network substantially network-agnostic.
  • a cooperative approach between providers is utilized, so that features or capabilities present in one provider's network (e.g., authentication of mobile devices) can be leveraged by another provider operating in cooperation therewith.
  • a network provides, inter alia, significant enhancements in terms of common control of different services, implementation and management of content delivery sessions according to unicast or multicast models, etc.; however, it is appreciated that the various features of the present disclosure are in no way limited to this or any of the other foregoing architectures.
  • FIG. 2 illustrates an exemplary embodiment of a network architecture 200 for providing lightweight transcoding according to the present disclosure.
  • the network 200 of FIG. 2 is utilized to receive content and transcode the content from the format it is received in, into a different format, based on e.g., the capabilities of the devices in the network 200 which will render the content.
  • the rendering device 204 capabilities may relate to, for example, video formats, codecs (e.g., H.264/H.265), resolutions, and/or available bit-rates for communications between the transcoding apparatus and the rendering device.
  • the exemplary illustrated network entities and apparatus are configured to operate within one or more of the above-described bearer networks of FIGS. 1-1A , although others may readily be used.
  • the network may be based on wireless and/or wireline networking technologies (e.g., Wi-Fi family 802.11, WiMAX 802.16, wired Ethernet standards (802.3), optical standards/paradigms such as FIOS, SONET, etc.).
  • the technologies forming the bearer networks may also range in scope from PAN (personal area networking) and "mesh" networking to nationwide or even global architectures. It will also be appreciated that bridges may be used to create a hybrid network environment using multiple ones of such technologies (e.g. cellular or Wi-Fi wireless/wired Ethernet hybrid).
  • the network 200 generally comprises a lightweight transcoder entity 202 which receives content from a content distribution or delivery network (such as the network disclosed in FIGS. 1-1A ) and which is in data communication with at least metadata storage 206 , video storage 208 , and temporary storage 210 .
  • the transcoder entity 202 is further in communication with one or more rendering devices 204 .
  • the transcoder 202 and/or storage devices may comprise premises network devices or may be located at a network edge or other location in communication with the customer's premises.
  • a user registers each of the user's rendering devices 204 to the transcoder 202. The user may do so by placing these in communication with the transcoder 202 and, via a series of message exchanges between the devices, establishing that the user of the rendering device 204 is a subscriber to the content delivery network and a user of the device 204.
  • the user may register more than one rendering device 204 ( FIG.
  • the transcoder 202 is further made aware of the capabilities of each of the rendering devices 204 via generation of a device profile for each rendering device and/or a home network profile for each subscriber or user.
  • the transcoder 202 comprises a network edge device (i.e., is not located at the consumer's premises)
  • the transcoder 202 is further configured to associate each rendering device with a particular one of the users/subscribers which may also register their devices to the transcoder 202 .
  • the rendering devices 204 comprise any device capable of receiving, decoding and displaying (or communicating decoded data to a device configured to display) audio/video content.
  • Exemplary rendering devices include IP-enabled devices such as smart phones, tablet computers, hand held computers, laptop computers, personal computers, smart televisions, streaming media devices, etc., as well as non-IP enabled set top boxes, etc.
  • the present disclosure is intended to provide functionality irrespective of the specific formats with which the rendering devices are compatible.
  • the transcoder 202 (also referred to herein as the “lightweight transcoder”) is configured to receive content delivered from the content delivery network.
  • content is, in one embodiment, delivered in H.261 or H.262 format; the content may be either live or previously recorded and may be delivered as a broadcast, multicast, or unicast.
  • the rendering devices 204 within the home network require, in one embodiment, H.264 video format. It is appreciated, however, that the herein described approach may be utilized for conversion between any data formats; H.262 to H.264 conversion being merely exemplary of the general process.
  • the transcoding process occurs in either hardware or software of the transcoder device 202 .
  • the transcoder device 202 may comprise a premises apparatus (such as a set top box, gateway device, or other CPE), or a network or network edge device (e.g., a server processor in a network operations center).
  • since the transcoding process discussed herein is "lightweight", the process may comprise a downloadable software upgrade provided via another network entity and may utilize substantially extant device hardware.
  • MPEG2 video content arrives on a QAM or Ethernet port, and is transcoded to MPEG4 over HTTP Live Streaming (HLS) to an Apple iPad® on the same home network as the transcoding device 202 .
  • the lightweight transcoder 202 receives data and, irrespective of the input format, metadata associated to the received data is stored at the metadata storage 206 entity. If the data is in an appropriate format for the home network (e.g., H.264), a copy of the data is immediately stored at the video storage apparatus 208 . If the received data is not in an appropriate format for the home network, the data input is partially decoded, then the partially decoded discrete cosine transforms (DCTs) which constitute the data are either stored onto a disk at the temporary storage entity 210 , or are immediately re-mapped to DCTs of a particular format.
  • the format selected for re-encoding may be a format previously selected by the requesting user or may be selected based on the device and/or user profile (e.g., based on the compatibility of the requesting rendering device 204 ).
  • the re-mapped DCT may be recorded to temporary storage 210 or may be immediately repackaged into the new format's packaging. Once repackaged, the data is recorded in its new format to storage (at the storage apparatus 208 ) for later consumption, or is sent to a rendering device 204 for audio/video display via a backend interface of the transcoder 202 (e.g., MoCA, Ethernet, WiFi, etc.) based on a request for the content being received from the rendering device 204 .
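The ingest flow described above (always store metadata, pass matching formats straight through, partially decode and remap everything else) can be sketched as follows. This is a hedged illustration: `partial_decode` and `remap_dcts` are placeholder stubs standing in for the real codec stages, and all names are assumptions.

```python
def partial_decode(stream):
    # placeholder: a real implementation recovers the DCT blocks only,
    # without fully reconstructing pixels
    return {"dcts": stream["payload"]}

def remap_dcts(dcts, target_codec):
    # placeholder: fit the recovered coefficients into the target format
    return {"codec": target_codec, "dcts": dcts["dcts"]}

def ingest(stream, target_codec, metadata_store, video_store):
    metadata_store.append(stream["metadata"])   # metadata is always kept
    if stream["codec"] == target_codec:         # already suitable for the
        video_store.append(stream)              # home network: store as-is
        return "stored"
    remapped = remap_dcts(partial_decode(stream), target_codec)
    video_store.append({"metadata": stream["metadata"], **remapped})
    return "transcoded"
```

The dispatch mirrors the two branches in the text: a format match short-circuits to storage, while a mismatch goes through the partial-decode/remap path before being stored for later consumption or delivery.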
  • the intermediate or temporary storage entity 210 may be of sufficient size to accommodate data storage during the transformation process.
  • a storage entity large enough to enable time-shifting for twice the amount of time required for all transformation operations for a given device to be completed is utilized.
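The sizing rule above (buffer twice the transformation window) reduces to a simple capacity estimate. A minimal sketch, assuming the buffer must hold the input stream's data for the doubled window; the function name and parameters are illustrative.

```python
def temp_storage_bytes(input_bitrate_bps, transform_seconds, safety_factor=2):
    # bytes accumulated during the transformation window, doubled per the
    # "twice the amount of time" guideline in the text
    return int(input_bitrate_bps / 8 * transform_seconds * safety_factor)
```

For example, an 8 Mbit/s H.262 feed with a 10-second transformation window would call for roughly 20 MB of temporary storage under these assumptions.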
  • a typical premises network may utilize up to Gigabit-speed Ethernet services.
  • transcoding and delivery of the transcoded content from an in home transcoder 202 to a rendering device 204 in the premises network may approximate real-time.
  • the present disclosure provides a mechanism for transcoding content at a rate which is 1.5-3 times faster than traditional transcoding rates.
  • the present mechanism accomplishes this goal by circumventing various traditional transcoding steps to arrive at a lower quality, less efficiently transcoded content. For example, when converting H.262 to H.264, it may be ignored that the H.264 format is capable of having multiple reference frames, spatial prediction, and varying block sizes.
  • the present disclosure takes advantage of secure read/write functions available within the operating systems of existing premises devices, including a premises located transcoder 202 and/or the rendering devices 204 .
  • a network operator may define the read and/or write access of the various devices (transcoder 202 , rendering device 204 , etc.) with respect to a given content or content type, or generally with respect to all content.
  • Conditional Access is controlled by a hardware device called a cable card or other secure micro device.
  • the secure micro device stores the list of entitlements on behalf of the subscriber. These entitlements control access to premium channels, pay-per-view services, and system resources such as the hard disk drive used for digital video recording.
  • the hard disk drive is used to store partially decoded sections for transcoding and/or remapping as discussed herein. This temporary storage must be conditionally accessed to be in full support of copy protection within the device.
  • the entitlement agent within the CPE thus verifies the ability to use the disk and provide the open/read/write/close method capability. Data written and subsequently read will have been encoded and decoded via these write/read methods.
  • other means for controlling access may be utilized such as, e.g., Digital Rights Management (DRM).
  • the lightweight transcoder 202 may transcode content requested by the registered devices in at least one of the following instances: (i) upon detection that the registered device is “in use”; (ii) at the time of original content playback or broadcast; and/or (iii) at a time prior to a previously scheduled time of intended use.
  • the registered user devices 204 which are capable of rendering content are configured to automatically signal to the transcoder 202 when they have been powered on and/or have entered a home network. Any content requested by these devices is then automatically transcoded for delivery to the devices via the premises network.
  • the transcoder 202 may periodically send a heartbeat message to which the rendering devices 204 in the network respond. When a new device enters the network and/or is powered on, the transcoder 202 is made aware of its presence and may begin transcoding content which is requested to be received thereat.
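The heartbeat discovery described above can be sketched as a minimal device registry: a device that answers the heartbeat (or newly powers on) is registered along with its capability profile. The class and method names here are assumptions for illustration.

```python
class TranscoderRegistry:
    def __init__(self):
        self.devices = {}              # device_id -> capability profile

    def on_heartbeat_reply(self, device_id, profile):
        """Register (or refresh) a device that answered the heartbeat."""
        is_new = device_id not in self.devices
        self.devices[device_id] = profile
        return is_new                  # True: transcoder newly aware of device
```

A `True` return would be the point at which the transcoder begins preparing transcoded content for the newly present device.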
  • the transcoder will select particular content to be automatically transcoded at the time it is broadcast (irrespective of a request) and/or at the time it is requested to be rendered.
  • Requests for particular content may be received simultaneous to a broadcast of the content, or after a broadcast of the content (in this instance the content is delivered from video storage 208 or temporary storage 210 ).
  • the content selected to be automatically transcoded at the time it is broadcast according to this embodiment may comprise content which is determined to have a high viewership rate among most subscribers, content which is previously identified by the subscriber to be of particular interest, content which is concurrently being requested or recorded at another device associated to the subscriber, and/or content which is identified as being of particular interest to a subscriber based on a profile associated thereto.
  • the requesting rendering device 204 may pre-establish a time at which content is intended to be displayed.
  • the pre-established time may be as early as a portion of a second past its live broadcast time.
  • the subscriber merely schedules a particular content in advance via a scheduler function of the transcoder apparatus 202 .
  • the scheduler enables the subscriber to identify the requested content as well as a time for delivery thereof.
  • the transcoder 202 uses this information to arrange resources to be available to transcode the particular content in advance of the scheduled time for delivery thereof.
  • the transcoder 202 may further use what is known about the time needed to transcode the entirety of the content to determine an adequate time to begin the transcoding process so as not to interrupt delivery thereof to the subscriber.
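Working backward from the scheduled delivery time, the transcoder can derive a start time from an estimate of its own speed. A sketch under assumed parameters: `speed_factor` reflects the 1.5-3x speedup the text cites for the lightweight path, and `margin_seconds` is an illustrative safety pad.

```python
def transcode_start_time(delivery_epoch, content_seconds,
                         speed_factor=2.0, margin_seconds=60.0):
    # estimated wall-clock time to transcode the whole item, given that the
    # lightweight path runs speed_factor times faster than real time
    est_duration = content_seconds / speed_factor
    # begin early enough that transcoding completes before delivery
    return delivery_epoch - est_duration - margin_seconds
```

For a one-hour program delivered at epoch t with a 2x speedup, transcoding would begin about 30 minutes (plus margin) before t.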
  • FIGS. 3-5 illustrate exemplary lightweight transcoding according to the present disclosure.
  • the illustrated embodiments are exemplary of the general principles of the disclosure and are in no way intended to limit the scope thereof.
  • the exemplary transcoding schemes of FIGS. 3-5 are, in one embodiment, performed at a processor associated to the lightweight transcoder apparatus 202 .
  • software for performing the herein described transcoding may be downloaded or otherwise provided to the transcoding device 202 thereby taking advantage of the device's indigenous hardware capabilities.
  • each data element in a first format is re-used in generating the data in the second format.
  • a loss of some frames may be tolerable given the nature of the present disclosure to forego certain quality requirements in an effort to ensure overall readability of the content in the transcoded format.
  • the data elements are re-used by repackaging them from a first encoding standard object to a standard object of the second encoding scheme.
  • removing a header portion of the data and replacing it with a header particular to the desired codec may, in many instances, be sufficient to perform the lightweight transcoding discussed herein.
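That header-swap idea can be shown at the byte level. This is a toy sketch of the concept only, not a real bitstream operation; actual codec headers are bit-packed structures, and the function name is an assumption.

```python
def swap_header(packet: bytes, old_header_len: int, new_header: bytes) -> bytes:
    # drop the source codec's header and prepend the target codec's header,
    # leaving the previously encoded payload untouched
    return new_header + packet[old_header_len:]
```

The payload bytes carry through unchanged, which is the essence of the lightweight approach: re-label rather than re-encode wherever possible.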
  • the present disclosure purposely does not take advantage of some of the advancements that are available to “higher” codec content formats so as to arrive at a transcoded content version more quickly than would be ordinarily obtainable.
  • various ones of these advancements may be utilized during the lightweight transcoding process to address specific desired results, such as e.g., taking advantage of a higher codec's multilevel capabilities to arrive at a transcoded content which is smaller in size (thus easier to store) than would be obtained without the multilevel transcoding.
  • FIG. 3 illustrates a high-level diagram of one exemplary H.261/H.262 to H.264 lightweight transcoding scheme.
  • the lightweight transcoder 202 repackages each frame in H.261/H.262 to a single sequence, single object, single layer video object in H.264.
  • although H.264 is configured to utilize a more complex video object, the mere categorization from frames in H.261/H.262 to video objects in H.264 is sufficient to enable the frames to be rendered by an H.264 device.
  • Each picture in H.261/H.262 is repackaged into the video object plane (VOP) background (i.e., layer 0). Given that there is no additional repackaging required for utilizing layer 0 in H.264, using this layer eliminates any prediction between planes.
  • a group of pictures (GOP) in H.261/H.262 is repackaged as a group of video objects (GOV) in H.264.
  • the GOV in the H.264 stream is substantially similar to a GOP in H.262 in that it holds the frame sequence (e.g. IBBPBBPBBP), the difference being the sequence describes VOPs rather than frames.
  • the I, B, and P frames are simply set to I, B, and P VOPs (within VOP layer 0).
  • the H.261/H.262 data is assigned a rectangle. The H.264 rendering device will then decode the entire rectangle to obtain the data.
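The frame-to-VOP mapping above reduces to a type-preserving relabel into layer 0. A minimal sketch of that relabeling; the dictionary and function names are illustrative.

```python
FRAME_TO_VOP = {"I": "I-VOP", "P": "P-VOP", "B": "B-VOP"}

def repackage_gop(frame_sequence):
    # e.g. "IBBPBBPBBP": each H.261/H.262 frame becomes the same-type VOP
    # in the background layer (layer 0), with no inter-plane prediction
    return [{"vop": FRAME_TO_VOP[f], "layer": 0} for f in frame_sequence]
```

The GOV produced this way preserves the original frame ordering exactly, which is why the H.264 decoder can consume it without the transcoder computing any new predictions.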
  • FIG. 4 illustrates an exemplary embodiment of the simplified approach for modifying H.262 picture blocks to H.264 blocks according to the present disclosure. It is appreciated that H.262 uses a fixed 16×16 block for luma DCTs, an 8×8 block for chroma DCTs, and a 16×16 block for motion estimation.
  • H.264 offers more coding options by supporting variable block size prediction for inter as well as intra block coding.
  • the intra prediction modes can use 16×16 or 4×4 block sizes (an 8×8 block size can also be used optionally).
  • the DCT blocks recovered from the MPEG-2 partial decoding stage are used to estimate the prediction modes of DCT blocks in H.264.
  • the header bits for 'frame_mbs_only_flag' and 'direct_8x8_inference_flag' are set to 1.
  • H.264 provides for two types of entropy encoding, context-adaptive binary arithmetic coding (CABAC) and context-adaptive variable length coding (CAVLC).
  • Translation is accomplished, in one embodiment, by decoding the VLC in H.262 to obtain the DCT coefficients to be used in the (re)transformation activity while moving to either H.264 or H.265 output. This activity is followed by re-encoding to either CAVLC (in the case of H.264) or CABAC (for H.265).
  • the slice start structure of H.261/H.262 is repackaged to fit the key frame marker structure of H.264.
  • the H.264 slice type header field is set to 7 (I-VOP) for each H.262 I-frame processed (this is the key frame marker).
  • the zig-zag mode in H.261/H.262 can be forced to H.264 Mode 3 using a diagonal, down, then left pattern. This may be accomplished by rewriting the bits of the zig-zag mode. In one embodiment, this is accomplished by setting the H.264 slice entropy coding mode header field to three (diagonal down left) for each H.262 slice processed.
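Combining the two header rules above (key-frame marker for I-frames, fixed diagonal-down-left scan mode), the per-slice rewrite can be sketched as follows. The field names mirror the text and are treated as illustrative, not as the exact H.264 syntax element names.

```python
def rewrite_slice_header(h262_slice):
    hdr = dict(h262_slice)               # copy, leaving the input untouched
    if hdr.get("picture_type") == "I":
        hdr["slice_type"] = 7            # I-VOP: the key-frame marker per the text
    hdr["entropy_scan_mode"] = 3         # mode 3: diagonal, down, then left
    return hdr
```

Every slice gets the fixed scan mode, while only slices from H.262 I-frames receive the key-frame marker.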
  • In FIG. 5 , a high-level diagram of one exemplary H.261/H.262 to H.265 lightweight transcoding scheme is illustrated.
  • the lightweight transcoder 202 repackages each frame in H.261/H.262 to a single sequence, single object, single layer video object in H.265.
  • the H.265 standard utilizes a more complex video object; the present disclosure provides a mechanism to enable the frames to be rendered by an H.265 device without taking advantage of the specific complexities of H.265.
  • Each picture in H.261/H.262 is repackaged into the video object plane (VOP) via its plane 0 or background plane (similar to that discussed above for H.264 repackaging).
  • a group of pictures (GOP) in H.261/H.262 is repackaged as a group of video objects (GOV) in H.265.
  • the GOV in the H.265 stream is essentially the same as a GOP in H.262 in that it holds the frame sequence (e.g. IBBPBBPBBP), the difference is that the sequence describes VOPs rather than frames.
  • the present disclosure does not create different VOPs from the H.262 stream when repackaging as H.265, instead the I, B, and P frames are set to I, B, and P VOPs (within VOP layer 0).
  • the H.261/H.262 data is assigned a rectangle rather than taking advantage of the H.265 ability to define various shapes.
  • the H.265 rendering device simply decodes the entire rectangle to render the data.
  • the 16×16 blocks utilized in H.261/H.262 are, in one embodiment, forced into fixed-size Transform Units within simple Coding Tree Units (CTUs) in the H.265 standard.
  • the slice start structure of H.261/H.262 is repackaged to fit the key frame marker or tile marker structure of H.265. Since the NAL header fields are backward compatible to H.264 the slice type is set to 7 (I-VOP) for each H.262 I-frame processed (this is the key frame marker).
  • the zig-zag mode in H.261/H.262 can be forced to H.265 Mode 3 using a diagonal, down, then left pattern. This may be accomplished by rewriting the bits of the zig-zag mode.
  • the slice entropy coding mode header field will be set to 3 (diagonal down left) for each H.262 slice processed.
  • the foregoing lightweight transcoding schemes of FIGS. 3-5 improve the speed of encoding such that content may be transcoded in near real time. That is, although there is some delay associated with the in-bound quality and available bandwidth, there is generally an undetectable delay associated to the lightweight transcoding process itself.
  • a delay associated with the transcoding process may be detectable in the instance the transcoded section comprises e.g., a high bandwidth scene or portion of content.
  • the delay period associated with the in-bound quality, bandwidth availability, and/or bitrate requirements of the transcoded content itself may be accounted for in advance.
  • delivery of the transcoded content stream may be delayed by an amount of time equal to an amount of time necessary to account for the so-called "worst case scenario", i.e., an instance of highest delay due to one or more of in-bound quality, bandwidth availability, bitrate requirements of the transcoded content itself, and/or other factors affecting transcoding speed.
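The worst-case buffering rule reduces to delaying playback by the largest per-segment overrun of transcode time over playback time. A sketch under assumed inputs (per-segment transcode times from a pre-scan, and a fixed segment duration):

```python
def startup_delay(segment_transcode_seconds, segment_duration_seconds):
    # delay delivery by the worst case in which transcoding a segment takes
    # longer than playing it back, so the stream never stalls mid-play
    worst = max(segment_transcode_seconds)
    return max(0.0, worst - segment_duration_seconds)
```

If every segment transcodes faster than it plays, no startup delay is needed; otherwise the delay equals the single worst overrun.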
  • an upstream network entity or the transcoder 202 itself may be configured to pre-scan content to determine portions which have high bitrate requirements. These portions may be utilized to determine an amount of delay (as discussed above) and/or for pre-processing and mapping.
  • a network entity (or the transcoder 202 ) may review selected content to determine high bandwidth portions; instructions or a map are then given to the transcoding process indicating a time at which it is determined the delay would not be detectable by a subscriber and/or the rendering device during streaming of the transcoded content.
  • the method 600 generally comprises receiving a content stream (step 602 ).
  • the content stream may be received at a lightweight transcoder 202 which may be located at a user premises or elsewhere in the network, including e.g., the network edge.
  • metadata associated with the received content is stored at the metadata storage entity 206 .
  • the transcoder 202 determines whether the received content is in an appropriate format based on what is known about the subscriber network devices registered to the transcoder 202 .
  • the transcoder 202 may make this decision based on e.g., the capabilities of a rendering device 204 which requested the content and/or other devices which are known to be in the network (i.e., other registered devices).
  • the transcoder 202 may be given a pre-configured set of rules for transcoding either entered by the subscriber or by a network operator. For example, it may be specified that all content which is to be stored at the video storage 208 be in a specific format (e.g., H.264). In another example, it may be that only content for which a current request from a rendering device has been received is to be transcoded, while all other content is stored as is in the video storage 208 .
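The two example rules above (a fixed storage format; transcode only on request) can be captured in a small policy check. The rule keys are hypothetical names chosen for illustration.

```python
def needs_transcode(content_codec, rules, requested=False):
    if rules.get("only_on_request") and not requested:
        return False                      # store as-is until a device asks
    target = rules.get("store_format")    # e.g. "H.264" for everything stored
    return target is not None and content_codec != target
```

The request-gating rule is checked first, so un-requested content bypasses transcoding even when its format differs from the storage format.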
  • the content is placed in video storage 208 .
  • the content is partially decoded via stream processing (step 610 ), entropy decoding (step 612 ), and matrix retransformation (step 614 ), then repackaged (step 616 ) and placed in storage 208 .
  • the stream processing (step 610 ), entropy decoding (step 612 ), matrix retransformation (step 614 ), and repackaging (step 616 ) will be discussed in greater detail below with respect to FIGS. 6 a - 6 e.
  • an intermediary device may be provided either at the network edge or within the user's premises which initially receives and evaluates the content stream.
  • a premises gateway apparatus may be disposed between the network and the premises transcoder 202 .
  • the gateway or other intermediary entity which causes the metadata relating to the received content to be stored (step 604 ), determines whether the received content is in an appropriate format (step 606 ) and directs the content to video storage 208 or to be partially decoded (steps 610 - 616 ).
  • content is held in temporary storage 210 prior to being transcoded to one or more new formats.
  • the formats into which content is to be repackaged using the lightweight repackaging solutions discussed herein are determined based on e.g., the capabilities of a requesting device, the capabilities of all of the devices associated or registered to the subscriber, and/or one or more user or network-established rules for transcoding. Accordingly, particular content may be transcoded into more than one new format to accommodate the capabilities of all of the devices within the network. Alternatively, a single format may be selected for use within the premises network, and the particular content is repackaged to only that format. Exemplary repackaging techniques which may be utilized to transform from H.261/H.262 to H.264 or H.265 are discussed elsewhere herein and may be used with equal success in accordance with the method of FIG. 6 .
  • the one or more transcoded content versions are then placed in video storage 208 alongside the content which was received already in the appropriate format (discussed above). In this manner, the system creates a video storage 208 having only content which can be delivered to requesting devices. Stated differently, all content which is received in an inappropriate format is only temporarily stored then saved to more permanent storage upon transcoding thereof.
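The rule-based format selection described above can be sketched as follows. This is a minimal illustration only; the device names, capability sets, codec preference order, and the single-format policy flag are assumptions not taken from the disclosure.

```python
# Minimal sketch of rule-based target-format selection for premises
# transcoding. Device ids and capability sets are hypothetical.

def select_target_formats(device_capabilities, single_format=None):
    """Return the set of formats content must be repackaged into.

    device_capabilities: dict mapping device id -> set of supported codecs.
    single_format: if set, a user/network rule forcing one premises-wide codec.
    """
    if single_format is not None:
        return {single_format}
    targets = set()
    for caps in device_capabilities.values():
        # Prefer the most advanced codec each registered device supports.
        for codec in ("H.265", "H.264", "H.262", "H.261"):
            if codec in caps:
                targets.add(codec)
                break
    return targets

devices = {
    "tablet": {"H.264", "H.265"},
    "legacy_stb": {"H.262"},
}
print(select_target_formats(devices))            # {'H.265', 'H.262'}
print(select_target_formats(devices, "H.264"))   # {'H.264'}
```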
  • at step 618 , content which was placed in the video storage 208 is delivered to a rendering device 204 .
  • the delivery may occur at a time of request thereof by the rendering device 204 or may be pre-scheduled by the rendering device 204 (or other device on behalf of the rendering device).
  • the disclosed method 600 may be performed on live broadcast content which is streamed to the transcoder 202 for immediate repackaging and delivery of the content in near real-time.
  • the method generally comprises receiving an H.262 input at step 621 .
  • the input signal may comprise an H.261 input in another alternative embodiment.
  • the input stream is first processed, including, e.g., dequantization (step 622 ), such that the nonlinear signals are reconstructed. This may occur using, e.g., smooth and/or step signal reconstruction.
  • Alternative mechanisms for dequantization which are well known in the art may also be utilized.
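One well-known mechanism consistent with the dequantization of step 622 is rescaling the quantized DCT levels by the quantizer step sizes. The sketch below is illustrative only; the flat quantization matrix and scale value are placeholders, not the actual H.262 quantization tables.

```python
# Hypothetical sketch of dequantization (step 622): quantized DCT
# levels are rescaled by the quantizer step sizes so the signal is
# reconstructed. Matrix and scale values are illustrative only.

def dequantize(quantized, quant_matrix, quantizer_scale):
    """Reconstruct DCT coefficients from quantized levels."""
    return [
        [level * q * quantizer_scale for level, q in zip(row, qrow)]
        for row, qrow in zip(quantized, quant_matrix)
    ]

levels = [[4, -1], [0, 2]]
qmat = [[16, 16], [16, 16]]
print(dequantize(levels, qmat, 2))  # [[128, -32], [0, 64]]
```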
  • Entropy decoding is applied to the dequantized stream (step 612 ). As will be discussed in further detail elsewhere herein (see e.g., FIG. 6 c and discussion relating thereto), entropy decoding may include translation to obtain DCT coefficients which are later used for re-encoding according to either CAVLC or CABAC.
  • at step 624 it is determined whether the content is to be transcoded (via the lightweight transcoder) in so-called near real time. In the instance that the content is not required to be transcoded immediately (i.e., transcoding is deferred), it is placed in storage at step 626 .
  • the storage entity used for deferred transcoding may comprise the temporary storage 210 , video storage 208 , and/or another storage entity (not shown). Content which is to be transcoded in near real time is placed in temporary storage 210 .
  • a profile is selected (per step 627 ) to correspond to the appropriate device and/or user.
  • Profile A 627 a, Profile B 627 b, through Profile N 627 n may be selected.
  • a quantization remapping is performed (step 628 ) to process the signal in preparation for retransformation (step 614 ), which will be discussed in further detail below.
  • repackaging of the stream is performed which may include adding new motion vectors 630 and encoding new entropy values 631 to create an H.264 or H.265 output at step 632 .
  • FIG. 6 b illustrates one exemplary method for stream processing 610 according to the present disclosure.
  • an H.262 (or H.261) stream is input from temporary storage 210 at step 621 .
  • the stream assembler receives the input at step 641 and determines whether a GOP header is present (step 642 ) and if so generates a GOV header therefrom (step 643 a ).
  • After the GOP header has been removed it is determined whether a picture header is present (step 644 ) and if so a VOP header is created from the picture header information (step 643 b ).
  • After the picture header has been removed it is determined whether a slice header is present (step 646 ) and if so the slice header is adapted (step 643 c ).
  • the new headers 643 a, 643 b, and 643 c are then stored in temporary storage 210 and are utilized in repackaging (discussed below).
  • the header-less data is processed using an MB Data processor 648 . It is determined at step 649 whether MB data processing is completed and if not, the process continues again at the stream assembler (step 641 ). When the MB data processing is complete, the processed data is placed in temporary storage 210 and the process proceeds to the repackager 203 for entropy decoding 612 (as discussed below).
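The stream-assembler header pass (steps 641-649) can be sketched as a loop that translates each recognized header into its counterpart for the output container. The start-code labels and the GOV/VOP tuple shapes below are simplified placeholders, not the actual H.262/H.264 bitstream syntax.

```python
# Sketch of the stream-assembler header pass (steps 641-649): each
# header found in the input is translated into its output counterpart.
# Labels and payloads are simplified placeholders, not real syntax.

GOP_START, PICTURE_START, SLICE_START = "GOP", "PIC", "SLICE"

def translate_headers(units):
    """units: list of (header_type, payload) tuples from the parser."""
    new_headers, mb_data = [], []
    for kind, payload in units:
        if kind == GOP_START:
            new_headers.append(("GOV", payload))            # step 643a
        elif kind == PICTURE_START:
            new_headers.append(("VOP", payload))            # step 643b
        elif kind == SLICE_START:
            new_headers.append(("SLICE_ADAPTED", payload))  # step 643c
        else:
            mb_data.append(payload)                         # macroblock data
    return new_headers, mb_data

units = [(GOP_START, b"g"), (PICTURE_START, b"p"), ("MB", b"m")]
headers, mb = translate_headers(units)
print(headers)  # [('GOV', b'g'), ('VOP', b'p')]
print(mb)       # [b'm']
```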
  • H.262 (or H.261) entropy values are obtained (such as from the data streams held in temporary storage 210 ).
  • the entropy values are decoded at step 652 using Huffman decoding, which is well known in the art, and DCT coefficients are obtained (step 653 ).
  • the DCT coefficients are then transformed (step 654 ) to create new coefficients (step 655 ).
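The Huffman decoding of step 652 can be illustrated with a toy variable-length-code table. The two-symbol table below is an assumption for illustration only; the actual H.262 VLC tables are far larger and decode (run, level) pairs rather than bare levels.

```python
# Toy sketch of entropy decoding (step 612): variable length codes are
# Huffman-decoded back into coefficient levels (step 653). The code
# table is illustrative, not an actual H.262 VLC table.

TOY_TABLE = {"11": 1, "011": 2, "0101": -1}  # bitstring -> level

def huffman_decode(bits, table=TOY_TABLE):
    symbols, code = [], ""
    for b in bits:
        code += b
        if code in table:        # prefix-free table: first match wins
            symbols.append(table[code])
            code = ""
    return symbols

print(huffman_decode("110110101"))  # [1, 2, -1]
```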
  • FIG. 6 d illustrates one exemplary method for matrix retransformation 614 .
  • a DCT cosine matrix is obtained by the MB data processor 648 from the streams in temporary storage 210 .
  • a transformation is applied at step 663 either from a selected profile 627 or from among one of a plurality of pre-determined transforms 664 .
  • a new cosine matrix is thereby generated (step 665 ) and placed in temporary storage 210 .
  • the new cosine matrix is utilized in repackaging (discussed below).
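The matrix retransformation of step 614 can be sketched as applying a profile-selected linear transform to the recovered DCT coefficient matrix to generate the new cosine matrix. The pass-through and rescale "profiles" below are placeholders; the actual remapping matrices are not specified here.

```python
# Sketch of matrix retransformation (step 614): the DCT matrix from
# the H.262 stream is remapped by a profile-selected transform (663)
# to produce a new cosine matrix (665). Profiles here are placeholders.

def matmul(a, b):
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

PROFILES = {
    "A": [[1, 0], [0, 1]],   # pass-through
    "B": [[2, 0], [0, 2]],   # uniform rescale
}

def retransform(dct_matrix, profile="A"):
    return matmul(PROFILES[profile], dct_matrix)

dct = [[8, 3], [1, 0]]
print(retransform(dct, "B"))  # [[16, 6], [2, 0]]
```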
  • FIG. 6 e illustrates the final repackaging process 616 associated with the lightweight transcoding discussed herein.
  • the new headers 643 a, 643 b, and 643 c, new entropy values 659 , and new matrices 665 are utilized such that data synch timestamps are arranged to match those of the original H.262 stream input (step 671 ).
  • network abstraction layer (NAL) packets are created in accordance with H.264 coding standards.
  • a new H.264 stream with the desired profile is output. It is appreciated, however, that similar repackaging techniques may be utilized to generate an H.265 stream output as well.
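The NAL packetization of the repackaging step can be sketched as prefixing each payload with the H.264 4-byte start code and a NAL header byte. The payload bytes are hypothetical, and a real packager would set forbidden_zero_bit, nal_ref_idc, and nal_unit_type per the H.264 standard rather than reuse one fixed header byte.

```python
# Sketch of repackaging into NAL units: each payload is wrapped behind
# the H.264 4-byte start code plus a NAL header byte. Payloads and the
# fixed header byte are illustrative only.

START_CODE = b"\x00\x00\x00\x01"

def packetize(payloads, nal_header=0x65):  # 0x65: coded IDR slice
    stream = b""
    for payload in payloads:
        stream += START_CODE + bytes([nal_header]) + payload
    return stream

out = packetize([b"\xaa", b"\xbb"])
print(out.hex())  # 0000000165aa0000000165bb
```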
  • FIG. 7 illustrates another exemplary process 700 for the partial data decoding discussed at step 610 of FIG. 6 .
  • FIG. 8 illustrates another exemplary process 800 for the repackaging of data discussed at step 616 of FIG. 6 and delivery of repackaged content to a rendering device discussed at step 618 of FIG. 6 .
  • while the illustrated embodiments specifically discuss H.262 to H.264 transcoding and delivery of H.264 content, the apparatus and processes disclosed herein are equally applicable to transcoding from any of the given formats to another one of the given formats, the foregoing being exemplary of the overall concepts disclosed herein.
  • the partial decode occurs when a transport stream is received within a premises network. Metadata relating to the stream is stored at metadata storage 206 per 701 . Per 703 , the stream is passed to an entity for determining whether it is in an appropriate format (in the given example, H.264 format). As noted above, the entity charged with this evaluation may comprise a gateway entity within the premises, the lightweight transcoder (whether in the premises or in the network), or other network entity.
  • variable length decoder 702 is, in one embodiment, a software application run on the lightweight transcoder 202 . Alternatively, the variable length decoder 702 may be run on another device within the premises (e.g., the gateway apparatus, not shown) or at the network edge.
  • the variable length decoder 702 decompresses the received content into an intermediate format represented by the data obtained from the decompression techniques 709 .
  • DCT coefficients for I-frames, B and P-frames are derived to arrive at the complete set of coefficients for those respective frames.
  • an inverse DCT algorithm is, in one embodiment, specifically not utilized so as to conserve processing resources. That end result is then used to create the transforms used for the H.264 (or H.265) output.
  • field and frame motion vectors are extracted from the compressed motion data (which describes object change from frame to frame).
  • picture information is obtained to determine which frames are interlaced, bi-directional, or progressive.
  • group of pictures (GOP) information is obtained from the compressed data which indicates timestamps for each frame.
  • the temporary storage entity 210 is, in one embodiment, large enough to accommodate data to enable time-shifting for twice the amount of time required for all transformation operations for a given device to be completed.
  • the data 709 which is stored in temporary storage 210 comprises at least frame and field motion vectors, frame and field DCTs, picture information and GOP information.
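One possible shape for the intermediate data 709 held in temporary storage is a record carrying the frame/field motion vectors, frame/field DCTs, picture information, and GOP timestamps enumerated above. The field names below are assumptions for illustration, not identifiers from the disclosure.

```python
# A possible (hypothetical) shape for the intermediate data 709 held
# in temporary storage 210. Field names are assumptions.

from dataclasses import dataclass, field

@dataclass
class IntermediateData:
    frame_motion_vectors: list = field(default_factory=list)
    field_motion_vectors: list = field(default_factory=list)
    frame_dcts: list = field(default_factory=list)
    field_dcts: list = field(default_factory=list)
    picture_info: dict = field(default_factory=dict)  # interlaced/bi-directional/progressive
    gop_info: dict = field(default_factory=dict)      # per-frame timestamps

data = IntermediateData(picture_info={"frame_0": "progressive"})
print(data.picture_info["frame_0"])  # progressive
```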
  • the data 709 is transmitted to a lightweight transcoder entity 202 for motion, texture, and shape coding 802 to arrive at repackaged data.
  • the motion coder determines where and how sets of blocks have moved from frame to frame and uses this information to generate compressed data.
  • the texture coder uses the DCTs to create a compressed signal by identifying information which has changed (other than motion).
  • a shape coder is used to force the data into an H.264 shape.
  • the shape which is used is a rectangle, therefore causing decoding at the rendering device 204 of the entire screen.
  • the repackaging process discussed herein may occur immediately upon receipt of content at the temporary storage 210 (so as to provide near-live streaming of received content) and/or upon user request.
  • metadata stored at the metadata storage entity 206 is transformed from an original video profile to an output video profile 809 by adding mapping information and information regarding the profiles supported.
  • the output video profile and the repackaged data are then provided (at 805 ) to a multiplexer entity 804 of the transcoder 202 .
  • the multiplexer 804 may be separate from the transcoder 202 yet in communication therewith.
  • the multiplexer 804 causes the metadata and repackaged content to be provided as a single data stream 803 to a rendering device 204 or to storage 208 for subsequent delivery to a capable rendering device 204 (i.e., a rendering device which is configured to decode and display (or cause to be displayed) H.264 content in the present example).
  • while FIGS. 7-8 specifically illustrate H.262 to H.264 transcoding, any of the herein disclosed lightweight transcoding schemes, including but not limited to those discussed in FIGS. 3-5 above, may be utilized consistent with the present invention.
  • the partial decode and subsequent repackaging of the received content may occur in any manner which accomplishes the overall schemes identified in FIGS. 3-5 .
  • FIG. 9 illustrates an exemplary lightweight transcoder apparatus 202 .
  • the apparatus 202 generally comprises a network interface 902 , a processor 904 , a plurality of backend interfaces 906 , and memory 908 .
  • the network interface 902 is configured to enable communication between the lightweight transcoder 202 and the content delivery network.
  • the transcoder receives data from and communicates to various network entities via the interface 902 .
  • Communication may be effected via any signal or data interface including, e.g., a radio frequency tuner (e.g., in-band or OOB, cable modem, etc.), Wi-Fi, and/or Wi-MAX, etc.
  • the backend interfaces 906 are configured to enable communication between the transcoder apparatus 202 and the various premises network devices including e.g., metadata storage 206 , video storage 208 , temporary storage 210 , and a plurality of rendering devices 204 . Communication is enabled via e.g., Firewire, USB, Ethernet, MoCA, Wi-Fi, Wi-MAX, etc. interfaces.
  • the storage apparatus 908 is configured to store a plurality of information used by the transcoder 202 . For example, information relating each rendering device 204 to a particular user or subscriber account may be stored. Additionally, information relating to the capabilities of each of the registered rendering devices may also be stored. Moreover, content requests and scheduling data for each rendering device 204 are also stored.
  • the digital processor 904 of the transcoder apparatus 202 is configured to run a plurality of software applications thereon.
  • a decoder application 702 , an encoder application 802 , a multiplexer 804 , and a scheduler 910 are illustrated; however, other applications necessary to complete the herein described lightweight transcoding process may also be provided.
  • one or more of the decoder 702 , the encoder 802 , the multiplexer 804 , and/or the scheduler 910 may be configured to run on a device which is separate from yet in communication with the transcoder apparatus 202 .
  • the decoder application 702 is a software application which enables the transcoder 202 to partially decode received content as discussed elsewhere herein. Specifically, the decoder application 702 unpackages the received content into an intermediate format represented by the data obtained from one or more techniques. In one specific embodiment, the decoder application 702 utilizes one or more of a DCT algorithm, a field and frame motion vectors extraction algorithm, decompression to obtain picture information and GOP information. The decompressed intermediate data structure is stored in the temporary storage 210 via transmission thereto via the appropriate backend interface 906 .
  • the encoder application 802 is a software application which enables the transcoder 202 to repackage the partially decoded data structure generated by the decoder application 702 .
  • the encoder application performs motion, texture, and shape coding of the content to arrive at repackaged data.
  • the repackaging techniques discussed herein with respect to FIGS. 3-5 are performed by the encoder application 802 to encode the content.
  • the multiplexer application 804 is a software application which enables output video profile data and the repackaged content to be provided as a single data stream to a rendering device 204 or to a storage apparatus 208 (for subsequent delivery to a capable rendering device 204 ).
  • the scheduler application 910 is a software application which generates a user interface by which a user of a rendering device 204 may define a date and/or time at which content is to be delivered. For example, a user of the rendering device 204 may access the scheduler application 910 to determine that particular content is broadcast at 8:00 pm, Tuesday. The scheduler then may utilize the previously disclosed look-ahead features to predict a delay time associated with transcoding the particular content (based on its bitrate requirement, length, etc.). Alternatively, delay information may simply be provided to the scheduler 910 from a network entity.
  • the delay is added to the broadcast time, thus the user may select to have delivery of the content at, e.g., 8:01 pm, Tuesday (after the appropriate delay time has elapsed).
  • prior to the time for delivery selected by the user, the scheduler application 910 causes the transcoder to obtain the desired content and begin transcoding. The time at which the transcoding is scheduled to occur may coincide with the amount of time of the delay associated with the transcoding process.
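The scheduler's delivery-time arithmetic described above can be sketched as adding the predicted transcoding delay to the broadcast time, and starting transcoding early enough to finish by the selected delivery time. The one-minute delay and the specific date are illustrative values, not real estimates.

```python
# Sketch of the scheduler's delivery-time computation: predicted
# transcode delay is added to broadcast time (e.g., 8:00 pm -> 8:01 pm).
# Delay value and date are illustrative.

from datetime import datetime, timedelta

def earliest_delivery(broadcast_time, predicted_delay):
    return broadcast_time + predicted_delay

def transcode_start(delivery_time, predicted_delay):
    return delivery_time - predicted_delay

broadcast = datetime(2014, 8, 5, 20, 0)   # a Tuesday, 8:00 pm
delay = timedelta(minutes=1)
delivery = earliest_delivery(broadcast, delay)
print(delivery.strftime("%H:%M"))                     # 20:01
print(transcode_start(delivery, delay) == broadcast)  # True
```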

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Discrete Mathematics (AREA)
  • Algebra (AREA)
  • Software Systems (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

Apparatus and methods for lightweight transcoding. In one embodiment, a minimal function transcoder is disclosed which supports multiple devices requiring various video formats. Transcoding functionality may be downloaded to an existing device and comprises using an intermediate set of data resulting from a partial decode of an input stream that is temporarily stored until all transformation operations have completed. Premises devices register to the transcoder and the transcoder transcodes content requested by the registered devices (i) upon detection that the registered device is “in use”; (ii) at the time of original content playback or broadcast; and/or (iii) at a time prior to a previously scheduled time of intended use. The transcoder exploits the similarities between the mechanisms by which the various encoding formats operate and, in one variant, ignores some of the features of the more advanced content formats to arrive at “just-in-time” or “near real-time” transcoded content.

Description

    COPYRIGHT
  • A portion of the disclosure of this patent document contains material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
  • BACKGROUND
  • 1. Technological Field
  • The present disclosure relates generally to the field of data transcoding. More particularly, the present disclosure is related, in one exemplary aspect, to apparatus and methods for lightweight data transcoding.
  • 2. Description of Related Technology
  • In the field of content and data delivery, many different compression and encoding standards have been developed. Standards such as the well-known H.261 and H.262 or Moving Picture Experts Group (MPEG-2) are commonly utilized for many audio/video data applications. More evolved standards include H.264 or MPEG-4 AVC (Advanced Video Coding) and its successor H.265 or High Efficiency Video Coding (HEVC).
  • H.264/MPEG-4 AVC is a block-oriented motion-compensation-based video compression standard, commonly used in, e.g., Blu-ray™ disc players, streaming Internet sources, web software, and also various HDTV broadcasts over terrestrial (ATSC, ISDB-T, DVB-T or DVB-T2), cable (DVB-C), and satellite (DVB-S and DVB-S2). H.264 is commonly used for lossy compression applications, and provides inter alia the benefit of good quality video at substantially reduced bitrate over prior codecs.
  • H.265 (HEVC) significantly increases the data compression ratio compared to H.264/MPEG-4 AVC at the same level of video quality. Alternatively, HEVC may be used to provide substantially improved video quality at the same bit rate.
  • Current cable and satellite distribution infrastructures primarily deliver audio and video data using either H.261 or H.262. Moreover, current end-to-end systems include millions of interoperable encoders, multiplexers and decoding devices (such as e.g., set-top boxes) deployed. These devices are all compatible with one or both of H.261 and H.262; however, very few of these are compatible with the newer H.264 and/or H.265 encoding schemes. It is further appreciated that newer IP-capable devices may prefer or even require H.264 (and eventually H.265) for video consumption. Therefore, there is a need in a user's premises network for content rendered in a different format than what is currently provided via most content delivery networks.
  • Many products exist to transcode data, including those which are able to transcode between any of H.261, H.262, H.264, and/or H.265 (or others). However, such products require specialized hardware, are CPU intensive, and/or are comparatively expensive. Moreover, those which utilize software solutions for transcoding are slow, and cannot offer near-live or “on the fly” transcoding.
  • Hence, what is needed is a mechanism for efficient transcoding. Ideally, such mechanism would also be capable of sufficient transcoding rate so as to support, e.g., near-real time transcoding applications.
  • SUMMARY
  • The present disclosure addresses the foregoing needs by disclosing, inter alia, apparatus and methods for lightweight data transcoding.
  • In one aspect, a method of transcoding media data is disclosed. In one embodiment, the media data is encoded according to a first format, and the method includes: (i) performing, using a decoding apparatus, a partial decoding of a portion of the media data to produce intermediate data and undecoded data; (ii) performing at least one transcoding process on the intermediate data to produce transcoded data; and (iii) combining the transcoded data and the undecoded data into a data structure which can then be decoded and rendered by a decoding apparatus according to a second format.
  • In a second aspect, a method of providing content compatible with a second codec from content encoded with a first codec is disclosed. In one embodiment, the method includes: (i) decoding only a portion of the content encoded with the first codec to produce a decoded content portion and a plurality of undecoded portions; and (ii) processing at least part of the decoded content portion, and combining the processed at least part and the plurality of undecoded portions so as to produce the content compatible with the second codec.
  • In a third aspect, an apparatus configured to decode content in a first format and encode said content in a second, different format in near-real time is disclosed. In one embodiment, the apparatus includes: data processor apparatus; and storage apparatus in data communication with the data processor apparatus and having at least one computer program disposed thereon, the at least one program being configured to, when executed on the processor apparatus: decode only a portion of the content encoded with the first codec to produce a decoded content portion and a plurality of undecoded portions; process at least part of the decoded content portion to produce a processed portion; and combine the processed portion and the plurality of undecoded portions so as to produce the content compatible with the second codec.
  • In a fourth aspect, a computer-readable storage apparatus is disclosed. In one embodiment, the computer-readable storage apparatus includes a non-transitory storage medium with at least one program stored thereon. The at least one program is configured to, when executed, decode only a portion of the content encoded with the first codec to produce a decoded content portion and a plurality of undecoded portions; process at least part of the decoded content portion to produce a processed portion; and combine the processed portion and the plurality of undecoded portions so as to produce the content compatible with the second codec.
  • In a fifth aspect, a computer readable apparatus comprising a storage medium is disclosed. The storage medium is, in one embodiment, configured to store a plurality of data, the plurality of data comprising media data that has a portion that has been transcoded between a first and second encoding format, and a portion which has not been transcoded from the first format to the second format. The plurality of data can be used by a processing apparatus in communication with the computer-readable apparatus to render the media data compliant with the second format.
  • In a further aspect, a method of providing data encoded according to a first format using apparatus having a configuration not supporting such first format, but supporting a second format, is disclosed. In one embodiment, the method includes: processing a portion of data encoded in the second format relating to a plurality of corresponding features between the first format and the second format, the processing configured to encode the portion according to the first format; and combining the encoded portion and at least one other portion of the data encoded in the second format, the combined encoded portion and at least one other portion being decodable by an apparatus supporting the first format.
  • In another aspect, a lightweight transcoder apparatus is disclosed. In one embodiment, the apparatus is configured to decode content in a first format and encode the content in a second, different format, and the apparatus is not capable of decoding content rendered in the second format. In one variant, the apparatus includes data processor apparatus and storage apparatus in data communication with the data processor apparatus and having at least one computer program disposed thereon. In one implementation, the at least one program is configured to, when executed on the processor apparatus: decode only a portion of the content encoded with the first codec to produce a decoded content portion and a plurality of undecoded portions; process at least part of the decoded content portion to produce a processed portion; and combine the processed portion and the plurality of undecoded portions so as to produce the content compatible with the second codec.
  • In another implementation, the at least one program is configured to encode only portions of the decoded content into the second format, such that the resultant media is compatible with the second format, but not fully functional with respect thereto (i.e., the resultant media is capable of being decoded and rendered by a device configured to utilize the second format, but the decoded and rendered media is not identical (e.g., is lesser in at least one quality or performance aspect) to a version of the media which was completely encoded using the second format).
  • These and other aspects become apparent when considered in light of the disclosure provided herein.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a functional block diagram illustrating an exemplary hybrid fiber network configuration useful with various aspects of the present disclosure.
  • FIG. 1A is a functional block diagram illustrating one exemplary embodiment of a packetized content delivery network architecture useful with various aspects of the present disclosure.
  • FIG. 2 is a functional block diagram illustrating one exemplary embodiment of a network architecture for providing lightweight transcoding according to the present disclosure.
  • FIG. 3 is a diagram illustrating an exemplary H.261/H.262 to H.264 lightweight transcoding scheme according to the present disclosure.
  • FIG. 4 is a diagram illustrating an exemplary embodiment of the simplified approach for modifying H.262 picture blocks to H.264 blocks according to the present disclosure.
  • FIG. 5 is a diagram illustrating an exemplary H.261/H.262 to H.265 lightweight transcoding scheme according to the present disclosure.
  • FIG. 6 is a logical flow diagram illustrating an exemplary method for performing lightweight transcoding according to the present disclosure.
  • FIG. 6 a is a logical flow diagram illustrating an exemplary method for providing lightweight transcoding according to the present disclosure.
  • FIG. 6 b is a logical flow diagram illustrating an exemplary method for stream processing useful in lightweight transcoding according to the present disclosure.
  • FIG. 6 c is a logical flow diagram illustrating an exemplary method for entropy decoding useful in lightweight transcoding according to the present disclosure.
  • FIG. 6 d is a logical flow diagram illustrating an exemplary method for matrix retransformation useful in lightweight transcoding according to the present disclosure.
  • FIG. 6 e is a logical flow diagram illustrating an exemplary method for repackaging useful in lightweight transcoding according to the present disclosure.
  • FIG. 7 is a functional block diagram illustrating an exemplary process for partial data decoding to a disk according to the present disclosure.
  • FIG. 8 is a functional block diagram illustrating an exemplary process for lightweight data transcoding for delivery to a rendering device according to the present disclosure.
  • FIG. 9 is a functional block diagram illustrating one embodiment of a lightweight transcoding apparatus according to the present disclosure.
  • All Figures © Copyright 2014 Time Warner Cable Enterprises LLC. All rights reserved.
  • DETAILED DESCRIPTION
  • Reference is now made to the drawings, wherein like numerals refer to like parts throughout.
  • As used herein, the term “application” refers generally to a unit of executable software that implements a certain functionality or theme. The themes of applications vary broadly across any number of disciplines and functions (such as on-demand content management, e-commerce transactions, brokerage transactions, home entertainment, calculator etc.), and one application may have more than one theme. The unit of executable software generally runs in a predetermined environment; for example, the unit could comprise a downloadable Java Xlet™ that runs within the JavaTV™ environment.
  • As used herein, the term “codec” refers to a video, audio, or other data coding and/or decoding algorithm, process or apparatus including, without limitation, those of the MPEG (e.g., MPEG-1, MPEG-2, MPEG-4/H.264/H.265, etc.), Real (RealVideo, etc.), AC-3 (audio), DiVX, XViD/ViDX, Windows Media Video (e.g., WMV 7, 8, 9, 10, or 11), ATI Video codec, or VC-1 (SMPTE standard 421M) families.
  • As used herein, the terms “client device” and “user device” include, but are not limited to, set top boxes (e.g., DSTBs), personal computers (PCs), and minicomputers, whether desktop, laptop, or otherwise, and mobile devices such as handheld computers, tablets, “phablets”, PDAs, personal media devices (PMDs), and smartphones.
  • As used herein, the term “computer program” or “software application” is meant to include any sequence of human or machine cognizable steps which perform a function. Such program may be rendered in virtually any programming language or environment including, for example and without limitation, C/C++, Fortran, COBOL, PASCAL, assembly language, markup languages (e.g., HTML, SGML, XML, VoXML), and the like, as well as object-oriented environments such as the Common Object Request Broker Architecture (CORBA), Java™ (including J2ME, Java Beans, etc.), Binary Runtime Environment (e.g., BREW), and the like.
  • The term “Customer Premises Equipment (CPE)” refers to any type of electronic equipment located within a customer's or user's premises and connected to a network, such as set-top boxes (e.g., DSTBs or IPTV devices), televisions, cable modems (CMs), embedded multimedia terminal adapters (eMTAs), whether stand-alone or integrated with other devices, Digital Video Recorders (DVR), gateway storage devices (Furnace), and ITV Personal Computers.
  • As used herein, the term “display” means any type of device adapted to display information, including without limitation CRTs, LCDs, TFTs, plasma displays, LEDs, OLEDs, incandescent and fluorescent devices. Display devices may also include less dynamic devices such as, for example, printers, e-ink devices, and the like.
  • As used herein, the terms “Internet” and “internet” are used interchangeably to refer to inter-networks including, without limitation, the Internet.
  • As used herein, the term “memory” or “storage” includes any type of integrated circuit or other storage device adapted for storing digital data including, without limitation, ROM, PROM, EEPROM, DRAM, SDRAM, DDR/2 SDRAM, EDO/FPMS, RLDRAM, SRAM, “flash” memory (e.g., NAND/NOR), and PSRAM.
  • As used herein, the terms “microprocessor” and “digital processor” are meant generally to include all types of digital processing devices including, without limitation, digital signal processors (DSPs), reduced instruction set computers (RISC), general-purpose (CISC) processors, microprocessors, gate arrays (e.g., FPGAs), PLDs, reconfigurable compute fabrics (RCFs), array processors, and application-specific integrated circuits (ASICs). Such digital processors may be contained on a single unitary IC die, or distributed across multiple components.
  • As used herein, the terms “MSO” or “multiple systems operator” refer without limitation to a cable, satellite, or terrestrial network provider having infrastructure required to deliver services including programming and data over those mediums.
  • As used herein, the terms “network” and “bearer network” refer generally to any type of telecommunications or data network including, without limitation, hybrid fiber coax (HFC) networks, satellite networks, telco networks, and data networks (including MANs, WANs, LANs, WLANs, internets, and intranets). Such networks or portions thereof may utilize any one or more different topologies (e.g., ring, bus, star, loop, etc.), transmission media (e.g., wired/RF cable, RF wireless, millimeter wave, optical, etc.) and/or communications or networking protocols (e.g., SONET, DOCSIS, IEEE Std. 802.3, ATM, X.25 Frame Relay, 3GPP, 3GPP2, LTE/LTE-A, WAP, SIP, UDP, FTP, RTP/RTCP, H.323, etc.).
  • As used herein, the term “network interface” refers to any signal or data interface with a component or network including, without limitation, those of the Firewire (e.g., FW400, FW800, etc.), USB (e.g., USB2, USB 3.0), Ethernet (e.g., 10/100, 10/100/1000 (Gigabit Ethernet), 10-Gig-E, etc.), MoCA, Coaxsys (e.g., TVnet™), radio frequency tuner (e.g., in-band or OOB, cable modem, etc.), Wi-Fi (e.g., 802.11a,b,g,n), WiMAX (802.16), PAN (802.15), cellular (e.g., LTE/LTE-A, 3GPP, 3GPP2, UMTS), or IrDA families.
  • As used herein, the term “server” refers without limitation to any computerized component, system or entity regardless of form which is adapted to provide data, files, applications, content, media, or other services to one or more other devices or entities on a computer network.
  • As used herein, the term “user interface” refers to, without limitation, any visual, graphical, tactile, audible, sensory, or other means of providing information to and/or receiving information from a user or other entity.
  • As used herein, the term “Wi-Fi” refers to, without limitation, any of the variants of IEEE-Std. 802.11 or related standards including inter alia 802.11 a/b/g/n/v.
  • As used herein, the term “wireless” means any wireless signal, data, communication, or other interface including without limitation Wi-Fi, Bluetooth, 3G (3GPP/3GPP2), HSDPA/HSUPA, TDMA, CDMA (e.g., IS-95A, WCDMA, etc.), FHSS, DSSS, GSM, PAN/802.15, WiMAX (802.16), 802.20, NFC (e.g., ISO 14443A/B), narrowband/FDMA, OFDM, PCS/DCS, LTE/LTE-A/TD-LTE, analog cellular, Zigbee, CDPD, satellite systems, millimeter wave or microwave systems, acoustic, and infrared (i.e., IrDA).
  • Overview
  • In one salient aspect, the present disclosure provides apparatus and methods for “lightweight” data transcoding. Specifically, in one exemplary embodiment, a minimal function transcoder for the purposes of, e.g., supporting multiple devices in the home network that require varying video formats, resolutions, or bit-rates, is disclosed. The transcoding functionality may for example be downloaded or otherwise provided (such as via provisioning at the time of install) to an existing device within the home network or at a network edge.
  • In one embodiment, the transcoder (via various components thereof) uses an intermediate set of audio/video data resulting from a partial decode of an input stream (e.g., enough to obtain the data required to transform or rearrange the previously encoded information) that is temporarily stored until all transformation operations have completed. The partially decoded data is re-encoded and output in any format, resolution, and/or bitrate desired.
  • In another embodiment, premises networked devices are registered to the lightweight transcoder. The lightweight transcoder may transcode content requested by the registered devices based on any of a number of different events/criteria, such as e.g., (i) upon detection that the registered device is “in use”; (ii) at the time of original content playback or broadcast; and/or (iii) at a time prior to a previously scheduled time of intended use.
  • Various of the methods and apparatus disclosed herein advantageously exploit the similarities between the mechanisms by which the various encoding formats (e.g., H.261, H.262, H.264, H.265, etc.) account for certain behaviors or artifacts, such as motion compensation, quantization, and entropy coding. In one variant, the lightweight transcoder apparatus merely “ignores” some of the features of the more advanced/complex content formats. The resultant output of the minimal or lightweight transcoder is a “just-in-time” or “near real-time” transcoded content for use in a premises network comprising non-legacy (such as IP-enabled) client devices with the more advanced codecs.
  • The exemplary embodiment of the disclosed lightweight transcoder apparatus utilizes extant software-based processing capability to “fit” data of a first format into a second format in a time-efficient manner (e.g., in near-real time). In doing so, the lightweight transcoder surrenders traditional goals of obtaining the best compression or highest quality output in an effort to simply create the desired format content stream with an acceptable level of quality/compression, and using non-application-specific hardware (i.e., without ASICs particularly configured to decode/transcode).
  • DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
  • Exemplary embodiments of the apparatus and methods of the present disclosure are now described in detail. While these exemplary embodiments are described in the context of a managed content distribution network (e.g., hybrid fiber coax (HFC) cable) architecture having a multiple systems operator, digital networking capability, and plurality of client devices/CPE, the general principles and advantages of the disclosure may be extended to other types of networks, architectures and applications, whether broadband, narrowband, wired or wireless, terrestrial or satellite, managed or unmanaged (or combinations thereof), or otherwise, the following therefore being merely exemplary in nature.
  • It will also be appreciated that while described generally in the context of point-to-point IP-based content delivery (e.g. IP video delivery or streaming), the present disclosure may be readily adapted to other types of IP-based delivery (e.g., IP-based content multicasts, etc.) as well. A myriad of other applications are possible.
  • Also, while certain aspects are described primarily in the context of the well-known Internet Protocol (described in, inter alia, RFC 791 and 2460), it will be appreciated that the present disclosure may utilize other types of protocols (and in fact bearer networks to include other internets and intranets) to implement the described functionality.
  • Bearer Network—
  • FIG. 1 illustrates a typical content delivery network configuration useful for delivery of encoded content according to the present disclosure. The various components of the network 100 include: (i) one or more data and application origination points 102; (ii) one or more content sources 103; (iii) one or more application distribution servers 104; (iv) one or more VOD servers 105; and (v) customer premises equipment (CPE) 106. The distribution server(s) 104, VOD servers 105 and CPE(s) 106 are connected via a bearer (e.g., HFC) network 101. A simple architecture comprising one of each of the aforementioned components 102, 104, 105, 106 is shown in FIG. 1 for simplicity, although it will be recognized that comparable architectures with multiple origination points, distribution servers, VOD servers, and/or CPE devices (as well as different network topologies) may be utilized consistent with the disclosure.
  • The data/application origination point 102 comprises any medium that allows data and/or applications (such as a VOD-based or “Watch TV” application) to be transferred to a distribution server 104. This can include for example a third party data source, application vendor website, CD-ROM, external network interface, mass storage device (e.g., RAID system), etc. Such transference may be automatic, initiated upon the occurrence of one or more specified events (such as the receipt of a request packet or ACK), performed manually, or accomplished in any number of other modes readily recognized by those of ordinary skill. The application distribution server 104 comprises a computer system where such applications can enter the network system. Distribution servers are well known in the networking arts, and accordingly not described further herein.
  • The VOD server 105 comprises a computer system where on-demand content can be received from one or more of the aforementioned data sources 102 and enter the network system. These servers may generate the content locally, or alternatively act as a gateway or intermediary from a distant source.
  • The CPE 106 includes any equipment in the “customers' premises” (or other locations, whether local or remote to the distribution server 104) that can be accessed by a distribution server 104.
  • Content (e.g., audio, video, data, files, etc.) is provided to the client devices 106 in a given data format (e.g., MPEG-2, MPEG-4, etc.). To communicate with the headend or intermediary node (e.g., hub server), the CPE 106 may use the out-of-band (OOB) or DOCSIS channels and associated protocols. The OCAP 1.0, 2.0, 3.0 (and subsequent) specification provides for exemplary networking protocols both downstream and upstream, although the present disclosure is in no way limited to these approaches.
  • While the foregoing network architectures described herein can (and in fact do) carry packetized content (e.g., IP over MPEG for high-speed data or Internet TV, MPEG2 packet content over QAM for MPTS, etc.), they are often not optimized for such delivery. Hence, in accordance with another embodiment of the disclosure, a “packet optimized” delivery network is used for delivery of the packetized content (e.g., encoded content). FIG. 1A illustrates one exemplary implementation of such a network, in the context of a 3GPP IMS (IP Multimedia Subsystem) network with common control plane and service delivery platform (SDP), as described in co-owned U.S. patent application Ser. No. 12/764,746 filed Apr. 21, 2010 and entitled “METHODS AND APPARATUS FOR PACKETIZED CONTENT DELIVERY OVER A CONTENT DELIVERY NETWORK”, which claims priority to U.S. Provisional Patent Application Ser. No. 61/256,903 filed Oct. 30, 2009, and which is now published as U.S. Patent Application Publication No. 2011/0103374, each of which is incorporated herein by reference in its entirety.
  • As discussed therein, a substantially session-based and packetized content delivery approach (e.g., using the well known Internet Protocol) which allows for temporal, device, and location flexibility in the delivery of the content, and transportability/migration of user sessions, as well as service/content personalization (e.g., on a per-session/user basis) and blending (integration) is provided. This approach uses a common or unified delivery architecture in providing what were heretofore heterogeneous services supplied by substantially different, and often vendor-specific, networks. Moreover, the foregoing improved apparatus and methods provide for enhanced content access, reproduction, and distribution control (via e.g., a DRM-based approach and other security and content control measures), as well as quality-of-service (QoS) guarantees which maintain high media quality and user experience, especially when compared to prior art “Internet TV” paradigms. In another implementation, the network comprises both “managed” and “unmanaged” (or off-network) services, so that a network operator can utilize both its own and external infrastructure to provide content delivery to its subscribers in various locations and use cases. In one variant of this approach, network services are sent “over the top” of other provider's infrastructure, thereby making the service network substantially network-agnostic.
  • In another variant, a cooperative approach between providers is utilized, so that features or capabilities present in one provider's network (e.g., authentication of mobile devices) can be leveraged by another provider operating in cooperation therewith. Such a network provides, inter alia, significant enhancements in terms of common control of different services, implementation and management of content delivery sessions according to unicast or multicast models, etc.; however, it is appreciated that the various features of the present disclosure are in no way limited to this or any of the other foregoing architectures.
  • Notwithstanding the foregoing, it will be appreciated that the various aspects and functionalities of the present disclosure are effectively agnostic to the bearer network architecture or medium, and hence literally any type of delivery mechanism can be utilized consistent with the disclosure provided herein.
  • Lightweight Transcoding Architecture—
  • FIG. 2 illustrates an exemplary embodiment of a network architecture 200 for providing lightweight transcoding according to the present disclosure. The network 200 of FIG. 2 is utilized to receive content and transcode the content from the format it is received in, into a different format, based on e.g., the capabilities of the devices in the network 200 which will render the content. The rendering device 204 capabilities may relate to for example, video formats, codecs (e.g., H.264/.265), resolutions, and/or available bit-rates for communications between the transcoding apparatus and the rendering device.
  • The exemplary illustrated network entities and apparatus are configured to operate within one or more of the various above-described bearer networks of FIGS. 1-1A, although others may readily be used. The network may be based on wireless and/or wireline networking technologies (e.g., Wi-Fi family 802.11, WiMAX 802.16, wired Ethernet standards (802.3), optical standards/paradigms such as FIOS, SONET, etc.). The technologies forming the bearer networks may also range in scope from PAN (personal area networking) and “mesh” networking to nationwide or even global architectures. It will also be appreciated that bridges may be used to create a hybrid network environment using multiple ones of such technologies (e.g., cellular or Wi-Fi wireless/wired Ethernet hybrid).
  • As shown, the network 200 generally comprises a lightweight transcoder entity 202 which receives content from a content distribution or delivery network (such as the network disclosed in FIGS. 1-1A) and which is in data communication with at least metadata storage 206, video storage 208, and temporary storage 210. The transcoder entity 202 is further in communication with one or more rendering devices 204.
  • The transcoder 202 and/or storage devices (metadata storage 206, video storage 208, and/or temporary storage 210) may comprise premises network devices or may be located at a network edge or other location in communication with the customer's premises. In one variant, a user registers each of the user's rendering devices 204 to the transcoder 202. The user may do so by placing these in communication with the transcoder 202 and, via a series of message exchanges between the devices, establishing that the user of the rendering device 204 is a subscriber to the content delivery network and a user of the device 204. The user may register more than one rendering device 204 (FIG. 2 being merely exemplary of the overall system); in this case, the devices and/or user will also establish that particular user as being the same user across the various devices. During the registration process, the transcoder 202 is further made aware of the capabilities of each of the rendering devices 204 via generation of a device profile for each rendering device and/or a home network profile for each subscriber or user. In the instance that the transcoder 202 comprises a network edge device (i.e., is not located at the consumer's premises), the transcoder 202 is further configured to associate each rendering device with a particular one of the users/subscribers which may also register their devices to the transcoder 202.
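The registration and profile-generation flow described above can be sketched as follows. This is a hypothetical, minimal illustration; the class names, profile fields, and method signatures are assumptions for exposition and are not part of the disclosure.

```python
# Hypothetical sketch of rendering-device registration with the transcoder,
# including the per-subscriber association used by an edge-located transcoder.
from dataclasses import dataclass, field

@dataclass
class DeviceProfile:
    device_id: str
    codecs: tuple          # codecs the device can decode, e.g. ("H.264",)
    max_resolution: str    # e.g. "1280x720"
    max_bitrate_kbps: int  # bit-rate available to the device

@dataclass
class Transcoder:
    # subscriber -> {device_id -> DeviceProfile}
    profiles: dict = field(default_factory=dict)

    def register(self, subscriber: str, profile: DeviceProfile) -> None:
        # Associate each rendering device with its subscriber, so the same
        # transcoder can serve multiple users/premises from the network edge.
        self.profiles.setdefault(subscriber, {})[profile.device_id] = profile

    def target_format(self, subscriber: str, device_id: str) -> str:
        # Select an output codec the requesting device is known to decode.
        return self.profiles[subscriber][device_id].codecs[0]

tc = Transcoder()
tc.register("sub-1", DeviceProfile("tablet-1", ("H.264",), "1280x720", 4000))
print(tc.target_format("sub-1", "tablet-1"))   # H.264
```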
  • The rendering devices 204 comprise any device capable of receiving, decoding and displaying (or communicating decoded data to a device configured to display) audio/video content. Exemplary rendering devices include IP-enabled devices such as smart phones, tablet computers, hand held computers, laptop computers, personal computers, smart televisions, streaming media devices, etc., as well as non-IP enabled set top boxes, etc. The present disclosure is intended to provide functionality irrespective of the specific formats with which the rendering devices are compatible.
  • As will be discussed in greater detail below, the transcoder 202 (also referred to herein as the “lightweight transcoder”) is configured to receive content delivered from the content delivery network. As noted above, content is, in one embodiment, delivered in H.261 or H.262 format; the content may be either live or previously recorded and may be delivered as a broadcast, multicast, or unicast. Additionally, the rendering devices 204 within the home network require, in one embodiment, H.264 video format. It is appreciated, however, that the herein described approach may be utilized for conversion between any data formats; H.262 to H.264 conversion being merely exemplary of the general process.
  • The transcoding process occurs in either hardware or software of the transcoder device 202. The transcoder device 202 may comprise a premises apparatus (such as a set top box, gateway device, or other CPE), or a network or network edge device (e.g., a server processor in a network operations center). In one variant, since the transcoding process discussed herein is “lightweight”, the process may comprise a downloadable software upgrade provided via another network entity and may utilize substantially extant device hardware. In one specific example, MPEG2 video content arrives via a QAM or Ethernet port, and is transcoded to MPEG4 over HTTP Live Streaming (HLS) to an Apple iPad® on the same home network as the transcoding device 202.
  • The lightweight transcoder 202 receives data and, irrespective of the input format, metadata associated to the received data is stored at the metadata storage 206 entity. If the data is in an appropriate format for the home network (e.g., H.264), a copy of the data is immediately stored at the video storage apparatus 208. If the received data is not in an appropriate format for the home network, the data input is partially decoded, then the partially decoded discrete cosine transforms (DCTs) which constitute the data are either stored onto a disk at the temporary storage entity 210, or are immediately re-mapped to DCTs of a particular format. The format selected for re-encoding may be a format previously selected by the requesting user or may be selected based on the device and/or user profile (e.g., based on the compatibility of the requesting rendering device 204). The re-mapped DCT may be recorded to temporary storage 210 or may be immediately repackaged into the new format's packaging. Once repackaged, the data is recorded in its new format to storage (at the storage apparatus 208) for later consumption, or is sent to a rendering device 204 for audio/video display via a backend interface of the transcoder 202 (e.g., MoCA, Ethernet, WiFi, etc.) based on a request for the content being received from the rendering device 204.
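The ingest decision flow just described can be reduced to a short sketch. All function and store names below are illustrative stand-ins (the partial decode, DCT re-mapping, and repackaging are represented by placeholder operations, not real codec logic).

```python
# Hypothetical sketch of the lightweight-transcoder ingest path: metadata is
# always stored; compatible data is stored directly; otherwise the data is
# partially decoded, its DCT blocks re-mapped, held in temporary storage, and
# repackaged into the target format.
def ingest(data, input_codec, target_codec, metadata_store, video_store, tmp_store):
    metadata_store.append({"codec": input_codec, "bytes": len(data)})
    if input_codec == target_codec:
        video_store.append(data)                 # already renderable: keep a copy
        return "stored"
    dcts = partial_decode(data)                  # decode just enough to expose DCTs
    remapped = [remap_dct(b, target_codec) for b in dcts]
    tmp_store.extend(remapped)                   # held until repackaging completes
    video_store.append(repackage(remapped, target_codec))
    return "transcoded"

def partial_decode(data):
    # Stand-in for the partial decode: split the input into fixed "blocks".
    return [data[i:i + 16] for i in range(0, len(data), 16)]

def remap_dct(block, codec):
    # Stand-in for re-mapping a DCT block into the target format's blocks.
    return (codec, block)

def repackage(blocks, codec):
    # Stand-in for wrapping the re-mapped blocks in the new format's packaging.
    return {"codec": codec, "payload": blocks}
```

A stream arriving as "H.262" with an "H.264" target would take the second path; one arriving already as "H.264" would be stored directly.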
  • The intermediate or temporary storage entity 210 may be of sufficient size to accommodate data storage during the transformation process. In one variant, a storage entity large enough to enable time-shifting for twice the amount of time required for all transformation operations for a given device to be completed is utilized.
  • A typical premises network may utilize up to Gigabit speed Ethernet services. Hence, transcoding and delivery of the transcoded content from an in home transcoder 202 to a rendering device 204 in the premises network may approximate real-time. In other words, the present disclosure provides a mechanism for transcoding content at a rate which is 1.5-3 times faster than traditional transcoding rates. As will be discussed in greater detail below, the present mechanism accomplishes this goal by circumventing various traditional transcoding steps to arrive at a lower quality, less efficiently transcoded content. For example, when converting H.262 to H.264 it may be ignored that the H.264 format is capable of having multiple reference frames, spatial prediction, and varying block sizes. Inasmuch as these features are not strictly “required” to generate H.264 format data, they are simply skipped, i.e., the re-encoded data does not take advantage of these features. The same logic is applied in conversion between other data formats. A salient difference between the present disclosure and typical transcoding systems is that in the present disclosure the ability to transcode in near-real time or near-live and stream within a home for alternate screen devices is taken to outweigh the excess bandwidth consumption needed to support multiple simultaneous devices and/or profiles and the reduced quality of the transcode (as being less than best possible class).
  • In another variant, the present disclosure takes advantage of secure read/write functions available within the operating systems of existing premises devices, including a premises located transcoder 202 and/or the rendering devices 204. In this manner, a network operator may define the read and/or write access of the various devices (transcoder 202, rendering device 204, etc.) with respect to a given content or content type, or generally with respect to all content. Specifically, Conditional Access is controlled by a hardware device called a cable card or other secure micro device. The secure micro device stores the list of entitlements on behalf of the subscriber. These entitlements control access to premium channels, pay-per-view services, and system resources such as the hard disk drive used for digital video recording. In one embodiment, the hard disk drive is used to store partially decoded sections for transcoding and/or remapping as discussed herein. This temporary storage must be conditionally accessed to be in full support of copy protection within the device. The entitlement agent within the CPE thus verifies the ability to use the disk and provides the open/read/write/close method capability. Data written and subsequently read will have been encoded and decoded via these write/read methods. In an alternative embodiment, other means for controlling access may be utilized such as, e.g., Digital Rights Management (DRM).
  • The lightweight transcoder 202 may transcode content requested by the registered devices in at least one of the following instances: (i) upon detection that the registered device is “in use”; (ii) at the time of original content playback or broadcast; and/or (iii) at a time prior to a previously scheduled time of intended use.
  • In the first instance, the registered user devices 204 which are capable of rendering content are configured to automatically signal to the transcoder 202 when they have been powered on and/or have entered a home network. Any content requested by these devices is then automatically transcoded for delivery to the devices via the premises network. Alternatively, the transcoder 202 may periodically send a heartbeat message to which the rendering devices 204 in the network respond. When a new device enters the network and/or is powered on, the transcoder 202 is made aware of its presence and may begin transcoding content which is requested to be received thereat.
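The presence-detection handshake in the first instance above can be sketched as follows; the class and method names are hypothetical and stand in for whatever signaling the transcoder and rendering devices actually exchange.

```python
# Hypothetical sketch of "in use" detection: the transcoder heartbeats the
# premises network, and any device that answers is marked active, so content
# it requests will be transcoded.
class PresenceTracker:
    def __init__(self):
        self.active = set()

    def heartbeat_response(self, device_id: str) -> None:
        # A device answered the periodic heartbeat (or signaled power-on /
        # network entry): treat it as "in use".
        self.active.add(device_id)

    def device_left(self, device_id: str) -> None:
        self.active.discard(device_id)

    def should_transcode_for(self, device_id: str) -> bool:
        return device_id in self.active

tracker = PresenceTracker()
tracker.heartbeat_response("tablet-1")
print(tracker.should_transcode_for("tablet-1"))   # True
```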
  • In the second instance, the transcoder will select particular content to be automatically transcoded at the time it is broadcast (irrespective of a request) and/or at the time it is requested to be rendered. Requests for particular content may be received simultaneous to a broadcast of the content, or after a broadcast of the content (in this instance the content is delivered from video storage 208 or temporary storage 210). The content selected to be automatically transcoded at the time it is broadcast according to this embodiment may comprise content which is determined to have a high viewership rate among most subscribers, content which is previously identified by the subscriber to be of particular interest, content which is concurrently being requested or recorded at another device associated to the subscriber, and/or content which is identified as being of particular interest to a subscriber based on a profile associated thereto. An exemplary mechanism for determining a user profile and providing content recommendations is disclosed in co-owned, co-pending U.S. patent application Ser. No. 12/414,576 entitled “RECOMMENDATION ENGINE APPARATUS AND METHODS” and filed on Mar. 30, 2009, which is incorporated herein by reference in its entirety. As discussed therein, a mechanism for particularly selecting content to align with a user's preferences (which the viewer need not enter manually) is provided. The content provided to the user is compiled from various distinct sources, including, inter alia, DVR, broadcasts, VOD systems, start over systems, etc. The present invention provides a mechanism to learn (and unlearn) the user's preferences and which content they are likely to enjoy based on actions taken with regard to the content. The recommended content may then be transcoded and/or recorded to temporary storage 210 for transcoding at a later time.
  • In the third instance, the requesting rendering device 204 may pre-establish a time at which content is intended to be displayed. Given the speed at which the presently disclosed lightweight transcoder 202 is configured to transcode, the pre-established time may be as early as a portion of a second past its live broadcast time. According to this embodiment, the subscriber merely schedules a particular content in advance via a scheduler function of the transcoder apparatus 202. The scheduler enables the subscriber to identify the requested content as well as a time for delivery thereof. The transcoder 202 uses this information to arrange resources to be available to transcode the particular content in advance of the scheduled time for delivery thereof. The transcoder 202 may further use what is known about a time needed to transcode the entirety of the content to determine an adequate time to begin the transcoding process so as not to interrupt delivery thereof to the subscriber.
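The scheduler arithmetic implied above is simple: start the job early enough, given the transcoder's known speed, that it completes before the requested delivery time. The function and the 3x speedup figure are illustrative assumptions.

```python
# Hypothetical sketch of the scheduler's start-time computation: with a
# transcoder running at `speedup` times real time, the job needs
# content_seconds / speedup seconds of wall clock before delivery.
def transcode_start(deliver_at: float, content_seconds: float, speedup: float) -> float:
    return deliver_at - content_seconds / speedup

# A 3600 s programme, transcoded at 3x real time, for delivery at t = 10000 s:
print(transcode_start(10000.0, 3600.0, 3.0))   # 8800.0
```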
  • Exemplary Lightweight Transcoding—
  • FIGS. 3-5 illustrate exemplary lightweight transcoding according to the present disclosure. The illustrated embodiments are exemplary of the general principles of the disclosure and are in no way intended to limit the scope thereof.
  • The exemplary transcoding schemes of FIGS. 3-5 are, in one embodiment, performed at a processor associated to the lightweight transcoder apparatus 202. As noted above, software for performing the herein described transcoding may be downloaded or otherwise provided to the transcoding device 202 thereby taking advantage of the device's indigenous hardware capabilities.
  • In the illustrated embodiments, each data element in a first format is re-used in generating the data in the second format. However, a loss of some frames may be tolerable given the nature of the present disclosure to forego certain quality requirements in an effort to ensure overall readability of the content in the transcoded format.
  • The data elements are re-used by repackaging them from a first encoding standard object to a standard object of the second encoding scheme. As will be discussed in detail herein, removing a header portion of the data and replacing it with a header particular to the desired codec may, in many instances, be sufficient to perform the lightweight transcoding discussed herein. The present disclosure purposely does not take advantage of some of the advancements that are available to “higher” codec content formats so as to arrive at a transcoded content version more quickly than would be ordinarily obtainable. However, in other embodiments, various ones of these advancements may be utilized during the lightweight transcoding process to address specific desired results, such as e.g., taking advantage of a higher codec's multilevel capabilities to arrive at a transcoded content which is smaller in size (thus easier to store) than would be obtained without the multilevel transcoding.
  • FIG. 3 illustrates a high-level diagram of one exemplary H.261/H.262 to H.264 lightweight transcoding scheme. As shown, the lightweight transcoder 202 repackages each frame in H.261/H.262 to a single sequence, single object, single layer video object in H.264. As will be discussed in greater detail below, although H.264 is configured to utilize a more complex video object, the mere categorization from frames in H.261/H.262 to video objects in H.264 is sufficient to enable the frames to be rendered by an H.264 device.
  • Each picture in H.261/H.262 is repackaged into video object plane (VOP) background (i.e., layer 0). Given that there is no additional repackaging required for utilizing layer 0 in H.264, using this layer eliminates any prediction between planes. A group of pictures (GOP) in H.261/H.262 is repackaged as a group of video objects (GOV) in H.264. Specifically, the GOV in the H.264 stream is substantially similar to a GOP in H.262 in that it holds the frame sequence (e.g. IBBPBBPBBP), the difference being the sequence describes VOPs rather than frames. Given that different VOPs are not being created from the H.262 stream (because the present disclosure operates in a single layer) the I, B, and P frames are simply set to I, B, and P VOPs (within VOP layer 0). Rather than taking advantage of the H.264 ability to define various shapes, the H.261/H.262 data is assigned a rectangle. The H.264 rendering device will then decode the entire rectangle to obtain the data.
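The frame-to-VOP repackaging described above is a one-to-one mapping, which can be sketched as follows. This is a schematic illustration of the mapping only (the dictionary fields are hypothetical labels, not bitstream syntax elements).

```python
# Hypothetical sketch of GOP -> GOV repackaging: each H.262 frame type maps
# directly onto a layer-0 VOP of the same type, with a rectangular shape, so
# the frame sequence (e.g. IBBPBBPBBP) is preserved unchanged.
def gop_to_gov(frame_sequence: str) -> list:
    return [
        {"vop_type": frame_type, "layer": 0, "shape": "rectangle"}
        for frame_type in frame_sequence
    ]

gov = gop_to_gov("IBBPBBPBBP")
print("".join(v["vop_type"] for v in gov))   # IBBPBBPBBP
```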
  • The 16×16 blocks utilized in H.261/H.262 are, in one embodiment, forced into the H.264 standard. FIG. 4 illustrates an exemplary embodiment of the simplified approach for modifying H.262 picture blocks to H.264 blocks according to the present disclosure. It is appreciated that H.262 uses a fixed 16×16 block for luma DCTs, an 8×8 block for chroma DCTs, and a 16×16 block for motion estimation. H.264 offers more coding options by supporting variable block size prediction for inter as well as intra block coding. The intra prediction modes can use 16×16 or 4×4 block sizes (8×8 block size can also be used optionally). The DCT blocks recovered from the MPEG-2 partial decoding stage are used to estimate the prediction modes of DCT blocks in H.264. To accomplish this, the header bits for ‘frame_mbs_only_flag’ and ‘direct_8x8_inference_flag’ are set to 1.
  • The intra/inter variable length coding (VLC) of the H.261/H.262 format is translated to adaptive VLC. As will be discussed in greater detail below, H.264 provides for two types of entropy encoding, context-adaptive binary arithmetic coding (CABAC) and context-adaptive variable length coding (CAVLC). CAVLC is always selected in the case of H.264 and CABAC must be selected in the case of H.265. Translation is accomplished, in one embodiment, by decoding the VLC in H.262 to obtain the DCT coefficients to be used in the (re)transformation activity while moving to either H.264 or H.265 output. This activity is followed by re-encoding to either CAVLC (in the case of H.264) or CABAC (for H.265).
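The entropy-coding translation above reduces to a fixed selection rule, sketched below; the function name is illustrative, and the codec-to-coder mapping follows the text (CAVLC for H.264 output, CABAC for H.265).

```python
# Hypothetical sketch of the entropy-coder selection rule: DCT coefficients
# recovered from the H.262 VLC are re-encoded with CAVLC for an H.264 target
# and with CABAC for an H.265 target.
def entropy_coder_for(target_codec: str) -> str:
    if target_codec == "H.264":
        return "CAVLC"
    if target_codec == "H.265":
        return "CABAC"
    raise ValueError("unsupported target codec: " + target_codec)

print(entropy_coder_for("H.264"))   # CAVLC
print(entropy_coder_for("H.265"))   # CABAC
```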
  • The slice start structure of H.261/H.262 is repackaged to fit the key frame marker structure of H.264. To accomplish this, in one embodiment, the H.264 slice type header field is set to 7 (I-VOP) for each H.262 I-frame processed (this is the key frame marker). Finally, the zig-zag mode in H.261/H.262 can be forced to H.264 Mode 3 using a diagonal, down, then left pattern. This may be accomplished by rewriting the bits of the zig-zag mode. In one embodiment, this is accomplished by setting the H.264 slice entropy coding mode header field to three (diagonal down left) for each H.262 slice processed.
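The header rewrites enumerated above (16×16 prediction forced via the two flags, I-frames marked as I-VOP key frames with slice type 7, and the scan pinned to diagonal-down-left mode 3) can be collected into one sketch. The field names follow the text of this disclosure rather than any bitstream library, and the dictionary representation is purely illustrative.

```python
# Hypothetical sketch of the per-slice header rewrites used by the
# lightweight H.262 -> H.264 repackaging described above.
def rewrite_headers(h262_slices: list) -> list:
    out = []
    for s in h262_slices:
        hdr = {
            "frame_mbs_only_flag": 1,        # force frame macroblocks only
            "direct_8x8_inference_flag": 1,  # force 16x16-compatible prediction
            "entropy_coding_mode": 3,        # zig-zag -> diagonal down left
        }
        if s["frame_type"] == "I":
            hdr["slice_type"] = 7            # I-VOP: the key frame marker
        out.append(hdr)
    return out

headers = rewrite_headers([{"frame_type": "I"}, {"frame_type": "B"}])
print(headers[0]["slice_type"])   # 7
```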
  • Referring now to FIG. 5, a high-level diagram of one exemplary H.261/H.262 to H.265 lightweight transcoding scheme is illustrated. According to this embodiment, the lightweight transcoder 202 repackages each frame in H.261/H.262 to a single sequence, single object, single layer video object in H.265. As will be discussed elsewhere herein, the H.265 standard utilizes a more complex video object; the present disclosure provides a mechanism to enable the frames to be rendered by an H.265 device without taking advantage of the specific complexities of H.265.
  • Each picture in H.265 is repackaged into the video object plane (VOP) via its plane 0 or background plane (similar to that discussed above for H.264 repackaging). A group of pictures (GOP) in H.261/H.262 is repackaged as a group of video objects (GOV) in H.265. The GOV in the H.265 stream is essentially the same as a GOP in H.262 in that it holds the frame sequence (e.g., IBBPBBPBBP); the difference is that the sequence describes VOPs rather than frames. The present disclosure does not create different VOPs from the H.262 stream when repackaging as H.265; instead, the I, B, and P frames are set to I, B, and P VOPs (within VOP layer 0). The H.261/H.262 data is assigned a rectangle rather than taking advantage of the H.265 ability to define various shapes. The H.265 rendering device simply decodes the entire rectangle to render the data.
  • The 16×16 blocks utilized in H.261/H.262 are, in one embodiment, forced into fixed-size Transform Units within simple Coding Tree Units (CTUs) in the H.265 standard.
  • The intra/inter variable length coding (VLC) of the H.261/H.262 format is transitioned to context-adaptive binary arithmetic coding (CABAC) in H.265. This is accomplished by, in one embodiment, decoding the VLC in H.262 to obtain the DCT coefficients to be used in the (re)transformation activity while moving to the H.265 output. This activity is followed by re-encoding to CABAC (for H.265).
  • The slice start structure of H.261/H.262 is repackaged to fit the key frame marker or tile marker structure of H.265. Since the NAL header fields are backward compatible with H.264, the slice type is set to 7 (I-VOP) for each H.262 I-frame processed (this is the key frame marker).
  • Finally, the zig-zag mode in H.261/H.262 can be forced to H.265 Mode 3 using a diagonal, down, then left pattern. This may be accomplished by rewriting the bits of the zig-zag mode. In other words, since the NAL header fields are backward compatible with H.264, the slice entropy coding mode header field is set to 3 (diagonal down left) for each H.262 slice processed.
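The "diagonal, down, then left" scan pattern referenced above can be modeled as a traversal of a coefficient block along its anti-diagonals. The sketch below is a simplified model of such a scan for illustration, not the exact scan order defined by the H.264/H.265 standards:

```python
def diagonal_down_left_scan(n):
    """Return the (row, col) visiting order for a diagonal down-left scan of
    an n x n coefficient block: each anti-diagonal is walked with the row
    increasing, i.e., moving down and to the left."""
    order = []
    for s in range(2 * n - 1):  # anti-diagonals, from top-left to bottom-right
        for r in range(max(0, s - n + 1), min(s, n - 1) + 1):
            order.append((r, s - r))
    return order
```

For a 2×2 block this visits (0,0), then the (0,1)/(1,0) diagonal top-to-bottom, then (1,1); a 4×4 block yields all 16 positions, starting at the DC coefficient (0,0).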
  • The foregoing lightweight transcoding schemes of FIGS. 3-5 improve the speed of encoding such that content may be transcoded in near real time. That is, although there is some delay associated with the in-bound quality and available bandwidth, there is generally an undetectable delay associated with the lightweight transcoding process itself. A delay associated with the transcoding process may be detectable in the instance the transcoded section comprises, e.g., a high bandwidth scene or portion of content. In one embodiment, the delay period associated with the in-bound quality, bandwidth availability, and/or bitrate requirements of the transcoded content itself may be accounted for in advance.
  • In one variant, delivery of the transcoded content stream may be delayed by an amount of time equal to an amount of time necessary to account for the so-called “worst case scenario”, i.e., an instance of highest delay due to one or more of in-bound quality, bandwidth availability, bitrate requirements of the transcoded content itself, and/or other factors affecting transcoding speed.
  • In another variant, an upstream network entity or the transcoder 202 itself may be configured to pre-scan content to determine portions which have high bitrate requirements. These portions may be utilized to determine an amount of delay (as discussed above) and/or for pre-processing and mapping. In other words, a network entity (or the transcoder 202) may review selected content to determine high bandwidth portions; instructions or a map are then given to the transcoder process to provide a time at which it is determined the delay would not be detectable by a subscriber and/or the rendering device during streaming of the transcoded content.
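The worst-case delay computation described in the two variants above can be sketched numerically. This is an illustrative model under stated assumptions: the pre-scan is assumed to yield a list of (segment duration, segment bitrate) pairs, and the transcoder is assumed to have a fixed sustained throughput; neither structure is specified by the disclosure.

```python
def worst_case_delay(bitrate_map, transcode_rate_bps):
    """Estimate the fixed delivery delay (seconds) needed so that even the
    highest-bitrate portion of the content transcodes without a visible stall.

    bitrate_map: list of (segment_duration_s, segment_bitrate_bps) pairs from
    a hypothetical pre-scan. transcode_rate_bps: assumed sustained transcoder
    throughput in bits per second.
    """
    worst = 0.0
    for duration_s, bitrate_bps in bitrate_map:
        # Time to transcode the segment, minus its playback duration, is the
        # shortfall the delay buffer must absorb for that segment.
        transcode_time = duration_s * bitrate_bps / transcode_rate_bps
        worst = max(worst, transcode_time - duration_s)
    return max(worst, 0.0)
```

For example, with a 4 Mbps transcoder, a 10-second segment at 8 Mbps takes 20 seconds to transcode, so a 10-second delivery delay covers the worst case; if no segment exceeds the transcoder's rate, no delay is needed.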
  • Exemplary Methods—
  • Referring now to FIG. 6, an exemplary method 600 for performing lightweight transcoding according to the present disclosure is given. As shown, the method 600 generally comprises receiving a content stream (step 602). The content stream may be received at a lightweight transcoder 202 which may be located at a user premises or elsewhere in the network, including e.g., the network edge. Per step 604, metadata associated with the received content is stored at the metadata storage entity 206. At step 606, the transcoder 202 determines whether the received content is in an appropriate format based on what is known about the subscriber network devices registered to the transcoder 202. The transcoder 202 may make this decision based on e.g., the capabilities of a rendering device 204 which requested the content and/or other devices which are known to be in the network (i.e., other registered devices). In another variant, the transcoder 202 may be given a pre-configured set of rules for transcoding entered either by the subscriber or by a network operator. For example, it may be specified that all content which is to be stored at the video storage 208 be in a specific format (e.g., H.264). In another example, it may be that only content for which a current request from a rendering device has been received is to be transcoded, while all other content is stored as is in the video storage 208.
  • When it is determined that the content is in an appropriate format based on the capabilities of the devices which have requested the content or are in the network and/or the aforementioned rules, the content is placed in video storage 208. When it is determined that the content is not in an appropriate format, the content is partially decoded via stream processing (step 610), entropy decoding (step 612), and matrix retransformation (step 614), then repackaged (step 616) and placed in storage 208. The stream processing (step 610), entropy decoding (step 612), matrix retransformation (step 614), and repackaging (step 616) will be discussed in greater detail below with respect to FIGS. 6 a-6 e.
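The routing decision of steps 606-616 can be summarized in a short dispatch function. The names below (`route_content`, the `format` key, the string return values) are hypothetical conveniences, not part of the disclosure; the sketch only illustrates the branch between direct storage and the partial-decode pipeline.

```python
def route_content(content, device_formats, storage_rules=("H.264",)):
    """Decide whether received content can be stored as-is (step 608) or must
    pass through the partial-decode/repackage pipeline (steps 610-616).

    device_formats: formats decodable by the registered rendering devices.
    storage_rules: formats permitted in video storage by subscriber/operator
    rules (illustrative default).
    """
    acceptable = set(storage_rules) | set(device_formats)
    if content["format"] in acceptable:
        return "video_storage"      # already appropriate: store directly
    return "partial_decode"         # stream processing, entropy decoding,
                                    # matrix retransformation, repackaging
```

For instance, an H.264 stream arriving for an H.264-capable device goes straight to video storage, while an H.262 stream is routed into the partial decode path.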
  • In an alternative embodiment, an intermediary device may be provided either at the network edge or within the user's premises which initially receives and evaluates the content stream. For example, a premises gateway apparatus may be disposed between the network and the premises transcoder 202. In this instance it is the gateway (or other intermediary entity) which causes the metadata relating to the received content to be stored (step 604), determines whether the received content is in an appropriate format (step 606) and directs the content to video storage 208 or to be partially decoded (steps 610-616).
  • In one variant, content is held in temporary storage 210 prior to being transcoded to one or more new formats. The formats to which content are to be repackaged into using the lightweight repackaging solutions discussed herein are determined based on e.g., the capabilities of a requesting device, the capabilities of all of the devices associated or registered to the subscriber, and/or one or more user or network-established rules for transcoding. Accordingly, particular content may be transcoded into more than one new format to accommodate the capabilities of all of the devices within the network. Alternatively, a single format may be selected for use within the premises network, and the particular content is repackaged to only that format. Exemplary repackaging techniques which may be utilized to transform from H.261/H.262 to H.264 or H.265 are discussed elsewhere herein and may be used with equal success in accordance with the method of FIG. 6.
  • The one or more transcoded content versions are then placed in video storage 208 alongside the content which was received already in the appropriate format (discussed above). In this manner, the system creates a video storage 208 having only content which can be delivered to requesting devices. Stated differently, all content which is received in an inappropriate format is only temporarily stored then saved to more permanent storage upon transcoding thereof.
  • Finally, at step 618, content which was placed in the video storage 208 is delivered to a rendering device 204. The delivery may occur at a time of request thereof by the rendering device 204 or may be pre-scheduled by the rendering device 204 (or other device on behalf of the rendering device).
  • As noted elsewhere herein, the disclosed method 600 may be performed on live broadcast content which is streamed to the transcoder 202 for immediate repackaging and delivery of the content in near real-time.
  • Referring now to FIG. 6 a, a specific variant of an exemplary method 620 for performing lightweight transcoding according to the present disclosure is given. As shown, the method generally comprises receiving an H.262 input at step 621. It is appreciated, however, that the input signal may comprise an H.261 input in another alternative embodiment. The input stream is first processed including e.g., dequantization (step 622) such that the nonlinear signals are reconstructed. This may occur using e.g., smooth and/or step signal reconstruction. Alternative mechanisms for dequantization which are well known in the art may also be utilized.
  • Entropy decoding is applied to the dequantized stream (step 612). As will be discussed in further detail elsewhere herein (see e.g., FIG. 6 c and discussion relating thereto), entropy decoding may include translation to obtain DCT coefficients which are later used for re-encoding according to either CAVLC or CABAC.
  • At step 624 it is determined whether the content is to be transcoded (via the lightweight transcoder) in so-called near real time. In the instance the content is not required to be transcoded immediately (i.e., transcoding is deferred), it is placed in storage at step 626. The storage entity used for deferred transcoding may comprise the temporary storage 210, video storage 208, and/or another storage entity (not shown). Content which is to be transcoded in near real time is placed in temporary storage 210.
  • At the time determined to begin transcoding (either in near real time or at some deferred time), a profile is selected (per step 627) to correspond to the appropriate device and/or user. In the illustrated example, Profile A 627 a, Profile B 627 b, through Profile N 627 n may be selected. Once the appropriate profile is selected, a quantization remapping is performed (step 628) to process the signal in preparation for retransformation (step 614), which will be discussed in further detail below.
  • Finally, repackaging of the stream is performed, which may include adding new motion vectors 630 and encoding new entropy values 631, to create an H.264 or H.265 output at step 632.
  • FIG. 6 b illustrates one exemplary method for stream processing 610 according to the present disclosure. As shown, an H.262 (or H.261) stream is input from temporary storage 210 at step 621. The stream assembler receives the input at step 641 and determines whether a GOP header is present (step 642) and if so generates a GOV header therefrom (step 643 a). After the GOP header has been removed, it is determined whether a picture header is present (step 644) and if so a VOP header is created from the picture header information (step 643 b). After the picture header has been removed, it is determined whether a slice header is present (step 646) and if so the slice header is adapted (step 643 c). The new headers 643 a, 643 b, and 643 c are then stored in temporary storage 210 and are utilized in repackaging (discussed below).
  • The header-less data is processed using an MB Data processor 648. It is determined at step 649 whether MB data processing is completed and if not, the process continues again at the stream assembler (step 641). When the MB data processing is complete, the processed data is placed in temporary storage 210 and the process proceeds to the repackager 203 for entropy decoding 612 (as discussed below).
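One pass of the FIG. 6 b stream assembler can be sketched as a header-mapping loop. The unit records and the `kind` field below are simplified stand-ins for real bitstream parsing, used only to illustrate the GOP-to-GOV, picture-to-VOP, and slice-adaptation flow:

```python
def process_headers(units):
    """Map each H.262 header type to its replacement (steps 642-646/643a-c)
    and collect the header-less macroblock data for the MB data processor.

    units: list of dicts with a hypothetical "kind" key.
    Returns (new_headers, mb_data).
    """
    mapping = {"GOP": "GOV", "picture": "VOP", "slice": "slice_adapted"}
    new_headers, mb_data = [], []
    for unit in units:
        if unit["kind"] in mapping:
            # Generate the replacement header and drop the original.
            new_headers.append({"kind": mapping[unit["kind"]], "src": unit})
        else:
            mb_data.append(unit)    # header-less data for MB processing
    return new_headers, mb_data
```

Feeding a GOP header, a picture header, a slice header, and a block of MB data through this loop yields a GOV header, a VOP header, an adapted slice header, and one MB-data unit, mirroring the figure.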
  • Referring now to FIG. 6 c, an exemplary entropy decoding method 612 is illustrated. As shown, per step 651, H.262 (or H.261) entropy values are obtained (such as from the data streams held in temporary storage 210). The entropy values are decoded at step 652 using Huffman decoding, which is well known in the art, and DCT coefficients are obtained (step 653). The DCT coefficients are then transformed (step 654) to create new coefficients (step 655).
  • At step 656, it is determined whether an H.264 or H.265 stream is to be created. If the new codec is to be H.264, context-adaptive variable length coding (CAVLC) entropy coding is performed at step 657 and new entropy values are output at step 659. CAVLC is a well-known form of entropy coding used for H.264 video encoding. In the present example, it is used to encode residual, zig-zag order blocks of transform coefficients. Alternatively, if the new codec is to be H.265, context-adaptive binary arithmetic coding (CABAC) entropy coding is performed at step 658 and new entropy values are output at step 659. CABAC is a well-known form of entropy coding used for H.265 video encoding. The new entropy values are utilized in repackaging (discussed below).
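The codec branch at step 656 can be sketched as follows. The entropy coder functions here are deliberately placeholders (real CAVLC and CABAC implementations are far more involved); the sketch only shows the dispatch between the two target codecs described above.

```python
def reencode_entropy(dct_coefficients, target_codec):
    """Select the entropy coder per step 656: CAVLC for an H.264 output,
    CABAC for an H.265 output. Returns (coder_name, encoded_values)."""
    if target_codec == "H.264":
        return ("CAVLC", cavlc_encode(dct_coefficients))
    if target_codec == "H.265":
        return ("CABAC", cabac_encode(dct_coefficients))
    raise ValueError("unsupported target codec: %s" % target_codec)

def cavlc_encode(coeffs):
    # Placeholder: a real CAVLC coder would emit context-adaptive VLC codes.
    return list(coeffs)

def cabac_encode(coeffs):
    # Placeholder: a real CABAC coder would emit binary arithmetic codes.
    return list(coeffs)
```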
  • FIG. 6 d illustrates one exemplary method for matrix retransformation 614. As shown, per step 662, a DCT cosine matrix is obtained by the MB data processor 648 from the streams in temporary storage 210. A transformation is applied at step 663 either from a selected profile 627 or from among one of a plurality of pre-determined transforms 664. A new cosine matrix is thereby generated (step 665) and placed in temporary storage 210. The new cosine matrix is utilized in repackaging (discussed below).
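The matrix retransformation of FIG. 6 d can be modeled as applying a transform T to the recovered cosine matrix M as T·M·Tᵀ. The plain-Python matrix arithmetic below is purely illustrative (a real implementation would use the codecs' integer transforms), and the function names are assumptions:

```python
def retransform(dct_matrix, transform):
    """Apply a retransformation T * M * T^T to a recovered DCT cosine matrix
    (step 663), yielding the new cosine matrix of step 665."""
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
                 for j in range(len(b[0]))] for i in range(len(a))]
    t_transpose = [list(row) for row in zip(*transform)]
    return matmul(matmul(transform, dct_matrix), t_transpose)
```

As a sanity check, applying the identity transform (one of the possible "pre-determined transforms 664") leaves the cosine matrix unchanged.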
  • FIG. 6 e illustrates the final repackaging process 616 associated with the lightweight transcoding discussed herein. As shown, the new headers 643 a, 643 b, and 643 c, new entropy values 659, and new matrices 665 are utilized such that data synch timestamps are arranged to match those of the original H.262 stream input (step 671). At step 672, network abstraction layer (NAL) packets are created in accordance with H.264 coding standards. Finally, at step 673 a new H.264 stream with the desired profile is output. It is appreciated, however, that similar repackaging techniques may be utilized to generate an H.265 stream output as well.
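The NAL packet creation at step 672 can be illustrated with a minimal Annex B byte-stream wrapper: a start code, a one-byte NAL header (forbidden bit 0, `nal_ref_idc`, `nal_unit_type`), then the payload. This is a simplified sketch; emulation prevention bytes and the full H.264 syntax are omitted.

```python
def make_nal_unit(nal_unit_type, payload, nal_ref_idc=3):
    """Wrap a payload as an H.264 Annex B NAL unit: a 4-byte start code
    (0x00000001), then the NAL header byte, then the raw payload bytes."""
    header = (nal_ref_idc << 5) | (nal_unit_type & 0x1F)
    return b"\x00\x00\x00\x01" + bytes([header]) + payload
```

For example, an IDR slice (NAL unit type 5) with reference indicator 3 yields a header byte of 0x65, a pattern commonly seen at key frame boundaries in H.264 byte streams.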
  • FIG. 7 illustrates another exemplary process 700 for the partial data decoding discussed at step 610 of FIG. 6 and FIG. 8 illustrates another exemplary process 800 for the repackaging of data discussed at step 616 of FIG. 6 and delivery of repackaged content to a rendering device discussed at step 618 of FIG. 6. It is appreciated that although the illustrated embodiments specifically discuss H.262 to H.264 transcoding and delivery of H.264 content, the apparatus and processes disclosed herein are equally applicable to transcoding from any of the given formats to another one of the given formats, the foregoing being exemplary of the overall concepts disclosed herein.
  • As shown in FIG. 7, the partial decode occurs when a transport stream is received within a premises network. Metadata relating to the stream is stored at metadata storage 206 per 701. Per 703, the stream is passed to an entity for determining whether it is in an appropriate format (in the given example, H.264 format). As noted above, the entity charged with this evaluation may comprise a gateway entity within the premises, the lightweight transcoder (whether in the premises or at a network), or other network entity.
  • If the content is in the appropriate (H.264) format, it is passed at 705 to the video storage apparatus 208. If the content is not in the appropriate format, it is passed to a variable length decoder 702. The variable length decoder 702 is, in one embodiment, a software application run on the lightweight transcoder 202. Alternatively, the variable length decoder 702 may be run on another device within the premises (e.g., the gateway apparatus, not shown) or at the network edge.
  • The variable length decoder 702 decompresses the received content into an intermediate format represented by the data obtained from the decompression techniques 709. Specifically, in a first decompression technique, DCT coefficients for I-frames and B- and P-frames are derived to arrive at the complete set of coefficients for those respective frames. It is noted that an inverse DCT algorithm is, in one embodiment, specifically not utilized so as to conserve processing resources. That end result is then used to create the transforms used for the H.264 (or H.265) output. In another decompression technique, field and frame motion vectors are extracted from the compressed motion data (which describes object change from frame to frame). Next, picture information is obtained to determine which frames are interlaced, bi-directional, or progressive. Finally, group of pictures (GOP) information is obtained from the compressed data which indicates timestamps for each frame.
  • Once the data is decompressed, it is stored at the transcoder temporary storage apparatus 210. The temporary storage entity 210 is, in one embodiment, large enough to accommodate data to enable time-shifting for twice the amount of time required for all transformation operations for a given device to be completed.
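The temporary storage sizing rule just described ("twice the amount of time required for all transformation operations") can be expressed as simple arithmetic. The function name and the assumption of a known peak bitrate are hypothetical conveniences for illustration:

```python
def temp_storage_bytes(max_bitrate_bps, transform_time_s):
    """Size the temporary storage 210 to hold data for twice the time all
    transformation operations take for a given device, per the text."""
    buffered_seconds = 2 * transform_time_s
    return int(buffered_seconds * max_bitrate_bps / 8)  # bits -> bytes
```

For example, at an 8 Mbps peak input rate with 30 seconds of total transformation time, the rule calls for roughly 60 MB of temporary storage.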
  • Referring now to FIG. 8, repackaging and delivery of data is illustrated. As shown, the data 709 which is stored in temporary storage 210 comprises at least frame and field motion vectors, frame and field DCTs, picture information and GOP information. The data 709 is transmitted to a lightweight transcoder entity 202 for motion, texture, and shape coding 802 to arrive at repackaged data. The motion coder determines where and how sets of blocks have moved from frame to frame and uses this information to generate compressed data. The texture coder uses the DCTs to create a compressed signal by identifying information which has changed (other than motion). Finally, a shape coder is used to force the data into an H.264 shape. In one embodiment, the shape used is a rectangle, thereby causing the rendering device 204 to decode the entire screen. The repackaging process discussed herein may occur immediately upon receipt of content at the temporary storage 210 (so as to provide near-live streaming of received content) and/or upon user request.
  • In the illustrated example, metadata stored at the metadata storage entity 206 is transformed from an original video profile to an output video profile 809 by adding mapping information and information regarding the profiles supported. The output video profile and the repackaged data are then provided (at 805) to a multiplexer entity 804 of the transcoder 202. In an alternate embodiment, the multiplexer 804 may be separate from the transcoder 202 yet in communication therewith. The multiplexer 804 causes the metadata and repackaged content to be provided as a single data stream 803 to a rendering device 204 or to storage 208 for subsequent delivery to a capable rendering device 204 (i.e., a rendering device which is configured to decode and display (or cause to be displayed) H.264 content in the present example).
  • Although the foregoing example of FIGS. 7-8 illustrated specifically H.262 to H.264 transcoding, it is appreciated that any of the herein disclosed lightweight transcoding schemes, including but not limited to those discussed in FIGS. 3-5 above, may be utilized consistent with the present invention. In other words, the partial decode and subsequent repackaging of the received content may occur in any manner which accomplishes the overall schemes identified in FIGS. 3-5.
  • Lightweight Transcoder Apparatus—
  • FIG. 9 illustrates an exemplary lightweight transcoder apparatus 202. As shown the apparatus 202 generally comprises a network interface 902, a processor 904, a plurality of backend interfaces 906, and memory 908.
  • The network interface 902 is configured to enable communication between the lightweight transcoder 202 and the content delivery network. The transcoder receives data from and communicates to various network entities via the interface 902. Communication may be effected via any signal or data interface including, e.g., a radio frequency tuner (e.g., in-band or OOB, cable modem, etc.), Wi-Fi, and/or Wi-MAX, etc. In one embodiment, in addition to the programming content, one or more of the lightweight transcoder applications discussed herein are provided via the network interface 902.
  • The backend interfaces 906 are configured to enable communication between the transcoder apparatus 202 and the various premises network devices including e.g., metadata storage 206, video storage 208, temporary storage 210, and a plurality of rendering devices 204. Communication is enabled via e.g., Firewire, USB, Ethernet, MoCA, Wi-Fi, Wi-MAX, etc. interfaces.
  • The storage apparatus 908 is configured to store a plurality of information used by the transcoder 202. For example, information relating each rendering device 204 to a particular user or subscriber account may be stored. Additionally, information relating to the capabilities of each of the registered rendering devices may also be stored. Moreover, content requests and scheduling data for each rendering device 204 are also stored.
  • The digital processor 904 of the transcoder apparatus 202 is configured to run a plurality of software applications thereon. A decoder application 702, an encoder application 802, a multiplexer 804, and a scheduler 910 are illustrated; however, other applications necessary to complete the herein described lightweight transcoding process may also be provided. Alternatively, one or more of the decoder 702, the encoder 802, the multiplexer 804, and/or the scheduler 910 may be configured to run on a device which is separate from yet in communication with the transcoder apparatus 202.
  • The decoder application 702 is a software application which enables the transcoder 202 to partially decode received content as discussed elsewhere herein. Specifically, the decoder application 702 unpackages the received content into an intermediate format represented by the data obtained from one or more techniques. In one specific embodiment, the decoder application 702 utilizes one or more of a DCT algorithm, a field and frame motion vector extraction algorithm, and decompression to obtain picture information and GOP information. The decompressed intermediate data structure is stored in the temporary storage 210 via transmission thereto via the appropriate backend interface 906.
  • The encoder application 802 is a software application which enables the transcoder 202 to repackage the partially decoded data structure generated by the decoder application 702. In one variant, the encoder application performs motion, texture, and shape coding of the content to arrive at repackaged data. In another alternative, the repackaging techniques discussed herein with respect to FIGS. 3-5 are performed by the encoder application 802 to encode the content.
  • The multiplexer application 804 is a software application which enables output video profile data and the repackaged content to be provided as a single data stream to a rendering device 204 or to a storage apparatus 208 (for subsequent delivery to a capable rendering device 204).
  • Finally, the scheduler application 910 is a software application which generates a user interface by which a user of a rendering device 204 may define a date and/or time at which content is to be delivered. For example, a user of the rendering device 204 may access the scheduler application 910 to determine that particular content is broadcast at 8:00 pm, Tuesday. The scheduler then may utilize the previously disclosed look-ahead features to predict a delay time associated with transcoding the particular content (based on its bitrate requirement, length, etc.). Alternatively, delay information may simply be provided to the scheduler 910 from a network entity. In either instance, the delay is added to the broadcast time; thus the user may select to have delivery of the content at, e.g., 8:01 pm, Tuesday (after the appropriate delay time has elapsed). Prior to the time for delivery selected by the user, the scheduler application 910 causes the transcoder to obtain the desired content and begin transcoding. The time at which the transcoding is scheduled to occur may coincide with the amount of time of the delay associated with the transcoding process.
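The scheduler's delay arithmetic described above (broadcast time plus predicted transcoding delay equals earliest delivery time) reduces to a simple time calculation. The function name is an assumption; the example mirrors the 8:00 pm broadcast / 8:01 pm delivery case in the text:

```python
from datetime import datetime, timedelta

def earliest_delivery(broadcast_time, transcode_delay_s):
    """Add the predicted transcoding delay to the broadcast time to obtain
    the earliest delivery time the scheduler can offer to the user."""
    return broadcast_time + timedelta(seconds=transcode_delay_s)
```

With a predicted 60-second delay, content broadcast at 8:00 pm Tuesday becomes selectable for delivery at 8:01 pm.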
  • It will be recognized that while certain aspects are described in terms of a specific sequence of steps of a method, these descriptions are only illustrative of the broader methods of the disclosure, and may be modified as required by the particular application. Certain steps may be rendered unnecessary or optional under certain circumstances. Additionally, certain steps or functionality may be added to the disclosed embodiments, or the order of performance of two or more steps permuted. All such variations are considered to be disclosed and claimed herein.
  • While the above detailed description has shown, described, and pointed out novel features of the disclosure as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the device or process illustrated may be made by those skilled in the art without departing from the disclosure. The foregoing description is of the best mode presently contemplated of carrying out the techniques and architectures disclosed herein. This description is in no way meant to be limiting, but rather should be taken as illustrative of the general principles of the disclosure. The scope of the disclosure should be determined with reference to the claims.

Claims (20)

What is claimed is:
1. A method of transcoding media data encoded according to a first format, the method comprising:
performing, using a decoding apparatus, a partial decoding of at least a portion of the media data to produce intermediate data and undecoded data;
performing at least one transcoding process on the intermediate data to produce transcoded data; and
combining the transcoded data and the undecoded data into a data structure which can then be decoded and rendered by a decoding apparatus according to a second format.
2. The method of claim 1, wherein the first format comprises a format that requires a higher communication bandwidth for transmission than the second format.
3. The method of claim 2, wherein the first format comprises an H.261 or H.262 Standard compliant format, and the second format comprises an H.264 Standard compliant format.
4. The method of claim 1, wherein the decoding apparatus comprises an apparatus having processing capability less than that needed to transcode said media data completely from said first format to said second format.
5. The method of claim 1, wherein the partial decoding comprises obtaining and storing a plurality of discrete cosine transforms.
6. The method of claim 5, wherein the performing at least one transcoding process comprises repackaging at least some of a plurality of frames present in the intermediate data to a single sequence, single object, single layer video object.
7. The method of claim 1, wherein the performing at least one transcoding process comprises repackaging at least some of a plurality of frames present in the intermediate data to a single sequence, single object, single layer video object.
8. Apparatus configured to decode content in a first format and encode said content in a second, different format in near-real time, the apparatus comprising:
data processor apparatus; and
storage apparatus in data communication with the data processor apparatus and having at least one computer program disposed thereon, the at least one program being configured to, when executed on the processor apparatus:
decode only a portion of the content encoded with the first codec to produce a decoded content portion and a plurality of undecoded portions;
process at least part of the decoded content portion to produce a processed portion; and
combine the processed portion and the plurality of undecoded portions using at least a frame structure that is compatible with the second codec.
9. The apparatus of claim 8, further comprising an interface configured to transmit the content compatible with the second codec to a user device for rendering thereon.
10. The apparatus of claim 8, wherein:
the decode of only a portion comprises obtaining a plurality of discrete cosine transforms and storage of the obtained discrete cosine transforms in a temporary storage of the apparatus.
11. Computer readable apparatus comprising a storage medium, the storage medium configured to store a plurality of data, the plurality of data comprising media data that has a portion that has been transcoded between a first and second encoding format, and a portion which has not been transcoded from the first format to the second format;
wherein the plurality of data can be used by a processing apparatus in communication with the computer readable apparatus to render the media data compliant with the second format on a rendering device.
12. The apparatus of claim 11, wherein the computer readable apparatus comprises a NAND Flash memory integrated circuit, and the processing apparatus comprises a digital processor, the integrated circuit and the processor being part of a mobile wireless enabled user device.
13. The apparatus of claim 11, wherein the transcoded portion comprises media data that was received via a wireless interface of the user device, and then transcoded and stored in the storage medium.
14. The apparatus of claim 11, wherein the transcoded portion comprises media data that has a frame structure different than the frame structure of the media data prior to transcoding.
15. A method of providing content compatible with a second codec from content encoded with a first codec, the method comprising:
decoding only a portion of the content encoded with the first codec to produce a decoded content portion and a plurality of undecoded portions; and
processing at least part of the decoded content portion, and combining the processed at least part and the plurality of undecoded portions so as to produce the content compatible with the second codec.
16. The method of claim 15, wherein the decoding and processing are accomplished using non-application specific processing apparatus that can also be used to decode and render content compatible with the first codec.
17. The method of claim 16, wherein the non-application specific processing apparatus comprises a legacy subscriber premises device, the first codec comprises an H.262 codec, and the second codec comprises an H.264 codec.
18. The method of claim 16, wherein the decoding comprises extracting at least some of a plurality of discrete cosine transforms (DCTs) present in the content encoded with the first codec, and the processing and combining comprise disposing at least some of the extracted DCTs into a frame structure compatible with the second codec.
19. The method of claim 16, wherein the decoding and processing are accomplished by the non-application specific processing apparatus primarily in software.
20. The method of claim 16, wherein the decoding and processing are conducted in near-real time.
US14/452,359 2014-08-05 2014-08-05 Apparatus and methods for lightweight transcoding Abandoned US20160041993A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US14/452,359 US20160041993A1 (en) 2014-08-05 2014-08-05 Apparatus and methods for lightweight transcoding
US16/538,714 US20200034332A1 (en) 2014-08-05 2019-08-12 Apparatus and methods for lightweight transcoding

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14/452,359 US20160041993A1 (en) 2014-08-05 2014-08-05 Apparatus and methods for lightweight transcoding

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/538,714 Division US20200034332A1 (en) 2014-08-05 2019-08-12 Apparatus and methods for lightweight transcoding

Publications (1)

Publication Number Publication Date
US20160041993A1 true US20160041993A1 (en) 2016-02-11

Family

ID=55267539

Family Applications (2)

Application Number Title Priority Date Filing Date
US14/452,359 Abandoned US20160041993A1 (en) 2014-08-05 2014-08-05 Apparatus and methods for lightweight transcoding
US16/538,714 Pending US20200034332A1 (en) 2014-08-05 2019-08-12 Apparatus and methods for lightweight transcoding

Family Applications After (1)

Application Number Title Priority Date Filing Date
US16/538,714 Pending US20200034332A1 (en) 2014-08-05 2019-08-12 Apparatus and methods for lightweight transcoding

Country Status (1)

Country Link
US (2) US20160041993A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10097594B1 (en) * 2017-08-31 2018-10-09 T-Mobile Usa, Inc. Resource-managed codec selection
US20190222893A1 (en) * 2018-01-16 2019-07-18 Dish Network L.L.C. Preparing mobile media content
US20190317484A1 (en) * 2016-04-05 2019-10-17 Wellaware Holdings, Inc. Device for monitoring and controlling industrial equipment
US10579050B2 (en) 2016-04-05 2020-03-03 Wellaware Holdings, Inc. Monitoring and controlling industrial equipment
US10652761B2 (en) 2016-04-05 2020-05-12 Wellaware Holdings, Inc. Monitoring and controlling industrial equipment
US11336710B2 (en) * 2017-06-16 2022-05-17 Amazon Technologies, Inc. Dynamically-generated encode settings for media content
US20220210492A1 (en) * 2020-12-30 2022-06-30 Comcast Cable Communications, Llc Systems and methods for transcoding content

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10958948B2 (en) 2017-08-29 2021-03-23 Charter Communications Operating, Llc Apparatus and methods for latency reduction in digital content switching operations

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5341318A (en) * 1990-03-14 1994-08-23 C-Cube Microsystems, Inc. System for compression and decompression of video data using discrete cosine transform and coding techniques
US20030002583A1 (en) * 2001-06-30 2003-01-02 Koninklijke Philips Electronics N.V. Transcoding of video data streams
US20040081242A1 (en) * 2002-10-28 2004-04-29 Amir Segev Partial bitstream transcoder system for compressed digital video bitstreams
US20040151249A1 (en) * 2001-05-29 2004-08-05 Anthony Morel Method and device for video transcoding
US20040208247A1 (en) * 2001-07-10 2004-10-21 Eric Barrau Method and device for generating a scalable coded video signal from a non-scalable coded video signal
US20050132264A1 (en) * 2003-12-15 2005-06-16 Joshi Ajit P. System and method for intelligent transcoding
US20050232497A1 (en) * 2004-04-15 2005-10-20 Microsoft Corporation High-fidelity transcoding
US20070058718A1 (en) * 2005-09-14 2007-03-15 Microsoft Corporation Efficient integrated digital video transcoding
US20070177677A1 (en) * 2006-01-30 2007-08-02 Thomsen Jan H Systems and methods for transcoding bit streams
US20080253447A1 (en) * 2004-06-21 2008-10-16 Koninklijke Philips Electronics, N.V. Video Transcoding with Selection of Data Portions to be Processed
US20090006643A1 (en) * 2007-06-29 2009-01-01 The Chinese University Of Hong Kong Systems and methods for universal real-time media transcoding
US20090122863A1 (en) * 2007-11-09 2009-05-14 Paltronics, Inc. Video transcoding techniques for gaming networks, and gaming networks incorporating the same
US20090265617A1 (en) * 2005-10-25 2009-10-22 Sonic Solutions, A California Corporation Methods and systems for use in maintaining media data quality upon conversion to a different data format
US20100075606A1 (en) * 2008-09-24 2010-03-25 Cambridge Silicon Radio Ltd. Selective transcoding of encoded media files
US20120002728A1 (en) * 2006-03-29 2012-01-05 Alexandros Eleftheriadis System and method for transcoding between scalable and non-scalable video codecs
US20120124636A1 (en) * 2010-11-17 2012-05-17 General Instrument Corporation System and Method for Selectively Transcoding Signal from One Format to One of Plurality of Formats
US20140019635A1 (en) * 2012-07-13 2014-01-16 Vid Scale, Inc. Operation and architecture for dash streaming clients
US20140282789A1 (en) * 2013-03-14 2014-09-18 Comcast Cable Communications, Llc Allocation of Clamping Functionality
US20150281751A1 (en) * 2014-03-31 2015-10-01 Arris Enterprises, Inc. Adaptive streaming transcoder synchronization

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6647061B1 (en) * 2000-06-09 2003-11-11 General Instrument Corporation Video size conversion and transcoding from MPEG-2 to MPEG-4
US7403564B2 (en) * 2001-11-21 2008-07-22 Vixs Systems, Inc. System and method for multiple channel video transcoding
KR20040085890A (en) * 2003-04-02 2004-10-08 삼성전자주식회사 Digital recording/reproducing apparatus for providing timeshift function, and the method thereof
US7924913B2 (en) * 2005-09-15 2011-04-12 Microsoft Corporation Non-realtime data transcoding of multimedia content
CA2656922A1 (en) * 2006-06-16 2007-12-27 Droplet Technology, Inc. System, method, and apparatus of video processing and applications
US20080120676A1 (en) * 2006-11-22 2008-05-22 Horizon Semiconductors Ltd. Integrated circuit, an encoder/decoder architecture, and a method for processing a media stream
US8380864B2 (en) * 2006-12-27 2013-02-19 Microsoft Corporation Media stream slicing and processing load allocation for multi-user media systems
EP2577489A4 (en) * 2010-06-02 2014-09-10 Onmobile Global Ltd Method and apparatus for adapting media
US9843844B2 (en) * 2011-10-05 2017-12-12 Qualcomm Incorporated Network streaming of media data
US9432704B2 (en) * 2011-11-06 2016-08-30 Akamai Technologies Inc. Segmented parallel encoding with frame-aware, variable-size chunking

Patent Citations (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5341318A (en) * 1990-03-14 1994-08-23 C-Cube Microsystems, Inc. System for compression and decompression of video data using discrete cosine transform and coding techniques
US20040151249A1 (en) * 2001-05-29 2004-08-05 Anthony Morel Method and device for video transcoding
US20030002583A1 (en) * 2001-06-30 2003-01-02 Koninklijke Philips Electronics N.V. Transcoding of video data streams
US20040208247A1 (en) * 2001-07-10 2004-10-21 Eric Barrau Method and device for generating a scalable coded video signal from a non-scalable coded video signal
US20040081242A1 (en) * 2002-10-28 2004-04-29 Amir Segev Partial bitstream transcoder system for compressed digital video bitstreams
US20050132264A1 (en) * 2003-12-15 2005-06-16 Joshi Ajit P. System and method for intelligent transcoding
US20050232497A1 (en) * 2004-04-15 2005-10-20 Microsoft Corporation High-fidelity transcoding
US20080253447A1 (en) * 2004-06-21 2008-10-16 Koninklijke Philips Electronics, N.V. Video Transcoding with Selection of Data Portions to be Processed
US20070058718A1 (en) * 2005-09-14 2007-03-15 Microsoft Corporation Efficient integrated digital video transcoding
US20090265617A1 (en) * 2005-10-25 2009-10-22 Sonic Solutions, A California Corporation Methods and systems for use in maintaining media data quality upon conversion to a different data format
US20070177677A1 (en) * 2006-01-30 2007-08-02 Thomsen Jan H Systems and methods for transcoding bit streams
US20120002728A1 (en) * 2006-03-29 2012-01-05 Alexandros Eleftheriadis System and method for transcoding between scalable and non-scalable video codecs
US20090006643A1 (en) * 2007-06-29 2009-01-01 The Chinese University Of Hong Kong Systems and methods for universal real-time media transcoding
US20090122863A1 (en) * 2007-11-09 2009-05-14 Paltronics, Inc. Video transcoding techniques for gaming networks, and gaming networks incorporating the same
US20100075606A1 (en) * 2008-09-24 2010-03-25 Cambridge Silicon Radio Ltd. Selective transcoding of encoded media files
US20120124636A1 (en) * 2010-11-17 2012-05-17 General Instrument Corporation System and Method for Selectively Transcoding Signal from One Format to One of Plurality of Formats
US20140380398A1 (en) * 2010-11-17 2014-12-25 Motorola Mobility Llc System and Method for Selectively Transcoding Signal From One Format to One of Plurality of Formats
US20140019635A1 (en) * 2012-07-13 2014-01-16 Vid Scale, Inc. Operation and architecture for dash streaming clients
US20140282789A1 (en) * 2013-03-14 2014-09-18 Comcast Cable Communications, Llc Allocation of Clamping Functionality
US20150281751A1 (en) * 2014-03-31 2015-10-01 Arris Enterprises, Inc. Adaptive streaming transcoder synchronization

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11513503B2 (en) 2016-04-05 2022-11-29 Wellaware Holdings, Inc. Monitoring and controlling industrial equipment
US11086301B2 (en) * 2016-04-05 2021-08-10 Wellaware Holdings, Inc. Monitoring and controlling industrial equipment
US20190317484A1 (en) * 2016-04-05 2019-10-17 Wellaware Holdings, Inc. Device for monitoring and controlling industrial equipment
US10579050B2 (en) 2016-04-05 2020-03-03 Wellaware Holdings, Inc. Monitoring and controlling industrial equipment
US10652761B2 (en) 2016-04-05 2020-05-12 Wellaware Holdings, Inc. Monitoring and controlling industrial equipment
US10698391B2 (en) * 2016-04-05 2020-06-30 Wellaware Holdings, Inc. Device for monitoring and controlling industrial equipment
US11336710B2 (en) * 2017-06-16 2022-05-17 Amazon Technologies, Inc. Dynamically-generated encode settings for media content
US11916992B2 (en) 2017-06-16 2024-02-27 Amazon Technologies, Inc. Dynamically-generated encode settings for media content
US10097594B1 (en) * 2017-08-31 2018-10-09 T-Mobile Usa, Inc. Resource-managed codec selection
US10764633B2 (en) * 2018-01-16 2020-09-01 Dish Network L.L.C. Preparing mobile media content
US20190222893A1 (en) * 2018-01-16 2019-07-18 Dish Network L.L.C. Preparing mobile media content
US11330330B2 (en) * 2018-01-16 2022-05-10 Dish Network L.L.C. Preparing mobile media content
US20220248086A1 (en) * 2018-01-16 2022-08-04 Dish Network L.L.C. Preparing mobile media content
US11750880B2 (en) * 2018-01-16 2023-09-05 Dish Network L.L.C. Preparing mobile media content
US20230379534A1 (en) * 2018-01-16 2023-11-23 Dish Network L.L.C. Preparing mobile media content
US12149782B2 (en) * 2018-01-16 2024-11-19 Dish Network L.L.C. Preparing mobile media content
US20220210492A1 (en) * 2020-12-30 2022-06-30 Comcast Cable Communications, Llc Systems and methods for transcoding content

Also Published As

Publication number Publication date
US20200034332A1 (en) 2020-01-30

Similar Documents

Publication Publication Date Title
US20200034332A1 (en) Apparatus and methods for lightweight transcoding
US11695994B2 (en) Cloud-based digital content recorder apparatus and methods
US12342007B2 (en) Apparatus and methods for latency reduction in digital content switching operations
US20220248108A1 (en) Apparatus and methods for thumbnail generation
US9832534B2 (en) Content transmission device and content playback device
US10085047B2 (en) Methods and apparatus for content caching in a video network
KR101354833B1 (en) Techniques for variable resolution encoding and decoding of digital video
US8869218B2 (en) On the fly transcoding of video on demand content for adaptive streaming
KR100928998B1 (en) Adaptive Multimedia System and Method for Providing Multimedia Contents and Codecs to User Terminals
JP2018507591A (en) Interlayer prediction for scalable video encoding and decoding
TW201415901A (en) Sequence level flag for sub-picture level coded picture buffer parameters
CN102907096A (en) Method and device for sending and receiving layered encoded video
CA2843718C (en) Methods and systems for processing content
JP5734699B2 (en) Super-resolution device for distribution video
JP5027657B2 (en) Method and apparatus for supplying data to a decoder
Nightingale et al. Video adaptation for consumer devices: opportunities and challenges offered by new standards
Biatek et al. Versatile video coding for 3.0 next generation digital TV in Brazil
US20170347138A1 (en) Efficient transcoding in a network transcoder
US20240179202A1 (en) Method for media stream processing and apparatus for implementing the same
US10567703B2 (en) High frame rate video compatible with existing receivers and amenable to video decoder implementation
CN117178554A (en) Message referencing
Roy Implementation of a Personal Digital Radio Recorder for Digital Multimedia Broadcasting by Adapting the Open-Source Personal Digital Video Recorder Software MythTV
JP2016100831A (en) Image encoder and image encode method

Legal Events

Date Code Title Description
AS Assignment

Owner name: TIME WARNER CABLE ENTERPRISES LLC, NEW YORK

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MAYNARD, STEPHEN;HALLOCK, TREVER;NIELSEN, NICHOLAS;AND OTHERS;SIGNING DATES FROM 20140729 TO 20140731;REEL/FRAME:033624/0107

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS ADMINISTRATIVE AGENT, NEW YORK

Free format text: SECURITY INTEREST;ASSIGNORS:BRIGHT HOUSE NETWORKS, LLC;CHARTER COMMUNICATIONS OPERATING, LLC;TIME WARNER CABLE ENTERPRISES LLC;REEL/FRAME:038747/0507

Effective date: 20160518


AS Assignment

Owner name: TIME WARNER CABLE ENTERPRISES LLC, MISSOURI

Free format text: CHANGE OF ADDRESS;ASSIGNOR:TIME WARNER CABLE ENTERPRISES LLC;REEL/FRAME:044456/0167

Effective date: 20160601

AS Assignment

Owner name: THE BANK OF NEW YORK MELLON TRUST COMPANY, N.A., ILLINOIS

Free format text: SECURITY INTEREST;ASSIGNORS:ADCAST NORTH CAROLINA CABLE ADVERTISING, LLC;ALABANZA LLC;AMERICA'S JOB EXCHANGE LLC;AND OTHERS;SIGNING DATES FROM 20160518 TO 20180518;REEL/FRAME:046567/0090


AS Assignment

Owner name: WELLS FARGO TRUST COMPANY, N.A., UTAH

Free format text: SECURITY INTEREST;ASSIGNORS:BRIGHT HOUSE NETWORKS, LLC;CHARTER COMMUNICATIONS OPERATING, LLC;TIME WARNER CABLE ENTERPRISES LLC;AND OTHERS;REEL/FRAME:046630/0193

Effective date: 20180716

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION