
WO2025085063A1 - Video enhancement - Google Patents

Video enhancement

Info

Publication number
WO2025085063A1
WO2025085063A1 (PCT/US2023/035503)
Authority
WO
WIPO (PCT)
Prior art keywords
data
video
layers
blending
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
PCT/US2023/035503
Other languages
French (fr)
Inventor
Bang-Sian Liu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC filed Critical Google LLC
Priority to PCT/US2023/035503 priority Critical patent/WO2025085063A1/en
Priority to TW113139764A priority patent/TW202520700A/en
Publication of WO2025085063A1 publication Critical patent/WO2025085063A1/en
Pending legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/02Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
    • G09G5/026Control of mixing and/or overlay of colours in general
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/003Details of a display terminal, the details relating to the control arrangement of the display terminal and to the interfaces thereto
    • G09G5/005Adapting incoming signals to the display format of the display terminal
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/02Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
    • G09G5/024Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed using colour registers, e.g. to control background, foreground, surface filling
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/02Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
    • G09G5/06Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed using colour palettes, e.g. look-up tables
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/08Cursor circuits
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/14Display of multiple viewports
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/10Mixing of images, i.e. displayed pixel being the result of an operation, e.g. adding, on the corresponding input pixels
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/12Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/12Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels
    • G09G2340/125Overlay of images, i.e. displayed pixel being the result of switching between the corresponding input pixels wherein one of the images is motion video
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00Aspects of display data processing
    • G09G2340/14Solving problems related to the presentation of information to be displayed
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00Aspects of the architecture of display systems
    • G09G2360/16Calculation or use of calculated indices related to luminance levels in display data
    • GPHYSICS
    • G09EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09GARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2370/00Aspects of data communication
    • G09G2370/20Details of the management of multiple sources of image data

Definitions

  • This specification relates to video and image processing, and more particularly to systems and devices for video display processing.
  • Output video data that is viewed on display devices is often composited from multiple different input sources, which can include video data and non-video data, e.g., text and overlays, to name just a few examples.
  • Video display processing refers to the set of computational tasks for decoding multiple sources of video and non-video data (e.g., a stream of video data and a stream of image data representing a UI element to be displayed as an overlay), compositing these separate input sources together, and producing output video data in a format that can be read and displayed by display hardware.
  • video display processing includes a front-end stage that composites the multiple input sources and a back-end stage that applies a number of enhancements.
  • the back-end stage of video display processing often involves applying various image and video enhancements (e.g., converting between standard dynamic range (SDR) and high dynamic range (HDR) formats, upscaling to higher resolutions, applying denoising or sharpening filters, applying local contrast enhancement, etc.).
  • video display processing is often performed by specialized hardware, and reducing the complexity and computational burden of the video display processing pipeline, in terms of the time, memory, and power required to perform the display processing, is of great concern.
  • backend enhancements can become a computational bottleneck when different kinds of input data need to be treated differently in the backend enhancement stage.
  • a source video may require a local contrast enhancement that should not be applied to a static UI overlay.
  • Conventional approaches to backend enhancements have required additional or specialized hardware that increases the size and complexity of the device.
  • Backend enhancements can be applied to the conventionally composited output as a single post-processing stage, but this does not enable selective application of the enhancements and results in worse output image quality.
  • Utilizing separate components to apply individualized enhancements to the input sources allows for selective enhancement but increases device complexity and computational burden. It is possible to apply the desired enhancements outside of the video display processing pipeline, though such solutions introduce a significant computational burden. Therefore, a system for applying separate video enhancements to separate input sources for video display processing that repurposes aspects of existing designs and requires few additional processing components is desired.
  • the video display processing techniques described below blend multiple input video layers together to produce intermediate blended video data that includes modified alpha-channel information.
  • the method by which the input video layers are blended can, but need not, be a process of alpha-blending.
  • the modified alpha values encoded within the intermediate video data generally provide information regarding the composition of the input layers in different regions of the output video (e.g., the modified alpha values may describe a ratio of video stream data to UI data within a given region).
  • the intermediate video data can, but need not, compress the input video data to enable the use of a greater range of modified alpha values without consuming additional data bandwidth.
  • Video enhancements are applied to this intermediate video data based on enhancement settings that are determined by the modified alpha-channel.
  • the method by which enhancement settings are chosen based on modified alpha-channel information can, but need not, be a procedure of interpolation or of table look-up based on the modified-alpha values.
  • the input video layers can, but need not, be produced by a front-end processor that decodes source video and image data into the input video layer format.
  • the input video layer format can, but need not, include traditional alpha channel information.
  • video and image data mean any appropriate data that can be used to generate video or image rendering, respectively, and thus includes actual video and image formats as well as other binary information or intermediate representations that can be used for the same purpose.
  • the described systems provide a method for applying separate video enhancements to separate input sources for video display processing.
  • the described systems can perform a selective application of video enhancements, thereby improving output image quality.
  • by performing selective video enhancement as part of a post-processing back-end, the described systems have greatly reduced complexity and computational burden.
  • the described systems may utilize front-end components originally designed to perform traditional alpha blending. Therefore, compared to conventional video display processing systems, the described systems enable selective video enhancement processing while limiting the need to design and include costly specialized components.
  • FIG. 1 shows an example video enhancement system.
  • FIG. 2 is a diagram that illustrates an example blending process for producing blended video data containing a modified alpha value.
  • FIG. 3 shows an example backend display processing unit.
  • FIG. 4 is a flow diagram of an example process for video display processing.
  • FIG. 1 shows an example video enhancement system 100.
  • the video enhancement system 100 is configured to process source video data 102 in order to produce an enhanced video output 120 that is a processed composite of the source data 102.
  • the video enhancement system 100 includes a blending unit 112 and a backend display processing unit (DPU) 118.
  • the video system 100 is configured to composite together and to apply separate video enhancements to the multiple sources of video data 102.
  • the video system 100 is configured to generate and process a modified alpha value that governs the application of particular video enhancement configurations to particular sources of video data.
  • the blending unit 112 includes a sequence of blending modules 114A through 114N and includes a signal processing system 116.
  • Each of the blending modules 114A through 114N is configured to receive a layer of video data and to produce a blended output video that includes a modified alpha value that is used to determine how configurations of video enhancements are selected and applied to the blended output video.
  • the received layers of video data can be ARGB encoded video.
  • Each of the blending modules in the sequence is configured to additionally process the output video of the previous blending module to produce the blended output video.
  • the signal processing system 116 is configured to process the blended output video from the final blending module 114N and produce an intermediate blended video.
  • the blending modules 114A through 114N can process the received layers of video data using methods of alpha blending that the blending unit 112 determines for each blending module. For example, each blending module can determine the alpha blending method the blending module will use to blend a received video layer. As a further example, each blending module can select an appropriate compositing operator, such as the over, in, out, atop, and xor operators, from a table of compositing operators using the alpha value of the received video layer as a table lookup key.
  • each blending module can select an appropriate color blending mode, such as color differencing, color multiplication, color screening, and color overlay, from a table of color blending modes using the alpha value of the received video layer as a table lookup key.
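  • The table-driven selection described above can be sketched as follows. This is an illustrative sketch only: the quantization of the alpha value into a table key and the table contents are assumptions, since the specification only names the candidate operators (over, in, out, atop, xor) without defining the lookup scheme.

```python
def porter_duff_over(src_rgb, src_a, dst_rgb, dst_a):
    """Porter-Duff 'over': the source layer composited on top."""
    out_a = src_a + dst_a * (1.0 - src_a)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0), 0.0
    out_rgb = tuple(
        (s * src_a + d * dst_a * (1.0 - src_a)) / out_a
        for s, d in zip(src_rgb, dst_rgb)
    )
    return out_rgb, out_a

def porter_duff_in(src_rgb, src_a, dst_rgb, dst_a):
    """Porter-Duff 'in': the source, clipped to the destination's coverage."""
    return src_rgb, src_a * dst_a

# The (quantized) alpha of the incoming layer acts as the table lookup key.
# A two-entry table and a 0.5 threshold are purely illustrative choices.
OPERATORS = {0: porter_duff_in, 1: porter_duff_over}

def select_operator(layer_alpha):
    return OPERATORS[1 if layer_alpha >= 0.5 else 0]
```

  • Under this illustrative keying, an opaque layer (alpha 1.0) selects the over operator, while a mostly transparent layer selects in; a hardware blending module would typically realize the same dispatch as a small lookup table of blend-equation settings.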
  • the backend DPU 118 is configured to process the intermediate blended video received from the blending unit 112 and to produce the enhanced video output 120.
  • the backend DPU 118 can perform this processing by applying one or more enhancements to the processed video with enhancement configurations determined by the modified alpha values contained within the processed video.
  • the video enhancement system 100 includes a frontend DPU 104.
  • the frontend DPU 104 can decode received source video data 102 and produce corresponding video layers 106A through 106N.
  • the blending unit 112 can receive ARGB encoded video, and the front end DPU 104 can decode the source data 102 into ARGB-encoded video layers.
  • the video enhancement system 100 can be further configured to receive additional non-video source data.
  • the video enhancement system 100 is configured to receive additional non-video source data and where the video enhancement system 100 includes the frontend DPU 104
  • the frontend DPU 104 can be further configured to produce non-video UI layer data 108A through 108N.
  • the blending unit 112 and blending modules 114A through 114N can be further configured to process non-video data.
  • the blending modules 114A through 114N are configured to receive non-video data, they can be further configured to assign predetermined alpha values to the received non-video data before processing and blending the non-video data.
  • the video enhancement system 100 includes a multiplexer 110.
  • the multiplexer 110 is configured to route particular sources of data to particular blending modules in the blending unit 112.
  • the multiplexer 110 is configured to route particular data layers to particular blending modules within the blending unit 112.
  • the intermediate blended video output by the blending unit 112 can be produced in a compressed video format that encodes the modified alpha values alongside a compressed representation of the RGB data of the video.
  • the signal processing system 116 can perform the conversion to the compressed format.
  • the blending modules 114A through 114N can perform this conversion.
  • the backend DPU 118 can be configured to encode the enhanced output video in a particular format.
  • the enhanced output video can be encoded as ARGB video or RGB video.
  • FIG. 2 illustrates an example blending process for producing blended video data containing a modified alpha value.
  • the blending unit 112 is an example of a system that can be configured to perform the illustrated blending process.
  • a layer of video data 202 is blended with layers of UI data 204 and 206 to produce a blended video output 214.
  • Each of the input data layers 202, 204, and 206 encodes numerical alpha values alongside RGB data.
  • the blending modules 208 and 212 are each configured to process input video data to produce blended output data.
  • the blending module 208 blends the input video data 202 and the input UI data 204 to produce intermediate video data 210.
  • the blending module 212 blends the intermediate video data 210 and the input UI data 206 to produce the output blended video data 214.
  • the blended output video data 214 encodes modified alpha values and RGB video data.
  • the encoded RGB data of the output video data 214 is a composited blend of the RGB data of the input data layers 202, 204, and 206.
  • the encoded modified alpha values of the output video data 214 determine the configurations of video enhancements to be applied to the video data by a backend DPU, such as the backend DPU 118.
  • the blending processes applied by the blending modules 208 and 212 to produce output RGB data can be any appropriate mode of alpha blending.
  • the modified alpha values can be determined from input data following any procedure that results in the modified alpha values specifying suitable enhancement configurations.
  • the modified alpha values can be calculated so as to represent the ratio of video to UI data present within the output data layer, as is illustrated in FIG. 2.
  • the input video data 202 and input UI data 204 are set to have alpha values of 1.0 and 0.3, respectively, and the blending module 208 is configured to produce a corresponding intermediate output 210 with a modified alpha value of 0.7, representing that the intermediate output 210 is 70% video data.
  • the blending module 212 is configured to blend the input UI data 206, which has a set alpha value of 0.8, and the intermediate output 210, which has a modified alpha value of 0.7, to produce corresponding output data 214 with a modified alpha value of 0.56, representing that the output data 214 is composed of 56% video data.
  • the backend DPU 118 can be configured to select enhancement configurations based on the video to UI data ratio represented by such modified alpha values. For the illustrated example, the backend DPU 118 can select and apply video enhancements suitable for data composed of 56% video data and 44% UI data.
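  • The running modified-alpha computation from FIG. 2 can be sketched as a product of per-stage video-retention factors. This is one reading consistent with the figure's numbers (70% video after the first blend, 80% of that retained after the second, giving 0.56); the exact mapping from each layer's set alpha value to its retention factor is an assumption, not taken from the specification.

```python
from functools import reduce

def chain_modified_alpha(retention_factors):
    """Fraction of video data surviving a chain of blends, computed as
    the product of per-stage video-retention factors, starting from
    pure video data (fraction 1.0)."""
    return reduce(lambda acc, factor: acc * factor, retention_factors, 1.0)

# FIG. 2 walkthrough: the first blend leaves 70% video data, and the
# second blend retains 80% of that, yielding a modified alpha of 0.56.
alpha_after_two_blends = chain_modified_alpha([0.7, 0.8])
```

  • Because the running fraction only ever shrinks multiplicatively, each blending module in the sequence can update the modified alpha locally, without knowledge of earlier stages.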
  • FIG. 3 illustrates an example backend display processing unit 118.
  • the backend display processing unit 118 includes enhancement units 304A through 304N, can send modified alpha values 306 to each of the enhancement units, and can process the video data 308 encoded within a blending unit output 302 using the enhancement units to produce the enhanced video output 120.
  • the enhancement units 304A through 304N are configured to receive video data and modified alpha values 306, select an enhancement configuration based on the received alpha values 306, and process the received video data by applying a video enhancement following the selected enhancement configuration.
  • the enhancement configuration can be specified by using the modified alpha values to interpolate among different enhancement parameter sets.
  • the enhancement configuration can be specified by using the modified alpha values as table keys for a table look-up of enhancement configurations.
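  • Both selection methods can be sketched compactly. The parameter names (sharpen_strength, contrast_gain) and the two endpoint parameter sets are invented for illustration; the specification does not define any concrete enhancement parameters.

```python
# Hypothetical endpoint configurations: one tuned for pure video data,
# one for pure UI data.
VIDEO_PARAMS = {"sharpen_strength": 0.8, "contrast_gain": 1.2}
UI_PARAMS = {"sharpen_strength": 0.1, "contrast_gain": 1.0}

def interpolate_config(modified_alpha):
    """Linear interpolation between the pure-UI and pure-video parameter
    sets, weighted by the video fraction (the modified alpha)."""
    return {
        key: UI_PARAMS[key] + modified_alpha * (VIDEO_PARAMS[key] - UI_PARAMS[key])
        for key in VIDEO_PARAMS
    }

def lookup_config(modified_alpha, table):
    """Alternative: quantize the modified alpha into an index and use it
    as the key for a table of precomputed configurations."""
    key = round(modified_alpha * (len(table) - 1))
    return table[key]
```

  • For the FIG. 2 example (modified alpha 0.56), interpolation would yield parameters a little over halfway between the UI and video endpoint sets, matching the 56%-video composition of the blended data.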
  • FIG. 4 is a flow diagram of an example process for video display processing.
  • a video enhancement system such as the video enhancement system 100 of FIG. 1, appropriately configured in accordance with this specification, can perform the process 400.
  • the system receives source data (402).
  • the source data generally encodes data from one or more video sources and can also encode data from additional non-video sources.
  • the non-video sources can include user interface elements, including menus, overlays, and other types of graphics.
  • the system processes the source data to produce layers of video data (404).
  • the produced layers can be formatted in a first video format that can be different from a second video format that will be the final output of the display processing system.
  • the first video format can be ARGB encoded video or any other appropriate video format.
  • the system can process the source data to produce corresponding layers of non-video data.
  • the system can convert the source data to the first video format to produce the layers of video data.
  • the system blends processed data layers to produce a blended video that encodes modified alpha values (406).
  • the system can blend the data layers using any appropriate blending process, e.g., alpha blending.
  • the system can blend the non-video data layers based on pre-defined alpha values.
  • the output blended video is in a second video format that can be a compressed video format.
  • this compressed video format can encode both modified alpha values and a compressed representation of the RGB data of the blended input data layers.
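  • One way such a compressed format could work is sketched below. The specific layout (RGB565 color compression freeing bits for a 16-bit modified-alpha field in a 32-bit word) is an assumption for illustration only; the specification does not define the bit layout.

```python
def pack_pixel(modified_alpha, r, g, b):
    """Pack a 16-bit modified alpha and RGB565-compressed color into a
    32-bit word: [alpha:16 | r:5 | g:6 | b:5]."""
    a16 = int(round(modified_alpha * 0xFFFF)) & 0xFFFF
    rgb565 = ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)
    return (a16 << 16) | rgb565

def unpack_pixel(word):
    """Recover the modified alpha and (lossily compressed) RGB values."""
    alpha = ((word >> 16) & 0xFFFF) / 0xFFFF
    rgb565 = word & 0xFFFF
    r = ((rgb565 >> 11) & 0x1F) << 3
    g = ((rgb565 >> 5) & 0x3F) << 2
    b = (rgb565 & 0x1F) << 3
    return alpha, r, g, b
```

  • The point of such a layout is that compressing the color channels lets the modified alpha occupy a wider field than a conventional 8-bit ARGB alpha channel, without growing the per-pixel bandwidth beyond 32 bits.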
  • the system processes the blended video data by applying video enhancements with configurations determined based on the modified alpha values to produce an enhanced video data output (408).
  • the system can determine enhancement configurations using any suitable process based on the modified alpha values. For example, the system can determine the configurations based on an interpolation between enhancement parameters based on the modified alpha value. As another example, the system can perform a table lookup of enhancement configurations using the modified alpha values as a table key.
  • the enhanced video data output can be in the first video format.
  • the enhanced video data output can be in a third video format, such as ARGB encoded video or RGB encoded video.
  • Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them.
  • Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus.
  • the computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them.
  • the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
  • the term “data processing apparatus” refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers.
  • the apparatus can also be, or further include, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
  • the apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
  • a computer program (which may also be referred to or described as a program, software, a software application, an app, a module, a software module, a script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
  • a program may, but need not, correspond to a file in a file system.
  • a program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code.
  • a computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a data communication network.
  • For a system of one or more computers to be configured to perform particular operations or actions means that the system has installed on it software, firmware, hardware, or a combination of them that in operation cause the system to perform the operations or actions.
  • the one or more computer programs to be configured to perform particular operations or actions means that the one or more programs include instructions that, when executed by data processing apparatus, cause the apparatus to perform the operations or actions.
  • an “engine,” or “software engine,” refers to a software-implemented input/output system that provides an output that is different from the input.
  • An engine can be an encoded block of functionality, such as a library, a platform, a software development kit (“SDK”), or an object.
  • Each engine can be implemented on any appropriate type of computing device, e.g., servers, mobile phones, tablet computers, notebook computers, music players, e-book readers, laptop or desktop computers, PDAs, smart phones, or other stationary or portable devices, that includes one or more processors and computer readable media. Additionally, two or more of the engines may be implemented on the same computing device, or on different computing devices.
  • the processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output.
  • the processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA or an ASIC, or by a combination of special purpose logic circuitry and one or more programmed computers.
  • Computers suitable for the execution of a computer program can be based on general or special purpose microprocessors or both, or any other kind of central processing unit.
  • a central processing unit will receive instructions and data from a read-only memory or a random access memory or both.
  • the essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data.
  • the central processing unit and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
  • a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic disks, magneto-optical disks, or optical disks.
  • a computer need not have such devices.
  • a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few.
  • Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
  • embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and pointing device, e.g., a mouse, trackball, or a presence-sensitive display or other surface by which the user can provide input to the computer.
  • Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
  • a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user’s device in response to requests received from the web browser.
  • a computer can interact with a user by sending text messages or other forms of message to a personal device, e.g., a smartphone, running a messaging application, and receiving responsive messages from the user in return.
  • Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface, a web browser, or an app through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middlew are, or front-end components.
  • the components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network.
  • Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet.
  • the computing system can Include clients and servers.
  • a client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
  • a server transmits data, e.g., an HTML page, to a user device, e.g., for purposes of displaying data to and receiving user input from a user interacting with the device, which acts as a client.
  • Data generated at the user device e.g., a result of the user interaction, can be received at the server from the device.
  • Embodiment 1 is set forth in full in the Description below.
  • Embodiment 2 is the system of embodiment 1, further comprising a frontend DPU configured to read the source data and to generate, from the source data, the plurality of layers of data comprising the one or more layers of video data in the first video format and the one or more layers of non-video data.
  • Embodiment 3 is the system of any one of embodiments 1-2, wherein the second video format encodes a combination of the modified alpha value and a compressed representation of RGB values.
  • Embodiment 4 is the system of any one of embodiments 1-3, wherein the backend DPU comprises one or more enhancement units, being configured to: select the configuration indicated by the modified alpha value by (i) interpolating between stored configuration parameters, or (ii) performing a table lookup of particular configuration parameters, and apply an enhancement to received video data in accordance with the selected configuration.
  • Embodiment 5 is the system of any one of embodiments 1-4, wherein the backend DPU is further configured to output data in a third video format.
  • Embodiment 6 is the system of embodiment 5, wherein the third video format is (i) ARGB encoded video or (ii) RGB encoded video.
  • Embodiment 7 is the system of any one of embodiments 1-6, wherein the first video format is ARGB encoded video.
  • Embodiment 8 is the system of any one of embodiments 1-7, wherein the blending modules are configured to blend the plurality of layers of data in a sequence following a process of alpha blending.
  • Embodiment 9 is the system of any one of embodiments 1-8, wherein the plurality of layers received by the blending unit further comprises one or more layers of non-video data.
  • Embodiment 10 is the system of embodiment 9, wherein the blending modules are configured to blend the plurality of layers of data in a sequence following a process of alpha blending in which the one or more layers of non-video data are processed in accordance with predefined alpha values.
  • Embodiment 11 is the system of any one of embodiments 8 or 10, wherein each blending module is configured to determine a respective one of a plurality of blending modes based on the alpha values of the plurality of layers of data.
  • Embodiment 12 is the system of embodiment 11, wherein determining one of a plurality of blending modes comprises performing a table lookup from a table of the plurality of blending modes.
  • Embodiment 13 is a method comprising performing the operations of any one of embodiments 1-12.
  • Embodiment 14 is a computer storage medium encoded with instructions that are operable, when executed by data processing apparatus, to cause the data processing apparatus to perform the operations of any one of embodiments 1-12.


Abstract

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for selective video enhancements on multiple layers of video data. One of the methods includes receiving a plurality of layers of data generated from source data, the plurality of layers comprising one or more layers of video data in a first video format. An output in a second video format is generated, including using a plurality of blending modules to blend the plurality of layers of data in a sequence. A modified alpha value is generated for each data block in the output representing a ratio of how much video data from the one or more layers of input video data is represented in the block in the output. One or more enhancements are applied to the output, including selecting, for each enhancement, a configuration based on the modified alpha value.

Description

VIDEO ENHANCEMENT
BACKGROUND
This specification relates to video and image processing, and more particularly to systems and devices for video display processing.
Output video data that is viewed on display devices is often composited from multiple different input sources, which can include video data and non-video data, e.g., text and overlays, to name just a few examples. Video display processing refers to the set of computational tasks for decoding multiple sources of video and non-video data (e.g., a stream of video data and a stream of image data representing a UI element to be displayed as an overlay), compositing these separate input sources together, and producing output video data in a format that can be read and displayed by display hardware.
Typically, video display processing includes a front-end stage that composites the multiple input sources and a back-end stage that applies a number of enhancements. For example, the back-end stage of video display processing often involves applying various image and video enhancements (e.g., converting between standard dynamic range (SDR) and high dynamic range (HDR) formats, upscaling to higher resolutions, applying denoising or sharpening filters, applying local contrast enhancement, etc.). For consumer devices, video display processing is often performed by specialized hardware, and reducing the complexity and computational burden of the video display processing pipeline, in terms of the time, memory, and power required to perform the display processing, is of great concern.
However, backend enhancements can become a computational bottleneck when different kinds of input data need to be treated differently in the backend enhancement stage. For example, a source video may require a local contrast enhancement that should not be applied to a static UI overlay. Conventional approaches to backend enhancements have required additional or specialized hardware that increases the size and complexity of the device. Backend enhancements can be applied to the conventionally composited output as a single post-processing stage, but this does not enable selective application of the enhancements and results in worse output image quality. Utilizing separate components to apply individualized enhancements to the input sources allows for selective enhancement but increases device complexity and computational burden. It is possible to apply the desired enhancements outside of the video display processing pipeline, though such solutions introduce a significant computational burden. Therefore, a system for applying separate video enhancements to separate input sources for video display processing that repurposes aspects of existing designs and requires few additional processing components is desired.
SUMMARY
This specification describes technologies for performing selective video enhancements on multiple layers of video data in a way that has reduced complexity and computational burden. The video display processing techniques described below blend multiple input video layers together to produce intermediate blended video data that includes modified alpha-channel information. The method by which the input video layers are blended can, but need not, be a process of alpha-blending. The modified alpha values encoded within the intermediate video data generally provide information regarding the composition of the input layers in different regions of the output video (e.g., the modified alpha values may describe a ratio of video stream data to UI data within a given region). The intermediate video data can, but need not, compress the input video data to enable the use of a greater range of modified alpha values without consuming additional data bandwidth. Video enhancements are applied to this intermediate video data based on enhancement settings that are determined by the modified alpha-channel. The method by which enhancement settings are chosen based on modified alpha-channel information can, but need not, be a procedure of interpolation or of table look-up based on the modified-alpha values. The input video layers can, but need not, be produced by a front-end processor that decodes source video and image data into the input video layer format. The input video layer format can, but need not, include traditional alpha channel information. In addition, the terms video and image data mean any appropriate data that can be used to generate video or image rendering, respectively, and thus include actual video and image formats as well as other binary information or intermediate representations that can be used for the same purpose.
The subject matter described in this specification can be implemented in particular embodiments so as to realize one or more of the following advantages. The described systems provide a method for applying separate video enhancements to separate input sources for video display processing. By altering the applied enhancement configurations depending on the modified alpha values, the described systems can perform a selective application of video enhancements, thereby improving output image quality. By employing selective video enhancement as part of a post-processing back-end, the described systems have greatly reduced complexity and computational burden. By repurposing techniques of alpha-blending to specify desired video enhancement settings beyond the alpha channel’s traditional role for encoding transparency information, the described systems may utilize front-end components originally designed to perform traditional alpha blending. Therefore, compared to conventional video display processing systems, the described systems enable selective video enhancement processing while limiting the need to design and include costly specialized components.
The details of one or more embodiments of the subject matter of this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the subject matter will become apparent from the description, the drawings, and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows an example video enhancement system.
FIG. 2 is a diagram that illustrates an example blending process for producing blended video data containing a modified alpha value.
FIG. 3 shows an example backend display processing unit.
FIG. 4 is a flow diagram of an example process for video display processing.
Like reference numbers and designations in the various drawings indicate like elements.
DETAILED DESCRIPTION
FIG. 1 shows an example video enhancement system 100.
The video enhancement system 100 is configured to process source video data 102 in order to produce an enhanced video output 120 that is a processed composite of the source data 102. The video enhancement system 100 includes a blending unit 112 and a backend display processing unit (DPU) 118. The video enhancement system 100 is configured to composite together, and to apply separate video enhancements to, the multiple sources of video data 102. The video enhancement system 100 is configured to generate and process a modified alpha value that governs the application of particular video enhancement configurations to particular sources of video data.
The blending unit 112 includes a sequence of blending modules 114A through 114N and includes a signal processing system 116. Each of the blending modules 114A through 114N is configured to receive a layer of video data and to produce a blended output video that includes a modified alpha value that is used to determine how configurations of video enhancements are selected and applied to the blended output video. In some implementations, the received layers of video data can be ARGB encoded video. Each of the blending modules in the sequence is configured to additionally process the output video of the previous blending module to produce the blended output video. The signal processing system 116 is configured to process the blended output video from the final blending module 114N and produce an intermediate blended video.
In some implementations, the blending modules 114A through 114N can process the received layers of video data using methods of alpha blending that the blending unit 112 determines for each blending module. For example, each blending module can determine the alpha blending method the blending module will use to blend a received video layer. As a further example, each blending module can select an appropriate compositing operator, such as the over, in, out, atop, and xor operators, from a table of compositing operators using the alpha value of the received video layer as a table lookup key. As a further example, each blending module can select an appropriate color blending mode, such as color differencing, color multiplication, color screening, and color overlay, from a table of color blending modes using the alpha value of the received video layer as a table lookup key.
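A per-layer table lookup of this kind can be sketched as follows. The "over" operator implemented here follows the standard Porter-Duff definition; the bucketing of alpha values into table keys (opaque layers replace the accumulated result, all others are composited "over" it) is a hypothetical illustration, not a rule fixed by this specification.

```python
# Sketch of a blending module that selects a compositing operator by
# table lookup on the incoming layer's alpha value.
# Pixels are (alpha, r, g, b) tuples with channels in [0, 1].

def over(src, dst):
    """Standard Porter-Duff 'over': src composited on top of dst."""
    sa, sr, sg, sb = src
    da, dr, dg, db = dst
    out_a = sa + da * (1.0 - sa)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)

    def blend(s, d):
        return (s * sa + d * da * (1.0 - sa)) / out_a

    return (out_a, blend(sr, dr), blend(sg, dg), blend(sb, db))

def src_only(src, dst):
    """Degenerate operator: the new layer replaces the destination."""
    return src

# Hypothetical lookup table keyed by whether the layer is fully opaque.
OPERATOR_TABLE = {True: src_only, False: over}

def blend_layer(layer_px, accum_px):
    """One blending-module step: pick an operator from the table based
    on the layer's alpha, then apply it."""
    op = OPERATOR_TABLE[layer_px[0] >= 1.0]
    return op(layer_px, accum_px)
```

A real blending unit could key the table on finer alpha ranges, or on a color blending mode (difference, multiply, screen, overlay) rather than a compositing operator; the lookup structure stays the same.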
The backend DPU 118 is configured to process the intermediate blended video received from the blending unit 112 and to produce the enhanced video output 120. The backend DPU 118 can perform this processing by applying one or more enhancements to the processed video with enhancement configurations determined by the modified alpha values contained within the processed video.
The video enhancement system 100 includes a frontend DPU 104. The frontend DPU 104 can decode received source video data 102 and produce corresponding video layers 106A through 106N. In some implementations, the blending unit 112 can receive ARGB encoded video, and the frontend DPU 104 can decode the source data 102 into ARGB-encoded video layers.
In some implementations, the video enhancement system 100 can be further configured to receive additional non-video source data. In implementations where the video enhancement system 100 is configured to receive additional non-video source data and where the video enhancement system 100 includes the frontend DPU 104, the frontend DPU 104 can be further configured to produce non-video UI layer data 108A through 108N. In implementations where the video enhancement system 100 is configured to receive additional non-video source data, the blending unit 112 and blending modules 114A through 114N can be further configured to process non-video data. In implementations where the blending modules 114A through 114N are configured to receive non-video data, they can be further configured to assign predetermined alpha values to the received non-video data before processing and blending the non-video data.
In some implementations, the video enhancement system 100 includes a multiplexer 110. The multiplexer 110 is configured to route particular sources of data to particular blending modules in the blending unit 112. In implementations where the video enhancement system 100 includes a frontend DPU 104, the multiplexer 110 is configured to route particular data layers to particular blending modules within the blending unit 112.
In some implementations, the intermediate blended video output by the blending unit 112 can be produced in a compressed video format that encodes the modified alpha values alongside a compressed representation of the RGB data of the video. In these implementations where the intermediate blended video is compressed, the signal processing system 116 can perform the conversion to the compressed format. Alternatively, the blending modules 114A through 114N can perform this conversion.
In some implementations, the backend DPU 118 can be configured to encode the enhanced output video in a particular format. In some implementations where the enhanced output video is encoded in a particular format, the enhanced output video can be encoded as ARGB video or RGB video.
FIG. 2 illustrates an example blending process for producing blended video data containing a modified alpha value. The blending unit 112 is an example of a system that can be configured to perform the illustrated blending process. In this example, a layer of video data 202 is blended with layers of UI data 204 and 206 to produce a blended video output 214. Each of the input data layers 202, 204, and 206 encodes numerical alpha values alongside RGB data. The blending modules 208 and 212 are each configured to process input video data to produce blended output data. The blending module 208 blends the input video data 202 and the input UI data 204 to produce intermediate video data 210. The blending module 212 blends the intermediate video data 210 and the input UI data 206 to produce the output blended video data 214. The blended output video data 214 encodes modified alpha values and RGB video data. The encoded RGB data of the output video data 214 is a composited blend of the RGB data of the input data layers 202, 204, and 206. The encoded modified alpha values of the output video data 214 determine the configurations of video enhancements to be applied to the video data by a backend DPU, such as the backend DPU 118.
The blending processes applied by the blending modules 208 and 212 to produce output RGB data can be any appropriate mode of alpha blending. The modified alpha values can be determined from input data following any procedure that results in the modified alpha values specifying suitable enhancement configurations. In some implementations, the modified alpha values can be calculated so as to represent the ratio of video to UI data present within the output data layer, as is illustrated in FIG. 2. In the illustrated example, the input video data 202 and input UI data 204 are set to have alpha values of 1.0 and 0.3, respectively, and the blending module 208 is configured to produce a corresponding intermediate output 210 with a modified alpha value of 0.7, representing that the intermediate output 210 is 70% video data. The blending module 212 is configured to blend the input UI data 206, with a set alpha value of 0.8, and the intermediate output 210, with a modified alpha value of 0.7, to produce corresponding output data 214 with a modified alpha value of 0.56, representing that the output data 214 is composed of 56% video data. The backend DPU 118 can be configured to select enhancement configurations based on the video to UI data ratio represented by such modified alpha values. For the illustrated example, the backend DPU 118 can select and apply video enhancements suitable for data composed of 56% video data and 44% UI data.
FIG. 3 illustrates an example backend display processing unit 118. In some implementations, the backend display processing unit 118 includes enhancement units 304A through 304N, can send modified alpha values 306 to each of the enhancement units, and can process the video data 308 encoded within a blending unit output 302 using the enhancement units to produce the enhanced video output 120.
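The modified-alpha bookkeeping illustrated in FIG. 2 can be sketched as a running product of per-stage pass-through factors. The factors used below (0.7 for the first UI layer, 0.8 for the second) are taken directly from the figure's numbers; how a real blending module derives each factor from a layer's alpha value is a design choice that this sketch does not fix.

```python
def update_video_fraction(video_fraction, pass_through):
    """One blending stage: the share of the block that is still video
    data shrinks by the fraction the new UI layer lets show through."""
    return video_fraction * pass_through

# Reproducing the values from FIG. 2: the video layer starts as 100%
# video data (fraction 1.0); the first UI layer passes 0.7 of it, and
# the second UI layer passes 0.8 of what remains.
fraction = 1.0
for stage_pass_through in (0.7, 0.8):
    fraction = update_video_fraction(fraction, stage_pass_through)
# fraction is now 0.56: the block is 56% video data, matching the
# modified alpha value handed to the backend DPU in the example.
```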
The enhancement units 304A through 304N are configured to receive video data and modified alpha values 306, select an enhancement configuration based on the received alpha values 306, and process the received video data by applying a video enhancement following the selected enhancement configuration. In some implementations, the enhancement configuration can be specified by using the modified alpha values to interpolate among different enhancement parameter sets. In other implementations, the enhancement configuration can be specified by using the modified alpha values as table keys for a table look-up of enhancement configurations.
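The two selection strategies described above can be illustrated concretely. The sketch below interpolates between a parameter set tuned for pure video data and one tuned for pure UI data, and alternatively quantizes the modified alpha into a small configuration table; the parameter names and values are hypothetical, chosen only to show the mechanism.

```python
# Hypothetical enhancement parameter sets for the two extremes.
VIDEO_PARAMS = {"sharpen_strength": 0.8, "denoise_level": 0.6}
UI_PARAMS = {"sharpen_strength": 0.1, "denoise_level": 0.0}

def interpolate_config(modified_alpha):
    """Strategy (i): linearly interpolate between the stored parameter
    sets, weighted by how much of the block is video data."""
    return {
        key: modified_alpha * VIDEO_PARAMS[key]
             + (1.0 - modified_alpha) * UI_PARAMS[key]
        for key in VIDEO_PARAMS
    }

# Strategy (ii): quantize the modified alpha into a table of
# precomputed configurations (mostly-UI, mixed, mostly-video).
CONFIG_TABLE = [
    UI_PARAMS,
    {"sharpen_strength": 0.45, "denoise_level": 0.3},
    VIDEO_PARAMS,
]

def lookup_config(modified_alpha):
    """Use the modified alpha as a table key for a configuration."""
    index = min(int(modified_alpha * len(CONFIG_TABLE)),
                len(CONFIG_TABLE) - 1)
    return CONFIG_TABLE[index]
```

For the 56%-video block from FIG. 2, `interpolate_config(0.56)` yields a sharpening strength partway between the video and UI settings, while `lookup_config(0.56)` selects the precomputed "mixed" configuration.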
FIG. 4 is a flow diagram of an example process for video display processing. A video enhancement system, such as the video enhancement system 100 of FIG. 1, appropriately configured in accordance with this specification, can perform the process 400.
The system receives source data (402). The source data generally encodes data from one or more video sources and can also encode data from additional non-video sources. The non-video sources can include user interface elements, including menus, overlays, and other types of graphics.
The system processes the source data to produce layers of video data (404). The produced layers can be formatted in a first video format that can be different from a second video format that will be the final output of the display processing system. For example, the first video format can be ARGB encoded video or any other appropriate video format. When the source data encodes data from non-video sources, the system can process the source data to produce corresponding layers of non-video data. When the source data is not encoded within the first video format, the system can convert the source data to the first video format to produce the layers of video data.
The system blends processed data layers to produce a blended video that encodes modified alpha values (406). The system can blend the data layers using any appropriate blending process, e.g., alpha blending. When the data layers include layers of non-video data, the system can blend the non-video data layers based on pre-defined alpha values. The output blended video is in a second video format that can be a compressed video format. For example, this compressed video format can encode both modified alpha values and a compressed representation of the RGB data of the blended input data layers.
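One hypothetical way such a compressed second format could trade RGB precision for modified-alpha headroom in a fixed 32-bit word is to reduce RGB from 8 bits per channel to RGB565 and spend the freed bits on a wider alpha field. The field widths below are illustrative only; they are not a format defined by this specification.

```python
def pack_block(modified_alpha, r, g, b):
    """Pack a 16-bit modified alpha with RGB565 color into 32 bits.

    modified_alpha is a float in [0, 1]; r, g, b are 8-bit channel
    values. Compressing RGB to 16 bits leaves 16 bits for a much
    finer-grained modified alpha than a standard 8-bit channel,
    without growing the per-pixel bandwidth.
    """
    alpha16 = int(round(modified_alpha * 0xFFFF)) & 0xFFFF
    rgb565 = ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)
    return (alpha16 << 16) | rgb565

def unpack_block(word):
    """Recover the modified alpha and (lossily compressed) RGB."""
    alpha = ((word >> 16) & 0xFFFF) / 0xFFFF
    rgb565 = word & 0xFFFF
    r = ((rgb565 >> 11) & 0x1F) << 3
    g = ((rgb565 >> 5) & 0x3F) << 2
    b = (rgb565 & 0x1F) << 3
    return alpha, r, g, b
```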
The system processes the blended video data by applying video enhancements with configurations determined based on the modified alpha values to produce an enhanced video data output (408). The system can determine enhancement configurations using any suitable process based on the modified alpha values. For example, the system can determine the configurations by interpolating between stored enhancement parameters based on the modified alpha value. As another example, the system can perform a table lookup of enhancement configurations using the modified alpha values as a table key. The enhanced video data output can be in the first video format. The enhanced video data output can be in a third video format, such as ARGB encoded video or RGB encoded video.
Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly-embodied computer software or firmware, in computer hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible non-transitory storage medium for execution by, or to control the operation of, data processing apparatus. The computer storage medium can be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. Alternatively or in addition, the program instructions can be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus.
The term “data processing apparatus” refers to data processing hardware and encompasses all kinds of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can also be, or further include, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can optionally include, in addition to hardware, code that creates an execution environment for computer programs, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (which may also be referred to or described as a program, software, a software application, an app, a module, a software module, a script, or code) can be written in any form of programming language, including compiled or interpreted languages, or declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data, e.g., one or more scripts stored in a markup language document, in a single file dedicated to the program in question, or in multiple coordinated files, e.g., files that store one or more modules, sub-programs, or portions of code. A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a data communication network. For a system of one or more computers to be configured to perform particular operations or actions means that the system has installed on it software, firmware, hardware, or a combination of them that in operation cause the system to perform the operations or actions. For one or more computer programs to be configured to perform particular operations or actions means that the one or more programs include instructions that, when executed by data processing apparatus, cause the apparatus to perform the operations or actions.
As used in this specification, an “engine,” or “software engine,” refers to a software implemented input/output system that provides an output that is different from the input. An engine can be an encoded block of functionality, such as a library, a platform, a software development kit (“SDK”), or an object. Each engine can be implemented on any appropriate type of computing device, e.g., servers, mobile phones, tablet computers, notebook computers, music players, e-book readers, laptop or desktop computers, PDAs, smart phones, or other stationary or portable devices, that includes one or more processors and computer readable media. Additionally, two or more of the engines may be implemented on the same computing device, or on different computing devices.
The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA or an ASIC, or by a combination of special purpose logic circuitry and one or more programmed computers.
Computers suitable for the execution of a computer program can be based on general or special purpose microprocessors or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. The central processing unit and the memory can be supplemented by, or incorporated in, special purpose logic circuitry. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, a computer need not have such devices. Moreover, a computer can be embedded in another device, e.g., a mobile telephone, a personal digital assistant (PDA), a mobile audio or video player, a game console, a Global Positioning System (GPS) receiver, or a portable storage device, e.g., a universal serial bus (USB) flash drive, to name just a few.
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks.
To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor, for displaying information to the user and a keyboard and pointing device, e.g., a mouse, trackball, or a presence sensitive display or other surface by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user’s device in response to requests received from the web browser. Also, a computer can interact with a user by sending text messages or other forms of message to a personal device, e.g., a smartphone, running a messaging application, and receiving responsive messages from the user in return.
Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface, a web browser, or an app through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet. The computing system can include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. In some embodiments, a server transmits data, e.g., an HTML page, to a user device, e.g., for purposes of displaying data to and receiving user input from a user interacting with the device, which acts as a client. Data generated at the user device, e.g., a result of the user interaction, can be received at the server from the device.
In addition to the embodiments described above, the following embodiments are also innovative:
Embodiment 1 is a display processing system comprising: a blending unit configured to receive a plurality of layers of data generated from source data, the plurality of layers comprising one or more layers of video data in a first video format, and to generate an output in a second video format, wherein the blending unit comprises a plurality of blending modules that blend the plurality of layers of data in a sequence, wherein the blending unit is configured to generate a modified alpha value for each data block in the output representing a ratio of how much video data from the one or more layers of input video data is represented in the data block in the output; and a backend display processing unit (DPU) configured to receive the output from the blending unit and to apply one or more enhancements to the output, wherein for each enhancement, the backend DPU is configured to select a configuration based on the modified alpha value.
Embodiment 2 is the system of embodiment 1, further comprising a frontend DPU configured to read the source data and to generate, from the source data, the plurality of layers of data comprising the one or more layers of video data in the first video format and the one or more layers of non-video data.
Embodiment 3 is the system of any one of embodiments 1-2, wherein the second video format encodes a combination of the modified alpha value and a compressed representation of RGB values.
Embodiment 4 is the system of any one of embodiments 1-3, wherein the backend DPU comprises one or more enhancement units, each configured to: select the configuration indicated by the modified alpha value by (i) interpolating between stored configuration parameters, or (ii) performing a table lookup of particular configuration parameters, and apply an enhancement to received video data in accordance with the selected configuration.
Embodiment 5 is the system of any one of embodiments 1-4, wherein the backend DPU is further configured to output data in a third video format.
Embodiment 6 is the system of embodiment 5, wherein the third video format is (i) ARGB encoded video or (ii) RGB encoded video.
Embodiment 7 is the system of any one of embodiments 1-6, wherein the first video format is ARGB encoded video.
Embodiment 8 is the system of any one of embodiments 1-7, wherein the blending modules are configured to blend the plurality of layers of data in a sequence following a process of alpha blending.
Embodiment 9 is the system of any one of embodiments 1-8, wherein the plurality of layers received by the blending unit further comprises one or more layers of non-video data.
Embodiment 10 is the system of embodiment 9, wherein the blending modules are configured to blend the plurality of layers of data in a sequence following a process of alpha blending in which the one or more layers of non-video data are processed in accordance with predefined alpha values.
Embodiment 11 is the system of any one of embodiments 8 or 10, wherein each blending module is configured to determine a respective one of a plurality of blending modes based on the alpha values of the plurality of layers of data.
Embodiment 12 is the system of embodiment 11, wherein determining one of a plurality of blending modes comprises performing a table lookup from a table of the plurality of blending modes.
Embodiment 13 is a method comprising performing the operations of any one of embodiments 1-12.
Embodiment 14 is a computer storage medium encoded with instructions that are operable, when executed by data processing apparatus, to cause the data processing apparatus to perform the operations of any one of embodiments 1-12.
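The blending and configuration-selection behavior described in Embodiments 1, 4, 8, and 10 can be illustrated with the following sketch. All names, data structures, and the linear-interpolation formula are illustrative assumptions chosen for exposition; they are not the claimed implementation:

```python
def blend_layers(layers):
    """Blend layers back-to-front with standard "over" alpha blending.

    Each layer is a dict with 'rgb' (tuple of floats in [0, 1]),
    'alpha' (float in [0, 1]) and 'is_video' (bool). Returns the blended
    RGB value plus a "modified alpha" for the block: the fraction of the
    result contributed by video layers (cf. Embodiment 1).
    """
    out_rgb = (0.0, 0.0, 0.0)
    video_fraction = 0.0  # modified alpha: share of video content in this block
    for layer in layers:  # processed in sequence, back to front
        a = layer["alpha"]
        # standard alpha compositing: new layer over the accumulated result
        out_rgb = tuple(a * s + (1 - a) * d for s, d in zip(layer["rgb"], out_rgb))
        # the new layer contributes fraction a; prior content keeps (1 - a)
        video_fraction = a * (1.0 if layer["is_video"] else 0.0) + (1 - a) * video_fraction
    return out_rgb, video_fraction


def select_enhancement_strength(video_fraction, video_cfg=1.0, graphics_cfg=0.2):
    """Backend-DPU-style configuration selection per Embodiment 4(i):
    interpolate between a video-tuned and a graphics-tuned parameter
    according to the modified alpha value."""
    return video_fraction * video_cfg + (1 - video_fraction) * graphics_cfg
```

For example, blending an opaque graphics background with a half-transparent video layer yields a modified alpha of 0.5, and the enhancement strength is selected midway between the video-tuned and graphics-tuned parameters.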
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or on the scope of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially be claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Particular embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain implementations, multitasking and parallel processing may be advantageous.
What is claimed is:

Claims

1. A display processing system comprising: a blending unit configured to receive a plurality of layers of data generated from source data, the plurality of layers comprising one or more layers of video data in a first video format, and to generate an output in a second video format, wherein the blending unit comprises a plurality of blending modules that blend the plurality of layers of data in a sequence, wherein the blending unit is configured to generate a modified alpha value for each data block in the output representing a ratio of how much video data from the one or more layers of input video data is represented in the data block in the output; and a backend display processing unit (DPU) configured to receive the output from the blending unit and to apply one or more enhancements to the output, wherein for each enhancement, the backend DPU is configured to select a configuration based on the modified alpha value.
2. The system of claim 1, further comprising a frontend DPU configured to read the source data and to generate, from the source data, the plurality of layers of data comprising the one or more layers of video data in the first video format and the one or more layers of non-video data.
3. The system of any preceding claim, wherein the second video format encodes a combination of the modified alpha value and a compressed representation of RGB values.
4. The system of any preceding claim, wherein the backend DPU comprises one or more enhancement units, each configured to: select the configuration indicated by the modified alpha value by (i) interpolating between stored configuration parameters, or (ii) performing a table lookup of particular configuration parameters, and apply an enhancement to received video data in accordance with the selected configuration.
5. The system of any preceding claim, wherein the backend DPU is further configured to output data in a third video format.
6. The system of claim 5, wherein the third video format is (i) ARGB encoded video or (ii) RGB encoded video.
7. The system of any preceding claim, wherein the first video format is ARGB encoded video.
8. The system of any preceding claim, wherein the blending modules are configured to blend the plurality of layers of data in a sequence following a process of alpha blending.
9. The system of any preceding claim, wherein the plurality of layers received by the blending unit further comprises one or more layers of non-video data.
10. The system of claim 9, wherein the blending modules are configured to blend the plurality of layers of data in a sequence following a process of alpha blending in which the one or more layers of non-video data are processed in accordance with predefined alpha values.
11. The system of claim 8 or claim 10, wherein each blending module is configured to determine a respective one of a plurality of blending modes based on the alpha values of the plurality of layers of data.
12. The system of claim 11, wherein determining one of a plurality of blending modes comprises performing a table lookup from a table of the plurality of blending modes.
13. A method comprising: receiving, by a blending unit of a display processing system, a plurality of layers of data generated from source data, the plurality of layers comprising one or more layers of video data in a first video format; generating, by the blending unit, an output in a second video format including using a plurality of blending modules to blend the plurality of layers of data in a sequence; generating, by the blending unit, a modified alpha value for each data block in the output representing a ratio of how much video data from the one or more layers of input video data is represented in the data block in the output; and receiving, by a backend display processing unit (DPU), the output from the blending unit and applying one or more enhancements to the output, including selecting, for each enhancement, a configuration based on the modified alpha value.
14. The method of claim 13, further comprising: reading, by a frontend DPU of the display processing system, the source data; and generating, by the frontend DPU, the plurality of layers of data from the source data, the plurality of layers comprising the one or more layers of video data in the first video format and the one or more layers of non-video data.
15. The method of claim 13 or claim 14, wherein the second video format encodes a combination of the modified alpha value and a compressed representation of RGB values.
16. The method of any one of claims 13-15, wherein the backend DPU comprises one or more enhancement units, the method further comprising: for each enhancement unit of the backend DPU: selecting, by the enhancement unit, the configuration indicated by the modified alpha value by (i) interpolating between stored configuration parameters, or (ii) performing a table lookup of particular configuration parameters, and applying, by the enhancement unit, an enhancement to the output video from the blending unit in accordance with the selected configuration.
17. The method of any one of claims 13-16, further comprising outputting, by the backend DPU, data in a third video format.
18. The method of claim 17, wherein the third video format is (i) ARGB encoded video or (ii) RGB encoded video.
19. The method of any one of claims 13-18, wherein the first video format is ARGB encoded video.
20. The method of any one of claims 13-19, further comprising blending, by the blending modules, the plurality of layers of data in a sequence following a process of alpha blending.
21. The method of any one of claims 13-20, wherein the plurality of layers received by the blending unit further comprises one or more layers of non-video data.
22. The method of claim 21, further comprising: blending, by the blending modules, the plurality of layers of data in a sequence following a process of alpha blending in which the one or more layers of non-video data are processed in accordance with predefined alpha values.
23. The method of claim 20 or claim 22, further comprising: for each blending module: determining, by the blending module, one of a plurality of blending modes based on the alpha values of the plurality of layers of data.
24. The method of claim 23, wherein determining one of a plurality of blending modes comprises performing a table lookup from a table of the plurality of blending modes.
25. A computer storage medium encoded with instructions that are operable, when executed by data processing apparatus, to cause the data processing apparatus to perform operations comprising: receiving, by a blending unit of a display processing system, a plurality of layers of data generated from source data, the plurality of layers comprising one or more layers of video data in a first video format; generating, by the blending unit, an output in a second video format including using a plurality of blending modules to blend the plurality of layers of data in a sequence; generating, by the blending unit, a modified alpha value for each data block in the output representing a ratio of how much video data from the one or more layers of input video data is represented in the data block in the output; and receiving, by a backend display processing unit (DPU), the output from the blending unit and applying one or more enhancements to the output, including selecting, for each enhancement, a configuration based on the modified alpha value.
PCT/US2023/035503 2023-10-19 2023-10-19 Video enhancement Pending WO2025085063A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/US2023/035503 WO2025085063A1 (en) 2023-10-19 2023-10-19 Video enhancement
TW113139764A TW202520700A (en) 2023-10-19 2024-10-18 Video enhancement

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2023/035503 WO2025085063A1 (en) 2023-10-19 2023-10-19 Video enhancement

Publications (1)

Publication Number Publication Date
WO2025085063A1 true WO2025085063A1 (en) 2025-04-24

Family

ID=88793320

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2023/035503 Pending WO2025085063A1 (en) 2023-10-19 2023-10-19 Video enhancement

Country Status (2)

Country Link
TW (1) TW202520700A (en)
WO (1) WO2025085063A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1365385A2 (en) * 1998-11-09 2003-11-26 Broadcom Corporation Graphics display system with processing of graphics layers, alpha blending and composition with video data

Also Published As

Publication number Publication date
TW202520700A (en) 2025-05-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 23805753
Country of ref document: EP
Kind code of ref document: A1