HK1194233A - Method and system for temporal frame interpolation with static region exclusion - Google Patents


Info

Publication number
HK1194233A
Authority
HK
Hong Kong
Prior art keywords
static
video
video frames
region
static region
Prior art date
2012-06-29
Application number
HK14107485.1A
Other languages
Chinese (zh)
Other versions
HK1194233B (en)
Inventor
M. R. Gilmutdinov
A. Veselov
Original Assignee
Intel Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2012-06-29
Filing date
2014-07-22
Publication date
2014-10-10
Application filed by Intel Corporation
Publication of HK1194233A
Publication of HK1194233B


Description

Method and system for temporal frame interpolation with static region exclusion
Background
Video systems have evolved in part to transmit video and multimedia data over a network and display the video for viewing. In some instances, video may be compressed, converted, and otherwise processed to facilitate transmission, reception, and display by various display devices. The quality of the displayed video is important to the user's viewing experience. Where portions of the video processed for display include video artifacts or other visually perceptible irregularities, the user's viewing experience may be impaired.
Various techniques have been proposed to compensate for motion in video processing by interpolating video frames. However, some motion interpolation techniques have difficulty generating interpolated video frames that accurately represent both the motion and the static areas within the interpolated frames. It is therefore important to improve the effectiveness and efficiency of video interpolation.
Drawings
Aspects disclosed herein are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings. For simplicity and clarity of illustration, and not of limitation, the various aspects illustrated in the figures are not necessarily drawn to scale. Further, where considered appropriate, reference numerals have been repeated among the figures to indicate corresponding or analogous elements.
FIGS. 1A and 1B are illustrative depictions of corresponding video base frames according to some embodiments herein.
FIGS. 1C, 1D, and 1E are illustrative depictions of the video frame of FIG. 1A at various processing stages.
FIGS. 1F, 1G, and 1H are illustrative depictions of the video frame of FIG. 1B at various processing stages according to some embodiments herein.
FIG. 2 is a flow diagram of a process according to one embodiment.
FIG. 3 is an illustrative block diagram of a system, including the flow through its functional blocks, in accordance with one embodiment.
FIGS. 4A-4H are illustrative depictions of corresponding video frames at various processing stages according to some embodiments herein.
FIG. 5 illustrates a system according to some embodiments herein.
FIG. 6 is an illustration of an embodiment of the system of FIG. 5 according to an embodiment herein.
Detailed Description
Described below are an image processing method, apparatus, and system that may support processes and operations to improve the efficiency and accuracy of generating interpolated frames for video. The present disclosure provides numerous specific details regarding systems for implementing these processes and operations. However, it will be appreciated by one skilled in the art that embodiments of the disclosure may be practiced without these specific details. Thus, in some instances, aspects such as control mechanisms and full software instruction sequences have not been shown in detail in order not to obscure other aspects of the disclosure. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation.
References in the specification to "one embodiment," "some embodiments," "an embodiment," "an example," "some examples," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
Some embodiments herein may be implemented in hardware, firmware, software, or any combination thereof. Embodiments may also be implemented as executable instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable storage medium may include any tangible, non-transitory mechanism for storing information in a form readable by a machine (e.g., a computing device). In some aspects, a machine-readable storage medium may include Read Only Memory (ROM), Random Access Memory (RAM), magnetic disk storage media, optical storage media, flash memory devices, and signals in electronic or optical form. Although firmware, software, routines, and instructions are described herein as performing certain actions, it should be understood that such descriptions are merely for convenience and that such actions in fact result from computing devices, processors, controllers, and other devices executing the firmware, software, routines, and instructions.
Frame interpolation may be used in a number of different video processes including, for example, frame rate conversion, distributed video coding, and other processes. In general, the motion interpolation process involves identifying existing base frames or key frames and generating intermediate video frames to be inserted between these base frames. In some aspects, playback of a video sequence including base frames and interpolated frames therebetween results in a smoother or more fluid animation of motion in the video.
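For orientation, a minimal Python sketch of the frame-rate-conversion pattern described above (not from the patent; the function name and the generic `interpolate` callback are illustrative assumptions) inserts one generated frame between each pair of base frames:

```python
def double_frame_rate(frames, interpolate):
    """Insert one interpolated frame between each pair of base frames.

    `frames` is a list of video frames; `interpolate` is any function
    that maps two base frames to one intermediate frame.
    """
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)                  # keep the base frame
        out.append(interpolate(a, b))  # insert the intermediate frame
    out.append(frames[-1])             # keep the final base frame
    return out
```

Played back at twice the original rate, the resulting sequence contains the original base frames with one interpolated frame between each pair.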
Fig. 1A and 1B are illustrative depictions of a pair of base frames from a video sequence. As shown in these figures, each figure includes static text regions at the upper and lower edges of the base frame. In the example of fig. 1A and 1B, the static region includes text at the same location (i.e., static) in both video frames of fig. 1A and 1B. In some examples, the static area may include a title, a logo, and the like.
FIG. 1C is an illustrative depiction of a conventional interpolated video frame generated based on the base frames of FIGS. 1A and 1B. As shown, the wording of the static text region is not very clear. Rather, the static text of FIG. 1C includes video artifacts such as edge blurring, ghosting, and the like. The video artifacts present in FIG. 1C may be the result of a motion interpolation process that attempts to interpolate a video frame that includes static regions. FIGS. 1D and 1E both include detailed views of some of the static text from FIG. 1C, including video artifacts in or near the static area.
FIG. 1F is an illustrative depiction of an interpolated frame based on the base frames of FIGS. 1A and 1B, generated according to an interpolation process that will be described in more detail below. As shown in FIG. 1F, the static region text in the interpolated frame is clear, without the video artifacts shown in FIGS. 1C, 1D, and 1E. That is, the interpolated video frame of FIG. 1F is the result of a motion interpolation process that accurately renders static areas in a video frame that includes both static and non-static areas. FIGS. 1G and 1H both include detailed views of some of the static text from FIG. 1F, showing the sharpness of the static text regions in the interpolated video frame.
FIG. 2 is an illustrative depiction of a flow diagram of an overview of an interpolation process, generally indicated by the reference numeral 200. According to one embodiment herein, process 200 may be used to interpolate video frames. Process 200 may include an operation 205 of detecting static regions in a set of base frames. The base frame may be selected from a video sequence comprising a plurality of video frames, and the set of base frames may comprise at least two base frames. At operation 205, the base frames identified for processing are analyzed to determine whether they include any static regions.
In the event that it is determined in operation 205 that the base frames include static regions, these static regions are excluded from the base frames in operation 210. After the static regions are excluded from the base frames in operation 210, the base frames are temporally interpolated in operation 215 to generate an interpolated frame temporally positioned between the base frames.
In some aspects, where the base frames are determined in operation 205 not to include any detectable static regions, an alternative motion interpolation process may be used to interpolate between the base frames. In some aspects, process 200 may be modified to, for example, bypass operations 210 and 220. In another embodiment, process 200 may continue as shown in FIG. 2, where the detected and excluded static regions are logically "empty" or null.
In operation 220, the static region previously excluded from the base frame is combined with the interpolated frame of operation 215 to generate a combined video frame including the interpolated frame and the static region. The output of operation 220 may be used for a frame conversion process or other video process that may use or include a motion interpolation process.
As described above, process 200 is a flow diagram of an overview of an interpolation process according to an embodiment herein. FIG. 3 is an illustrative block diagram of a system, showing the flow of video frames through the system's functional blocks. According to embodiments herein, the functional blocks of FIG. 3 may be implemented in various ways using, without limitation, hardware, firmware, software, and combinations thereof.
The inputs to the system 300 include the base frames, namely video frames F_i and F_{i+1}, where i represents time. Thus, each base frame occurs at a different time instant in the video sequence. At block 305, the base frames are analyzed to detect static regions (if any) in the base frames. In some aspects, the static region detection module 305 may be used to compute a binary map of static elements in the base frames. The calculated binary map, or another mechanism for representing the detected static regions of the base frames, may be stored or recorded in memory. FIGS. 4A and 4B are illustrative examples of base frames F_i and F_{i+1}, respectively.
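The patent does not commit to a particular detection algorithm. As a minimal sketch, the binary map can be approximated by per-pixel differencing of the two base frames (the function name and threshold below are illustrative assumptions, not the claimed method):

```python
import numpy as np

def detect_static_map(frame_a: np.ndarray, frame_b: np.ndarray,
                      threshold: int = 2) -> np.ndarray:
    """Binary map that is True where the two base frames are (nearly)
    identical, i.e. candidate static pixels such as titles or logos."""
    # Widen to int16 so the subtraction of uint8 frames cannot overflow.
    diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
    if diff.ndim == 3:                # collapse color channels, if present
        diff = diff.max(axis=2)
    return diff <= threshold
```

A practical detector would also clean the map (e.g., a morphological opening to drop isolated pixels) and could accumulate evidence over more than two frames so that momentarily unmoving objects are not misclassified as static.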
The system module or block 310 may exclude, from the base frames (F_i and F_{i+1}), the static regions detected by the static region detection module 305. In some embodiments, module 310 may be used to mark or otherwise indicate the static elements that make up the detected static regions. The static elements marked by module 310 may correspond to the static elements associated with the binary map generated by module 305. In this example, the static regions are marked by highlighting the detected static regions with a particular "highlight" color. In some embodiments, other mechanisms and techniques for marking static regions for further processing may be used. FIGS. 4C and 4D are illustrative examples of the outputs of module 310 (e.g., F′_i and F′_{i+1}), in which the static regions in the base frames have been marked or emphasized. The particular highlighting color, or other mechanism by which module 310 marks or indicates the location and extent of the static regions, may be used by the spatial interpolation module 315 to identify the static regions.
The spatial interpolation module 315 is used to "fill in" the excluded regions. In some aspects, the "filling" of an excluded static region replaces the elements in the static region with elements and objects that are determined to be "likely" to lie below or behind the static region. In some embodiments, one or more image inpainting algorithms may be used to "fill in" the regions from which the static regions were excluded. In some aspects, the particular image inpainting algorithm for a particular use case may depend on the content of the video frames (e.g., the amount of motion, texture, color, lighting, etc.). FIGS. 4E and 4F are illustrative examples of the outputs of module 315 (e.g., F″_i and F″_{i+1}), in which the static regions in the base frames have been excluded and filled in with the determined "likely" background color and/or texture.
In some aspects, the functionality of modules 310 and 315 may be implemented by one module that performs both the static region exclusion operations and the spatial interpolation operations. In some embodiments, shared resources may be used for the functions of modules 310 and 315 (as well as for other operations not specifically described herein).
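A hedged sketch of such a combined exclusion-and-fill module, assuming OpenCV's inpainting as the spatial interpolator (the function name, mask convention, and parameter values are assumptions; any inpainting algorithm could stand in):

```python
import cv2
import numpy as np

def exclude_and_fill(frame: np.ndarray, static_map: np.ndarray) -> np.ndarray:
    """Remove the detected static region from an 8-bit frame and fill the
    hole by spatial interpolation (image inpainting)."""
    # cv2.inpaint expects an 8-bit, single-channel mask whose nonzero
    # pixels mark the region to be reconstructed.
    mask = static_map.astype(np.uint8) * 255
    # Telea's fast-marching method; cv2.INPAINT_NS is an alternative.
    return cv2.inpaint(frame, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
```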
Module 320 may be used to temporally interpolate the video frames output by module 315 (i.e., video frames F″_i and F″_{i+1}). That is, module 320 interpolates the base frames from which the static regions have been excluded or removed and on which image inpainting has been performed. In this manner, module 320 may provide a temporally interpolated video frame (i.e., video frame F″_{i+1/2}) that is temporally located between the base frames. FIG. 4G is an illustrative example of the output of module 320 (e.g., F″_{i+1/2}), in which the generated temporally interpolated frame does not include the static regions, and the areas of the static regions have been image-inpainted.
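The temporal interpolation algorithm of module 320 is likewise not pinned down here; a motion-compensated method would be typical. Purely as a compact stand-in, the sketch below forms F″_{i+1/2} as a linear blend of the two inpainted frames (an assumption made only to keep the example short):

```python
import numpy as np

def temporal_interpolate(filled_a: np.ndarray, filled_b: np.ndarray,
                         t: float = 0.5) -> np.ndarray:
    """Intermediate frame at fractional time t between the two inpainted
    base frames; a real module would use motion-compensated interpolation."""
    blend = (1.0 - t) * filled_a.astype(np.float32) \
            + t * filled_b.astype(np.float32)
    return blend.round().astype(np.uint8)
```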
Module 325 is used to combine the temporally interpolated video frame generated by module 320 (i.e., video frame F″_{i+1/2}) with the static regions previously detected by module 305. The inputs to module 325 include the generated temporally interpolated frame F″_{i+1/2} and the static regions from the static region detection module 305. An example of the combined video frame output by module 325 is depicted in FIG. 4H, where the video frame includes the temporally interpolated frame and the static regions.
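A minimal sketch of the combination step, assuming the binary map from module 305 and taking the static pixels from either base frame (by construction they are identical in both):

```python
import numpy as np

def combine_static(interpolated: np.ndarray, base_frame: np.ndarray,
                   static_map: np.ndarray) -> np.ndarray:
    """Paste the detected static region back over the temporally
    interpolated frame."""
    mask = static_map
    if interpolated.ndim == 3:        # broadcast a 2-D map over color channels
        mask = static_map[..., None]
    return np.where(mask, base_frame, interpolated)
```

Chaining detect_static_map, exclude_and_fill, temporal_interpolate, and combine_static in that order reproduces the FIG. 3 flow end to end.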
Fig. 5 illustrates one embodiment of a system 500. In various embodiments, system 500 may be a media system, but system 500 is not limited in this context. For example, system 500 may be included in a Personal Computer (PC), laptop computer, ultra-laptop computer, tablet, touchpad, portable computer, handheld computer, palmtop computer, Personal Digital Assistant (PDA), cellular telephone, combination cellular telephone/PDA, television, smart device (e.g., smart phone, smart tablet, or smart television), Mobile Internet Device (MID), messaging device, data communication device, and so forth.
In various embodiments, system 500 includes a platform 502 coupled to a display 520. Platform 502 may receive content from a content device, such as content services device 530 or content delivery device 540 or other similar content source. A navigation controller 550, including one or more navigation components, may be used to interact with, for example, platform 502 and/or display 520. Each of these components will be described in more detail below.
In various embodiments, platform 502 may include chipset 505, processor 510, memory 512, storage 514, graphics subsystem 515, applications 516, and/or radio 518, or any combination thereof. Chipset 505 may provide intercommunication among processor 510, memory 512, storage 514, graphics subsystem 515, applications 516, and/or radio 518. For example, chipset 505 may include a storage adapter (not shown) capable of providing intercommunication with storage 514.
Processor 510 may be implemented as a Complex Instruction Set Computer (CISC) or Reduced Instruction Set Computer (RISC) processor, an x86 instruction set compatible processor, a multi-core processor, or any other microprocessor or Central Processing Unit (CPU). In various embodiments, processor 510 may include dual-core processors, dual-core mobile processors, and so on.
The memory 512 may be implemented as a volatile memory device such as, but not limited to, a Random Access Memory (RAM), Dynamic Random Access Memory (DRAM), or Static RAM (SRAM).
Storage 514 may be implemented as a non-volatile storage device such as, but not limited to, a magnetic disk drive, optical disk drive, tape drive, an internal storage device, an auxiliary storage device, flash memory, battery backed-up SDRAM (synchronous DRAM), and/or a network accessible storage device. In various embodiments, storage 514 may include technology to increase the storage performance enhanced protection for valuable digital media, for example, when multiple hard disk drives are included.
Graphics subsystem 515 may perform processing of images, such as still images or video, for display. Graphics subsystem 515 may be, for example, a Graphics Processing Unit (GPU) or a Visual Processing Unit (VPU). An analog or digital interface may be used to communicatively couple graphics subsystem 515 and display 520. For example, the interface may be any of a High-Definition Multimedia Interface (HDMI), DisplayPort, wireless HDMI, and/or wireless HD compliant techniques. Graphics subsystem 515 may be integrated into processor 510 or chipset 505. Alternatively, graphics subsystem 515 may be a stand-alone card communicatively coupled to chipset 505.
The graphics and/or video processing techniques described herein may be implemented in various hardware architectures. For example, graphics and/or video functionality may be integrated within a chipset. Alternatively, separate graphics and/or video processors may be used. As yet another example, graphics and/or video functions may be implemented by a general purpose processor, including a multicore processor. In another embodiment, these functions may be implemented in a consumer electronics device.
Radio 518 may include one or more radios capable of transmitting and receiving signals using various suitable wireless communication techniques. Such techniques may involve communication across one or more wireless networks. Exemplary wireless networks include, but are not limited to, Wireless Local Area Networks (WLANs), Wireless Personal Area Networks (WPANs), Wireless Metropolitan Area Networks (WMANs), cellular networks, and satellite networks. In communicating across these networks, radio 518 may operate according to one or more applicable standards of any version.
In various embodiments, display 520 may include any television-type monitor or display. Display 520 may include, for example, a computer display screen, touch screen display, video monitor, television-like device, and/or a television. Display 520 may be digital and/or analog. In various embodiments, display 520 may be a holographic display. Also, display 520 may be a transparent surface that may receive a visual projection. Such projections may convey various forms of information, images, and/or objects. For example, such a projection may be a visual overlay for a Mobile Augmented Reality (MAR) application. Under the control of one or more software applications 516, platform 502 may display user interface 522 on display 520.
In embodiments, content services device 530 may be hosted by any national, international, and/or independent service, and thus accessible to platform 502, for example, via the Internet. Content services device 530 may be coupled to platform 502 and/or display 520. Platform 502 and/or content services device 530 may be coupled to network 560 to communicate media information to network 560 and to communicate (e.g., send and/or receive) media information from network 560. Content delivery device 540 may also be coupled to platform 502 and/or display 520.
In various embodiments, content services device 530 may include a cable television box, a personal computer, a network, a telephone, an internet-enabled device or apparatus capable of delivering digital information and/or content, and any other similar device capable of transferring content, either uni-directionally or bi-directionally, between a content provider and platform 502 and/or display 520 via network 560 or directly. It will be appreciated that content may be communicated to and from any of the various components in the system 500 and content providers, unidirectionally and/or bidirectionally, via the network 560. Examples of content may include any media information including, for example, video, music, medical and gaming information, and so forth.
The content services device 530 receives content, such as cable television programming including media information, digital information, and/or other content. Examples of content providers may include any cable or satellite television or radio or internet content provider. The examples provided are not intended to limit the embodiments of the invention.
In various embodiments, platform 502 may receive control signals from a navigation controller 550 having one or more navigation components. For example, the navigation components of the controller 550 may be used to interact with the user interface 522. In various embodiments, navigation controller 550 may be a pointing device, which may be a computer hardware component (specifically, a human interface device) that allows a user to input spatial data (e.g., continuous and multidimensional) to a computer. Many systems, such as Graphical User Interfaces (GUIs) and televisions and monitors, allow a user to use body gestures to control and provide data to the computer or television.
Movement of the navigation components of controller 550 may be reflected on a display (e.g., display 520) by movement of a pointer, cursor, focus ring, or other visual indicator displayed on the display. For example, under the control of software application 516, navigation features located on navigation controller 550 may be mapped to virtual navigation features displayed on, for example, user interface 522. In various embodiments, controller 550 may not be a separate component, but rather integrated onto platform 502 and/or display 520. However, the embodiments are not limited to these elements or the context shown or described herein.
In embodiments, for example, a driver (not shown) may include technology that, when activated, enables a user to touch a button after initial startup to turn the platform 502 on and off immediately like a television. When the platform is "off," the program logic may allow the platform 502 to stream content to a media adapter or other content services device 530 or content delivery device 540. Additionally, for example, chipset 505 may include hardware and/or software support for 5.1 surround sound audio and/or high definition 7.1 surround sound audio. The drivers may include a graphics driver for an integrated graphics platform. In various embodiments, the graphics driver may comprise a Peripheral Component Interconnect (PCI) express graphics card.
In various embodiments, any one or more of the components shown in system 500 may be integrated. For example, platform 502 and content services device 530 may be integrated, or platform 502 and content delivery device 540 may be integrated, or, for example, platform 502, content services device 530, and content delivery device 540 may be integrated. In various embodiments, platform 502 and display 520 may be an integrated unit. For example, the display 520 and the content service device 530 may be integrated, or the display 520 and the content delivery device 540 may be integrated. These examples are not intended to limit the invention.
In various embodiments, system 500 may be implemented as a wireless system, a wired system, or a combination of both. When implemented as a wireless system, system 500 may include components or interfaces suitable for communicating over a wireless shared media, such as one or more antennas, transmitters, receivers, transceivers, amplifiers, filters, control logic, and so forth. Examples of wireless shared media may include portions of a wireless spectrum, such as the RF spectrum, and so forth. When implemented as a wired system, system 500 may include components and interfaces suitable for communicating over wired communications media, such as input/output (I/O) adapters, physical connectors to connect the I/O adapter with a corresponding wired communications medium, a Network Interface Card (NIC), disc controller, video controller, audio controller, and so forth. Examples of wired communications media may include a wire, cable, metal leads, Printed Circuit Board (PCB), backplane, switch fabric, semiconductor material, twisted-pair wire, co-axial cable, fiber optics, and so forth.
Platform 502 may establish one or more logical or physical channels to communicate information. The information may include media information and control information. Media information may refer to any data representing content meant for a user. Examples of content may include, for example, data from a voice conversation, videoconference, streaming video, electronic mail ("email") message, voice mail message, alphanumeric symbols, graphics, image, video, text, and so forth. Data from a voice conversation may be, for example, speech information, silence periods, background noise, comfort noise, tones, and the like. Control information may refer to any data representing commands, instructions, or control words meant for an automated system. For example, control information may be used to route media information through a system, or to instruct a node to process the media information in a predetermined manner. However, the embodiments are not limited to the elements or the context shown or described in FIG. 5.
As described above, the system 500 may be implemented in different physical styles or form factors. Fig. 6 illustrates various embodiments of a small form factor device 600 in which the system 500 may be implemented. For example, in various embodiments, device 600 may be implemented as a mobile computing device having wireless capabilities. For example, a mobile computing device may refer to any device having a processing system and a mobile power source or power source, such as one or more batteries.
For example, examples of a mobile computing device may include a Personal Computer (PC), a laptop computer, an ultra-laptop computer, a tablet, a touchpad, a portable computer, a handheld computer, a palmtop computer, a Personal Digital Assistant (PDA), a cellular telephone, a combination cellular telephone/PDA, a television, a smart device (e.g., a smart phone, a smart tablet, or a smart television), a Mobile Internet Device (MID), a messaging device, a data communication device, and so forth.
Examples of mobile computing devices may also include computers arranged to be worn by a person, such as wrist computers, finger computers, ring computers, eyeglass computers, buckle computers, arm band computers, shoe computers, clothing computers, and other wearable computers. For example, in various embodiments, a mobile computing device may be implemented as a smartphone capable of executing computer applications as well as voice communications and/or data communications. Although some embodiments are described with a mobile computing device implemented as a smartphone by way of example, it may be appreciated that other embodiments may be implemented using other wireless mobile computing devices. The embodiments are not limited in this context.
As shown in FIG. 6, device 600 may include a housing 602, a display 604, an input/output (I/O) device 606, and an antenna 608. Device 600 may also include a navigation component 612. Display 604 may include any suitable display device for displaying information appropriate for a mobile computing device. The I/O device 606 may comprise any suitable I/O device for entering information into a mobile computing device. Examples of I/O devices 606 may include alphanumeric keyboards, numeric keypads, touch pads, input keys, buttons, switches, rocker switches, microphones, speakers, voice recognition devices and software, and so forth. Information may also be entered into the device 600 through a microphone. Such information may be digitized by a voice recognition device. The embodiments are not limited in this context.
Various embodiments may be implemented using hardware elements, software elements, or a combination of both. Examples of hardware elements may include processors, microprocessors, circuits, circuit elements (e.g., transistors, resistors, capacitors, inductors, and so forth), integrated circuits, Application Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), Digital Signal Processors (DSPs), Field Programmable Gate Arrays (FPGAs), logic gates, registers, semiconductor devices, chips, microchips, chipsets, and so forth. Examples of software may include software components, programs, application software, computer programs, application programs, system programs, machine programs, operating system software, middleware, firmware, software modules, routines, subroutines, functions, methods, procedures, software interfaces, Application Program Interfaces (APIs), instruction sets, computing code, computer code, code segments, computer code segments, words, values, symbols, or any combination thereof. Determining whether an embodiment is implemented using hardware elements and/or software elements may vary in accordance with any number of factors, such as desired computational rate, power levels, thermal tolerances, processing cycle budget, input data rates, output data rates, memory resources, data bus speeds, and other design or operational constraints.
One or more aspects of at least one embodiment may be implemented by representative instructions stored on a machine-readable medium that represent various logic within a processor and that, when read by a machine, cause the machine to fabricate logic to perform the techniques described herein. These representations, known as "IP cores," may be stored on a tangible, machine-readable medium and provided to various customers or manufacturing facilities to load into the fabrication machines that actually make the logic or processor.
All of the systems and processes discussed herein can be implemented in program code stored on one or more computer readable media. Such media may include, for example, floppy disks, CD-ROMs, DVD-ROMs, one or more types of "disks," magnetic tape, memory cards, flash drives, solid state drives, and solid state Random Access Memory (RAM) or Read Only Memory (ROM) storage units. Embodiments are thus not limited to any specific combination of hardware and software.
Embodiments are described herein for illustrative purposes only. Those skilled in the art will recognize from this description that embodiments are not limited to the embodiments described, but can be practiced with various modifications and alterations limited only by the spirit and scope of the claims.

Claims (18)

1. A computer-implemented method, the method comprising:
excluding static regions from a set of video frames;
temporally interpolating the set of video frames from which the static region has been excluded to produce interpolated video frames; and
combining the static region with the temporally interpolated video frame to generate a video frame.
2. The method of claim 1, wherein the static region comprises a plurality of regions in the set of video frames.
3. The method of claim 1, further comprising detecting a static region in the set of video frames.
4. The method of claim 3, further comprising:
generating a static map associated with a region of the set of video frames corresponding to the detected static region;
excluding the static region based on the static map; and
combining the static region with the interpolated video frame based on the static map.
5. The method of claim 1, wherein the excluding comprises:
removing the static region from the set of video frames; and
spatially interpolating the set of video frames to fill the region from which the static region was removed.
6. The method of claim 1, wherein the set of video frames comprises a plurality of identified video key frames.
7. A system for generating an interpolated video sequence, the system comprising:
a machine-readable medium having stored thereon processor-executable instructions; and
a processor that executes the instructions to:
excluding static regions from a set of video frames;
temporally interpolating the set of video frames from which the static region has been excluded to produce interpolated video frames; and
combining the static region with the temporally interpolated video frame to generate a video frame.
8. The system of claim 7, wherein the static region comprises a plurality of regions in the set of video frames.
9. The system of claim 7, further comprising detecting a static region in the set of video frames.
10. The system of claim 9, wherein the processor is further instructed to:
generating a static map associated with a region of the set of video frames corresponding to the detected static region;
excluding the static region based on the static map; and
combining the static region with the interpolated video frame based on the static map.
11. The system of claim 7, wherein the excluding comprises:
removing the static region from the set of video frames; and
spatially interpolating the set of video frames to fill the region from which the static region was removed.
12. The system of claim 7, wherein the set of video frames comprises a plurality of identified video key frames.
13. A computer-readable medium having stored thereon processor-executable instructions, the medium comprising:
instructions for excluding static regions from a set of video frames;
instructions for temporally interpolating the set of video frames from which the static region has been excluded to produce interpolated video frames; and
instructions to combine the static region with the temporally interpolated video frame to generate a video frame.
14. The medium of claim 13, wherein the static region comprises a plurality of regions in the set of video frames.
15. The medium of claim 13, further comprising detecting a static region in the set of video frames.
16. The medium of claim 15, further comprising:
instructions to generate a static map associated with a region of the set of video frames corresponding to the detected static region;
instructions to exclude the static region based on the static map; and
instructions to combine the static region with the interpolated video frame based on the static map.
17. The medium of claim 13, wherein the excluding comprises:
removing the static region from the set of video frames; and
spatially interpolating the set of video frames to fill the region from which the static region was removed.
18. The medium of claim 13, wherein the set of video frames comprises a plurality of identified video key frames.
HK14107485.1A 2012-06-29 2014-07-22 Method and system for temporal frame interpolation with static region exclusion HK1194233B (en)

Applications Claiming Priority (1)

Application Number: US13/539,035
Priority Date: 2012-06-29

Publications (2)

Publication Number    Publication Date
HK1194233A (en)       2014-10-10
HK1194233B (en)       2019-10-04

