WO2019217264A1 - Dynamic foveated compression - Google Patents
Dynamic foveated compression
- Publication number
- WO2019217264A1 (PCT/US2019/030822)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- warped image
- image
- warped
- scaling factors
- portions
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Ceased
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/167—Position within a video image, e.g. region of interest [ROI]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/119—Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/21—Server components or server architectures
- H04N21/218—Source of audio or video content, e.g. local disk arrays
- H04N21/21805—Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/631—Multimode Transmission, e.g. transmitting basic layers and enhancement layers of the content over different transmission paths or transmitting with different error corrections, different keys or with different transmission protocols
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/81—Monomedia components thereof
- H04N21/816—Monomedia components thereof involving special video data, e.g. 3D video
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/60—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding
- H04N19/63—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets
Definitions
- the present disclosure generally relates to image compression, and in particular, to systems, methods, and devices for compressing images for simulated reality with a varying amount of detail.
- a physical setting refers to a world that individuals can sense and/or with which individuals can interact without assistance of electronic systems.
- Physical settings (e.g., a physical forest) include physical elements (e.g., physical trees, physical structures, and physical animals). Individuals can directly interact with and/or sense the physical setting, such as through touch, sight, smell, hearing, and taste.
- a simulated reality (SR) setting refers to an entirely or partly computer-created setting that individuals can sense and/or with which individuals can interact via an electronic system.
- In SR, a subset of an individual’s movements is monitored, and, responsive thereto, one or more attributes of one or more virtual objects in the SR setting is changed in a manner that conforms with one or more physical laws.
- a SR system may detect an individual walking a few paces forward and, responsive thereto, adjust graphics and audio presented to the individual in a manner similar to how such scenery and sounds would change in a physical setting. Modifications to attribute(s) of virtual object(s) in a SR setting also may be made responsive to representations of movement (e.g., audio instructions).
- An individual may interact with and/or sense a SR object using any one of his senses, including touch, smell, sight, taste, and sound.
- an individual may interact with and/or sense aural objects that create a multi-dimensional (e.g., three dimensional) or spatial aural setting, and/or enable aural transparency.
- Multi-dimensional or spatial aural settings provide an individual with a perception of discrete aural sources in multi-dimensional space.
- Aural transparency selectively incorporates sounds from the physical setting, either with or without computer-created audio.
- an individual may interact with and/or sense only aural objects.
- a VR setting refers to a simulated setting that is designed only to include computer-created sensory inputs for at least one of the senses.
- a VR setting includes multiple virtual objects with which an individual may interact and/or sense. An individual may interact and/or sense virtual objects in the VR setting through a simulation of a subset of the individual’s actions within the computer-created setting, and/or through a simulation of the individual or his presence within the computer-created setting.
- a MR setting refers to a simulated setting that is designed to integrate computer-created sensory inputs (e.g., virtual objects) with sensory inputs from the physical setting, or a representation thereof.
- a mixed reality setting is between, and does not include, a VR setting at one end and an entirely physical setting at the other end.
- computer-created sensory inputs may adapt to changes in sensory inputs from the physical setting.
- some electronic systems for presenting MR settings may monitor orientation and/or location with respect to the physical setting to enable interaction between virtual objects and real objects (which are physical elements from the physical setting or representations thereof). For example, a system may monitor movements so that a virtual plant appears stationary with respect to a physical building.
- An AR setting refers to a simulated setting in which at least one virtual object is superimposed over a physical setting, or a representation thereof.
- an electronic system may have an opaque display and at least one imaging sensor for capturing images or video of the physical setting, which are representations of the physical setting. The system combines the images or video with virtual objects, and displays the combination on the opaque display.
- An individual using the system, views the physical setting indirectly via the images or video of the physical setting, and observes the virtual objects superimposed over the physical setting.
- When a system uses image sensor(s) to capture images of the physical setting, and presents the AR setting on the opaque display using those images, the displayed images are called a video pass-through.
- an electronic system for displaying an AR setting may have a transparent or semi-transparent display through which an individual may view the physical setting directly.
- the system may display virtual objects on the transparent or semi-transparent display, so that an individual, using the system, observes the virtual objects superimposed over the physical setting.
- a system may comprise a projection system that projects virtual objects into the physical setting.
- the virtual objects may be projected, for example, on a physical surface or as a holograph, so that an individual, using the system, observes the virtual objects superimposed over the physical setting.
- An augmented reality setting also may refer to a simulated setting in which a representation of a physical setting is altered by computer-created sensory information.
- a portion of a representation of a physical setting may be graphically altered (e.g., enlarged), such that the altered portion may still be representative of but not a faithfully- reproduced version of the originally captured image(s).
- a system may alter at least one of the sensor images to impose a particular viewpoint different than the viewpoint captured by the image sensor(s).
- a representation of a physical setting may be altered by graphically obscuring or excluding portions thereof.
- An AV setting refers to a simulated setting in which a computer-created or virtual setting incorporates at least one sensory input from the physical setting.
- the sensory input(s) from the physical setting may be representations of at least one characteristic of the physical setting.
- a virtual object may assume a color of a physical element captured by imaging sensor(s).
- a virtual object may exhibit characteristics consistent with actual weather conditions in the physical setting, as identified via imaging, weather-related sensors, and/or online weather data.
- an augmented virtuality forest may have virtual trees and structures, but the animals may have features that are accurately reproduced from images taken of physical animals.
- a head mounted system may have an opaque display and speaker(s).
- a head mounted system may be designed to receive an external display (e.g., a smartphone).
- the head mounted system may have imaging sensor(s) and/or microphones for taking images/video and/or capturing audio of the physical setting, respectively.
- a head mounted system also may have a transparent or semi-transparent display.
- the transparent or semi-transparent display may incorporate a substrate through which light representative of images is directed to an individual’s eyes.
- the display may incorporate LEDs, OLEDs, a digital light projector, a laser scanning light source, liquid crystal on silicon, or any combination of these technologies.
- the substrate through which the light is transmitted may be a light waveguide, optical combiner, optical reflector, holographic substrate, or any combination of these substrates.
- the transparent or semi-transparent display may transition selectively between an opaque state and a transparent or semi-transparent state.
- the electronic system may be a projection-based system.
- a projection-based system may use retinal projection to project images onto an individual’s retina.
- a projection system also may project virtual objects into a physical setting (e.g., onto a physical surface or as a holograph).
- SR systems include heads up displays, automotive windshields with the ability to display graphics, windows with the ability to display graphics, lenses with the ability to display graphics, headphones or earphones, speaker arrangements, input mechanisms (e.g., controllers having or not having haptic feedback), tablets, smartphones, and desktop or laptop computers.
- Rendering an image for an SR experience can be computationally expensive.
- portions of the image are rendered on a display panel with different resolutions. For example, in various implementations, portions corresponding to a user’s field of focus are rendered with higher resolution than portions corresponding to a user’s periphery. Typical methods of compressing such an image fail to take advantage of the knowledge that different areas of the image include different levels of detail.
- Figure 1 is a block diagram of an example operating environment in accordance with some implementations.
- Figure 2 illustrates an SR pipeline that receives SR content and displays an image on a display panel based on the SR content in accordance with some implementations.
- Figures 3A-3D illustrate various rendering resolution functions in a first dimension in accordance with various implementations.
- Figures 4A-4D illustrate various two-dimensional rendering resolution functions in accordance with various implementations.
- Figure 5A illustrates an example rendering resolution function that characterizes a resolution in a display space as a function of angle in a warped space in accordance with some implementations.
- Figure 5B illustrates the integral of the rendering resolution function of Figure 5A in accordance with some implementations.
- Figure 5C illustrates the tangent of the inverse of the integral of the rendering resolution function of Figure 5A in accordance with some implementations.
- Figure 6A illustrates an example rendering resolution function for performing static foveation in accordance with some implementations.
- Figure 6B illustrates an example rendering resolution function for performing dynamic foveation in accordance with some implementations.
- Figure 7 is a flowchart representation of a method of rendering an image based on a rendering resolution function in accordance with some implementations.
- Figure 8A illustrates an example image representation, in a display space, of SR content to be rendered in accordance with some implementations.
- Figure 8B illustrates a warped image of the SR content of Figure 8A in accordance with some implementations.
- Figure 9 is a flowchart representation of a method of transmitting an image in accordance with some implementations.
- Figures 10A-10B illustrate an example image and a wavelet image generated by a one-layer wavelet transform of the example image.
- Figure 11 is a flowchart representation of a method of receiving an image with a constrained rendering resolution function in accordance with some implementations.
- Various implementations disclosed herein include devices, systems, and methods for transmitting a warped image.
- the method includes receiving a warped image representing simulated reality (SR) content to be displayed in a display space, the warped image having a plurality of pixels at respective locations uniformly spaced in a grid pattern in a warped space, wherein the plurality of pixels are respectively associated with a plurality of respective pixel values and a plurality of respective scaling factors indicating a plurality of respective resolutions at a plurality of respective locations in the display space.
- the method includes transmitting the warped image over one or more channels such that at least one bandwidth of the one or more channels used by transmission of respective portions of the warped image is based on one or more scaling factors of the plurality of scaling factors corresponding to the respective portions of the warped image.
- Various implementations disclosed herein include devices, systems, and methods for receiving a warped image.
- the method includes receiving at least a subset of a plurality of data packets corresponding to portions of a warped image representing simulated reality (SR) content to be displayed in a display space, the warped image having a plurality of pixels at respective locations uniformly spaced in a grid pattern in a warped space.
- the method includes receiving a plurality of scaling factors indicating respective resolutions of the portions of the warped image at a plurality of respective locations in the display space.
- the method includes detecting an error condition for a particular one of the plurality of data packets.
- the method includes resolving the error condition based on a particular one of the plurality of scaling factors corresponding to the particular one of the plurality of data packets.
- a device includes one or more processors, a non-transitory memory, and one or more programs; the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors and the one or more programs include instructions for performing or causing performance of any of the methods described herein.
- a non-transitory computer readable storage medium has stored therein instructions, which, when executed by one or more processors of a device, cause the device to perform or cause performance of any of the methods described herein.
- a device includes: one or more processors, a non-transitory memory, and means for performing or causing performance of any of the methods described herein.
- FIG. 1 is a block diagram of an example operating environment 100 in accordance with some implementations. While pertinent features are shown, those of ordinary skill in the art will appreciate from the present disclosure that various other features have not been illustrated for the sake of brevity and so as not to obscure more pertinent aspects of the example implementations disclosed herein. To that end, as a non-limiting example, the operating environment 100 includes a controller 110 and a head-mounted device (HMD) 120.
- the controller 110 is configured to manage and coordinate a simulated reality (SR) experience for the user.
- the controller 110 includes a suitable combination of software, firmware, and/or hardware.
- the controller 110 is a computing device that is local or remote relative to the scene 105.
- the controller 110 is a local server located within the scene 105.
- the controller 110 is a remote server located outside of the scene 105 (e.g., a cloud server, central server, etc.).
- the controller 110 is communicatively coupled with the HMD 120 via one or more wired or wireless communication channels 144 (e.g., BLUETOOTH, IEEE 802.11x, IEEE 802.16x, IEEE 802.3x, etc.). In another example, the controller 110 is included within the enclosure of HMD 120.
- the HMD 120 is configured to present the SR experience to the user.
- the HMD 120 includes a suitable combination of software, firmware, and/or hardware.
- the functionalities of the controller 110 are provided by and/or combined with the HMD 120.
- the HMD 120 provides an SR experience to the user while the user is virtually and/or physically present within the scene 105.
- the HMD 120 is configured to present AR content (e.g., one or more virtual objects) and to enable optical see-through of the scene 105.
- the HMD 120 is configured to present AR content (e.g., one or more virtual objects) overlaid or otherwise combined with images or portions thereof captured by the scene camera of HMD 120.
- while presenting AV content, the HMD 120 is configured to present elements of the real world, or representations thereof, combined with or superimposed over a user’s view of a computer-simulated environment.
- the HMD 120 is configured to present VR content.
- the user wears the HMD 120 on his/her head.
- the HMD 120 includes one or more SR displays provided to display the SR content, optionally through an eyepiece or other optical lens system.
- the HMD 120 encloses the field-of-view of the user.
- the HMD 120 is replaced with a handheld device (such as a smartphone or tablet) configured to present SR content in which the user does not wear the HMD 120, but holds the device with a display directed towards the field-of-view of the user and a camera directed towards the scene 105.
- the handheld device can be placed within an enclosure that can be worn on the head of the user.
- the HMD 120 is replaced with an SR chamber, enclosure, or room configured to present SR content, wherein the user does not wear or hold the HMD 120.
- the HMD 120 includes an SR pipeline that presents the SR content.
- Figure 2 illustrates an SR pipeline 200 that receives SR content and displays an image on a display panel 240 based on the SR content.
- the SR pipeline 200 includes a rendering module 210 that receives the SR content (and eye tracking data from an eye tracker 260) and renders an image based on the SR content.
- SR content includes definitions of geometric shapes of virtual objects, colors and/or textures of virtual objects, images (such as a see-through image of the scene), and other information describing content to be represented in the rendered image.
- An image includes a matrix of pixels, each pixel having a corresponding pixel value and a corresponding pixel location.
- the pixel values range from 0 to 255.
- each pixel value is a color triplet including three values corresponding to three color channels.
- an image is an RGB image and each pixel value includes a red value, a green value, and a blue value.
- an image is a YUV image and each pixel value includes a luminance value and two chroma values.
- the image is a YUV444 image in which each chroma value is associated with one pixel.
- the image is a YUV420 image in which each chroma value is associated with a 2x2 block of pixels (e.g., the chroma values are downsampled).
- an image includes a matrix of tiles, each tile having a corresponding tile location and including a block of pixels with corresponding pixel values.
- each tile is a 32x32 block of pixels. While specific pixel values, image formats, and tile sizes are provided, it should be appreciated that other values, formats, and tile sizes may be used.
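As an illustrative aside (not part of the disclosure), the tiled layout described above can be sketched as follows; the 32x32 tile size matches the example, while numpy and the function name are assumptions used only for illustration.

```python
import numpy as np

TILE_SIZE = 32  # assumed tile edge length, matching the 32x32 example above

def split_into_tiles(image: np.ndarray, tile: int = TILE_SIZE) -> np.ndarray:
    """Split an HxWx3 image into a matrix of (tile x tile) pixel blocks.

    Returns an array of shape (H//tile, W//tile, tile, tile, 3), i.e. a
    matrix of tiles, each tile holding a block of pixels with pixel values.
    """
    h, w, c = image.shape
    assert h % tile == 0 and w % tile == 0, "image must be tile-aligned"
    return image.reshape(h // tile, tile, w // tile, tile, c).swapaxes(1, 2)

# Example: a 256x256 RGB image with 8-bit pixel values (0..255)
rgb = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)
tiles = split_into_tiles(rgb)
print(tiles.shape)  # (8, 8, 32, 32, 3): an 8x8 matrix of 32x32 tiles
```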
- the image rendered by the rendering module 210 (e.g., the rendered image) is provided to a transport module 220 that couples the rendering module 210 to a display module 230.
- the transport module 220 includes a compression module 222 that compresses the rendered image (resulting in a compressed image), a communications channel 224 that carries the compressed image, and a decompression module 226 that decompresses the compressed image (resulting in a decompressed image).
- the decompressed image is provided to a display module 230 that converts the decompressed image into panel data.
- the panel data is provided to a display panel 240 that displays a displayed image as described by (e.g., according to) the panel data.
- the display module 230 includes a lens compensation module 232 that compensates for distortion caused by an eyepiece 242 of the HMD.
- the lens compensation module 232 predistorts the decompressed image in an inverse relationship to the distortion caused by the eyepiece 242 such that the displayed image, when viewed through the eyepiece 242 by a user 250, appears undistorted.
- the display module 230 also includes a panel compensation module 234 that converts image data into panel data to be read by the display panel 240.
- the eyepiece 242 limits the resolution that can be perceived by the user 250.
- the maximum resolution that the eyepiece 242 can support is expressed as an eyepiece resolution function that varies as a function of distance from an origin of the display space.
- the maximum resolution that the eyepiece 242 can support is expressed as an eyepiece resolution function that varies as a function of an angle between the optical axis of the user 250 and the optical axis when the user 250 is looking at the center of the eyepiece 242.
- the maximum resolution that the eyepiece 242 can support is expressed as an eyepiece resolution function that varies as a function of an angle between the optical axis of the user 250 and the optical axis when the user 250 is looking at the center of the display panel 240.
- the display panel 240 includes a matrix of MxN pixels located at respective locations in a display space. The display panel 240 displays the displayed image by emitting light from each of the pixels as described by (e.g., according to) the panel data.
- the SR pipeline 200 includes an eye tracker 260 that generates eye tracking data indicative of a gaze of the user 250.
- the eye tracking data includes data indicative of a fixation point of the user 250 on the display panel 240.
- the eye tracking data includes data indicative of a gaze angle of the user 250, such as the angle between the current optical axis of the user 250 and the optical axis when the user 250 is looking at the center of the display panel 240.
- in order to render an image for display on the display panel 240, the rendering module 210 generates MxN pixel values, one for each pixel of an MxN image.
- each pixel of the rendered image corresponds to a pixel of the display panel 240 with a corresponding location in the display space.
- the rendering module 210 generates a pixel value for MxN pixel locations uniformly spaced in a grid pattern in the display space.
- the rendering module 210 generates a tile of TxT pixels, each pixel having a corresponding pixel value, at M/TxN/T tile locations uniformly spaced in a grid pattern in the display space.
- Rendering MxN pixel values can be computationally expensive. Further, as the size of the rendered image increases, so does the amount of processing needed to compress the image at the compression module 222, the amount of bandwidth needed to transport the compressed image across the communications channel 224, and the amount of processing needed to decompress the compressed image at the decompression module 226.
- One way to reduce this computational load is foveation (e.g., foveated imaging).
- Foveation is a digital image processing technique in which the image resolution, or amount of detail, varies across an image.
- a foveated image has different resolutions at different parts of the image.
- Humans typically have relatively weak peripheral vision.
- resolvable resolution for a user is maximum over a field of fixation (e.g., where the user is gazing) and falls off in an inverse linear fashion.
- the displayed image displayed by the display panel 240 is a foveated image having a maximum resolution at a field of focus and a resolution that decreases in an inverse linear fashion in proportion to the distance from the field of focus.
- the foveated image perceptually matches a non-foveated image, e.g., the processing is “lossless.”
- the foveated image is perceptually better than a non-foveated image, e.g., the quality of the image is greater at the gaze location than a non-foveated image of greater size.
- the foveated image is perceptually degraded as compared to a non-foveated image, but more efficient in power/bandwidth, e.g., the processing is “lossy.”
- an MxN foveated image includes less information than an MxN unfoveated image.
- the rendering module 210 generates, as a rendered image, a foveated image.
- the rendering module 210 can generate an MxN foveated image more quickly and with less processing power (and battery power) than the rendering module 210 can generate an MxN unfoveated image.
- an MxN foveated image can be expressed with less data than an MxN unfoveated image.
- an MxN foveated image file is smaller in size than an MxN unfoveated image file.
- compressing an MxN foveated image using various compression techniques results in fewer bits than compressing an MxN unfoveated image.
- a foveation ratio, R, can be defined as the amount of information in the MxN unfoveated image divided by the amount of information in the MxN foveated image.
- the foveation ratio is between 1.5 and 10.
- the foveation ratio is 2.
- the foveation ratio is 3 or 4.
- the foveation ratio is constant among images.
- the foveation ratio is selected based on the image being rendered.
- in order to render an image for display on the display panel 240, the rendering module 210 generates M/RxN/R pixel values, one for each pixel of an M/RxN/R warped image. Each pixel of the warped image corresponds to an area greater than a pixel of the display panel 240 at a corresponding location in the display space. Thus, the rendering module 210 generates a pixel value for each of M/RxN/R locations in the display space that are not uniformly distributed in a grid pattern.
- the rendering module 210 generates a tile of TxT pixels, each pixel having a corresponding pixel value, at each of M/(RT)xN/(RT) locations in the display space that are not uniformly distributed in a grid pattern.
- the respective area in the display space corresponding to each pixel value (or each tile) is defined by the corresponding location in the display space (a rendering location) and a scaling factor (or a set of a horizontal scaling factor and a vertical scaling factor).
- the rendering module 210 generates, as a rendered image, a warped image.
- the warped image includes a matrix of M/RxN/R pixel values for M/RxN/R locations uniformly spaced in a grid pattern in a warped space that is different than the display space.
- the warped image includes a matrix of M/RxN/R pixel values for M/RxN/R locations in the display space that are not uniformly distributed in a grid pattern.
- While the resolution of the warped image is uniform in the warped space, it varies in the display space. This is described in greater detail below with respect to Figures 8A and 8B.
- the rendering module 210 determines the rendering locations and the corresponding scaling factors based on a rendering resolution function that generally characterizes the resolution of the rendered image in the displayed space.
- the rendering resolution function, S(x), is a function of a distance from an origin of the display space (which may correspond to the center of the display panel 240).
- the rendering resolution function, S(θ), is a function of an angle between an optical axis of the user 250 and the optical axis when the user 250 is looking at the center of the display panel 240.
- the rendering resolution function, S(θ), is expressed in pixels per degree (PPD).
- the rendering resolution function (in a first dimension) is characterized by its maximum S_max (e.g., approximately 60 PPD), its asymptote S_min, a parameter θ_fof that characterizes the size of the field of focus, and a parameter w that characterizes the width of the rendering resolution function.
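A plausible inverse-linear form for this rendering resolution function, consistent with the parameters just listed and with the falloff shown in Figure 3A, is the following sketch (an assumption, not the disclosed equation):

```latex
S(\theta) =
\begin{cases}
  S_{\max}, & |\theta| \le \theta_{fof} \\[4pt]
  S_{\min} + \dfrac{S_{\max} - S_{\min}}{1 + w\,(|\theta| - \theta_{fof})}, & |\theta| > \theta_{fof}
\end{cases}
```

Under this form, S(θ) equals S_max throughout the field of focus, decreases in an inverse linear fashion outside it, and approaches the asymptote S_min.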
- Figure 3A illustrates a rendering resolution function 310 (in a first dimension) which falls off in an inverse linear fashion from a field of focus.
- Figure 3B illustrates a rendering resolution function 320 (in a first dimension) which falls off in a linear fashion from a field of focus.
- Figure 3C illustrates a rendering resolution function 330 (in a first dimension) which is approximately Gaussian.
- Figure 3D illustrates a rendering resolution function 340 (in a first dimension) which falls off in a rounded stepwise fashion.
- Each of the rendering resolution functions 310-340 of Figures 3A-3D is in the form of a peak including a peak height (e.g., a maximum value) and a peak width.
- the peak width can be defined in a number of ways.
- the peak width is defined as the size of the field of focus (as illustrated by width 311 of Figure 3A and width 321 of Figure 3B).
- the peak width is defined as the full width at half maximum (as illustrated by width 331 of Figure 3C).
- the peak width is defined as the distance between the two inflection points nearest the origin (as illustrated by width 341 of Figure 3D).
- Figures 3A-3D illustrate rendering resolution functions in a single dimension.
- the rendering resolution function used by the rendering module 210 can be a two-dimensional function.
- Figure 4A illustrates a two-dimensional rendering resolution function 410 in which the rendering resolution function 410 is independent in a horizontal dimension (θ) and a vertical dimension (φ).
- Figure 4C illustrates a two-dimensional rendering resolution function 430 in which the rendering resolution function 430 is different in a horizontal dimension (θ) and a vertical dimension (φ).
- Figure 4D illustrates a two-dimensional rendering resolution function 440 based on a human vision model.
- As described in a related application (U.S. Prov. Patent App. No. 62/667,723, entitled “DYNAMIC FOVEATED RENDERING,” filed May 7, 2018, and hereby incorporated by reference in its entirety), the rendering module 210 generates the rendering resolution function based on a number of factors, including biological information regarding human vision, eye tracking data, eye tracking metadata, the SR content, and various constraints (such as constraints imposed by the hardware of the HMD).
- Figure 5A illustrates an example rendering resolution function 510, denoted S(θ).
- the rendering resolution function 510 is a constant (e.g., S_max) within a field of focus (between −θ_fof and +θ_fof) and falls off in an inverse linear fashion outside this window.
- Figure 5B illustrates the integral 520, denoted U(θ), of the rendering resolution function 510 of Figure 5A within a field of view, e.g., from −θ_fov to +θ_fov.
- U(θ) = ∫_{−θ_fov}^{θ} S(φ) dφ.
- the integral 520 ranges from 0 at −θ_fov to a maximum value, denoted U_max, at +θ_fov.
- Figure 5C illustrates the tangent 530, denoted V(x_R), of the inverse of the integral 520 of Figure 5B.
- V(x_R) = tan(U⁻¹(x_R)).
- the tangent 530 illustrates a direct mapping from rendered space, in x_R, to display space, in x_D.
- the uniform sampling points in the warped space (equally spaced along the x_R axis) correspond to non-uniform sampling points in the display space (non-equally spaced along the x_D axis).
- Scaling factors can be determined by the distances between the non-uniform sampling points in the display space.
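A minimal numerical sketch of this Figure 5 mapping, assuming Python/numpy and the illustrative inverse-linear S(θ) above; the parameter values and the numerical integration scheme are assumptions, not taken from the disclosure.

```python
import numpy as np

def warped_to_display_mapping(s, theta_fov_deg, n_samples):
    """Integrate S(theta) over the field of view, invert the integral, and
    take the tangent to map uniformly spaced warped-space samples x_R to
    non-uniformly spaced display-space locations x_D (cf. Figures 5A-5C)."""
    thetas = np.linspace(-theta_fov_deg, theta_fov_deg, 10001)
    # U(theta): cumulative (trapezoidal) integral of S from -theta_fov to theta
    areas = (s(thetas[1:]) + s(thetas[:-1])) / 2 * np.diff(thetas)
    u = np.concatenate(([0.0], np.cumsum(areas)))

    x_r = np.linspace(0.0, u[-1], n_samples)      # uniform in warped space
    theta_of_xr = np.interp(x_r, u, thetas)       # U^-1(x_R)
    x_d = np.tan(np.radians(theta_of_xr))         # V(x_R) = tan(U^-1(x_R))

    # Scaling factors from distances between the non-uniform display points
    scale = np.diff(x_d) / np.min(np.diff(x_d))
    return x_d, scale

# Illustrative inverse-linear rendering resolution function (assumed parameters)
s_max, s_min, theta_fof, w = 60.0, 10.0, 10.0, 0.2
def s(theta):
    theta = np.abs(np.asarray(theta, dtype=float))
    falloff = s_min + (s_max - s_min) / (1 + w * np.maximum(theta - theta_fof, 0))
    return np.where(theta <= theta_fof, s_max, falloff)

x_d, scale = warped_to_display_mapping(s, theta_fov_deg=45.0, n_samples=64)
print(scale[:4], scale[len(scale) // 2])  # peripheral samples span more display area
```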
- When performing static foveation, the rendering module 210 uses a rendering resolution function that does not depend on the gaze of the user. However, when performing dynamic foveation, the rendering module 210 uses a rendering resolution function that depends on the gaze of the user. In particular, when performing dynamic foveation, the rendering module 210 uses a rendering resolution function that has a peak height at a location corresponding to a location in the display space at which the user is looking (e.g., the point of fixation as determined by the eye tracker 260).
- Figure 6A illustrates a rendering resolution function 610 that may be used by the rendering module 210 when performing static foveation.
- the rendering module 210 may also use the rendering resolution function 610 of Figure 6A when performing dynamic foveation and the user is looking at the center of the display panel 240.
- Figure 6B illustrates a rendering resolution function 620 that may be used by the rendering module when performing dynamic foveation and the user is looking at an angle (θ_g) away from the center of the display panel 240.
- Figure 7 is a flowchart representation of a method 700 of rendering an image in accordance with some implementations.
- the method 700 is performed by a rendering module, such as the rendering module 210 of Figure 2.
- the method 700 is performed by an HMD, such as the HMD 120 of Figure 1, or a portion thereof, such as the SR pipeline 200 of Figure 2.
- the method 700 is performed by a device with one or more processors, non-transitory memory, and one or more SR displays.
- the method 700 is performed by processing logic, including hardware, firmware, software, or a combination thereof.
- the method 700 is performed by a processor executing instructions (e.g., code) stored in a non-transitory computer-readable medium (e.g., a memory).
- the method 700 begins at block 710 with the rendering module obtaining SR content to be rendered into a display space.
- SR content can include definitions of geometric shapes of virtual objects, colors and/or textures of virtual objects, images (such as a see-through image of the scene), or other information describing content to be represented in the rendered image.
- the method 700 continues at block 720 with the rendering module obtaining a rendering resolution function defining a mapping between the display space and a warped space.
- Various rendering resolution functions are illustrated in Figures 3A-3D and Figures 4A-4D. Various methods of generating a rendering resolution function are described further below.
- the rendering resolution function generally characterizes the resolution of the rendered image in the display space.
- the integral of the rendering resolution function provides a mapping between the display space and the warped space (as illustrated in Figures 5A-5C).
- the rendering resolution function, S(x) is a function of a distance from an origin of the display space.
- the rendering resolution function, S(θ), is a function of an angle between an optical axis of the user and the optical axis when the user is looking at the center of the display panel.
- the rendering resolution function characterizes a resolution in the display space as a function of angle (in the display space).
- the rendering resolution function, S(θ), is expressed in pixels per degree (PPD).
- the rendering module performs dynamic foveation and the rendering resolution function depends on the gaze of the user.
- obtaining the rendering resolution function includes obtaining eye tracking data indicative of a gaze of a user, e.g., from the eye tracker 260 of Figure 2, and generating the rendering resolution function based on the eye tracking data.
- the eye tracking data includes at least one of data indicative of a gaze angle of the user or data indicative of a fixation point of the user.
- generating the rendering resolution function based on the eye tracking data includes generating a rendering resolution function having a peak height at a location the user is looking at, as indicated by the eye tracking data.
- the method 700 continues at block 730 with the rendering module generating a rendered image based on the SR content and the rendering resolution function.
- the rendered image includes a warped image with a plurality of pixels at respective locations uniformly spaced in a grid pattern in the warped space.
- the plurality of pixels are respectively associated with a plurality of respective pixel values based on the SR content.
- the plurality of pixels are respectively associated with a plurality of respective scaling factors defining an area in the display space based on the rendering resolution function.
- An image that is said to be in a display space has uniformly spaced regions in the display space.
- the plurality of respective scaling factors (like the rendering resolution function) define a mapping between the warped space and the display space.
- the warped image includes a plurality of tiles at respective locations uniformly spaced in a grid pattern in the warped space and each of the plurality of tiles is associated with a respective one or more scaling factors.
- each tile (including a plurality of pixels) is associated with a single horizontal scaling factor and a single vertical scaling factor.
- each tile is associated with a single scaling factor that is used for both horizontal and vertical scaling.
- each tile is a 32x32 matrix of pixels.
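A minimal data-structure sketch of such a warped image with per-tile scaling factors, assuming Python/numpy; the field names and accessor are illustrative, not from the disclosure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class WarpedImage:
    """Warped-space pixels plus per-tile scaling factors mapping each
    uniformly spaced warped-space tile to its (non-uniform) display-space area."""
    pixels: np.ndarray   # (H, W, 3) pixel values on a uniform grid in warped space
    scale_h: np.ndarray  # (H // tile, W // tile) horizontal scaling factors
    scale_v: np.ndarray  # (H // tile, W // tile) vertical scaling factors
    tile: int = 32       # assumed 32x32 tiles, as in the example above

    def tile_scale(self, row: int, col: int) -> tuple:
        """Scaling factors for the tile containing pixel (row, col)."""
        r, c = row // self.tile, col // self.tile
        return float(self.scale_h[r, c]), float(self.scale_v[r, c])
```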
- the rendering module transmits the warped image including the plurality of pixel values in association with the plurality of respective scaling factors. Accordingly, the warped image and the scaling factors, rather than a foveated image which could be generated using this information, are propagated through the pipeline.
- the rendering module 210 generates a warped image and a plurality of respective scaling factors that are transmitted by the rendering module 210.
- the warped image (or a processed version of the warped image) and the plurality of respective scaling factors are received (and used in processing the warped image) by the transport module 220 (and the compression module 222 and decompression module 226 thereof) as described in detail below.
- the warped image (or a processed version of the warped image) and the plurality of respective scaling factors are received (and used in processing the warped image) by the display module 230 (and the lens compensation module 232 and the panel compensation module 234 thereof) as described in U.S. Patent App. No. 62/667,728, entitled “DYNAMIC FOVEATED DISPLAY,” filed on May 7, 2018, and hereby incorporated by reference in its entirety.
- the rendering module generates the scaling factors based on the rendering resolution function.
- the scaling factors are generated based on the rendering resolution function as described above with respect to Figures 5A-5C.
- generating the scaling factors includes determining the integral of the rendering resolution function.
- generating the scaling factors includes determining the tangent of the inverse of the integral of the rendering resolution function.
- generating the scaling factors includes determining, for each of the respective locations uniformly spaced in a grid pattern in the warped space, the respective scaling factors based on the tangent of the inverse of the integral of the rendering resolution function. Accordingly, for a plurality of locations uniformly spaced in the warped space, a plurality of locations non-uniformly spaced in the display space are represented by the scaling factors.
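As a hedged aside, the per-location scaling factor can also be read off as the local derivative of the mapping V; by the chain rule (using the notation of Figures 5A-5C):

```latex
\frac{dV}{dx_R}
  = \sec^{2}\!\big(U^{-1}(x_R)\big)\,\frac{d}{dx_R}U^{-1}(x_R)
  = \frac{\sec^{2}\!\big(U^{-1}(x_R)\big)}{S\big(U^{-1}(x_R)\big)}
```

so a location where the rendering resolution function S is small maps a unit of warped space onto a larger span of display space, i.e. it receives a larger scaling factor.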
- Figure 8A illustrates an image representation of SR content 810 to be rendered in a display space.
- Figure 8B illustrates a warped image 820 generated according to the method 700 of Figure 7.
- different parts of the SR content 810 corresponding to non-uniformly spaced regions (e.g., different amounts of area) in the display space are rendered into uniformly spaced regions (e.g., the same amount of area) in the warped image 820.
- the rendering module 210 provides a rendered image to the transport module 220.
- the transport module 220 includes a compression module 222 that compresses the rendered image (resulting in a compressed image), a communications channel 224 that carries the compressed image, and a decompression module 226 that decompresses the compressed image (resulting in a decompressed image).
- the communications channel 224 is a wired or wireless communications channel (e.g., BLUETOOTH, IEEE 802.11x, IEEE 802.16x, IEEE 802.3x, etc.).
- the communications channel 224 couples a first device (e.g., the controller 110 of Figure 1) including the rendering module 210 and the compression module 222 to a second device (e.g., the HMD 120 of Figure 1) including the decompression module 226.
- the communications channel couples two processing units of a single device, e.g., a first processing unit including the compression module 222 and a second processing unit including the decompression module 226.
- the communications channel couples two processing modules of a single processing unit, e.g., the compression module 222 and the decompression module 226.
- the compression module 222 receives, from the rendering module 210, a foveated image having different resolutions at different parts of the image.
- compressing an MxN foveated image using various compression techniques results in fewer bits than compressing an MxN unfoveated image.
- the compression module 222 receives, from the rendering module 210, a warped image having a plurality of pixels at respective locations uniformly spaced in a grid pattern in a warped space.
- the plurality of pixels are respectively associated with a plurality of respective pixel values and a plurality of respective scaling factors indicating a plurality of respective resolutions at a plurality of respective locations in a display space.
- the compression module 222 receives the scaling factors from the rendering module 210. In various implementations, the compression module 222 receives a single scaling factor for each pixel. In various implementations, the compression module 222 receives a horizontal scaling factor and a vertical scaling factor for each pixel. In various implementations, the compression module 222 receives a single scaling factor for each tile of pixels (e.g., each 32x32 block of pixels). In various implementations, the compression module 222 receives a horizontal scaling factor and a vertical scaling factor for each tile of pixels (e.g., each 32x32 block of pixels).
- Because the warped image is an image (e.g., a matrix of pixel values), various conventional compression techniques can be applied to the image.
- the compression module 222 uses the scaling factors during the compression and/or transmission of the warped image to further reduce the bandwidth of the communication channel 224 used in transporting the warped image.
- Figure 9 is a flowchart representation of a method 900 of transmitting an image in accordance with some implementations.
- the method 900 is performed by a transport module (or portion thereof), such as the transport module 220 or compression module 222 of Figure 2.
- the method 900 is performed by an HMD, such as the HMD 120 of Figure 1, or a portion thereof, such as the SR pipeline 200 of Figure 2.
- the method 900 is performed by a device with one or more processors, non-transitory memory, and one or more SR displays.
- the method 900 is performed by processing logic, including hardware, firmware, software, or a combination thereof.
- the method 900 is performed by a processor executing instructions (e.g., code) stored in a non- transitory computer-readable medium (e.g., a memory).
- the method 900 begins at block 910 with the transport module receiving a warped image representing simulated reality (SR) content to be displayed in a display space, the warped image having a plurality of pixels at respective locations uniformly spaced in a grid pattern in a warped space, wherein the plurality of pixels are respectively associated with a plurality of respective pixel values and a plurality of respective scaling factors indicating a plurality of respective resolutions at a plurality of respective locations in the display space.
- the plurality of respective scaling factors defines a mapping between the warped space and the display space. For example, in various implementations, different parts of the SR content corresponding to non-uniformly spaced regions in the display space are represented by uniformly spaced regions in the warped space.
- each of the plurality of pixels is respectively associated with a separately received pixel value.
- each of the plurality of pixels is respectively associated with a separately received scaling factor (or set of horizontal scaling factor and vertical scaling factor).
- each of a plurality of tiles of the plurality of pixels is respectively associated with a separately received scaling factor (or set of horizontal scaling factor and vertical scaling factor). Accordingly, a plurality of pixels (e.g., those of a single tile) are associated with a single received scaling factor (or set of horizontal scaling factor and vertical scaling factor).
- the warped image includes a plurality of tiles at respective locations uniformly spaced in a grid pattern in the warped space, wherein each of the plurality of tiles is associated with a respective one or more scaling factors.
- one or more of the plurality of respective scaling factors include a horizontal scaling factor and a vertical scaling factor.
- the method 900 continues at block 920 with the transport module transmitting the warped image over one or more channels such that at least one bandwidth of the one or more channels used by transmission of respective portions of the warped image is based on one or more scaling factors of the plurality of scaling factors corresponding to the respective portions of the warped image.
- the at least one bandwidth of the one or more channels used by transmission of the respective portions is changed by compressing portions of the warped image based on the scaling factors.
- a more compressed portion uses less bandwidth of the channel than a less compressed portion.
- the transport module compresses the respective portions of the warped image based on the one or more scaling factors of the plurality of scaling factors corresponding to the respective portions of the warped image. For example, in various implementations, the transport module compresses low-resolution portions more than high-resolution portions (e.g., by allocating fewer bits at a MAC layer and/or setting a quantization parameter to a lower value [e.g., more quantization]).
- the at least one bandwidth of the one or more channels used by transmission of the respective portions is changed by error-correcting coding portions of the warped image based on the scaling factors. A portion that has been error-correcting coded with a weaker error-correcting code uses less bandwidth than a portion that has been error-correcting coded with a stronger error-correcting code.
- the transport module error-correcting codes the respective portions of the warped image based on the one or more scaling factors of the plurality of scaling factors corresponding to the respective portions of the warped image. For example, in various implementations, the transport module uses a stronger error-correcting code for high-resolution portions than for low-resolution portions (e.g., adding more redundancy at the PHY layer).
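A hedged sketch of this scaling-factor-driven allocation, assuming the common codec convention that a larger quantization parameter means coarser quantization and that a lower forward-error-correction (FEC) code rate means more redundancy; the thresholds, ranges, and function name are illustrative, not from the disclosure.

```python
def allocate_tile_transport(scale_h: float, scale_v: float,
                            qp_min: int = 20, qp_max: int = 40) -> dict:
    """Map a tile's scaling factors to a quantization parameter and FEC code rate.

    A scaling factor near 1.0 means full display resolution (high detail), so
    the tile gets more bits (low QP) and stronger error protection; larger
    factors mean a low-resolution portion, compressed harder and protected less.
    """
    detail = 1.0 / max(scale_h * scale_v, 1.0)       # 1.0 = highest detail
    qp = round(qp_max - detail * (qp_max - qp_min))  # more quantization for low detail
    code_rate = 0.5 if detail > 0.5 else 0.8         # lower rate = stronger FEC
    return {"qp": qp, "fec_code_rate": code_rate}

# Example: a foveal tile (1x1 scaling) vs. a peripheral tile (4x4 scaling)
print(allocate_tile_transport(1.0, 1.0))  # low QP, strong FEC
print(allocate_tile_transport(4.0, 4.0))  # high QP, weak FEC
```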
- the at least one bandwidth of the one or more channels used by transmission of the respective portions is changed by changing a probability of transmission (or retransmission) of portions of the warped image based on the scaling factors.
- When the probability of transmission (or retransmission) of a portion is lower, the bandwidth used by transmission of the portion is lower.
- a probability of transmission of the respective portions of the warped image is based on the one or more scaling factors of the plurality of scaling factors corresponding to the respective portions of the warped image.
- the probability of transmission is a probability of retransmission in the case of a lost packet. For example, if a data packet corresponding to a portion of the warped image is lost over the channel, the probability of retransmitting the packet can be based on the scaling factor such that data packets corresponding to higher resolution portions are more likely to be retransmitted as compared to data packets corresponding to lower resolution portions.
- the probability of transmission is based on a selected sub-channel. For example, by choosing a sub-channel (e.g., at the PHY layer) with a higher signal-to-noise ratio, the probability of retransmission due to a bit error, packet error, or dropped packet is reduced. Accordingly, in various implementations, the transport module selects a sub-channel for transmission of a respective portion based on the respective scaling factor. In various implementations, the transport module selects a sub-channel with higher signal-to-noise ratio for a higher resolution portion than for a lower resolution portion.
- the probability of transmission is based on a level of a buffer that receives image data associated with the warped image (e.g., a buffer of the decompression module 226 of Figure 2). For example, in various implementations, when the buffer is nearing or has approached overflow, the probability of transmitting a portion of the warped image is decreased, and more so for lower resolution portions of the warped image than for higher resolution portions of the warped image. In various implementations, when the buffer is nearing or has approached overflow, the receiver (e.g., the decompressing module) discards received portions of the warped image based on the scaling factors. For example, in various implementations, the receiver is more likely to discard lower resolution portions than higher resolution portions.
- transmitting the warped image includes transforming the warped image.
- the transport module generates a wavelet warped image by wavelet transforming the warped image.
- the wavelet warped image includes, for each respective portion of the warped image, a plurality of portions of the wavelet warped image corresponding to different frequency bands.
- Figure 10A illustrates an example image 1010.
- Figure 10B illustrates wavelet image 1020 generated by a one-layer wavelet transform of the example image.
- Each portion of the example image 1010 (e.g., the top-middle block 1011) is represented by a plurality of portions (e.g., blocks 1021a-1021d) of the wavelet image 1020 corresponding to different frequency bands.
- block 1021a corresponds to low-frequency in both the horizontal and vertical directions
- block 1021b corresponds to high-frequency in the horizontal direction and low-frequency in the vertical direction
- block 1021c corresponds to low-frequency in the horizontal direction and high-frequency in the vertical direction
- block 1021d corresponds to high-frequency in both the horizontal and vertical directions.
- the wavelet warped image is generated using a one-layer wavelet function (illustrated by Figures 10A-10B). In various implementations, the wavelet warped image is generated using a two-layer wavelet function.
- the at least one bandwidth of the one or more channels used by transmission of respective portions of the wavelet warped image is based on different frequency bands.
- respective portions of the wavelet warped image associated with lower frequency bands are allocated more bandwidth than portions of the wavelet warped image associated with higher frequency bands.
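A brief sketch of the one-layer decomposition and band-wise bandwidth split described above, using the PyWavelets library; the Haar wavelet and the particular weights are assumptions for illustration.

```python
import numpy as np
import pywt  # PyWavelets

def one_layer_wavelet_bands(warped_gray: np.ndarray) -> dict:
    """One-level 2D wavelet transform of a (grayscale) warped image.

    pywt.dwt2 returns an approximation band and three detail bands
    (horizontal, vertical, diagonal), roughly corresponding to the four
    blocks 1021a-1021d of Figure 10B."""
    approx, (detail_h, detail_v, detail_d) = pywt.dwt2(warped_gray.astype(float), "haar")
    return {"approx": approx, "detail_h": detail_h,
            "detail_v": detail_v, "detail_d": detail_d}

# Illustrative weights: the low-frequency (approximation) band is allocated
# more of the channel than the high-frequency detail bands.
BAND_WEIGHTS = {"approx": 0.55, "detail_h": 0.175, "detail_v": 0.175, "detail_d": 0.10}

def split_bit_budget(total_bits: int) -> dict:
    """Split an (assumed) per-portion bit budget across the four sub-bands."""
    return {band: int(total_bits * w) for band, w in BAND_WEIGHTS.items()}
```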
- the transport module filters the warped image based on the plurality of respective scaling factors. For example, in various implementations, the transport module wavelet-thresholds the warped image. In various circumstances, this de-noises the image based on frequency band in order to make it more comprehensible and/or remove some details to provide a continuous stream.
- the at least one bandwidth of the one or more channels used by transmission of the respective portions of the warped image is further based on at least one available bandwidth of the one or more channels. In particular, when the available bandwidth is greater, the bandwidth of the channel used is greater.
- the transport module includes a high-efficiency video coder (HEVC).
- the transport module 220 receives the plurality of respective scaling factors from the rendering module 210. In various implementations, the transport module 220 provides the respective scaling factors to the display module 230. Accordingly, the transport module 220 transports the respective scaling factors over the communications channel 224.
- the method 900 of Figure 9 further includes receiving the plurality of respective scaling factors (e.g., from a rendering module), compressing the plurality of respective scaling factors, transmitting the plurality of respective scaling factors over the channel, decompressing the plurality of respective scaling factors, and/or providing the plurality of respective scaling factors (e.g., to a display module).
- the decompression module 226 receives, via the communications channel 224, the warped image.
- the warped image is compressed, encoded, and/or transformed. Accordingly, in various implementations, the decompression module 226 decompresses, decodes, and/or de-transforms the warped image. Further, in various implementations, the decompression module 226 receives, via the communications channel 224, the plurality of respective scaling factors. In various implementations, the plurality of respective scaling factors are compressed. Accordingly, in various implementations, the decompression module 226 decompresses the plurality of respective scaling factors.
- the decompression module 226 fails to correctly receive a data packet associated with a portion of the warped image associated with a respective scaling factor.
- the decompression module 226 may receive a respective scaling factor, but not receive a corresponding data packet.
- the decompression module 226 may receive a corrupted data packet associated with a respective scaling factor (e.g., as indicated by a parity bit, cyclic redundancy check, or other indicator indicating that the data packet is corrupt).
- the decompression module 226 may discard packets because a buffer of the decompression module 226 is full or nearly full.
- the decompression module 226 sends a request to the compression module 222 to retransmit the data packet that was not correctly received. In various implementations, the decompression module 226 determines whether to send such a retransmission request based on the corresponding scaling factor. For example, in various implementations, the decompression module 226 is more likely to send the retransmission request if the corresponding scaling factor indicates that the resolution of the corresponding portion of the image is higher than if the corresponding scaling factor indicates that the resolution of the corresponding portion of the image is lower.
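A minimal sketch of the retransmission decision described above, assuming scaling factors are normalized to [0, 1] with 1.0 denoting full resolution; the linear mapping from scaling factor to request probability is an assumption for illustration.

```python
import random

def should_request_retransmission(scaling_factor):
    """Probabilistically decide whether to request retransmission of a packet
    that was lost or received corrupt.

    The probability rises with the scaling factor, so packets carrying
    high-resolution (foveal) portions are more likely to be re-requested
    than packets carrying low-resolution (peripheral) portions.
    """
    return random.random() < min(max(scaling_factor, 0.0), 1.0)
```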
- the decompression module 226 includes a buffer that stores received data packets. In some circumstances, the buffer may become full, resulting in lost packets. In various implementations, when the buffer is close to full (e.g., exceeds a threshold percentage of fullness), the decompression module 226 determines whether to store a received packet or discard the received packet based on the corresponding scaling factor. For example, in various implementations, the decompression module 226 is more likely to discard a packet if the corresponding scaling factor indicates that the resolution of the corresponding portion of the image is low than if the corresponding scaling factor indicates that the resolution of the corresponding portion of the image is high.
- the decompression module 226 determines whether to store a received packet or discard a received packet based on a continuous function of a buffer fullness (e.g., how full the buffer is) and the corresponding scaling factor. The function may result in the decompression module 226 being more likely to discard a packet as the buffer fullness increases and/or the corresponding scaling factor indicates that the resolution of the corresponding portion of the image is low.
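The continuous function described above might, for example, take the following form; the particular blend of buffer fullness and scaling factor is an assumption, chosen only so that the stated tendencies hold (a fuller buffer means more discards, and lower-resolution portions are discarded first).

```python
import random

def keep_probability(buffer_fullness, scaling_factor):
    """Probability of storing (rather than discarding) a received packet.

    buffer_fullness: fraction of the buffer in use (0.0 = empty, 1.0 = full).
    scaling_factor: relative resolution of the corresponding portion
                    (1.0 = full resolution).
    With an empty buffer every packet is kept; as the buffer fills, the
    probability of keeping a packet falls toward its scaling factor.
    """
    s = min(max(scaling_factor, 0.0), 1.0)
    f = min(max(buffer_fullness, 0.0), 1.0)
    return (1.0 - f) + f * s

def should_store(buffer_fullness, scaling_factor):
    return random.random() < keep_probability(buffer_fullness, scaling_factor)
```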
- Figure 11 is a flowchart representation of a method 1100 of receiving an image in accordance with some implementations.
- the method 1100 is performed by a transport module (or portion thereof), such as the transport module 220 or decompression module 226 of Figure 2.
- the method 1100 is performed by an HMD, such as the HMD 100 of Figure 1, or a portion thereof, such as the SR pipeline 200 of Figure 2.
- the method 1100 is performed by a device with one or more processors, non-transitory memory, and one or more SR displays.
- the method 1100 is performed by processing logic, including hardware, firmware, software, or a combination thereof.
- the method 1100 is performed by a processor executing instructions (e.g., code) stored in a non-transitory computer-readable medium (e.g., a memory).
- the method 1100 begins at block 1110 with the transport module receiving at least a subset of a plurality of data packets corresponding to portions of a warped image representing simulated reality (SR) content to be displayed in a display space, the warped image having a plurality of pixels at respective locations uniformly spaced in a grid pattern in a warped space.
- the plurality of respective scaling factors defines a mapping between the warped space and the display space. For example, in various implementations, different parts of the SR content corresponding to non-uniformly spaced regions in the display space are represented by uniformly spaced regions in the warped space.
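As an illustration of one plausible form of such a mapping (an assumption for illustration, not a definition from the specification), the sketch below treats each scaling factor as the resolution of a warped-space pixel relative to the display, so that a pixel with factor s spans 1/s display-space pixels, and obtains display-space positions along one axis as a running sum.

```python
import numpy as np

def warped_to_display_coordinates(scaling_factors_1d):
    """Map uniformly spaced warped-space pixels to display-space positions.

    scaling_factors_1d: per-column (or per-row) scaling factors, where
    1.0 means full resolution and smaller values mean lower resolution,
    i.e. each warped-space pixel covers more display-space pixels.

    Returns the display-space coordinate of the left (or top) edge of each
    warped-space pixel.
    """
    factors = np.asarray(scaling_factors_1d, dtype=np.float64)
    spans = 1.0 / np.maximum(factors, 1e-6)   # display-space extent per pixel
    edges = np.concatenate(([0.0], np.cumsum(spans)))
    return edges[:-1]
```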
- the method 1100 continues at block 1120 with the transport module receiving a plurality of scaling factors indicating respective resolutions of the portions of the warped image at a plurality of respective locations in the display space.
- the method 1100 continues at block 1130 with the transport module detecting an error condition for a particular one of the plurality of data packets.
- detecting the error condition includes receiving the particular one of the plurality of scaling factors without receiving the particular one of the plurality of data packets. This may indicate that the packet was lost.
- detecting the error condition includes determining that the particular one of the plurality of data packets is corrupt. For example, in various implementations, a parity bit, cyclic redundancy check, or other indicator indicates that the data packet is corrupt.
- detecting the error condition includes receiving the particular one of the plurality of data packets while a buffer is full or nearly full.
- detecting the error condition includes determining that a buffer is storing at least a threshold percentage of data.
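The detection variants listed above can be summarized in a small classifier; the enum names, argument names, and the buffer threshold of 0.9 are illustrative assumptions.

```python
from enum import Enum, auto

class ErrorCondition(Enum):
    MISSING_PACKET = auto()   # scaling factor arrived, data packet did not
    CORRUPT_PACKET = auto()   # e.g., parity bit or CRC check failed
    BUFFER_PRESSURE = auto()  # packet arrived while the buffer is (nearly) full

def detect_error_condition(packet, scaling_factor_received, checksum_ok,
                           buffer_fullness, buffer_threshold=0.9):
    """Classify the error condition, if any, for one expected data packet."""
    if packet is None and scaling_factor_received:
        return ErrorCondition.MISSING_PACKET
    if packet is not None and not checksum_ok:
        return ErrorCondition.CORRUPT_PACKET
    if packet is not None and buffer_fullness >= buffer_threshold:
        return ErrorCondition.BUFFER_PRESSURE
    return None  # packet received correctly with room to store it
```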
- the method 1100 continues at block 1140 with the transport module resolving the error condition based on a particular one of the plurality of scaling factors corresponding to the particular one of the plurality of data packets.
- resolving the error condition includes determining whether to send a retransmission request for the particular one of the plurality of data packets based on the particular one of the plurality of scaling factors.
- the transport module is more likely to send a retransmission request when the particular one of the plurality of scaling factors indicates that the resolution of the corresponding portion of the warped image is high as compared to when the particular one of the plurality of scaling factors indicates that the resolution of the corresponding portion of the warped image is low.
- resolving the error condition includes determining to send a retransmission request and sending a retransmission request (or determining not to send a retransmission request).
- resolving the error condition includes determining whether to discard or store the particular one of the plurality of data packets based on the particular one of the plurality of scaling factors.
- the transport module is more likely to store a data packet (in a buffer) when the particular one of the plurality of scaling factors indicates that the resolution of the corresponding portion of the warped image is high as compared to when the particular one of the plurality of scaling factors indicates that the resolution of the corresponding portion of the warped image is low.
- resolving the error condition includes determining to store the data packet and storing the data packet (or determining to discard the data packet and not storing the data packet).
- the method 1100 includes decompressing, decoding, and/or detransforming the warped image.
- although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
- a first node could be termed a second node, and, similarly, a second node could be termed a first node, without changing the meaning of the description, so long as all occurrences of the "first node" are renamed consistently and all occurrences of the "second node" are renamed consistently.
- the first node and the second node are both nodes, but they are not the same node.
- the term "if" may be construed to mean "when" or "upon" or "in response to determining" or "in accordance with a determination" or "in response to detecting" that a stated condition precedent is true, depending on the context.
- the phrase "if it is determined [that a stated condition precedent is true]" or "if [a stated condition precedent is true]" or "when [a stated condition precedent is true]" may be construed to mean "upon determining" or "in response to determining" or "in accordance with a determination" or "upon detecting" or "in response to detecting" that the stated condition precedent is true, depending on the context.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Databases & Information Systems (AREA)
- Controls And Circuits For Display Device (AREA)
- Processing Or Creating Images (AREA)
Abstract
Various implementations disclosed herein relate to methods of compressing and/or transmitting a foveated image. In one implementation, at least one bandwidth of the one or more channels used by transmission of respective portions of a warped image is based on one or more scaling factors, of the plurality of scaling factors, corresponding to the respective portions of the warped image. In one implementation, an error condition is resolved based on a particular one of the plurality of scaling factors corresponding to a particular one of the plurality of data packets.
Applications Claiming Priority (2)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201862667727P | 2018-05-07 | 2018-05-07 | |
| US62/667,727 | 2018-05-07 | | |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| WO2019217264A1 (fr) | 2019-11-14 |
Family
ID=66625279
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/US2019/030822 (WO2019217264A1, ceased) | Dynamic foveated compression | 2018-05-07 | 2019-05-06 |
Country Status (1)
| Country | Link |
|---|---|
| WO (1) | WO2019217264A1 (fr) |
- 2019-05-06: PCT application PCT/US2019/030822 filed, published as WO2019217264A1 (not active, ceased)
Patent Citations (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6252989B1 (en) * | 1997-01-07 | 2001-06-26 | Board Of The Regents, The University Of Texas System | Foveated image coding system and method for image bandwidth reduction |
| US20110142138A1 (en) * | 2008-08-20 | 2011-06-16 | Thomson Licensing | Refined depth map |
| EP3111640A1 (fr) * | 2014-02-26 | 2017-01-04 | Sony Computer Entertainment Europe Limited | Image encoding and display |
| WO2018041244A1 (fr) * | 2016-09-02 | 2018-03-08 | Mediatek Inc. | Incremental quality delivery and compositing processing |
| WO2018200993A1 (fr) * | 2017-04-28 | 2018-11-01 | Zermatt Technologies Llc | Video pipeline |
Non-Patent Citations (1)
| Title |
|---|
| GALAN-HERNANDEZ J C ET AL: "Wavelet-Based Foveated Compression Algorithm for Real-Time Video Processing", ELECTRONICS, ROBOTICS AND AUTOMOTIVE MECHANICS CONFERENCE (CERMA), 2010, IEEE, PISCATAWAY, NJ, USA, 28 September 2010 (2010-09-28), pages 405 - 410, XP031852995, ISBN: 978-1-4244-8149-1 * |
Cited By (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20210142443A1 (en) * | 2018-05-07 | 2021-05-13 | Apple Inc. | Dynamic foveated pipeline |
| US11836885B2 (en) * | 2018-05-07 | 2023-12-05 | Apple Inc. | Dynamic foveated pipeline |
| US20240087080A1 (en) * | 2018-05-07 | 2024-03-14 | Apple Inc. | Dynamic foveated pipeline |
| US12131437B2 (en) * | 2018-05-07 | 2024-10-29 | Apple Inc. | Dynamic foveated pipeline |
| US20230419439A1 (en) * | 2019-11-14 | 2023-12-28 | Apple Inc. | Warping an input image based on depth and offset information |
| EP4617833A1 (fr) * | 2024-03-12 | 2025-09-17 | Apple Inc. | Distributed foveated rendering |
Similar Documents
| Publication | Title |
|---|---|
| US12131437B2 (en) | Dynamic foveated pipeline |
| US11288843B2 (en) | Lossy compression of point cloud occupancy maps |
| EP3744007B1 (fr) | Image display control by real-time compression in peripheral image regions |
| US12387381B2 (en) | Image data transfer apparatus, image display system, and image data transfer method |
| TWI870337B (zh) | Digital content stream compression |
| EP4022924B1 (fr) | Single-stream foveated display transport |
| KR102385365B1 (ko) | Electronic device and method for compressing image data in an electronic device |
| CN110622124A (zh) | Compression method and system for near-eye displays |
| WO2019217262A1 (fr) | Dynamic foveated rendering |
| WO2019217260A1 (fr) | Dynamic foveated display |
| US20160179196A1 (en) | Visual data processing method and visual data processing system |
| EP4245032B1 (fr) | Encoders, methods and display apparatuses including gaze direction compression |
| WO2019217264A1 (fr) | Dynamic foveated compression |
| EP4617833A1 (fr) | Distributed foveated rendering |
| US11233999B2 (en) | Transmission of a reverse video feed |
| GB2568112A (en) | Method and system for processing display data |
| KR20180092735A (ko) | Image display system and image compression method using the same |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19725470; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 19725470; Country of ref document: EP; Kind code of ref document: A1 |