

Image processing method, device, electronic equipment, chip and storage medium

Info

Publication number
CN120730190B
CN120730190B (application CN202511233355.XA)
Authority
CN
China
Prior art keywords
image
color
brightness
mapping
shooting
Prior art date
Legal status
Active
Application number
CN202511233355.XA
Other languages
Chinese (zh)
Other versions
CN120730190A (en)
Inventor
李婉齐
张雪岩
Current Assignee
Beijing Xuanjie Technology Co ltd
Original Assignee
Beijing Xuanjie Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xuanjie Technology Co ltd filed Critical Beijing Xuanjie Technology Co ltd
Priority to CN202511233355.XA
Publication of CN120730190A
Application granted
Publication of CN120730190B
Status: Active
Anticipated expiration


Landscapes

  • Image Processing (AREA)

Abstract


This application proposes an image processing method, apparatus, electronic device, chip, and storage medium, relating to the field of image processing. The method includes: acquiring an encoded image; wherein the file structure of the encoded image includes a first captured image in SDR format and an additional image, the additional image indicating the color mapping relationship between SDR and HDR formats; decoding the encoded image to obtain the first captured image and the additional image; and rendering the image based on the additional image and the first captured image to display a second captured image in HDR format. Thus, the additional image indicates the color (such as brightness, hue, and saturation) mapping relationship required for conversion from SDR to HDR format. Utilizing the color mapping relationship provided by the additional image, rendering the SDR format image not only ensures accurate brightness adjustment during the conversion from SDR to HDR format but also enables the rendered image to exhibit richer color gradations.

Description

Image processing method, device, electronic equipment, chip and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing device, an electronic device, a chip, and a storage medium.
Background
The advent of high dynamic range (High Dynamic Range, HDR) image technology marks an important breakthrough in the field of digital image processing. It breaks through the limitations of standard dynamic range (Standard Dynamic Range, SDR) images in terms of brightness and contrast, and can capture and present a wider brightness range than SDR images, thereby retaining rich detail in both the bright and dark parts of the same picture and effectively avoiding overexposure or underexposure. HDR technology has been widely used in film and television production, photography, and consumer electronics, significantly improving the visual quality and realism of images and video.
Disclosure of Invention
The present application proposes an image processing method, apparatus, electronic device, chip and storage medium to solve, at least to some extent, one of the technical problems in the related art.
In one aspect, an embodiment of the present application provides an image processing method, including:
The method comprises the steps of obtaining an encoded image, wherein the file structure of the encoded image comprises a first shooting image in a standard dynamic range SDR format and an additional graph, and the additional graph is used for indicating the color mapping relation between the SDR format of the first shooting image and a high dynamic range HDR format;
Decoding the encoded image to obtain the first shooting image and the additional image;
and performing image rendering according to the additional graph and the first shooting image to display a second shooting image in the HDR format.
Another embodiment of the present application provides an image processing apparatus, including:
The apparatus comprises an acquisition module, configured to acquire an encoded image, wherein the file structure of the encoded image comprises a first shooting image in a standard dynamic range SDR format and an additional graph, and the additional graph is used for indicating the color mapping relation between the SDR format of the first shooting image and a high dynamic range HDR format;
The decoding module is used for decoding the coded image to obtain the first shooting image and the additional image;
And the rendering module is used for performing image rendering according to the additional graph and the first shooting image so as to display a second shooting image in the HDR format.
An embodiment of still another aspect of the present application provides an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor, when executing the program, implements the image processing method according to the previous aspect.
In a further aspect, an embodiment of the application proposes a chip comprising an interface circuit for inputting or outputting a signal and a processing circuit coupled to each other, the processing circuit being configured to perform the image processing method according to the previous aspect.
In a further aspect, an embodiment of the present application proposes a non-transitory computer-readable storage medium, on which computer program instructions are stored, which, when executed by a processor, implement an image processing method as described in the previous aspect.
In a further aspect, the application proposes a computer program product having a computer program stored thereon, which, when being executed by a processor, implements the image processing method according to the preceding aspect.
According to the image processing method, the device, the electronic equipment, the chip and the storage medium, the additional graph is used for indicating the color mapping relation (such as the mapping relation of brightness, tone and saturation) required by converting the SDR format into the HDR format, the first shooting image in the SDR format is rendered by utilizing the color mapping relation provided by the additional graph, the second shooting image in the HDR format is generated, the brightness of the image can be accurately adjusted in the process of converting the SDR format into the HDR format, the overexposure or underexposure phenomenon caused by improper brightness adjustment is avoided, the details of the image are reserved, and meanwhile, the rendered second shooting image in the HDR format can display richer color layering sense, so that the sense of reality of the second shooting image is enhanced, and the visual experience of a user is improved.
Additional aspects and advantages of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the application.
Drawings
The foregoing and/or additional aspects and advantages of the application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment of the present application;
Fig. 2 is a flowchart of another image processing method according to an exemplary embodiment of the present application;
Fig. 3 is a flowchart of yet another image processing method according to an exemplary embodiment of the present application;
Fig. 4 is a flowchart of still another image processing method according to an exemplary embodiment of the present application;
Fig. 5 is a flowchart of still another image processing method according to an exemplary embodiment of the present application;
Fig. 6 is a flowchart of another image processing method according to an exemplary embodiment of the present application;
Fig. 7 is a schematic diagram of an implementation of an HDR image provided by an exemplary embodiment of the present application;
Fig. 8 is a schematic structural diagram of an image processing apparatus according to an exemplary embodiment of the present application;
Fig. 9 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application;
Fig. 10 is a schematic diagram of a chip according to an exemplary embodiment of the present application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present application and should not be construed as limiting the application.
In the video field, HDR technology has formed mature standards that ensure accurate transmission of high dynamic range video content from the recording end to the display device, so that the details and colors of high dynamic scenes are rendered effectively on a variety of display devices. However, despite these significant advances on the video side, still images have not yet acquired a unified high dynamic range standard, owing to compatibility and standardization issues. As a result, the high-quality images consumers care about, particularly when shared through social media, often cannot be presented accurately.
To solve these problems, the related art proposes photo-level HDR standards based on a "Gain Map", mainly comprising the following two schemes. They enable a user to handle images in HDR and SDR formats simultaneously, maintaining compatibility to the greatest extent while preserving the details and dynamic range of the images. Through the gain map, a user can share an HDR image across different devices and platforms, and the image is still presented normally on incompatible devices; this balances compatibility against effect and makes the sharing and display of HDR images more flexible and widespread.
The first is a single-channel HDR additional map generation scheme. The HDR additional map is a grayscale map representing the brightness transformation required to convert an SDR image into an HDR image. The main methods adopted include the following:
A. Using a single Luma-based (brightness-based) tone mapping curve, combined with a local contrast enhancement algorithm, to output a brightness change result and store it as an additional map in the SDR image (hereinafter referred to as the main image);
B. Using the HDR fusion technology of an image signal processor (Image Signal Processor, ISP) to directly compare the HDR image with the SDR image, calculate their brightness ratio, and generate an additional map recording this ratio, stored after the main image;
C. Calculating the ratio between the SDR image and content encoded with a Perceptual Quantizer (PQ) or Hybrid Log-Gamma (HLG) curve (both color transfer functions used for HDR content encoding), and storing it in an additional map after the main image.
In this scheme, the main role of the gain map (i.e., the HDR additional map) is to process the luminance information, stored by converting the luminance mapping relationship between the HDR image and the SDR image into an additional map form. The method focuses on luminance conversion, aimed at accurately presenting the dynamic range of the image.
However, this scheme focuses only on adjusting the luminance relationship and fails to fully account for the differences in color impression caused by the change in luminance dynamic range. The converted HDR image may therefore be deficient in color performance, for example showing reduced color richness and saturation; this is especially evident in highlight regions (also called bright regions) and detracts from the detail and color depth an HDR image should have.
The second is a picture HDR display scheme managed by an International Color Consortium profile (International Color Consortium Profile, ICC). This scheme borrows the approach of video color management, using a wide color gamut plus a high dynamic range to display an ordinary HDR image. Almost all formats can support this function, and efficient encodings such as the AV1 Image File Format (AVIF) further improve coding efficiency. This HDR picture technique is much closer to "single-frame HDR" and to some extent mimics video HDR. For an ordinary version of the picture (e.g., PNG/JPG/TIFF), displaying the HDR effect relies primarily on the CICP fields, namely colour primaries (Color Primaries, CP, defining the primary color coordinates of the color space), transfer characteristics (Transfer Characteristics, TC, defining how image data are converted from linear light intensity to non-linear encoded values, such as gamma or logarithmic curves), and matrix coefficients (Matrix Coefficients, MC, defining the coefficients by which RGB color components are converted to YCbCr or another color space), in the color-managed ICC file, following the H.273 rules used for video. AVIF can carry the above color management information directly in its coded data, but to obtain better compatibility, ICC information is typically inserted, and the image content can then be displayed on a display that supports single-frame decoding.
In this scheme, HDR image content is presented mainly through standard coding formats and color management, but it has the following problems: 1. such image formats (e.g., AVIF) cannot be decoded normally on many devices, which limits the wide application of HDR image content; 2. different manufacturers support ICC files to different degrees, and ICC information is easily lost during streaming-media transmission. Once color management goes wrong, the image cannot accurately present the creator's intent, image-quality problems may even arise, and the viewing effect is severely affected.
Accordingly, the present application has been made keeping in mind at least one of the above problems in the related art, and proposes an image processing method, apparatus, electronic device, chip, and storage medium.
An image processing method, apparatus, electronic device, chip, and storage medium of an embodiment of the present application are described below with reference to the accompanying drawings. Before describing the embodiments of the present application in detail, for ease of understanding, the general technical words are first introduced:
The RGB color space is an additive color model for generating various colors based on three primary colors of Red (R for short), green (G for short) and Blue (B for short). Wherein each color channel is typically represented by 8 bits, ranging from 0 to 255:
R (red channel): controls the intensity of red in the image, with values ranging from 0 to 255, where 0 represents no red component and 255 represents maximum red intensity;
G (green channel): controls the intensity of green in the image, with values ranging from 0 to 255, where 0 represents no green component and 255 represents maximum green intensity;
B (blue channel): controls the intensity of blue in the image, with values ranging from 0 to 255, where 0 represents no blue component and 255 represents maximum blue intensity.
YCbCr color space, which is a color space based on luminance (Luma) and chrominance (Chroma) separation, decomposes an image into luminance (Y) and two color components (Cb and Cr), each of which typically ranges from 0 to 255 in terms of 8 bits:
Y (luminance component): represents the luminance information of the image, typically ranging from 0 to 255;
Cb (blue color difference component): represents the blue difference relative to luminance, typically ranging from 0 to 255;
Cr (red color difference component): represents the red difference relative to luminance, typically ranging from 0 to 255.
XYZ color space, a standard color space, defines colors by simulating the response of the human eye to light. X, Y, Z three components represent different spectral response curves, respectively:
the Y component is mainly used for representing the brightness of the color and is the only factor directly influencing the brightness of the image, and is similar to the Y component in the YCbCr;
The X and Z components together determine the hue and saturation of the color and are not directly related to brightness; they describe specific properties of the color, such as different proportions of red, green, and blue.
Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment of the present application.
It should be noted that, the image processing method according to the embodiments of the present application may be applied to an image processing apparatus, and in some possible embodiments, the image processing apparatus may be configured in an electronic device or a chip, so that the electronic device or the chip may perform an image processing function. In addition, in some possible embodiments, the image processing apparatus may also be software in an electronic device, and so on.
In any of the embodiments of the present application, the chip may be integrated into an electronic device. The chip includes a central processing unit (Central Processing Unit, CPU), an image signal processor (Image Signal Processor, ISP), an application-specific integrated circuit (Application-Specific Integrated Circuit, ASIC), a digital signal processor (Digital Signal Processor, DSP), a field-programmable gate array (Field-Programmable Gate Array, FPGA), a system on chip (System on Chip, SoC), a reduced instruction set computer (Reduced Instruction Set Computer, RISC), and the like, which are not listed exhaustively here.
The electronic device includes, but is not limited to, a terminal, a personal computer, etc., where the terminal is an entity on the user side for receiving or transmitting signals, such as a mobile phone. A terminal may also be referred to as terminal equipment (terminal), user equipment (UE), a mobile station (MS), a mobile terminal (MT), and the like. The terminal may be an automobile with a communication function, a smart car, a mobile phone, a wearable device, a tablet (Pad), a computer with wireless transceiving capability, a virtual reality (VR) terminal, an augmented reality (AR) terminal, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, and the like. The embodiments of the present application do not limit the specific technology or specific equipment form adopted by the terminal.
As shown in fig. 1, the image processing method may include the following steps S101 to S103:
Step S101, obtaining an encoded image, wherein the file structure of the encoded image comprises a first shot image in an SDR format and an additional graph, and the additional graph is used for indicating the color mapping relation between the SDR format and the HDR format of the first shot image.
The first shot image, also called the SDR image, may be a picture obtained in response to a shooting operation, where the shooting operation includes, but is not limited to, a photographing operation, and the triggering forms of the shooting operation include, but are not limited to, physical-key triggering, gesture-recognition triggering, voice-command triggering, biometric triggering, and so on. Physical-key triggering refers to triggering the camera to shoot via a physical key (such as a side key or home key of a mobile phone); gesture-recognition triggering refers to triggering the camera to shoot by recognizing a user gesture through the camera or a sensor; voice-command triggering refers to triggering the camera to shoot via a voice command (such as "open the camera"); and biometric triggering refers to triggering the camera to shoot by recognizing biological features (such as facial, voice, or fingerprint features).
The first shot image may be an original image acquired or shot by the camera, or may be a new image obtained by performing image processing on the original image, which is not limited in the embodiment of the present application. The first shot image includes, but is not limited to, a shot picture and the like. The cameras comprise, but are not limited to, front cameras and rear cameras.
Colors include, but are not limited to, brightness, hue, saturation, and the like.
Wherein the additional map is generated from the first captured image in the SDR format and the second captured image in the HDR format for indicating a color mapping relationship between the SDR format and the HDR format.
For example, the additional graph may be used to indicate the color mapping relationship of the first captured image in the SDR format and the second captured image in the HDR format over a plurality of color channels, for example the three color channels R, G, and B.
Wherein the second captured image, or HDR image, is obtained by image processing the first captured image.
The encoded image is obtained by encoding the first photographed image and the additional image by using an image encoding technology, wherein the file structure of the encoded image includes the first photographed image (or referred to as a main image) and the additional image. For example, the first photographed image and the additional image may be embedded into the same file through a Multi-Picture Format (MPF) mechanism of an image encoding technique.
Among them, the image encoding technique is not limited, and exemplary image encoding techniques include, but are not limited to, JPG MPF (JPEG Multi-Picture Format) encoding technique, and the like. The JPG MPF coding technology writes the first shooting image into a file according to a JPEG format, and embeds an additional picture in the JPEG format in the file.
As an example, the first captured image is stored as a base image (or master image) in the beginning of the file, while the additional image provides the color mapping needed for conversion from the first captured image in SDR format to the second captured image in HDR format. Additional figures are embedded into the same file, typically at specific locations in the file, by the MPF mechanism of JPEG, and indicate their presence and properties by metadata markers.
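For intuition, the container idea can be sketched as follows. This is a toy illustration only, with a hypothetical `pack_mpf_like` helper that merely appends the additional map's JPEG stream after the primary SDR JPEG; a conformant MPF (CIPA DC-007) writer would additionally emit an APP2 'MPF' index segment in the primary image recording the offset and size of each appended image.

```python
def pack_mpf_like(primary_jpeg: bytes, gain_map_jpeg: bytes) -> bytes:
    """Toy illustration of the container layout: the SDR main image comes
    first and the additional map's JPEG stream is appended after it.
    The MPF index bookkeeping of a real writer is omitted here."""
    return primary_jpeg + gain_map_jpeg
```

Because a baseline JPEG viewer stops decoding at the first image's end-of-image marker, such a file degrades gracefully to the SDR main image on software that ignores the appended map, which matches the compatibility behavior described above.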
It should be noted that, the method for obtaining the encoded image is not limited in the present application, and the encoded image may be an image generated by the electronic device itself, for example, the electronic device may respond to a photographing operation to obtain a first photographed image, perform image processing on the first photographed image to obtain a second photographed image, generate an additional image according to the first photographed image and the second photographed image, and encode the first photographed image and the additional image by using an image encoding technology to obtain the encoded image, or the encoded image may be an image shared or transmitted by other users, or the encoded image may be an image collected online, or the like.
Step S102, decoding the coded image to obtain a first shooting image and an additional image.
In the embodiment of the application, under the condition that the electronic equipment has the function of decoding the coded image, the electronic equipment can decode the coded image to obtain the first shooting image and the additional image.
Step S103, performing image rendering according to the additional graph and the first captured image to display the second captured image in the HDR format.
In the embodiment of the application, the electronic device can perform image rendering according to the additional graph and the first photographed image in the SDR format to display the second photographed image in the HDR format. That is, the first captured image is color-mapped based on the color mapping relationship indicated by the additional graph, resulting in the second captured image.
According to the image processing method, the additional graph is used for indicating the color mapping relation (such as the mapping relation of brightness, tone and saturation) required for converting the SDR format into the HDR format, the first shot image in the SDR format is rendered by utilizing the color mapping relation provided by the additional graph, and the second shot image in the HDR format is generated, so that the brightness of the image can be accurately adjusted in the process of converting the SDR format into the HDR format, the overexposure or underexposure phenomenon caused by improper brightness adjustment is avoided, the details of the image are reserved, and meanwhile, the rendered second shot image in the HDR format can show richer color layering, so that the sense of reality of the second shot image is enhanced, and the visual experience of a user is improved.
As a possible implementation manner, fig. 2 is a schematic flow chart of another image processing method provided by an exemplary embodiment of the present application.
It should be noted that the image processing method may be performed alone, or may be performed in combination with any one of the embodiments of the present application or a possible implementation of the embodiment, or may be performed in combination with any one of the technical solutions of the related art, and the embodiment of the present application is not limited thereto.
As shown in fig. 2, the image processing method may include the following steps S201 to S204:
Step S201, acquiring an encoded image, wherein the file structure of the encoded image comprises a first shooting image in the SDR format and an additional picture embedded after the first shooting image.
Wherein the additional graph is used to indicate a color mapping relationship between the SDR format and the HDR format of the first captured image.
It should be noted that, the explanation of step S201 may be referred to the related description in any embodiment of the present application, and will not be repeated here.
Step S202, determining whether the decoding function for the encoded image is available, if yes, executing step S203, and if no, executing step S204.
It should be noted that steps S203 and S204 are parallel alternatives; exactly one of them is executed.
Step S203, decoding the encoded image to obtain a first photographed image and an additional image, and performing image rendering according to the additional image and the first photographed image to display a second photographed image in HDR format.
In the embodiment of the application, under the condition that the electronic equipment has the function of decoding the coded image, the coded image can be decoded to obtain the first shooting image and the additional image in the SDR format, and the image rendering is carried out according to the additional image and the first shooting image so as to display the second shooting image in the HDR format. The principle of the present application can be referred to in any embodiment of the present application, and will not be described herein.
Step S204, performing image rendering according to the encoded image to display the first shot image in the SDR format.
In the embodiment of the application, under the condition that the electronic equipment does not have the decoding function of the coded image, the image rendering can be directly carried out according to the coded image so as to display the first shooting image in the SDR format.
The image processing method of the embodiment of the application uses the first shot image in the SDR format as the main image, stores the main image at the front of the encoded image, and embeds the additional image after the main image, so that a device supporting the decoding function can render the second shot image in the HDR format, while a device that does not support it still displays the SDR main image normally, ensuring compatibility.
As a possible implementation manner, fig. 3 is a schematic flow chart of still another image processing method provided by an exemplary embodiment of the present application.
It should be noted that the image processing method may be performed alone, or may be performed in combination with any one of the embodiments of the present application or a possible implementation of the embodiment, or may be performed in combination with any one of the technical solutions of the related art, and the embodiment of the present application is not limited thereto.
As shown in fig. 3, on the basis of any embodiment of the present application, the additional graph may be obtained by using the following steps S301 to S302:
Step S301, determining the pixel value gain of each pixel point according to the pixel value of each pixel point in the first photographed image in the SDR format and the pixel value of the corresponding pixel point in the second photographed image in the HDR format, wherein the pixel value gain comprises gain values of a plurality of color channels, and the gain values are used for indicating the color mapping relation of the corresponding color channels.
As an example, taking the case where the first captured image and the second captured image are both images in the RGB color space, the gain value of each pixel point in each of the plurality of color channels may be calculated using the following formula:

$$G_c=\frac{HDR_c+k_{hdr,c}}{SDR_c+k_{sdr,c}},\qquad c\in\{R,G,B\} \tag{1}$$

where, for each color channel $c$, $G_c$ is the gain value of that channel; $HDR_c$ is the pixel value (or channel value) of a pixel point in channel $c$ of the second shot image in the HDR format; $SDR_c$ is the pixel value of the corresponding pixel point in channel $c$ of the first shot image in the SDR format; $k_{hdr,c}$ is a preset offset (a relatively small value) for channel $c$ in the HDR format; and $k_{sdr,c}$ is a preset offset (a relatively small value) for channel $c$ in the SDR format.
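As an illustration, a minimal numpy sketch of formula (1) is given below; the function name and the offset values `k_sdr` / `k_hdr` are illustrative assumptions, not values from the patent.

```python
import numpy as np

def pixel_gain(sdr: np.ndarray, hdr: np.ndarray,
               k_sdr: float = 1e-4, k_hdr: float = 1e-4) -> np.ndarray:
    """Per-channel pixel value gain of formula (1) for linear (H, W, 3)
    R, G, B images; the small offsets keep the ratio stable near black."""
    return (hdr + k_hdr) / (sdr + k_sdr)
```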
Step S302, according to the lowest brightness and the highest brightness supported by the display, the pixel value gain of each pixel point is encoded to obtain an additional graph.
As an example, the gain value of each pixel in the plurality of color channels may be converted into a log-domain code, and the lowest and highest brightness supported by the display may be read. A color mapping relationship (i.e., a color scaling coefficient) of each pixel over the plurality of color channels is then determined from the log-domain code and the lowest and highest brightness, and the additional map is generated from these color mapping relationships. Each pixel in the additional map indicates the color mapping relationship, over the plurality of color channels, between the corresponding pixels of the first captured image in the SDR format and the second captured image in the HDR format.
As a possible implementation manner, the color mapping relationship (i.e., color scaling coefficient) of each pixel point on multiple color channels may be further limited in value (clip), that is, limited in a suitable value interval, and scaled to the standard range of the 8-bit image.
Therefore, in the present application, an image encoding technique can be adopted to embed the additional map after the first shot image in the SDR format for storage, obtaining an encoded image. In this way, on software and display devices supporting the decoding function, the second shot image in the HDR format can be correctly displayed from the encoded image, ensuring its dynamic range is fully presented, while on devices that do not support the decoding function, the first shot image in the SDR format is displayed correctly.
In any of the embodiments of the present application, in order to save the occupation of storage resources, in the present application, the size of the additional picture may be smaller than the size of the first captured image in SDR format, for example, the size of the additional picture may be compressed to 1/4 or even 1/16 of the size of the first captured image, thereby greatly saving the storage space.
As an example, the pixel value gains of the pixels may be encoded according to the minimum brightness and the maximum brightness supported by the display, to obtain an initial additional graph, and the initial additional graph may be downsampled to obtain an additional graph.
Each pixel point in the initial additional map is used to indicate the color mapping relationship, over the plurality of color channels, of the corresponding pixel points in the first shot image and the second shot image.
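A minimal sketch of this encoding step is shown below, assuming log2-domain bounds derived from the display brightness and plain decimation for the downsampling; the bound values and the `factor` parameter are assumptions for illustration.

```python
import numpy as np

def encode_additional_map(gain: np.ndarray, log2_min: float = -1.0,
                          log2_max: float = 3.0, factor: int = 4) -> np.ndarray:
    """Sketch of step S302: quantize per-channel gains into an 8-bit
    additional map, then downsample to save storage."""
    g = np.log2(np.clip(gain, 2.0 ** log2_min, 2.0 ** log2_max))  # log-domain code
    g = (g - log2_min) / (log2_max - log2_min)   # clip/normalize to [0, 1]
    g8 = np.round(g * 255.0).astype(np.uint8)    # scale to the 8-bit standard range
    return g8[::factor, ::factor]                # factor=4 keeps 1/16 of the pixels
```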
For the step S103, in the present application, interpolation calculation may be performed on the additional map by using bilinear interpolation technology to obtain an additional map with the same size as the first captured image, and color mapping may be performed on corresponding pixel points in the first captured image based on the color mapping relationship of each pixel point in the additional map on multiple color channels to obtain a second captured image in the HDR format.
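The decode-side application can be sketched accordingly; the bilinear upsampling here is a minimal hand-rolled version (a real pipeline would use a library resampler), and the log-domain bounds mirror the assumptions of the encoding sketch above.

```python
import numpy as np

def bilinear_resize(img: np.ndarray, h: int, w: int) -> np.ndarray:
    """Minimal bilinear interpolation for an (h0, w0, C) array."""
    ys = np.linspace(0.0, img.shape[0] - 1.0, h)
    xs = np.linspace(0.0, img.shape[1] - 1.0, w)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, img.shape[0] - 1)
    x1 = np.minimum(x0 + 1, img.shape[1] - 1)
    wy = (ys - y0)[:, None, None]
    wx = (xs - x0)[None, :, None]
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def render_hdr(sdr_linear: np.ndarray, map8: np.ndarray,
               log2_min: float = -1.0, log2_max: float = 3.0) -> np.ndarray:
    """Sketch of step S103: upsample the additional map to the base image
    size and apply the per-channel color mapping in the linear domain."""
    g = map8.astype(np.float64) / 255.0 * (log2_max - log2_min) + log2_min
    g = bilinear_resize(g, sdr_linear.shape[0], sdr_linear.shape[1])
    return sdr_linear * np.exp2(g)
```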
According to the image processing method provided by the embodiment of the application, each pixel point in the additional graph indicates the color mapping relationship, over a plurality of color channels (such as the R, G, B channels), of the corresponding pixel point in the first shot image in the SDR format and the second shot image in the HDR format. Using the multi-channel color mapping relationship provided by the additional graph, finer and richer color gradations can be achieved when converting the image from the SDR format to the HDR format, enhancing the overall visual effect and making color transitions more natural and smooth. In addition, the value of each color channel of every pixel in the first shot image can be adjusted precisely, effectively improving the color performance of the second shot image in the HDR format. This precise adjustment ensures color accuracy and consistency, keeps color reproduction at high quality even under extreme lighting conditions, and allows the adjusted second shot image in the HDR format to show true colors in both highlight and shadow areas.
As a possible implementation manner, fig. 4 is a schematic flow chart of still another image processing method provided by an exemplary embodiment of the present application.
It should be noted that the image processing method may be performed alone, or may be performed in combination with any one of the embodiments of the present application or a possible implementation of the embodiment, or may be performed in combination with any one of the technical solutions of the related art, and the embodiment of the present application is not limited thereto.
As shown in fig. 4, on the basis of any embodiment of the present application, the second captured image may be obtained by using the following steps S401 to S403:
In step S401, in response to the photographing operation, a first photographed image in SDR format is acquired, and the first photographed image is converted into a linear domain, so as to obtain a first intermediate image.
It should be noted that, the explanation of the shooting operation and the first shot image in the foregoing embodiment is also applicable to this embodiment, and will not be repeated here.
In the embodiment of the application, the first shooting image in the SDR format can be converted into the linear domain to obtain the first intermediate image.
As an example, taking the first captured image as an image in the RGB color space, assume that the first captured image is $RGB_{sdr}$ and that $RGB_{sdr}$ is a Gamma-encoded color signal whose values lie in $[0,1]$; it can be converted to the linear domain using the following formula:

$$RGB_{linear} = \mathrm{cctf\_decoding}(RGB_{sdr}) \tag{2}$$

where cctf is an abbreviation for colour component transfer function (Colour Component Transfer Function), and cctf_decoding is a Gamma curve dependent on the display device, typically taken to be that of the standard RGB color space (Standard Red Green Blue, sRGB):

$$\mathrm{cctf\_decoding}(V)=\begin{cases}\dfrac{V}{12.92}, & V \le 0.04045\\[6pt]\left(\dfrac{V+0.055}{1.055}\right)^{2.4}, & V > 0.04045\end{cases} \tag{3}$$
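Formula (3) is the standard sRGB decoding transfer function and can be written directly as:

```python
import numpy as np

def cctf_decoding_srgb(v: np.ndarray) -> np.ndarray:
    """sRGB decoding of formula (3): Gamma-encoded values in [0, 1] are
    mapped to linear light."""
    v = np.asarray(v, dtype=np.float64)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)
```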
And step S402, performing crosstalk elimination processing on a plurality of color channels of the first intermediate image according to a crosstalk matrix to obtain a second intermediate image, wherein the crosstalk matrix is determined according to the color coupling degree among the plurality of color channels.
In the embodiment of the application, a crosstalk matrix can be adopted to eliminate color interference among a plurality of color channels of the first intermediate image, ensure the independence of each color channel and keep highlight information in the image. For example, the crosstalk matrix may be multiplied with the first intermediate image to obtain the second intermediate image.
As an example, assume that the crosstalk between the color channels (e.g., the R, G, B color channels) is symmetric and that the degree of color coupling between the channels is controlled by a parameter $\beta$. The crosstalk matrix can then be expressed as:

$$M_{ct}=\begin{bmatrix}1-2\beta&\beta&\beta\\\beta&1-2\beta&\beta\\\beta&\beta&1-2\beta\end{bmatrix} \tag{4}$$

where $\beta$ is a parameter controlling the crosstalk intensity and can be manually tuned according to the image effect. The element in the $i$-th row and $j$-th column of the crosstalk matrix indicates the color coupling relationship between the $i$-th and $j$-th color channels: the diagonal elements ($1-2\beta$) represent the proportion of each color channel's own color information (i.e., the channel's "own" information minus the crosstalk share), and the off-diagonal elements $\beta$ indicate the degree of color coupling between each color channel and the other channels, i.e., how much of each channel's color information is affected by the other channels.
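A small sketch of formula (4) and its application follows, with an assumed crosstalk strength beta = 0.05:

```python
import numpy as np

def crosstalk_matrix(beta: float = 0.05) -> np.ndarray:
    """Symmetric crosstalk matrix of formula (4); beta = 0.05 is an
    assumed strength to be tuned against the image effect."""
    m = np.full((3, 3), beta)
    np.fill_diagonal(m, 1.0 - 2.0 * beta)
    return m

# De-crosstalk an (H, W, 3) linear image, treating each pixel as a column
# vector (i.e. M_ct @ rgb per pixel):
# second_intermediate = first_intermediate @ crosstalk_matrix(0.05).T
```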
In step S403, inverse tone mapping processing is performed on the second intermediate image to obtain the second captured image in the HDR format.
In the embodiment of the present application, the inverse tone mapping process may be performed on the second intermediate image to expand luminance information of the processed image (referred to as a second captured image in the present application) and to maintain image details.
The image processing method of this embodiment converts the first shot image in the SDR format into linear space, and in that linear space uses the crosstalk matrix to eliminate color interference among the image's color channels, ensuring the independence of each channel and reducing the saturation of highlight regions while keeping the hue unchanged. This prevents color information from being lost in subsequent operations after the highlights are expanded, and applying inverse tone mapping expands the luminance information of the image while maintaining image detail.
As a possible implementation manner, fig. 5 is a schematic flow chart of a further image processing method according to an exemplary embodiment of the present application.
It should be noted that the image processing method may be performed alone, or may be performed in combination with any one of the embodiments of the present application or a possible implementation of the embodiment, or may be performed in combination with any one of the technical solutions of the related art, and the embodiment of the present application is not limited thereto.
As shown in fig. 5, on the basis of any embodiment of the present application, the second captured image may be obtained by using the following steps S501 to S506:
In step S501, in response to the photographing operation, a first photographed image in SDR format is obtained, and the first photographed image is converted into a linear domain, so as to obtain a first intermediate image.
Step S502, performing crosstalk elimination processing on a plurality of color channels of the first intermediate image according to a crosstalk matrix to obtain a second intermediate image, wherein the crosstalk matrix is determined according to the color coupling degree among the plurality of color channels.
It should be noted that, the explanation of steps S501 to S502 may be referred to the related description in any embodiment of the present application, and will not be repeated here.
Step S503, converting the second intermediate image from the first color space to the second color space based on the color gamut of the first captured image, to obtain a third intermediate image.
The first color space is a color space to which the second intermediate image or the first photographed image belongs. Illustratively, the first color space is, for example, an RGB color space.
Wherein the second color space is a tone mapped reference color space. The second color space is illustratively an XYZ color space.
In the embodiment of the application, the second intermediate image can be converted from the first color space to the second color space based on the color gamut of the first shooting image, so as to obtain a third intermediate image in the second color space.
As an example, a mapping relationship between different color gamuts in a first color space and a conversion matrix for converting an image from the first color space to a second color space may be preconfigured. Therefore, in the application, the mapping relation can be queried according to the color gamut of the first shooting image so as to determine the conversion matrix matched with the color gamut of the first shooting image, and the second intermediate image is converted from the first color space to the second color space by adopting the conversion matrix.
Taking the first color space as an example of an RGB color space, the color gamut includes, but is not limited to, sRGB, display P3 (a wide color gamut color space, i.e., P3 gamut is wider than sRGB gamut, especially in the green and red regions, which enables it to display a wider variety of vivid colors), etc.
Illustratively, taking the color gamut of the first captured image as the sRGB color gamut and the second color space as the XYZ color space, the following formula may be used to convert the second intermediate image in the SDR format from the RGB color space to the XYZ color space:

$$\begin{bmatrix}X\\Y\\Z\end{bmatrix} = M_{sRGB \to XYZ}\begin{bmatrix}R\\G\\B\end{bmatrix} \tag{5}$$

where $M_{sRGB \to XYZ}$ is the conversion matrix adapted to the sRGB color gamut.
As one possible implementation, the x and y chromaticity components of the third intermediate image in the XYZ color space may be expressed as:

$$x=\frac{X}{X+Y+Z},\qquad y=\frac{Y}{X+Y+Z} \tag{6}$$
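The conversions of formulas (5), (6) and (10) can be sketched with the standard sRGB to XYZ (D65) matrix; the numeric entries below are the standard values, which the patent itself does not list:

```python
import numpy as np

M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])

def rgb_to_xyz(rgb_linear: np.ndarray) -> np.ndarray:
    """Formula (5): linear RGB with shape (..., 3) to XYZ."""
    return rgb_linear @ M_SRGB_TO_XYZ.T

def xyz_to_rgb(xyz: np.ndarray) -> np.ndarray:
    """Formula (10): XYZ back to linear RGB via the inverted matrix."""
    return xyz @ np.linalg.inv(M_SRGB_TO_XYZ).T

def xy_chromaticity(xyz: np.ndarray) -> np.ndarray:
    """Formula (6): the x and y chromaticity components."""
    s = np.maximum(xyz.sum(axis=-1, keepdims=True), 1e-12)
    return xyz[..., :2] / s
```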
and step S504, determining a low-brightness area and a high-brightness area from the third intermediate image, wherein the first color component corresponding to the brightness in the low-brightness area is smaller than or equal to the brightness threshold value, and the first color component in the high-brightness area is larger than the brightness threshold value.
The luminance threshold is determined from the neutral gray luminance (gray_luminance) of the display. Illustratively, the brightness threshold may be $T = c_1 \cdot L$, where $L$ is a factor for adjusting the image brightness in the HDR format and may be set to the neutral gray brightness of the display (e.g., an HDR display) divided by $c_1$, i.e. $L=\mathrm{gray\_luminance}/c_1$; $c_1$ is the first mapping parameter.
As an example, taking the second color space as the XYZ color space, the first color component may be the Y component: the Y component of the low-luminance region satisfies $Y \le T$, and the Y component of the highlight region satisfies $Y > T$.
In step S505, tone mapping is performed on the first color component in the low-brightness region using the first mapping parameter, and tone mapping is performed on the first color component in the high-brightness region using at least one second mapping parameter, so as to obtain a fourth intermediate image.
In the embodiment of the present application, the tone mapping process may be performed on the first color component in the low-light area in the third intermediate image using the first mapping parameter.
Taking the second color space as the XYZ color space as an example, the first color component may be the Y component, and the following formula may be used to adjust the Y component in the low-light region:

$$Y_{out}=\frac{Y}{c_1} \tag{7}$$

where $c_1$ is a parameter controlling the mapping of the low-brightness region and can be set as a constant smaller than 1 so as to preserve the details of the low-brightness region. That is, in the low-light region the small luminance values of the third intermediate image in the SDR format should be enlarged to reveal more detail, and this part of the mapping is done directly through $c_1$.
In the embodiment of the present application, at least one second mapping parameter may be further used to perform tone mapping processing on the first color component in the highlight region in the third intermediate image, so as to obtain a fourth intermediate image.
For the at least one second mapping parameter, a brightness adjustment factor may be determined according to the ratio of the neutral gray brightness to the first mapping parameter, and the at least one second mapping parameter may then be determined according to the brightness adjustment factor, the first mapping parameter, and the neutral gray brightness.
Taking the second color space as the XYZ color space as an example, the following logarithmic mapping may be used to adjust the Y component in the highlight region:

$$Y_{out}=c_3\cdot\log\!\left(\frac{Y}{c_2}+c_4\right) \tag{8}$$

where $c_2$, $c_3$ and $c_4$ are the second mapping parameters, determined as described above from the brightness adjustment factor, the first mapping parameter, and the neutral gray brightness.
In formula (8), $c_2$ determines the steepness of the logarithmic mapping curve: a smaller $c_2$ gives a steeper curve. $c_3$ controls the reference brightness of the logarithmic mapping curve and determines the degree of compression of the highlights: with a larger $c_3$, the overall brightness values are raised and the highlight parts of the image look more natural but may lose some detail; conversely, a smaller $c_3$ enhances the contrast of the highlights and shows more detail. $c_4$ controls the starting point of the logarithmic mapping, i.e., when it begins to take effect: a smaller $c_4$ causes the logarithmic mapping to affect the highlights earlier, mapping a larger brightness range into the highlight region (also called the bright region), possibly at the cost of dark-area detail.
Among the mapping parameters, $c_1$ and $c_3$ can be manually adjusted, for example to values smaller than 1, so that the dynamic range is better maintained and the transition of the curve remains smooth.
In any embodiment of the present application, the first mapping parameter may be used to perform tone mapping processing on the first color component in the low-brightness area, and at least one second mapping parameter may be used to perform tone mapping processing on the first color component in the high-brightness area, so as to obtain a mapped image.
Illustratively, taking the second color space as the XYZ color space, the first color component may be the Y component. The tone mapping process may be implemented by using the above formulas (7) and (8) to obtain a mapped image, and the following formula is then used to correct the second color components (i.e., the X component and the Z component) other than the first color component in the mapped image, to obtain the fourth intermediate image:

$$X_{out}=X\cdot\frac{Y_{out}}{Y},\qquad Z_{out}=Z\cdot\frac{Y_{out}}{Y} \tag{9}$$

so that the chromaticity $(x, y)$ of formula (6) is preserved while the luminance is remapped.
In summary, the luminance information of the fourth intermediate image can be expanded, the details of the fourth intermediate image are preserved, and its visual effect is improved.
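A sketch combining formulas (7) through (9) is given below. The threshold expression, the exact logarithmic form, and all parameter values are assumptions consistent with the description above, not the patent's own choices; in practice the parameters would be chosen so that the two branches join smoothly at the threshold.

```python
import numpy as np

def inverse_tone_map(xyz: np.ndarray, gray_lum: float = 0.18,
                     c1: float = 0.8, c2: float = 0.5,
                     c3: float = 0.9, c4: float = 1.0) -> np.ndarray:
    """Piecewise Y mapping of formulas (7)-(8) plus the X/Z correction of
    formula (9); all constants here are illustrative assumptions."""
    y = xyz[..., 1]
    t = c1 * (gray_lum / c1)              # threshold T = c1 * L, L = gray_lum / c1
    y_low = y / c1                        # formula (7): expand low-brightness detail
    y_high = c3 * np.log(y / c2 + c4)     # formula (8): logarithmic highlight mapping
    y_out = np.where(y <= t, y_low, y_high)
    ratio = y_out / np.maximum(y, 1e-12)  # formula (9): scale X and Z with Y
    return xyz * ratio[..., None]         # chromaticity (x, y) is preserved
```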
Step S506, converting the fourth intermediate image from the second color space to the first color space to obtain a second photographed image in the HDR format.
In an embodiment of the present application, the inverse transformation matrix may be used to transform the fourth intermediate image from the second color space to the first color space to obtain the second captured image in the HDR format.
Illustratively, taking the first color space as the RGB color space and the second color space as the XYZ color space, the following formula (10) may be used to convert the $(X_{out},Y_{out},Z_{out})$ obtained through the above formulas (7), (8) and (9) back to the RGB color space:

$$\begin{bmatrix}R\\G\\B\end{bmatrix}=M_{sRGB \to XYZ}^{-1}\begin{bmatrix}X_{out}\\Y_{out}\\Z_{out}\end{bmatrix} \tag{10}$$

where $M_{sRGB \to XYZ}^{-1}$ is the inverse transformation matrix, obtained by inverting the conversion matrix $M_{sRGB \to XYZ}$ in the above formula (5).
According to the image processing method provided by the embodiment of the application, the second color space (such as the XYZ color space) is used as a reference color space for tone mapping, and the image is converted into the second color space for inverse tone mapping, so that the brightness information of the image can be effectively expanded, and the image details can be kept.
As a possible implementation manner, fig. 6 is a schematic flow chart of another image processing method provided by an exemplary embodiment of the present application.
It should be noted that the image processing method may be performed alone, or may be performed in combination with any one of the embodiments of the present application or a possible implementation of the embodiment, or may be performed in combination with any one of the technical solutions of the related art, and the embodiment of the present application is not limited thereto.
As shown in fig. 6, on the basis of any embodiment of the present application, the second captured image may be obtained by using the following steps S601 to S608:
In step S601, in response to the photographing operation, a first photographed image in SDR format is obtained, and the first photographed image is converted into a linear domain, so as to obtain a first intermediate image.
Step S602, performing crosstalk elimination processing on a plurality of color channels of the first intermediate image according to a crosstalk matrix to obtain a second intermediate image, wherein the crosstalk matrix is determined according to the color coupling degree among the plurality of color channels.
Step S603, converting the second intermediate image from the first color space to the second color space based on the color gamut of the first captured image, to obtain a third intermediate image.
And step S604, determining a low-brightness area and a high-brightness area from the third intermediate image, wherein the first color component corresponding to the brightness in the low-brightness area is smaller than or equal to the brightness threshold value, and the first color component in the high-brightness area is larger than the brightness threshold value.
Step S605 performs tone mapping processing on the first color component in the low-brightness region using the first mapping parameter, and performs tone mapping processing on the first color component in the high-brightness region using at least one second mapping parameter to obtain a fourth intermediate image.
It should be noted that, the explanation of steps S601 to S605 may be referred to the related description in any embodiment of the present application, and will not be repeated here.
In step S606, the fourth intermediate image is converted from the second color space to the first color space, resulting in a fifth intermediate image.
In the embodiment of the application, the inverse transformation matrix can be adopted to transform the fourth intermediate image from the second color space to the first color space, so as to obtain a fifth intermediate image in the first color space.
Illustratively, taking the first color space as the RGB color space and the second color space as the XYZ color space, the above formula (10) may be used to convert the $(X_{out},Y_{out},Z_{out})$ obtained through formulas (7), (8) and (9) into the RGB color space, obtaining the fifth intermediate image.
Step S607, performing inverse crosstalk elimination processing on the plurality of color channels of the fifth intermediate image according to an inverse crosstalk matrix, to obtain a sixth intermediate image, wherein the inverse crosstalk matrix is obtained by inverting the crosstalk matrix.
Illustratively, taking the first color space as the RGB color space, denote the fifth intermediate image as $RGB_{(5)}$ and the sixth intermediate image as $RGB_{(6)}$; then:

$$RGB_{(6)}=M_{ct}^{-1}\cdot RGB_{(5)} \tag{11}$$

where $M_{ct}^{-1}$ is the inverse crosstalk matrix, obtained by inverting the crosstalk matrix $M_{ct}$ of the above formula (4).
In order to maintain the color consistency and natural feel of the image, the transformation of chromaticity needs to be coordinated with the change of luminance (Luma) information; when the color expression of the highlight region of the image is weak, the chromaticity information can be further expanded, so that the highlight information is displayed better and the color consistency of the image is improved. Therefore, in any embodiment of the present application, the fifth intermediate image may first be color-expanded, and the color-expanded image may then be subjected to the inverse crosstalk elimination processing according to the inverse crosstalk matrix to obtain the sixth intermediate image. That is, the sixth intermediate image may be obtained using the following steps A to E:
And A, converting the fifth intermediate image from the first color space to the third color space to obtain a first converted image.
Wherein the third color space is the color space in which the color expansion is performed. Illustratively, the third color space is, for example, the YCbCr color space.
In the present application, the fifth intermediate image may be converted from the first color space to the third color space based on the conversion relationship between the first color space and the third color space, resulting in the first converted image.
Step B: determining a chromaticity adjustment factor according to the ratio of the first color component in the third intermediate image to the first color component in the fourth intermediate image.
Illustratively, taking the first color component as the Y component, the chroma adjustment factor (also referred to as the chroma expansion factor) may, for example, take the form:

$$f_{\text{chroma}} = \frac{Y_4}{1.095 \cdot Y_3} \tag{12}$$

wherein $Y_3$ and $Y_4$ are the first color components of the third and fourth intermediate images, respectively. The value 1.095 is an empirical value and can be set according to actual application requirements.
Step C: smoothly adjusting the chromaticity of the first converted image according to the chromaticity adjustment factor to obtain an adjusted image.
Illustratively, taking the third color space as the YCbCr color space, the Cb and Cr components may be adjusted according to the chroma adjustment factor $f_{\text{chroma}}$:

$$Cb' = f_{\text{chroma}} \cdot Cb, \qquad Cr' = f_{\text{chroma}} \cdot Cr \tag{13}$$
As a possible implementation manner, the expansion strength for HDR content can further be made adaptive according to actual application requirements, per formulas (14) and (15), wherein the ratio $R$ takes values in $[0, 2.03]$ and the smoothness parameter $\sigma$ may be set to a small fluctuation amount, for example 0.15.
In this way, the chromaticity of highlight regions is expanded more strongly, while that of low-light regions is expanded less. The smoothing function ensures that the chromaticity adjustment factor transitions smoothly between highlight and dark areas, avoiding abrupt chromaticity changes; $\sigma$ controls the smoothness of the transition, and a larger $\sigma$ yields a smoother transition, reducing severe changes and improving image quality.
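The following Python sketch illustrates steps B and C under stated assumptions: the reading of formula (12) with the 1.095 empirical divisor, the sigmoid used as a stand-in for the smoothing function of formulas (14) and (15), and all parameter values are assumptions, since those formulas are not reproduced in full here:

```python
import numpy as np

def chroma_adjust(cb, cr, y_pre, y_post, sigma=0.15, eps=1e-6):
    """Hedged sketch of the chroma expansion of formulas (12)-(15).

    y_pre:  Y of the third intermediate image (before tone mapping)
    y_post: Y of the fourth intermediate image (after tone mapping)
    """
    f = y_post / (1.095 * y_pre + eps)      # formula (12), one plausible reading
    r = np.clip(f, 0.0, 2.03)               # R constrained to [0, 2.03]
    # Stand-in smoothing function: expand highlights strongly, shadows weakly,
    # with sigma controlling how smooth the transition is.
    smooth = 1.0 / (1.0 + np.exp(-(r - 1.0) / sigma))
    f_adaptive = 1.0 + (f - 1.0) * smooth
    return cb * f_adaptive, cr * f_adaptive  # formula (13)
```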
Step D: converting the adjusted image from the third color space to the first color space to obtain a second converted image.
In the embodiment of the application, the adjustment image can be converted from the third color space to the first color space based on the conversion relation between the third color space and the first color space, so as to obtain the second conversion image.
Step E: performing inverse crosstalk elimination processing on a plurality of color channels of the second converted image according to the inverse crosstalk matrix to obtain the sixth intermediate image.
Illustratively, formula (11) above may be adopted to perform the inverse crosstalk elimination processing on the plurality of color channels of the second converted image, obtaining the sixth intermediate image.
To sum up, in order to maintain the color consistency and natural feel of the image, the transformation of chromaticity needs to be coordinated with the change of brightness (Luma) information; when the color expression of the highlight region of the image is weak, further expanding the chromaticity information allows the highlight information to be better displayed and improves the color consistency of the image.
In step S608, according to the brightness range supported by the display, a brightness mapping operation is performed on the sixth intermediate image, so as to obtain a second photographed image in the HDR format.
Illustratively, taking the first color space as the RGB color space, an OOTF (Opto-Optical Transfer Function) operation may be performed on the sixth intermediate image to complete the luminance mapping operation; for example, the following formulas may be used:
First, a set empirical exponent $\gamma$ (a constant greater than 1) is used to adjust the sixth intermediate image $E_6$:

$$E' = E_6^{\gamma} \tag{16}$$
Then, a standard OOTF conversion is performed on $E'$, mapping it to the brightness supported by the display:

$$E_{\text{display}} = \text{scale} \cdot \text{EOTF}_{\text{BT1886}}\left(\text{OETF}_{\text{BT709}}(E')\right) \tag{17}$$

Wherein OETF refers to the opto-electronic transfer function (Opto-Electronic Transfer Function), EOTF refers to the electro-optical transfer function (Electro-Optical Transfer Function), BT.709 refers to International Telecommunication Union Radiocommunication Sector (ITU-R) BT.709 (the high definition television standard), and OETF_BT709 is used to convert the brightness and color of the image to fit a specific display standard; BT.1886 refers to ITU-R BT.1886 (the display gamma standard), and EOTF_BT1886 is used to convert the electric signal into a corresponding brightness value to further adjust the brightness expression of the image; scale is a scaling coefficient calculated from the brightness range supported by the display. For example, when the display luminance reaches the maximum luminance of 10000 nit supported by the perceptual quantization (Perceptual Quantization, abbreviated as PQ) curve, scale is 31.2379. If the brightness range or other parameters supported by the display change, scale can be adjusted correspondingly according to the actual conditions, so that the mapped image has a better display effect on different displays.
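A hedged Python sketch of the OOTF step follows; the BT.709 OETF and a simplified BT.1886 EOTF (zero black level) are standard curves, while the exponent value 1.2 is a hypothetical placeholder for the empirical constant $\gamma$:

```python
import numpy as np

def oetf_bt709(e):
    """ITU-R BT.709 OETF (scene light in [0,1] -> non-linear signal)."""
    return np.where(e < 0.018, 4.5 * e, 1.099 * np.power(e, 0.45) - 0.099)

def eotf_bt1886(v, gamma=2.4):
    """Simplified ITU-R BT.1886 EOTF (signal -> display light), black level 0."""
    return np.power(np.clip(v, 0.0, 1.0), gamma)

def ootf_luminance_map(e6, gamma_exp=1.2, scale=31.2379):
    """Hedged sketch of formulas (16)-(17).

    e6:        sixth intermediate image, linear domain, normalized
    gamma_exp: empirical exponent (a constant > 1); 1.2 is hypothetical
    scale:     display-dependent coefficient; 31.2379 per the 10000-nit PQ example
    """
    e_adj = np.power(np.clip(e6, 0.0, None), gamma_exp)   # formula (16)
    return scale * eotf_bt1886(oetf_bt709(e_adj))         # formula (17)
```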
According to the image processing method provided by the embodiment of the application, through the double processing mode of the inverse crosstalk matrix and the color mapping, the additional graph can effectively preserve the color information of the highlight (i.e., high-brightness) region, so that the color of the highlight part is not excessively weakened during brightness changes in the image rendering stage, presenting a more natural and accurate highlight effect.
In any embodiment of the present application, compared with the first scheme in the related art, the present application uses the additional graph to separately perform color mapping on the plurality of color channels in the first captured image; compared with a direct brightness mapping manner, this adds corresponding color adaptation when the brightness dynamic range changes, so that the high-dynamic-range color information of the image is presented more richly. Compared with the second scheme in the related art, the application uses the first captured image in SDR format as the main picture, with the additional picture embedded after the main picture; a device that does not support decoding this encoding format can still display according to the main picture in SDR format, giving good compatibility, and even if the additional picture is discarded, the display result remains normal.
The technical scheme provided by the application at least serves to solve the problem that color appearance is affected by brightness changes under single-channel brightness mapping, the compatibility problem in image display and streaming media transmission (ensuring that images can still be correctly displayed when a decoder does not support decoding the additional graph), the problem of inaccurate color rendering of highlight regions during inverse tone mapping, and the problem of rapid adaptation of the additional graph when debugging the SDR-format image.
As an example, additional maps for three color channels are generated and embedded into the master map by way of encoding, so that the luminance and color adjustment information required for display on an HDR display is stored together with the master map. The implementation principle may be as shown in fig. 7 and mainly comprises the following steps:
1. An RGB image in SDR format is input, denoted in the present application as the first captured image (i.e., the main map).
2. The main map is converted to the linear domain (linear space).
Assume that the input RGB image signal in SDR format is $(R', G', B')$: a Gamma-encoded color signal with values ranging between $[0, 1]$, which can be converted into the linear domain channel by channel using the following formula:

$$E = \mathrm{cctf\_decoding}(E'), \qquad E' \in \{R', G', B'\}$$
wherein cctf_decoding is a display-device-dependent Gamma curve, typically taken from the sRGB standard:

$$E = \begin{cases} \dfrac{E'}{12.92}, & E' \le 0.04045 \\[2mm] \left(\dfrac{E' + 0.055}{1.055}\right)^{2.4}, & E' > 0.04045 \end{cases}$$
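A minimal Python sketch of this decoding step, using the standard sRGB curve:

```python
import numpy as np

def srgb_to_linear(v):
    """sRGB cctf_decoding: Gamma-encoded [0,1] signal -> linear domain."""
    v = np.asarray(v, dtype=np.float64)
    return np.where(v <= 0.04045, v / 12.92, ((v + 0.055) / 1.055) ** 2.4)
```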
3. In the linear space, interference among R, G, B color channels of the main graph is eliminated through a crosstalk matrix, so that the independence of each color channel is ensured, and highlight information is reserved.
Crosstalk elimination processes the image itself in the linear domain; meanwhile, in order to eliminate the cross influence among different color channels in the subsequent conversion process, a certain amount of each of the other color channels is mixed into every color channel. This reduces the saturation of highlight regions while leaving the hue unchanged, and prevents color information from being lost in later operations after the highlight portions are expanded during image processing.
In the present application, it is assumed that crosstalk between the plurality of color channels is symmetric, and the degree of color coupling between the channels is controlled by a parameter $\alpha$; the crosstalk matrix can therefore be expressed as:

$$M_{\text{crosstalk}} = \begin{pmatrix} 1-2\alpha & \alpha & \alpha \\ \alpha & 1-2\alpha & \alpha \\ \alpha & \alpha & 1-2\alpha \end{pmatrix}$$
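Illustratively, the forward crosstalk step then reduces to the following Python sketch (the value of `alpha` below is a hypothetical placeholder):

```python
import numpy as np

def apply_crosstalk(rgb_linear, alpha=0.05):
    """Mix a fraction alpha of each other channel into every channel."""
    m = np.array([
        [1 - 2 * alpha, alpha,         alpha        ],
        [alpha,         1 - 2 * alpha, alpha        ],
        [alpha,         alpha,         1 - 2 * alpha],
    ])
    return rgb_linear @ m.T  # H x W x 3 per-pixel matrix multiply
```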
4. The master map is converted from the RGB color space to the XYZ color space as a reference for tone mapping.
The inverse tone mapping conversion process may be defined in the XYZ color space. Taking a primary map with gamut sRGB as an example, the following formula may be used to convert the main map from the RGB color space to the XYZ color space:

$$\begin{pmatrix} X \\ Y \\ Z \end{pmatrix} = \begin{pmatrix} 0.4124 & 0.3576 & 0.1805 \\ 0.2126 & 0.7152 & 0.0722 \\ 0.0193 & 0.1192 & 0.9505 \end{pmatrix} \begin{pmatrix} R \\ G \\ B \end{pmatrix}$$

wherein the $x_{sdr}$ and $y_{sdr}$ chrominance components are expressed as:

$$x_{sdr} = \frac{X}{X + Y + Z}, \qquad y_{sdr} = \frac{Y}{X + Y + Z}$$
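A minimal Python sketch of this conversion, using the standard sRGB (D65) matrix:

```python
import numpy as np

# Standard sRGB (D65) RGB -> XYZ matrix
M_RGB2XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def rgb_to_xyz(rgb_linear):
    """Convert a linear-domain sRGB image (H x W x 3) to XYZ."""
    return rgb_linear @ M_RGB2XYZ.T

def chromaticity(xyz, eps=1e-10):
    """x_sdr, y_sdr chromaticity components."""
    s = xyz.sum(axis=-1, keepdims=True) + eps
    return xyz[..., 0:1] / s, xyz[..., 1:2] / s
```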
5. inverse tone mapping is applied to convert the main map in XYZ color space back to the HDR range, expanding the luminance information of the image, and preserving detail.
The purpose of inverse tone mapping is to map the brightness of the SDR image into the HDR space through a suitable conversion, so that the highlight portions of the image are preserved and the dynamic range of the image is enhanced. The inverse tone mapping function in the present application processes the low-brightness and high-brightness areas of the image in two segments according to the input brightness.
Wherein the inverse tone mapping comprises: when a luminance value $Y$ in the SDR image is less than or equal to the brightness threshold $Y_t$, the conversion uses a linear mapping based on the first mapping parameter; when the luminance value $Y$ in the SDR image is greater than the brightness threshold $Y_t$, a logarithmic mapping based on the at least one second mapping parameter is used to realize the color mapping of the highlight region.
As a possible implementation, a brightness adjustment factor may further be determined according to the ratio of the display's neutral gray brightness to the first mapping parameter, and the second mapping parameters updated accordingly.
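Since the exact expressions of the two segments are not reproduced above, the following Python sketch only illustrates the structure of the mapping: a linear segment below the threshold and a logarithmic segment above it, joined continuously; every parameter value is a hypothetical placeholder:

```python
import numpy as np

def inverse_tone_map_y(y_sdr, y_t=0.2, k1=4.0, k2=1.5, k3=None):
    """Hedged sketch of the two-segment inverse tone mapping.

    Assumes a linear segment (first mapping parameter k1) below the
    threshold y_t and a logarithmic segment (second mapping parameters
    k2, k3) above it, chosen so the two segments meet at y_t.
    """
    if k3 is None:
        k3 = k1 * y_t  # continuity at the threshold
    low = k1 * y_sdr                                           # Y <= y_t
    high = k2 * np.log(np.maximum(y_sdr, 1e-10) / y_t) + k3    # Y > y_t
    return np.where(y_sdr <= y_t, low, high)
```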
6. The image is converted from the XYZ color space back to the RGB color space to obtain an initial image in HDR format, denoted in the present application as the fifth intermediate image. For example, the following formula may be used to convert the $(X, Y, Z)$ values obtained in step 5 above to the RGB color space:

$$\begin{pmatrix} R \\ G \\ B \end{pmatrix} = \begin{pmatrix} 3.2406 & -1.5372 & -0.4986 \\ -0.9689 & 1.8758 & 0.0415 \\ 0.0557 & -0.2040 & 1.0570 \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}$$
7. (Optional) The initial image in HDR format is color-expanded: it is converted from the RGB color space to the YCbCr color space, expanded, and then converted from the YCbCr color space back to the RGB color space.
In order to maintain the color consistency and natural feel of the image, the transformation of chromaticity needs to be coordinated with the change of brightness (Luma) information; when the color expression of the highlight region of the image is weak, the chromaticity information can be further expanded so that the highlight information is better displayed. The chromaticity expansion can be performed in the YCbCr color space and the result converted back to the RGB color space after expansion:
First, a chroma adjustment factor (also referred to as a chroma expansion factor) may be calculated, per formula (12) above.
Thereafter, the Cb and Cr components may be adjusted according to the chroma adjustment factor, per formula (13) above.
As a possible implementation manner, the expansion strength for HDR content can further be made adaptive according to actual application requirements, per formulas (14) and (15) above, wherein $R$ takes values in $[0, 2.03]$ and $\sigma$ may be set to a small fluctuation amount, for example 0.15. The above operation ensures that the chromaticity of highlight regions is expanded more strongly while that of low-light regions is expanded less; the smoothing function makes the chromaticity adjustment factor transition smoothly between highlight and dark areas, avoiding abrupt chromaticity changes, and a larger $\sigma$ yields a smoother transition, reducing severe changes and improving image quality.
8. The independence of the RGB channels is restored through inverse crosstalk matrix processing, ensuring accurate color reconstruction, and the final image in HDR format, denoted in the present application as the second captured image, is obtained through OOTF conversion and Gamma correction.
Since a crosstalk operation aimed mainly at highlight saturation was applied at the beginning of the processing flow, it must be undone at the end of the flow; that is, the earlier crosstalk matrix is inverted:

$$M_{\text{crosstalk}}^{-1} = \begin{pmatrix} 1-2\alpha & \alpha & \alpha \\ \alpha & 1-2\alpha & \alpha \\ \alpha & \alpha & 1-2\alpha \end{pmatrix}^{-1}$$

Thereafter, inverse crosstalk elimination processing can be applied to the image based on the inverse crosstalk matrix, per formula (11) above.
The above processing is sufficient to map 100 nit reference content onto 203 nit; the OOTF step then completes the mapping to the actual peak brightness of the screen. First, the empirical exponent adjustment of formula (16) above is applied, wherein $\gamma$ is a set empirical value; then, after one further standard OOTF conversion per formula (17) above, the result is mapped to the brightness supported by the display.
9. An additional map is calculated from the first captured image and the second captured image. The additional graph stores the color change proportion information of the three color channels R, G and B between the second captured image in HDR format and the first captured image in SDR format. As a possible implementation manner, in order to further save storage space, the size of the additional graph may be 1/4 or 1/16 of the size of the main graph; testing shows that, when combined with the main graph through bilinear interpolation, this fully meets application requirements.
The purpose of this step is to obtain the gain values of the three color channels needed for the HDR format and to encode these gain values in the form of an additional map. The implementation is as follows: first, the pixel value gain between the SDR format and the HDR format is calculated for each color channel; the gain values are then converted to a format suitable for storage by applying logarithmic coding. These encoded gain values are subjected to appropriate constraints (clips) to ensure that they can be represented within the 8-bit standard image range. Finally, on a display device supporting the HDR format, the SDR image can be displayed with correct highlights, improving the dynamic range and detail presentation of an ordinary image.
The gain value of each pixel point in the plurality of color channels is calculated as follows:

$$\text{gain}_c(x, y) = \frac{I^{\text{HDR}}_c(x, y)}{I^{\text{SDR}}_c(x, y)}, \qquad c \in \{R, G, B\}$$
Then, the gain value of each color channel is converted into a log-domain code; the lowest brightness and the highest brightness supported by the display are read, and from this information the color mapping relation (i.e., the color scaling coefficient) of each pixel point in the plurality of color channels is calculated.
As a possible implementation manner, the color mapping relation (i.e., the color scaling coefficient) of each pixel point over the plurality of color channels may further be value-limited (clipped), that is, restricted to a suitable value interval and scaled to the standard range of an 8-bit image.
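A hedged Python sketch of step 9 follows; the log-gain range here stands in for the value derived from the display's lowest and highest brightness, which the text reads at run time, and all numeric bounds are hypothetical placeholders:

```python
import numpy as np

def build_gain_map(sdr, hdr, min_log_gain=-2.3, max_log_gain=5.6, eps=1e-6):
    """Per-channel gain -> log-domain encoding -> clip -> 8-bit map."""
    gain = (hdr + eps) / (sdr + eps)                 # per-pixel, per-channel gain
    log_gain = np.log2(gain)                         # log-domain encoding
    log_gain = np.clip(log_gain, min_log_gain, max_log_gain)
    norm = (log_gain - min_log_gain) / (max_log_gain - min_log_gain)
    gain_u8 = np.round(norm * 255).astype(np.uint8)  # 8-bit standard image range
    # Crude stand-in for downsampling the map to 1/4 of the main-map size;
    # a proper resampler would be used in practice.
    return gain_u8[::2, ::2, :]
```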
10. The encoded additional picture is added to the MPF information of the main picture.
For example, the additional picture may be stored by embedding it behind the main picture in SDR format using the JPEG MPF (Multi-Picture Format) coding technique. Thus, on software and display devices supporting the decoding function, the HDR image can be correctly displayed from the encoded image, fully presenting its dynamic range, while on devices that do not support decoding the additional picture, the main picture in SDR format is still correctly displayed.
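The following is a grossly simplified Python sketch of the embedding idea only; a spec-compliant MPF writer must additionally emit an APP2 "MPF" index segment in the primary image describing the offsets of the following images, which is omitted here:

```python
def embed_gain_map(main_jpeg: bytes, gain_jpeg: bytes) -> bytes:
    """Append the encoded gain map after the SDR main picture.

    Decoders without MPF support simply stop at the first EOI marker
    and show the SDR main map, which is the compatibility behavior
    described above.
    """
    return main_jpeg + gain_jpeg  # secondary image appended after the primary
```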
In summary, the solution provided by the application has at least the following advantages:
(1) The additional graph comprises a color mapping relation of three channels, and on the basis that the brightness is correctly mapped, the additional graph realizes richer color layering through the color mapping of the three channels. The method can effectively improve the color expression of the image, avoid the influence of brightness adjustment on the color, and enable the HDR image to have more realism.
(2) Color preservation of the highlight region, namely, the additional graph can effectively preserve color information of the highlight region through a double processing mode of crosstalk matrix and color mapping. This method ensures that the color of the highlight is not excessively impaired when the brightness is changed, thereby exhibiting a more natural and accurate highlight effect.
(3) The gain values of the three channels are used as additional graph information, and the design enables local dynamic range adjustment to be performed according to color information, so that the expandability of the algorithm is improved.
(4) The additional map enables adaptive adjustment of the HDR-format image content based on the SDR-format image content. This means that the user can make optimal adjustments to the SDR image while the HDR image changes rapidly in sync, thereby reducing the visual gap between the two and providing a consistent viewing experience.
(5) Storage savings and compatibility: in image coding, the main picture is an SDR image and the additional picture is an HDR gain map. The size of the additional graph can be compressed to 1/4 or even 1/16 of the size of the main graph, greatly saving storage space, and the image can still be displayed normally on devices that do not support HDR. This approach balances storage efficiency and compatibility, ensuring that images can be presented adaptively on different platforms and devices.
In order to achieve the above embodiments, the embodiments of the present application further provide an image processing apparatus.
Fig. 8 is a schematic structural view of an image processing apparatus according to an exemplary embodiment of the present application.
As shown in fig. 8, the image processing apparatus 800 may include an acquisition module 810, a decoding module 820, and a rendering module 830.
The obtaining module 810 is configured to obtain an encoded image, where a file structure of the encoded image includes a first captured image in a standard dynamic range SDR format and an additional graph, and the additional graph is configured to indicate a color mapping relationship between the SDR format of the first captured image and a high dynamic range HDR format;
a decoding module 820, configured to decode the encoded image to obtain a first captured image and an additional image;
the rendering module 830 is configured to perform image rendering according to the additional graph and the first captured image, so as to display the second captured image in the HDR format.
In one implementation of the embodiment of the present application, the additional map is embedded after the first captured image, and the rendering module 830 is further configured to perform image rendering according to the encoded image to display the first captured image in SDR format in response to not having a decoding function on the encoded image.
In one implementation manner of the embodiment of the present application, the additional graph is obtained by using the following modules:
A determining module, configured to determine the pixel value gain of each pixel point according to the pixel value of each pixel point in the first captured image and the pixel value of the corresponding pixel point in the second captured image, wherein the pixel value gain comprises gain values of a plurality of color channels, the gain values being used to indicate the color mapping relation of the corresponding color channels;
and the coding module is used for coding the pixel value gain of each pixel point according to the lowest brightness and the highest brightness supported by the display so as to obtain an additional graph.
In one implementation manner of the embodiment of the application, the encoding module is used for encoding the pixel value gain of each pixel point according to the lowest brightness and the highest brightness supported by the display to obtain an initial additional graph, wherein each pixel point in the initial additional graph is used for indicating the color mapping relation of the corresponding pixel point in the first shooting image and the second shooting image on a plurality of color channels, and downsampling is carried out on the initial additional graph to obtain the additional graph.
In one implementation manner of the embodiment of the present application, the second shot image is obtained by using the following modules:
The conversion module is used for responding to shooting operation, acquiring a first shooting image, and converting the first shooting image into a linear domain to obtain a first intermediate image;
A crosstalk elimination module, configured to perform crosstalk elimination processing on a plurality of color channels of the first intermediate image according to a crosstalk matrix to obtain a second intermediate image, wherein the crosstalk matrix is determined according to the color coupling degree among the plurality of color channels;
And the mapping module is used for carrying out inverse tone mapping processing on the second intermediate image so as to obtain a second shooting image.
In one implementation of the embodiment of the application, the mapping module is used for converting a second intermediate image from a first color space to a second color space based on the color gamut of a first shooting image to obtain a third intermediate image, determining a low-brightness area and a high-brightness area from the third intermediate image, wherein a first color component corresponding to brightness in the low-brightness area is smaller than or equal to a brightness threshold value, the first color component in the high-brightness area is larger than the brightness threshold value, performing tone mapping processing on the first color component in the low-brightness area by adopting a first mapping parameter, performing tone mapping processing on the first color component in the high-brightness area by adopting at least one second mapping parameter to obtain a fourth intermediate image, and converting the fourth intermediate image from the second color space to the first color space to obtain a second shooting image.
In one implementation of the embodiment of the application, the brightness threshold is determined according to the neutral gray brightness of the display, the mapping module is further used for determining a brightness adjustment factor according to the ratio of the neutral gray brightness to the first mapping parameter, and determining at least one second mapping parameter according to the brightness adjustment factor, the first mapping parameter and the neutral gray brightness.
In one implementation manner of the embodiment of the application, the mapping module is used for performing tone mapping processing on the first color component in the low-brightness area by adopting a first mapping parameter, performing tone mapping processing on the first color component in the high-brightness area by adopting at least one second mapping parameter to obtain a mapping image, and correcting the second color component except the first color component in the mapping image according to the first color component in the mapping image to obtain a fourth intermediate image.
In one implementation manner of the embodiment of the application, the mapping module is used for converting the fourth intermediate image from the second color space to the first color space to obtain a fifth intermediate image, performing inverse crosstalk elimination processing on a plurality of color channels of the fifth intermediate image according to an inverse crosstalk matrix to obtain a sixth intermediate image, wherein the inverse crosstalk matrix is obtained by inverting the crosstalk matrix, and performing brightness mapping operation on the sixth intermediate image according to a brightness range supported by a display to obtain a second shooting image.
In one implementation of the embodiment of the application, the mapping module is used for converting the fifth intermediate image from the first color space to the third color space to obtain a first converted image, determining a chromaticity adjustment factor according to the ratio of the first color components in the third intermediate image to the fourth intermediate image, performing smooth chromaticity adjustment on the first converted image according to the chromaticity adjustment factor to obtain an adjustment image, converting the adjustment image from the third color space to the first color space to obtain a second converted image, and performing inverse crosstalk elimination processing on a plurality of color channels of the second converted image according to an inverse crosstalk matrix to obtain a sixth intermediate image.
It should be noted that the foregoing explanation of the embodiment of the image processing method is also applicable to the image processing apparatus of this embodiment, and will not be repeated here.
In the image processing device of the embodiment of the application, the additional graph is used for indicating the color mapping relation (such as the mapping relation of brightness, tone and saturation) required for converting the SDR format into the HDR format, the first shooting image of the SDR format is rendered by utilizing the color mapping relation provided by the additional graph, and the second shooting image of the HDR format is generated, so that the brightness of the image can be accurately regulated in the process of converting the SDR format into the HDR format, the overexposure or underexposure phenomenon caused by improper brightness regulation is avoided, the details of the image are reserved, and the rendered second shooting image of the HDR format can display richer color layering, thereby enhancing the sense of reality of the second shooting image and improving the visual experience of a user.
In order to implement the above embodiments, the present application further proposes an electronic device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the image processing method according to any of the foregoing embodiments when executing the program.
Fig. 9 is a schematic structural diagram of an electronic device according to an exemplary embodiment of the present application. For example, electronic device 900 may be a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, exercise device, personal digital assistant, and the like.
Referring to FIG. 9, an electronic device 900 can include one or more of a processing component 902, a memory 904, a power component 906, a multimedia component 908, an audio component 910, an Input/Output (I/O) interface 912, a sensor component 914, and a communication component 916.
The processing component 902 generally controls overall operation of the electronic device 900, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 902 may include one or more processors 920 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 902 can include one or more modules that facilitate interaction between the processing component 902 and other components. For example, the processing component 902 can include a multimedia module to facilitate interaction between the multimedia component 908 and the processing component 902.
The memory 904 is configured to store various types of data to support operations at the electronic device 900. Examples of such data include instructions for any application or method operating on the electronic device 900, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 904 may be implemented by any type of volatile or non-volatile memory device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
The power component 906 provides power to the various components of the electronic device 900. Power components 906 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for electronic device 900.
The multimedia component 908 comprises a screen providing an output interface between the electronic device 900 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 908 includes a front-facing camera and/or a rear-facing camera. When the electronic device 900 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front camera and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 910 is configured to output and/or input audio signals. For example, the audio component 910 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 900 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may be further stored in the memory 904 or transmitted via the communication component 916. In some embodiments, the audio component 910 further includes a speaker for outputting audio signals.
The I/O interface 912 provides an interface between the processing component 902 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to, a home button, a volume button, an activate button, and a lock button.
The sensor assembly 914 includes one or more sensors for providing status assessments of various aspects of the electronic device 900. For example, the sensor assembly 914 may detect an on/off state of the electronic device 900 and the relative positioning of components, such as the display and keypad of the electronic device 900; the sensor assembly 914 may also detect a change in position of the electronic device 900 or a component thereof, the presence or absence of user contact with the electronic device 900, the orientation or acceleration/deceleration of the electronic device 900, and a change in temperature of the electronic device 900. The sensor assembly 914 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 914 may also include a photosensor, such as a complementary metal-oxide-semiconductor (CMOS) or charge-coupled device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 914 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 916 is configured to facilitate wired or wireless communication between the electronic device 900 and other devices. The electronic device 900 may access a wireless network based on a communication standard, such as WiFi, 4G, or 5G, or a combination thereof. In one exemplary embodiment, the communication component 916 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 916 further includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 900 may be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the methods described above.
In an exemplary embodiment, a non-transitory computer readable storage medium is also provided, such as the memory 904 including instructions executable by the processor 920 of the electronic device 900 to perform the above-described method. For example, the non-transitory computer readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a compact disc read-only memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
In order to implement the above embodiments, the present application also proposes a chip, wherein the chip comprises an interface circuit and a processing circuit coupled to each other; the interface circuit is used for inputting or outputting signals, and the processing circuit is configured to perform the image processing method provided in any of the previous embodiments.
Fig. 10 is a schematic diagram of a chip according to an exemplary embodiment of the present application. Reference may be made to the schematic structure of the chip 1000 shown in fig. 10, but is not limited thereto.
The chip 1000 includes a processing circuit 1001, the processing circuit 1001 being configured to perform any of the above image processing methods.
In some embodiments, chip 1000 also includes one or more interface circuits 1002. As one possible implementation, interface circuit 1002 is coupled to memory 1003, interface circuit 1002 may be configured to receive signals from memory 1003 or other devices, and interface circuit 1002 may be configured to transmit signals to memory 1003 or other devices. For example, the interface circuit 1002 may read an instruction stored in the memory 1003 and send the instruction to the processing circuit 1001.
In some embodiments, the interface circuit 1002 performs at least one of the communication steps of sending and/or receiving, etc. in the methods described above, and the processing circuit 1001 performs the other steps.
In some embodiments, the terms interface circuit, interface, transceiver pin, transceiver, etc. may be interchanged.
In some embodiments, chip 1000 also includes one or more memories 1003 for storing instructions. As one possible implementation, all or part of the memory 1003 may be located outside the chip 1000.
In order to achieve the above-described embodiments, the present application also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements an image processing method according to any of the foregoing method embodiments.
In order to achieve the above-described embodiments, the present application also proposes a computer program product having a computer program stored thereon, which, when being executed by a processor, implements an image processing method according to any of the method embodiments described above.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "plurality" means at least two, for example, two, three, etc., unless specifically defined otherwise.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and additional implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order from that shown or discussed, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the embodiments of the present application.
Logic and/or steps represented in the flowcharts or otherwise described herein, for example, an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include an electrical connection (an electronic device) having one or more wires, a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer readable medium may even be paper or another suitable medium on which the program is printed, as the program may be electronically captured, for instance via optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. If implemented in hardware, as in another embodiment, they may be implemented using any one or a combination of the following techniques known in the art: discrete logic circuits having logic gates for implementing logic functions on data signals, application specific integrated circuits having appropriate combinational logic gates, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and the like.
Those of ordinary skill in the art will appreciate that all or a portion of the steps carried out in the method of the above-described embodiments may be implemented by a program to instruct related hardware, where the program may be stored in a computer readable storage medium, and where the program, when executed, includes one or a combination of the steps of the method embodiments.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules may also be stored in a computer readable storage medium if implemented in the form of software functional modules and sold or used as a stand-alone product.
The above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, or the like. While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (15)

1. An image processing method, comprising:
The method comprises the steps of obtaining an encoded image, wherein the file structure of the encoded image comprises a first shooting image in a standard dynamic range SDR format and an additional graph, and the additional graph is used for indicating the color mapping relation between the SDR format of the first shooting image and a high dynamic range HDR format;
Decoding the encoded image to obtain the first shooting image and the additional image;
performing image rendering according to the additional graph and the first shooting image to display a second shooting image in the HDR format;
the additional graph is obtained by the following steps:
Determining pixel value gains of all pixel points according to pixel values of all pixel points in the first shooting image and pixel values of corresponding pixel points in the second shooting image, wherein the pixel value gains comprise gain values of a plurality of color channels, and the gain values are used for indicating color mapping relations of the corresponding color channels;
Converting gain values of the pixel points in a plurality of color channels into log domain codes, and determining color mapping relations of the pixel points in the plurality of color channels according to the log domain codes of the pixel points in the plurality of color channels and the lowest brightness and the highest brightness supported by a display;
Generating an initial additional graph according to the color mapping relation of each pixel point on a plurality of color channels, wherein each pixel point in the initial additional graph is used for indicating the color mapping relation of the corresponding pixel point in the first shooting image and the second shooting image on the plurality of color channels;
and carrying out downsampling processing on the initial additional graph to obtain the additional graph.
2. The method of claim 1, wherein the additional map is embedded after the first captured image, the method further comprising:
and in response to not having a decoding function for the encoded image, performing image rendering according to the encoded image to display a first photographed image in the SDR format.
3. The method of claim 1, wherein the second captured image is obtained by:
Responding to shooting operation, acquiring the first shooting image, and converting the first shooting image into a linear domain to obtain a first intermediate image;
Performing crosstalk elimination processing on a plurality of color channels of the first intermediate image according to a crosstalk matrix to obtain a second intermediate image, wherein the crosstalk matrix is determined according to the color coupling degree among the plurality of color channels;
And performing inverse tone mapping processing on the second intermediate image to obtain the second photographed image.
4. A method according to claim 3, wherein said inverse tone mapping said second intermediate image to obtain said second captured image comprises:
converting the second intermediate image from a first color space to a second color space based on the color gamut of the first photographed image, to obtain a third intermediate image;
Determining a low-brightness area and a high-brightness area from the third intermediate image, wherein a first color component corresponding to brightness in the low-brightness area is smaller than or equal to a brightness threshold value, and the first color component in the high-brightness area is larger than the brightness threshold value;
performing tone mapping processing on the first color component in the low-brightness area by adopting a first mapping parameter, and performing tone mapping processing on the first color component in the high-brightness area by adopting at least one second mapping parameter to obtain a fourth intermediate image;
The fourth intermediate image is converted from the second color space to the first color space to obtain the second captured image.
5. The method of claim 4, wherein the brightness threshold is determined based on neutral gray brightness of the display;
the at least one second mapping parameter is determined by:
determining a brightness adjustment factor according to the ratio of the neutral gray brightness to the first mapping parameter;
and determining at least one second mapping parameter according to the brightness adjustment factor, the first mapping parameter and the neutral gray brightness.
6. The method of claim 4, wherein tone mapping the first color components in the low-luminance region using the first mapping parameter and tone mapping the first color components in the high-luminance region using the at least one second mapping parameter to obtain a fourth intermediate image, comprising:
performing tone mapping processing on the first color component in the low-brightness area by adopting a first mapping parameter, and performing tone mapping processing on the first color component in the high-brightness area by adopting at least one second mapping parameter to obtain a mapped image;
and correcting a second color component except the first color component in the mapping image according to the first color component in the mapping image to obtain the fourth intermediate image.
7. The method of claim 4, wherein said converting the fourth intermediate image from the second color space to the first color space to obtain the second captured image comprises:
Converting the fourth intermediate image from the second color space to the first color space to obtain a fifth intermediate image;
performing inverse crosstalk elimination processing on a plurality of color channels of the fifth intermediate image according to an inverse crosstalk matrix to obtain a sixth intermediate image, wherein the inverse crosstalk matrix is obtained by inverting the crosstalk matrix;
And performing brightness mapping operation on the sixth intermediate image according to the brightness range supported by the display to obtain the second shooting image.
8. The method of claim 7, wherein performing the inverse crosstalk cancellation process on the plurality of color channels of the fifth intermediate image according to the inverse crosstalk matrix to obtain a sixth intermediate image comprises:
converting the fifth intermediate image from the first color space to a third color space to obtain a first converted image;
determining a chromaticity adjustment factor according to a ratio of the first color component in the third intermediate image to the fourth intermediate image;
According to the chromaticity adjusting factor, performing chromaticity smooth adjustment on the first converted image to obtain an adjusted image;
Converting the adjustment image from the third color space to the first color space to obtain a second conversion image;
and performing inverse crosstalk elimination processing on a plurality of color channels of the second conversion image according to the inverse crosstalk matrix to obtain the sixth intermediate image.
9. An image processing apparatus, comprising:
an acquisition module, configured to acquire an encoded image, wherein a file structure of the encoded image comprises a first captured image in a standard dynamic range SDR format and an additional graph, the additional graph being used for indicating the color mapping relation between the SDR format of the first captured image and a high dynamic range HDR format;
The decoding module is used for decoding the coded image to obtain the first shooting image and the additional image;
a rendering module for performing image rendering according to the additional graph and the first photographed image to display a second photographed image in the HDR format;
the additional graph is obtained by adopting the following modules:
a determining module, configured to determine the pixel value gain of each pixel point according to the pixel value of each pixel point in the first captured image and the pixel value of the corresponding pixel point in the second captured image, wherein the pixel value gain comprises gain values of a plurality of color channels, the gain values being used for indicating the color mapping relation of the corresponding color channels;
an encoding module, configured to convert the gain values of each pixel point in the plurality of color channels into log domain codes, determine the color mapping relation of each pixel point in the plurality of color channels according to the log domain codes of each pixel point in the plurality of color channels and the lowest brightness and the highest brightness supported by a display, generate an initial additional graph according to the color mapping relation of each pixel point on the plurality of color channels, wherein each pixel point in the initial additional graph is used for indicating the color mapping relation of the corresponding pixel points in the first captured image and the second captured image on the plurality of color channels, and perform downsampling processing on the initial additional graph to obtain the additional graph.
10. The apparatus of claim 9, wherein the additional map is embedded after the first captured image, the rendering module further to:
and in response to not having a decoding function for the encoded image, performing image rendering according to the encoded image to display a first photographed image in the SDR format.
11. The apparatus of claim 9, wherein the second captured image is obtained using the following module:
the conversion module is used for responding to shooting operation, acquiring the first shooting image and converting the first shooting image into a linear domain to obtain a first intermediate image;
a crosstalk elimination module, configured to perform crosstalk elimination processing on a plurality of color channels of the first intermediate image according to a crosstalk matrix to obtain a second intermediate image, wherein the crosstalk matrix is determined according to the color coupling degree among the plurality of color channels;
and the mapping module is used for carrying out inverse tone mapping processing on the second intermediate image so as to obtain the second shooting image.
12. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method according to any one of claims 1 to 8 when the program is executed.
13. A non-transitory computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the steps of the method of any of claims 1 to 8.
14. A chip, comprising an interface circuit and a processing circuit coupled to each other, the interface circuit being used for inputting or outputting signals, and the processing circuit being configured to implement the method of any of claims 1 to 8.
15. A computer program product comprising a computer program which, when executed by a processor, implements the steps of the method of any one of claims 1 to 8.
GR01 Patent grant