
CN120475272A - Pixel, image sensor, imaging system, operating method, device and medium - Google Patents

Pixel, image sensor, imaging system, operating method, device and medium

Info

Publication number
CN120475272A
CN120475272A
Authority
CN
China
Prior art keywords
spatial
pixel
incident light
signals
calibration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202510886893.2A
Other languages
Chinese (zh)
Inventor
维克多·连钦科夫
Current Assignee
Shenzhen Ruishi Zhixin Technology Co ltd
Original Assignee
Shenzhen Ruishi Zhixin Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Ruishi Zhixin Technology Co ltd filed Critical Shenzhen Ruishi Zhixin Technology Co ltd
Priority to CN202510886893.2A
Publication of CN120475272A
Pending legal-status Critical Current

Landscapes

  • Solid State Image Pick-Up Elements (AREA)

Abstract

The present application relates to a pixel, an image sensor, an imaging system, an operating method, an apparatus, and a medium, wherein the pixel comprises: a photoelectric conversion module comprising n photoelectric converters arranged adjacent to each other in an array, wherein n is a positive integer greater than or equal to 3; an asymmetric optical module disposed on the photoelectric conversion module, wherein the asymmetric optical module is configured to focus and diffract incident light to produce a three-dimensional asymmetric light intensity pattern, thereby recording n different spatial signals in a one-to-one correspondence on the n photoelectric converters in the photoelectric conversion module; wherein the pixel is configured to determine the source of the n spatial signals based on the spatial correlation between the n spatial signals in the photoelectric conversion module to eliminate interference from internal noise. This method helps reduce the impact of noise on image quality and improve image quality by performing noise analysis on the spatial correlation of the spatial signals of the photoelectric converters.

Description

Pixel, image sensor, imaging system, operating method, device and medium
Technical Field
The present application relates to the field of optoelectronic technology, and in particular, to a pixel, an image sensor, an imaging system, an operating method, an apparatus, and a medium.
Background
In a conventional image sensor, each pixel includes at least a microlens, a filter, and a photoelectric converter (e.g., a photodiode). The microlens converges light so that it passes efficiently through the filter and onto the photoelectric converter, which generates charge or current in response to the incident light. During photoelectric conversion, however, the electrical signal is inevitably disturbed by noise, which degrades image quality and subsequent processing.
Disclosure of Invention
Embodiments of the present application provide a pixel, an image sensor, an imaging system, an operating method, an apparatus, and a medium to solve the above-described problems.
In order to achieve the above object, according to a first aspect of the present application, there is provided a pixel including:
a photoelectric conversion module comprising n photoelectric converters arranged adjacent to one another in an array, where n is a positive integer greater than or equal to 3; and
an asymmetric optical module disposed on the photoelectric conversion module, the asymmetric optical module being configured to focus and diffract incident light to produce a three-dimensional asymmetric light intensity pattern, thereby recording n different spatial signals on the n photoelectric converters in one-to-one correspondence;
wherein the pixel is configured to determine the source of the n spatial signals from the spatial correlation between them in the photoelectric conversion module, so as to exclude interference from internal noise.
Optionally, the pixel is further configured to, with internal noise interference excluded, decompose the incident light into n calibration color bands, and to analyze and calibrate the incident light using pre-calibration parameters (obtained by pre-calibrating the n calibration color bands) together with the n spatial signals, so as to obtain the spectral component of the incident light in each calibration color band.
Optionally, the pixel is further configured to, with internal noise interference excluded, sum the n spatial signals to obtain a spatial superposition signal and detect brightness changes of the incident light from the spatial superposition signal.
Optionally, the asymmetric optical module includes:
a background structural layer composed of a first material having a first refractive index;
A diffraction structure layer embedded in the background structure layer, the diffraction structure layer comprising a plurality of elements constructed of a second material having a second refractive index to focus and diffract the incident light, the first refractive index being lower than the second refractive index.
Optionally, the first material and the second material are inorganic materials.
Optionally, the elements include at least two diffraction cylinders of different sizes and at least two diffraction ring cylinders of different sizes, the diffraction cylinders and diffraction ring cylinders being staggered and asymmetrically arranged so that the diffraction structure layer diffracts the incident light asymmetrically.
According to a second aspect of the present application, an embodiment of the present application further provides an image sensor, the image sensor including a plurality of pixels arranged in an array, the pixels including:
a photoelectric conversion module comprising n photoelectric converters arranged adjacent to one another in an array, where n is a positive integer greater than or equal to 3; and
an asymmetric optical module disposed on the photoelectric conversion module, the asymmetric optical module being configured to focus and diffract incident light to produce a three-dimensional asymmetric light intensity pattern, thereby recording n different spatial signals on the n photoelectric converters in one-to-one correspondence;
wherein the pixel is configured to determine the source of the n spatial signals from the spatial correlation between them in the photoelectric conversion module, so as to exclude interference from internal noise.
Optionally, the image sensor includes pixels of at least two different structures, and the pixels in the image sensor are arranged in a regular array.
Optionally, the pixel is further configured to, with internal noise interference excluded, decompose the incident light into n calibration color bands, and to analyze and calibrate the incident light using pre-calibration parameters (obtained by pre-calibrating the n calibration color bands) together with the n spatial signals, so as to obtain the spectral component of the incident light in each calibration color band.
Optionally, the pixel is further configured to, with internal noise interference excluded, sum the n spatial signals to obtain a spatial superposition signal and detect brightness changes of the incident light from the spatial superposition signal.
Optionally, the pixel further comprises a back-illuminated silicon substrate; the photoelectric conversion module is disposed on the back-illuminated silicon substrate, and a trench is formed in the substrate around the photoelectric conversion module.
According to a third aspect of the present application, an embodiment of the present application further provides an imaging system including:
an imaging lens for converging and imaging light from an object to form incident light;
an image sensor for focusing and diffracting the incident light by means of its pixels to produce a three-dimensional asymmetric light intensity pattern, recording n different spatial signals on the n photoelectric converters within each pixel in one-to-one correspondence, where n is a positive integer greater than or equal to 3;
wherein the pixel is configured to:
determine the sources of the n spatial signals from the spatial correlation between them, so as to exclude interference from internal noise;
with internal noise interference excluded, decompose the incident light into n calibration color bands, and analyze and calibrate the incident light using pre-calibration parameters obtained by pre-calibrating the n calibration color bands together with the n spatial signals, to obtain the spectral component of the incident light in each calibration color band;
and/or, with internal noise interference excluded, sum the n spatial signals to obtain a spatial superposition signal and detect brightness changes of the incident light from the spatial superposition signal.
According to a fourth aspect of the present application, an embodiment of the present application further provides a method for operating an image sensor, applied to an image sensor composed of a plurality of pixels, the method including:
acquiring an imaging mode of the image sensor, the imaging mode comprising a spectral imaging mode, an event imaging mode, and a fusion imaging mode;
based on the imaging mode, controlling the image sensor to acquire the corresponding incident light pixel by pixel, obtaining n different spatial signals for each pixel;
judging the source of the spatial signals pixel by pixel, so as to exclude interference from internal noise;
with interference from the internal noise excluded, detecting pixel by pixel, based on the imaging mode, the color of the incident light from the n spatial signals within the pixel and/or the brightness variation of the incident light from the n spatial signals within the pixel;
Wherein the pixel includes:
a photoelectric conversion module comprising n photoelectric converters arranged adjacent to one another in an array, where n is a positive integer greater than or equal to 3; and
an asymmetric optical module disposed on the photoelectric conversion module, the asymmetric optical module being configured to focus and diffract the incident light to produce a three-dimensional asymmetric light intensity pattern, recording n different spatial signals on the n photoelectric converters in one-to-one correspondence.
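The mode-dependent dispatch described in the method steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation; all names (`ImagingMode`, `process_pixel`, the detector callables) are ours.

```python
from enum import Enum


class ImagingMode(Enum):
    SPECTRAL = "spectral"   # color detection only
    EVENT = "event"         # brightness-change detection only
    FUSION = "fusion"       # both detections combined


def process_pixel(mode, spatial_signals, detect_color, detect_brightness):
    """Run the per-pixel detection(s) selected by the imaging mode.

    `detect_color` and `detect_brightness` stand in for the color and
    brightness-variation detectors applied to the pixel's n spatial
    signals after internal noise has been excluded.
    """
    if mode is ImagingMode.SPECTRAL:
        return {"color": detect_color(spatial_signals)}
    if mode is ImagingMode.EVENT:
        return {"brightness": detect_brightness(spatial_signals)}
    # fusion mode: run both detections on the same spatial signals
    return {"color": detect_color(spatial_signals),
            "brightness": detect_brightness(spatial_signals)}
```

In fusion mode the same n spatial signals feed both detectors, which matches the "and/or" phrasing of the claims.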
Optionally, judging the source of the spatial signals pixel by pixel to exclude interference from internal noise includes:
for each pixel, acquiring the corresponding n different spatial signals;
for each pixel, analyzing the spatial correlation between the n spatial signals from their signal values;
for each pixel, determining the source of the n spatial signals from the spatial correlation;
for each pixel whose n spatial signals are found to originate from the internal noise, removing those n spatial signals and/or applying compensation correction to them, so as to exclude interference from the internal noise.
Optionally, the spatial correlation includes a first correlation, a second correlation, and a third correlation, and analyzing, for each pixel, the spatial correlation between the n spatial signals from their signal values includes:
for each pixel, constructing a spatial signal matrix from the signal values of the n spatial signals;
for each pixel, analyzing the spatial correlation between the n spatial signals from the values and distribution pattern of the entries in the spatial signal matrix;
wherein, when the n signal values in the spatial signal matrix are all non-zero and all distinct, the n spatial signals exhibit the first correlation;
when at least one row or column of the spatial signal matrix consists of identical non-zero signal values, the n spatial signals exhibit the second correlation;
when the n signal values in the spatial signal matrix are partly zero, partly non-zero, and randomly distributed, the n spatial signals exhibit the third correlation.
Optionally, determining, for each pixel, the source of the n spatial signals from the spatial correlation includes:
when the n spatial signals exhibit the first correlation, determining that they originate from the incident light;
when the n spatial signals exhibit the second correlation, determining that they originate from the internal noise, the internal noise being row-stripe or column-stripe noise;
when the n spatial signals exhibit the third correlation, determining that they originate from the internal noise, the internal noise being random noise.
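The three-way classification above can be sketched for an n = 4 pixel whose signals form a 2×2 spatial signal matrix. This is a minimal sketch under our own naming (`classify_signal_source` and the returned labels are not from the patent), and it assumes noiseless signal values so that exact zero/equality tests suffice:

```python
import numpy as np


def classify_signal_source(m: np.ndarray) -> str:
    """Classify a pixel's spatial signal matrix into one of the three
    correlations described above and return the inferred signal source."""
    flat = m.flatten()
    # First correlation: all n values non-zero and all distinct -> incident light.
    if np.all(flat != 0) and len(np.unique(flat)) == flat.size:
        return "incident_light"
    # Second correlation: some row or column is one identical non-zero value
    # -> row-stripe or column-stripe fixed-pattern noise.
    rows_const = any(len(set(r)) == 1 and r[0] != 0 for r in m.tolist())
    cols_const = any(len(set(c)) == 1 and c[0] != 0 for c in m.T.tolist())
    if rows_const or cols_const:
        return "stripe_noise"
    # Third correlation: a random mix of zero and non-zero entries -> random noise.
    if np.any(flat == 0) and np.any(flat != 0):
        return "random_noise"
    return "undetermined"
```

Real signals carry readout noise, so a practical version would replace the exact comparisons with tolerance thresholds.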
Optionally, detecting the color of the incident light from the n spatial signals within the pixel includes:
for each pixel, selecting n calibration color bands into which the incident light is decomposed;
for each pixel, performing pre-calibration on the n calibration color bands to obtain pre-calibration parameters;
for each pixel, acquiring the n different spatial signals obtained by photoelectric acquisition of the incident light;
for each pixel, analyzing and calibrating the incident light with the pre-calibration parameters and the n spatial signals, and computing the spectral component of the incident light in each calibration color band.
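The analysis-and-calibration step can be sketched as a linear unmixing. Here the pre-calibration parameters are modeled as an n×n matrix `A` whose column j holds the n spatial signals measured when calibration band j alone illuminates the pixel, and `s` is the spatial signal vector measured for the unknown incident light; solving A·c = s recovers the fraction `c[j]` of each band. The names (`A`, `s`, `unmix_spectrum`) and the least-squares formulation are our assumptions, not the patent's wording:

```python
import numpy as np


def unmix_spectrum(A: np.ndarray, s: np.ndarray) -> np.ndarray:
    """Recover the color band fraction vector c from A @ c = s.

    A -- n x n calibration coefficient matrix from pre-calibration
    s -- measured n-element spatial signal vector for the unknown light
    """
    # Least squares rather than an exact inverse, so small measurement
    # noise in s does not make the system unsolvable.
    c, *_ = np.linalg.lstsq(A, s, rcond=None)
    return c
```

For a well-conditioned `A` this reduces to `c = A⁻¹ s`; a singular or near-singular `A` would indicate poorly chosen calibration bands.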
Optionally, detecting the brightness variation of the incident light from the n spatial signals within the pixel includes:
summing the n spatial signals to obtain a spatial superposition signal;
detecting the brightness variation of the incident light from the spatial superposition signal.
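A minimal sketch of these two steps follows. The summation matches the text; the thresholded frame-to-frame comparison in `brightness_event` is our assumption about how "detecting brightness variation" might be realized, since the patent text does not specify a detection rule:

```python
import numpy as np


def spatial_superposition(signals) -> float:
    """Sum a pixel's n spatial signals into one spatial superposition signal."""
    return float(np.sum(signals))


def brightness_event(prev_sum: float, curr_sum: float, threshold: float) -> bool:
    """Flag a brightness-change event when the superposition signal moves
    by more than `threshold` between two readouts (threshold is ours)."""
    return abs(curr_sum - prev_sum) > threshold
```

Because the n signals come from the same diffracted incident light, summing them raises the event signal above the per-converter level, which is what improves sensing of distant, dim objects.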
According to a fifth aspect of the present application, an embodiment of the present application further provides an optoelectronic device, including:
A memory having a computer program stored thereon;
a processor for executing the computer program in the memory to implement the steps of the method according to any one of the embodiments of the present application.
According to a sixth aspect of the application, the embodiments of the application also provide a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the method of any of the embodiments of the application.
By analyzing the spatial correlation between the spatial signals within each pixel, the method and device of the present application can accurately identify and quantify internal noise sources of the image sensor, such as dark current noise, and exclude their interference. Spatial superposition signals produced by the superposition of noise intensities are rejected, the intensity of the external light source is accurately distinguished, and the accuracy of both brightness detection and color detection of the incident light is improved. This enhances the sensing of distant objects and provides strong support for high-performance imaging by the image sensor in complex environments.
Additional features and advantages of the application will be set forth in the detailed description which follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the description of the embodiments will be briefly described below. It is evident that the drawings in the following description are only some embodiments of the application and that other drawings may be obtained from these drawings without inventive effort for a person skilled in the art.
For a more complete understanding of the present application and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which like reference numerals represent like parts throughout the following description.
Fig. 1 is a schematic view of a pixel structure in a conventional image sensor;
fig. 2 is a schematic structural diagram of a conventional RGGB image sensor;
FIG. 3 is a graph of the quantum efficiency (QE) curves of a monochromatic image sensor and a conventional RGGB image sensor over the visible spectrum;
FIG. 4 is a schematic diagram of a related art imaging system for sensing light from a remote object point;
FIG. 5 is a schematic diagram of a related art imaging system;
FIG. 6 is a graph of the chief ray angle distribution of an imaging lens of an imaging system in the related art;
FIG. 7 is a schematic diagram of a process flow of a conventional RGGB image sensor;
FIG. 8 is a schematic diagram of a pixel structure in an alternative embodiment of the application;
FIG. 9 is a three-dimensional view of a pixel in an alternative embodiment of the application;
FIG. 10 is a diffraction and self-interference diagram of an imaging system in an alternative embodiment of the present application;
FIG. 11 is a schematic diagram of imaging noise of an image sensor and an external light source by a pixel in an alternative embodiment of the application;
FIG. 12 is a schematic diagram of an imaging system imaging image sensor noise and external light sources in an alternative embodiment of the application;
FIG. 13 is a schematic diagram of an imaging flow of an imaging system with noise identification in an alternative embodiment of the application;
FIG. 14 is a schematic diagram of a matrix determinant of spatial signals of response to monochromatic band calibration with 610nm red light in an alternative embodiment of the present application;
FIG. 15 is a schematic diagram of a matrix determinant of spatial signals of response to monochromatic band calibration with 550nm green light in an alternative embodiment of the present application;
FIG. 16 is a schematic diagram of a matrix determinant of spatial signals of response to monochromatic band calibration with 490nm green light in an alternative embodiment of the present application;
FIG. 17 is a schematic diagram of a matrix determinant of spatial signals of response to monochromatic band calibration with 430nm blue light in an alternative embodiment of the present application;
FIG. 18 is a schematic diagram of noise identification based on a 2×2 spatial signal matrix in an alternative embodiment of the application;
FIG. 19 is a schematic diagram of noise identification based on a 2×2 spatial signal matrix in an alternative embodiment of the application;
FIG. 20 is a schematic diagram of asymmetric diffraction of an asymmetric optical module for different incident lights in an alternative embodiment of the present application;
FIG. 21 is a schematic illustration of 4 calibration bands selected in an alternative embodiment of the present application;
FIG. 22 is a schematic diagram of a calibration flow for pixels in an alternative embodiment of the application;
FIG. 23 is a flow chart of a method of operating an image sensor in an alternative embodiment of the application;
FIG. 24 is a schematic diagram of an imaging system in an alternative embodiment of the application;
FIG. 25 is a graph of the chief ray angle distribution of an imaging lens of an imaging system according to an alternative embodiment of the present application;
Fig. 26 is a schematic structural diagram of an optoelectronic device according to an alternative embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present application. It will be apparent that the described embodiments are only some, but not all, embodiments of the application. All other embodiments, which can be made by a person skilled in the art without any inventive effort, are intended to be within the scope of the present application based on the embodiments of the present application.
In order to facilitate understanding of the embodiments provided in the examples of the present application, description will be made of the pixel, the image sensor, the imaging system, the operation method of the image sensor, the optoelectronic device, and the relevant application background of the computer-readable storage medium provided in the examples of the present application.
As shown in fig. 1, in a conventional image sensor, each pixel includes at least a microlens, a filter, and a photoelectric converter (e.g., a photodiode). The microlens converges light, ensuring that it passes efficiently through the filter and onto the photoelectric converter, which improves the converter's light collection efficiency. The photoelectric converter generates charge or current in response to the incident light. To produce a color image, each pixel is provided with one of a red, green, or blue filter, which transmits the corresponding color band and blocks the remaining light in the visible spectrum. For example, the filters may be repeated across the image sensor in the RGGB pattern (also referred to as the Bayer pattern) shown in fig. 2.
Since each filter transmits only a narrow spectral band and blocks the remaining light, only the one color band signal transmitted by the filter can be recorded at any given photoelectric converter or pixel location. In an alternative embodiment of the present application, a monochromatic image sensor without any color filters is compared with a conventional RGGB image sensor, yielding the quantum efficiency (QE) curves over the visible spectrum shown in fig. 3. As can be seen from fig. 3, the spectral filtering of the conventional RGGB image sensor reduces broadband white-light sensitivity by roughly 3-4 times relative to a monochromatic image sensor without color filters.
Meanwhile, to infer the blocked color signals at any given pixel position in an RGGB image sensor, the signals of neighboring pixels of the corresponding colors must be interpolated (a process also referred to as demosaicing), which reduces the spectral resolution of the color image sensor.
Furthermore, in the event imaging mode of operation, the photosensitivity of the image pixels is particularly important: event detection evaluates the temporal variation of the acquired signal to determine the continuity of an event, and high photosensitivity means that, for any given illumination level, no signal is lost to absorption by pixel filters. A conventional microlens delivers only unstructured light intensity information to the photoelectric converter. As shown in fig. 4, when an object is at a long distance (e.g., 50 meters or more), a conventional pixel microlens has a low probability of collecting light from the remote object point, making dark signals particularly difficult to distinguish from bright ones. Background noise and sensor-intrinsic noise (e.g., signal readout noise and dark current noise) are then relatively high, and the weak collected light signal is hard to distinguish from them. In this case, a more robust and versatile method is desirable to verify the source of a signal, effectively determining whether it is external incident light or sensor-internal noise, so that interference from the sensor's internal noise can be excluded.
As shown in figs. 5-6, in many existing application scenarios, imaging systems are equipped with multi-element imaging lenses that deliver the imaged incident light to the image sensor at very high angles (chief ray angles of up to 35 degrees). In related-art image sensors, the pixel stack between the top surface of the microlens and the photosensitive substrate interface is 2-4 μm high, so light incident on the microlens at a high angle is focused by the microlens and undergoes a large lateral displacement while propagating through the pixel stack. Therefore, in each pixel, the microlens and filter must be shifted relative to the photoelectric converter so that light is collected into the appropriate photoelectric converter.
In particular, as can be seen from figs. 5 and 6, the shift of the microlenses and filters relative to the photoelectric converters depends on many factors, such as the pixel stack height, the number of pixel stack layers, the optical properties of the materials, the pixel size, the chief ray angle distribution of the imaging lens, and the focal length f of the imaging lens. Designing these offsets for a particular combination of imaging lens, microlenses, and filters in an image sensor is therefore often a very lengthy and expensive process. Any modification of the imaging lens design, pixel size, or pixel stack height also incurs additional cost, because the manufacturing process changes substantially and the new process must be verified for the extremely high stability needed to produce high-yield parts.
Therefore, reducing the pixel stack, and ideally eliminating the need to shift any part of it, would significantly reduce the cost of the image sensor. During imaging system assembly, the multi-element imaging lens must be aligned in three dimensions with the image sensor array so that the optical centers of the imaging lens and the image sensor coincide laterally. It is also critical to prevent the imaging lens from tilting relative to the sensor plane, so that an image sensor with offset microlenses and filters, aligned with the multi-element imaging lens, senses the correct incident angle of the optical signal. It is therefore desirable to provide an image sensor that is immune to misalignment between the imaging lens and the image sensor, reducing process complexity during subsequent imaging system assembly.
In addition, the deposition of non-CMOS organic polymer microlenses and color filters requires extra non-CMOS production equipment to place absorptive filters on the image sensor array. In an alternative embodiment of the present application, the number of process steps, the production cycle, and the production yield of an existing RGGB image sensor are shown in fig. 7.
As shown in fig. 7, adding non-CMOS processing lengthens the image sensor production flow, and since the yield of non-CMOS production equipment is low, it also increases the cost of the image sensor. Moreover, conventional polymer-based absorptive filters are subject to weathering and have a shorter lifetime than the inorganic materials used in CMOS production facilities. It is therefore desirable to bring the color imaging functionality of the image sensor into the CMOS fabrication process, reducing its cost and increasing its yield.
In view of the above, embodiments of the present application provide an image sensor based on a pure CMOS process, designed pixel by pixel. Within each pixel, an asymmetric optical module shared by n photoelectric converters is built from inorganic materials with at least two different refractive indices; the module focuses and diffracts incident light to produce a three-dimensional asymmetric light intensity pattern, from which different spatial signals are recorded on the n photoelectric converters. The three-dimensional asymmetric light intensity pattern arises from diffraction at the asymmetric optical module followed by self-interference, and this interference property makes the detected spatial signals highly correlated: within each pixel, the multiple spatial signals acquired from the incident light exhibit high spatial correlation.
The internal noise of the sensor falls mainly into two categories: random noise and fixed-pattern noise. Random noise, such as dark current noise and thermal noise, appears randomly, is irregular, and cannot be reproduced. When the spatial signals in a pixel originate not from external incident light but from random noise inside the sensor, there is no spatial correlation between them: some spatial signals carry values while others carry none, in a random distribution. Fixed-pattern noise, by contrast, is reproducible and usually appears as stripe-shaped or region-shaped brightness variations (stripe noise or regional noise). When the spatial signals in a pixel originate not from external incident light but from fixed-pattern noise inside the sensor, a certain spatial correlation exists between them: either some spatial signals carry values and others do not, or all of them carry values, with multiple valued spatial signals appearing along rows, along columns, or in regions, and with consistent signal values.
That is, under the pixel structure of the present application, when the spatial signals in a pixel originate from the three different signal sources — external incident light, random noise inside the sensor, and fixed-pattern noise inside the sensor — the spatial correlations between the corresponding spatial signals differ greatly. Within each pixel, the source of the spatial signals can therefore be identified from the spatial correlation between them, determining whether they originate from external incident light, from random noise inside the sensor, or from fixed-pattern noise inside the sensor, so that interference from the sensor's internal noise can be excluded.
With internal-noise interference eliminated, consider the detection of brightness changes of the external incident light. Since the plurality of spatial signals in a pixel are obtained by photoelectric sensing after the same incident light undergoes asymmetric focusing diffraction, they are highly spatially correlated. The spatial signals in the pixel can therefore be directly superposed and combined into a spatial superposition signal of higher intensity, which serves as an event sensing signal for detecting brightness changes of the incident light, improving the event sensing capability for remote objects.
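The superposition-then-threshold idea above can be sketched as follows. The relative-change threshold is an assumed illustrative value; real event pixels typically compare logarithmic voltages, which this sketch abstracts away.

```python
import numpy as np

def detect_event(prev_signals, curr_signals, threshold=0.15):
    """Sum a pixel's n spatial signals into one spatial superposition
    signal, then report a brightness-change event when the relative
    change between two readouts exceeds a threshold (value assumed)."""
    prev_sum = float(np.sum(prev_signals))
    curr_sum = float(np.sum(curr_signals))
    change = (curr_sum - prev_sum) / max(prev_sum, 1e-12)
    if change > threshold:
        return "brighter"
    if change < -threshold:
        return "darker"
    return None  # no event
```

Because the n signals come from the same diffracted wavefront, summing them raises the signal level roughly n-fold while uncorrelated per-converter noise grows more slowly, which is the stated benefit for dim, remote objects.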
With internal-noise interference eliminated, consider color sensing of the external incident light. The n spatial signals obtained in one-to-one correspondence by the n photoelectric converters sharing the same asymmetric optical module in a pixel differ from one another and together form a characteristic pattern of a specific incident light color and spectrum. A narrow spectral band (color band for short) of each color thus produces a distinct distribution of n spatial signal components on the n photoelectric converters. Accordingly, n representative calibration color bands can be selected within the visible spectrum, and incident light of unknown color can be expanded and decomposed into the sum of components of the n calibration color bands. Based on the spatial signal responses of the n photoelectric converters to the decomposed n calibration color bands, an n-element linear equation set is constructed, composed of a calibration coefficient matrix, a color band fraction vector, and a spatial signal vector; the calibration coefficient matrix can be obtained by pre-calibrating the independent spatial signal response of each calibration color band.
Therefore, on the basis of the n-element linear equation set, each color band component of the incident light can be solved from the calibration coefficient matrix obtained by pre-calibration and the n spatial signals measured in real time, resolving the incident light into spectral components corresponding to the n calibration color bands and realizing both brightness detection and color measurement of the incident light. The combined technical means of the asymmetric optical module and polychromatic band calibration replaces the microlenses and optical filters in existing pixels, reducing the production cost and process complexity of the image sensor while improving the sensitivity and spectral resolution of incident light detection.
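The n-element linear equation set can be written as A·x = s, where column j of the calibration coefficient matrix A holds the n pre-calibrated spatial-signal responses to calibration band j, x is the color band fraction vector, and s is the measured spatial signal vector. A minimal sketch with illustrative (assumed, not calibrated) matrix values:

```python
import numpy as np

# Calibration coefficient matrix A (assumed illustrative values): column j
# holds the n spatial-signal responses pre-calibrated for calibration band j
# (here n = 4 bands, e.g. R, G1, G2, B).
A = np.array([
    [0.9, 0.3, 0.2, 0.1],
    [0.2, 0.8, 0.3, 0.2],
    [0.1, 0.3, 0.9, 0.3],
    [0.1, 0.2, 0.2, 0.8],
])

def resolve_spectrum(spatial_signals):
    """Solve A @ x = s for the band fraction vector x, resolving the
    incident light into components under each calibration color band."""
    s = np.asarray(spatial_signals, dtype=float)
    # Least squares tolerates measurement noise better than a direct inverse.
    x, *_ = np.linalg.lstsq(A, s, rcond=None)
    return np.clip(x, 0.0, None)  # physical band fractions are non-negative

# Signals produced by a pure first band should recover mostly x[0].
x = resolve_spectrum(A @ np.array([1.0, 0.0, 0.0, 0.0]))
```

Solving per pixel in this way yields both color (the relative band fractions) and brightness (their sum) from one readout, with no cross-pixel interpolation.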
Here n is a positive integer greater than or equal to 3, and representative calibration color bands are selected across the whole visible spectrum. For example, representative calibration color bands can be selected from the narrow spectral bands corresponding to different colors; at least three calibration color bands of red, green, and blue are exemplary, i.e., the selected calibration color bands at least include a red spectral band, a green spectral band, and a blue spectral band, so as to meet the requirement of expanding and decomposing incident light of unknown color.
First, an embodiment of the present application provides a pixel, including:
The photoelectric conversion module comprises n photoelectric converters which are adjacently arranged in an array form, wherein n is a positive integer greater than or equal to 3;
The asymmetric optical module is arranged on the photoelectric conversion module and is used for carrying out focusing diffraction on incident light to generate a three-dimensional asymmetric light intensity pattern, and n different space signals are formed by recording on n photoelectric converters in the photoelectric conversion module in a one-to-one correspondence manner;
Wherein the pixel is configured to determine the source of the n spatial signals based on the spatial correlation between the n spatial signals in the photoelectric conversion module to exclude interference of internal noise.
It should be noted that the three-dimensional asymmetric light intensity pattern means that, after the incident light is asymmetrically focused and diffracted by the asymmetric optical module, a three-dimensional asymmetric light intensity pattern is formed in the three-dimensional space above the photoelectric converters. This pattern is projected and mapped onto the photosensitive plane of the photoelectric converters, forming a two-dimensional asymmetric light intensity distribution; the light intensity collected by photoelectric converters at different spatial positions in the photosensitive plane differs, and n spatial signals of different magnitudes are then recorded on the n photoelectric converters in one-to-one correspondence based on the photoelectric effect.
As shown in fig. 8-9, the value of n is 4, the asymmetric optical module is disposed on the photoelectric conversion module formed by 4 photoelectric converters, and the 4 photoelectric converters in the photoelectric conversion module are disposed in a 2×2 regular array, and the asymmetric optical module focuses and diffracts the incident light to generate a three-dimensional asymmetric light intensity pattern, so that 4 different spatial signals are formed on the 2×2 photoelectric converters in the photoelectric conversion module in a one-to-one correspondence manner.
In some embodiments, the asymmetric optical module comprises:
A background structural layer composed of a first material having a first refractive index;
A diffraction structure layer embedded in the background structure layer, the diffraction structure layer comprising a plurality of elements of a second material having a second refractive index for focusing and diffracting incident light, the first refractive index being lower than the second refractive index.
It can be appreciated that the geometry, dimensions, material properties, and arrangement of the components can be flexibly designed according to practical requirements to achieve asymmetric focused diffraction of incident light. The incident light may be the emitted light directly emitted by the light source, or the reflected light after the light emitted by the light source irradiates the object, which is not described herein.
In some embodiments, the components may be cylindrical, rectangular, V-shaped, ring-shaped, etc.
In some embodiments, the components may be arranged in a periodic, quasi-periodic, or random manner.
In some embodiments, the components include at least two diffraction cylinders of different sizes and at least two diffraction annular cylinders of different sizes, and the plurality of diffraction cylinders are staggered and asymmetrically disposed with the plurality of diffraction annular cylinders, so that the diffraction of the diffraction structure layer constitutes asymmetric diffraction of the incident light.
As illustrated in fig. 8 to 9, the asymmetric optical module includes:
a background structural layer having a first refractive index;
an asymmetric diffractive structure layer having a second refractive index, the asymmetric diffractive structure layer being embedded in the background structure layer, and the second refractive index being different from the first refractive index.
As shown in fig. 8-9, the asymmetric diffraction structure layer is a transparent structure layer embedded at the bottom of the background structure layer. It comprises a plurality of diffraction cylinders and a plurality of diffraction annular cylinders, and the diffraction cylinders of different sizes and the diffraction annular cylinders of different sizes are arranged in a staggered manner, so that the asymmetric diffraction structure layer diffracts the incident light asymmetrically.
For example, as shown in fig. 8 to 9, the diameters of the plurality of diffraction cylinders are different from each other, the inner diameters and the outer diameters of the plurality of diffraction ring cylinders are different from each other, and the heights of the diffraction cylinders and the diffraction ring cylinders are smaller than the height of the background structure layer.
In some embodiments, the first material and the second material are inorganic materials.
It should be noted that the asymmetric optical module is made entirely of inorganic materials. Compared with conventional image sensors using organic materials, no non-CMOS process steps such as organic-material deposition are involved, so the whole image sensor can be manufactured in a CMOS wafer fab, reducing the cost of the image sensor. Meanwhile, a conventional image sensor with microlenses and optical filters has a relatively low yield of about 80% due to processing precision and the demanding requirements of organic materials, whereas the all-inorganic CMOS image sensor of the embodiment of the application has a very high yield, close to 100%. The compatibility between inorganic materials is also better, and eliminating non-CMOS materials further reduces the possibility of dark current, so the all-CMOS image sensor provides a longer service life and higher reliability.
The background structure layer may be made of a low-refractive-index material (i.e., the first material), such as silicon oxide with a refractive index of 1.46 or air with a refractive index of 1, and the asymmetric diffraction structure layer may be made of a high-refractive-index material (i.e., the second material), such as silicon nitride, titanium oxide, tantalum oxide, silicon carbide, or silicon, with a refractive index in the range of 1.8-4.
It will be appreciated that in other alternative embodiments the asymmetric optical module may be composed of three or more inorganic materials, which is not limited herein. For example, a first background layer with a lower refractive index, a second background layer with a medium refractive index, and an asymmetric diffraction structure layer with a higher refractive index may be nested in sequence from inside to outside; for another example, a first asymmetric diffraction structure layer with a medium refractive index and a second asymmetric diffraction structure layer with a higher refractive index may both be embedded in a first background layer with a lower refractive index, and so on.
In some embodiments, the photoelectric conversion module includes only one size of photoelectric converter, and n photoelectric converters in the photoelectric conversion module are arranged in a regular array.
As shown in fig. 8-9, the 4 photoelectric converters in the photoelectric conversion module corresponding to the pixel have the same size, are square photoelectric converters, the corresponding photosensitive surfaces are square, and the 4 photoelectric converters are arranged in a 2×2 regular array.
In some embodiments, the regular array arrangement may be a linear arrangement, a two-dimensional matrix arrangement, or the like.
As shown in fig. 8 to 9, the pixels outlined by the dashed lines in fig. 8 correspond to 4 photoelectric converters of the same size in the photoelectric conversion module and are arranged in a 2×2 two-dimensional matrix.
It is understood that the size and shape of the photoelectric converter in the pixel are not limited to square, and in other alternative embodiments, the size and shape of the photoelectric converter may be various shapes such as rectangle, diamond, triangle, etc., the corresponding photosensitive surfaces are designed in the shape of rectangle, diamond, triangle, etc., and the plurality of photoelectric converters corresponding to the same size may be linearly arranged along the horizontal direction, the vertical direction or other oblique directions, which is not limited herein.
In some embodiments, the photoelectric conversion module includes at least two different sizes of photoelectric converters, and n photoelectric converters in the photoelectric conversion module are arranged in an irregular array.
Illustratively, in one example the in-pixel photoelectric conversion module includes 1 first rectangular photoelectric converter and 2 second rectangular photoelectric converters, the 3 photoelectric converters being distributed in an irregular array; in another example, the in-pixel photoelectric conversion module includes 1 first rectangular photoelectric converter, 3 second rectangular photoelectric converters, and 3 square photoelectric converters, the 7 photoelectric converters being distributed in an irregular array.
It should be noted that the specific size or irregular arrangement of the individual photoelectric converters in the pixel may be customized according to specific requirements, for example, arranging larger-sized photoelectric converters in some areas to increase local sensitivity, or using smaller-sized photoelectric converters in other areas to achieve higher spatial resolution.
In some embodiments, the pixel further comprises a back-illuminated silicon substrate on which the photoelectric conversion module is disposed, and a trench is provided on the back-illuminated silicon substrate around the periphery of the photoelectric conversion module.
Illustratively, as shown in fig. 9, the pixel further includes a back-illuminated silicon substrate (BSI Si) on which the photoelectric conversion module is disposed, with the asymmetric optical module disposed on the photoelectric conversion module. The pixel further includes a cell deep trench isolation structure (Cell Deep Trench Isolation, CDTI) disposed in the back-illuminated silicon substrate and surrounding the 2×2 photoelectric converters within the photoelectric conversion module, so as to physically isolate the 2×2 photoelectric converters under each asymmetric optical module from the optical crosstalk of adjacent asymmetric optical modules.
The cell deep trench isolation structure may be made of a material such as silicon oxide, which is not limited herein.
In some embodiments, the pixel is further configured to, with internal-noise interference removed, expand and decompose the incident light with n calibration color bands, and to resolve the incident light using pre-calibration parameters obtained by pre-calibrating the n calibration color bands together with the n spatial signals, so as to obtain the spectral components of the incident light under each calibration color band. For more on the spectral components of the incident light under each calibration color band, see the relevant description below.
In some embodiments, the pixel is further configured to superpose and sum the n spatial signals with internal noise interference removed, to obtain a spatial superposition signal, and to detect a brightness change of the incident light based on the spatial superposition signal. For more on detecting brightness variations of the incident light, see the relevant description below.
Secondly, the embodiment of the application also provides an image sensor, which comprises a plurality of pixels arranged in an array;
The pixel includes:
The photoelectric conversion module comprises n photoelectric converters which are adjacently arranged in an array form, wherein n is a positive integer greater than or equal to 3;
The asymmetric optical module is arranged on the photoelectric conversion module and is used for carrying out focusing diffraction on incident light to generate a three-dimensional asymmetric light intensity pattern, and n different space signals are formed by recording on n photoelectric converters in the photoelectric conversion module in a one-to-one correspondence manner;
Wherein the pixel is configured to determine the source of the n spatial signals based on the spatial correlation between the n spatial signals in the photoelectric conversion module to exclude interference of internal noise.
As illustrated in fig. 8 to 9, the image sensor includes:
the photoelectric conversion array comprises a plurality of photoelectric conversion modules arranged in an array;
The asymmetric optical array, wherein the asymmetric optical array is arranged on the photoelectric conversion array and comprises a plurality of asymmetric optical modules arranged in an array, each asymmetric optical module being arranged on a corresponding photoelectric conversion module; the value of n is 4, each photoelectric conversion module comprises 4 photoelectric converters arranged in a 2×2 regular array, each asymmetric optical module focuses and diffracts the incident light to generate a three-dimensional asymmetric light intensity pattern, and 4 different spatial signals are recorded and formed on the 2×2 photoelectric converters in the corresponding photoelectric conversion module.
As illustrated in fig. 8 to 9, the photoelectric conversion array includes a plurality of photoelectric converters arranged in a regular array, the asymmetric optical array includes a plurality of asymmetric optical modules arranged in a regular array, one photoelectric conversion module is constituted per 2×2 photoelectric converters, and one pixel (may also be referred to as a pixel unit) is constituted per 2×2 photoelectric converters and one asymmetric optical module thereon, so that the image sensor includes a plurality of the above-described pixels arranged in a regular array.
In each pixel, the incident light forms a three-dimensional asymmetric light intensity pattern based on the asymmetric optical module, and 4 different spatial signals are formed on the 4 photoelectric converters in one-to-one correspondence. Combined with 4 monochromatic calibration color bands subsequently selected within the visible spectrum, brightness detection and color detection of the incident light can be effectively performed in each pixel, resolving the incident light into spectral components corresponding to the 4 calibration color bands.
It should be noted that, compared with a conventional RGGB sensor, this image sensor does not filter out most color components through a color filter and pass only the remaining single color band component to the photoelectric converter; instead, it transmits full-spectrum color band components based on the asymmetric optical module, so the photosensitivity of the image sensor in the visible spectrum range is improved by 2-3 times. Meanwhile, brightness detection and color detection of the incident light are performed based on the pre-calibrated polychromatic components and the actually measured spatial signals, without interpolation calculation based on adjacent pixels, which improves the spectral resolution of the image sensor.
It can be understood that, since the pixel in the embodiment of the present application can combine the asymmetric optical module and the multi-color band calibration technology to replace the microlens and the optical filter in the existing pixel, the pixel in the embodiment of the present application is not limited by the structural dimensions of the microlens and the optical filter, especially not limited by the structural dimensions of the microlens, and the corresponding pixel size can be adaptively reduced.
It should be noted that, in addition to the above photoelectric conversion array and the above asymmetric optical array, the image sensor further includes other structures such as a pixel processing circuit, a controller, and an image processor, and details can be found in the prior art, and are not repeated here.
In some embodiments, the image sensor includes only pixels of one configuration, that is, the configuration of each pixel in the image sensor is identical, and each pixel in the image sensor is arranged in a regular array.
As shown in fig. 8, the image sensor includes a plurality of pixels, each of which has the same structure, that is, the photoelectric conversion modules in each of the pixels have the same structure, and the asymmetric optical modules in each of the pixels have the same structure, and the pixels in the image sensor are arranged in a regular two-dimensional matrix.
In some embodiments, the image sensor comprises pixels of at least two different structures, i.e., the number and/or size of the photoelectric converters included in the at least two pixels differ, and/or the structures of the asymmetric optical modules within the at least two pixels differ. For example, at least one of the shape, size, and material of the components within different asymmetric optical modules is different.
Illustratively, in the image sensor n may have only one value while the array arrangement structures of the n photoelectric converters in at least two photoelectric conversion modules differ. For example, the number of photoelectric converters included in each pixel is the same and the structure of the asymmetric optical module in each pixel is the same, but the photoelectric conversion modules in some pixels are structured differently, i.e., the arrangement of the plurality of photoelectric converters in some pixels differs, with different sizes and arrangements of the corresponding photoelectric converters.
Illustratively, in the image sensor n may have at least two different values, so that the array arrangement structures of the n photoelectric converters in at least two photoelectric conversion modules differ. For example, when the number of photoelectric converters included in two pixels differs, the array arrangement structures of the plurality of photoelectric converters in the two pixels differ; even if the structure of the asymmetric optical module in each pixel is the same, under the premise that the photoelectric conversion modules have the same size, the different numbers of photoelectric converters in some pixels necessarily lead to different arrangement structures of the photoelectric converters in the corresponding photoelectric conversion modules.
Illustratively, in the image sensor n may have only one value and the array arrangement structure of the n photoelectric converters in each photoelectric conversion module may be the same, while the asymmetric optical modules in at least two pixels differ. For example, the asymmetric optical modules in some of the pixels shown in fig. 8 may be rotated 45° to the left in the vertical projection plane of the incident light; the focusing diffraction orientation of a rotated asymmetric optical module differs from that of an unrotated one, yielding two different asymmetric optical modules. Alternatively, the structure of the asymmetric optical modules in some of the pixels shown in fig. 8 may be changed directly by transforming the shape and material of the internal components.
In some embodiments, as shown in fig. 8, the number n of photoelectric converters in a pixel is 4, and a pixel is formed from a photoelectric conversion module composed of 2×2 photoelectric converters plus an asymmetric optical module. Correspondingly, 4 calibration color bands R, G1, G2, and B in total are uniformly selected within the visible spectrum range, and pre-calibration is performed based on the spatial signal responses to the individual monochromatic calibration color bands, so that incident light of unknown color can subsequently be resolved into spectral components corresponding to the 4 calibration color bands. This is suitable for an image sensor in which each photoelectric converter is square and the plurality of photoelectric converters are arranged in a regular array; the 4×6 photoelectric conversion array shown in fig. 8 comprises 24 photoelectric converters arranged in a 4×6 array, which can be configured into 2×3 pixels, with one asymmetric optical module formed on each group of 2×2 photoelectric converters.
In addition, the 4×6 photoelectric conversion array shown in fig. 8 may instead be configured into 1×2 pixels, with one asymmetric optical module formed on each group of 3×3 photoelectric converters to form one pixel; as the photoelectric conversion array expands, one asymmetric optical module may likewise be formed on each group of 4×4, 5×5, or N×N photoelectric converters to form one pixel.
Correspondingly, 3×3, 4×4, 5×5, or N×N calibration color bands are selected within the visible spectrum range, and the calibration color bands at least cover the three primary colors of red, green, and blue, so as to effectively meet the requirements of expanding, decomposing, and detecting incident light of various unknown colors. Here N is an integer greater than or equal to 2.
It can be understood that the more uniform the color distribution and the greater the number of the calibration color bands, the finer the color decomposition of the incident light, the higher the accuracy of the decomposition, and the more accurate the result, but the larger the corresponding amount of calculation; the choice can be made according to the actual situation.
The embodiment of the application also provides an operation method of the image sensor, which is applied to the image sensor formed by a plurality of pixels, and comprises the following steps:
S1, acquiring an imaging mode of the image sensor, wherein the imaging mode comprises a spectral imaging mode, an event imaging mode, and a fusion imaging mode;
S2, based on the imaging mode, controlling the image sensor to acquire the corresponding incident light in units of pixels to obtain n different spatial signals corresponding to each pixel;
S3, judging the source of the spatial signals in units of pixels so as to eliminate the interference of internal noise;
S4, with internal-noise interference eliminated, performing detection in units of pixels based on the imaging mode: detecting the color of the incident light according to the n spatial signals in the pixel, and/or detecting the brightness change of the incident light according to the n spatial signals in the pixel;
Wherein the pixel includes:
The photoelectric conversion module comprises n photoelectric converters which are adjacently arranged in an array form, wherein n is a positive integer greater than or equal to 3;
The asymmetric optical module is arranged on the photoelectric conversion module and is used for carrying out focusing diffraction on incident light to generate a three-dimensional asymmetric light intensity pattern, and n different space signals are formed on n photoelectric converters in the photoelectric conversion module in a one-to-one correspondence recording mode.
In the photoelectric conversion module, a spatial signal is the signal measured on each photoelectric converter from the three-dimensional asymmetric light intensity pattern formed by incident light of unknown color after asymmetric focusing diffraction by the asymmetric optical module. After the light intensity pattern is projected onto the photosensitive planes of the photoelectric converters in the pixel, the light intensity patterns on the respective photosensitive planes differ, and the magnitudes of the spatial signals formed by the corresponding photoelectric sensing differ accordingly, so the spatial signals reflect the spatial distribution characteristics of the incident light.
Because each pixel in the image sensor can be used both for detecting the color of the incident light and for detecting brightness changes of the incident light, in step S1 the imaging mode of the image sensor formed by the plurality of pixels is flexible and selectable: it can be a spectral imaging mode in which all working pixels detect the color of the incident light, an event imaging mode in which all working pixels detect brightness changes of the incident light, or a fusion imaging mode in which some working pixels detect brightness changes and the other working pixels detect the color of the incident light.
Working pixels are pixels that can enter a working state. In the various imaging modes of the image sensor, all pixels may serve as working pixels for image acquisition, or only the pixels in a partial area may serve as working pixels, which is not described herein.
When the detection purpose of a pixel differs, the corresponding photoelectric converter works in a different mode. When a pixel detects the incident light color, the photoelectric converter works in a first mode: the corresponding pixel circuit sequentially enters a reset-emptying state, an exposure-integration state, and a readout state; during exposure integration the photoelectric converter generates charge based on the incident light, and the acquired spatial signal is an integrated voltage. When a pixel detects incident light brightness changes, the photoelectric converter works in a second mode: the corresponding pixel circuit has only one working state, the photoelectric converter generates a current based on the incident light, and the acquired spatial signal is a logarithmic voltage.
Therefore, in step S2, the image sensor is controlled to collect the corresponding incident light in units of pixels based on the imaging mode, so that the photoelectric converters in pixels detecting the color of the incident light operate in the first mode, the photoelectric converters in pixels detecting brightness changes of the incident light operate in the second mode, and the n corresponding different spatial signals are collected for each pixel.
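The two acquisition modes can be illustrated with a toy readout model. All constants (integration time, floating-diffusion capacitance, offset voltage, thermal voltage, reference current) are assumed values for illustration only, not parameters from this application.

```python
import math

def acquire_spatial_signal(photocurrent_a, mode, t_int_s=1e-3, c_fd_f=2e-15,
                           v_offset=0.5, v_t=0.026, i_ref_a=1e-12):
    """Toy model (all constants assumed) of the two acquisition modes:
    - 'color' (first mode): reset -> exposure integration -> readout;
      the spatial signal is an integrated voltage Q/C = I * t / C.
    - 'event' (second mode): continuous operation; the spatial signal is
      a logarithmic voltage proportional to log(I / I_ref)."""
    if mode == "color":
        return photocurrent_a * t_int_s / c_fd_f
    if mode == "event":
        return v_offset + v_t * math.log(photocurrent_a / i_ref_a)
    raise ValueError("mode must be 'color' or 'event'")
```

The integrating mode gives a signal linear in light intensity (good for spectral decomposition), while the logarithmic mode compresses a wide dynamic range into a voltage whose changes track relative brightness changes (good for event sensing).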
In some embodiments, the step S3 of determining the source of the spatial signal in units of pixels to eliminate the interference of the internal noise further includes:
S31, acquiring the corresponding n different spatial signals for each pixel;
S32, for each pixel, analyzing the spatial correlation among the n spatial signals according to their signal values;
S33, for each pixel, determining the source of the n spatial signals according to the spatial correlation;
S34, for each pixel, in the case where the n spatial signals corresponding to the pixel originate from internal noise, removing the n spatial signals corresponding to the pixel and/or performing compensation correction on them to eliminate the interference of the internal noise.
In the photoelectric conversion module, the distribution of the spatial signal is affected by various factors such as the characteristics of the light source, the characteristics of the optical element, the characteristics of the photoelectric converter, noise, and the like. By analyzing the spatial correlation between the spatial signals of the plurality of photoelectric converters, the source of the spatial signals can be deduced.
The spatial correlation between the spatial signals refers to the correlation between the spatial signals output by different photoelectric converters in terms of numerical value, time characteristics or variation trend. For example, the signal values of the spatial signals of different photoelectric converters have a certain proportional relationship or linear relationship in numerical value. For another example, when the intensity or wavelength of the incident light changes, the spatial signals of different photoelectric converters show similar trend of change, and so on.
In some embodiments, correlation coefficients (e.g., via cross-correlation functions) of the plurality of spatial signals within a pixel may be calculated to determine the spatial correlation between the plurality of spatial signals.
In some embodiments, a matrix of n spatial signals within a pixel is normalized, and a covariance matrix of the matrix is calculated, and whether noise exists in the plurality of spatial signals is determined by analyzing a variance contribution rate and time-frequency characteristics of a principal component.
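As one illustrative reading of this covariance-based check (the function name, shapes, and thresholds below are assumptions for the sketch, not taken from the specification), the per-pixel spatial signals can be stacked over successive readouts and their correlation-coefficient matrix inspected with numpy:

```python
import numpy as np

def spatial_correlation(frames):
    """Correlation-coefficient matrix of one pixel's n spatial signals.

    frames: array of shape (T, n) -- the n spatial signals sampled over
    T readouts. Off-diagonal entries near 1 indicate signals driven by a
    shared optical pattern; entries near 0 indicate independent noise.
    """
    x = np.asarray(frames, dtype=float)
    # normalize each signal (column) to zero mean and unit variance
    x = (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-12)
    return (x.T @ x) / len(x)  # sample correlation matrix
```

A principal-component check as described above would then look at the eigenvalues of this matrix: one dominant eigenvalue suggests a single shared (optical) source, while a flat eigenvalue spectrum suggests uncorrelated internal noise.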
In some embodiments, the sources of a spatial signal may include external incident light, random noise inside the sensor, and fixed noise inside the sensor. The random noise inside the sensor may be dark current noise, readout noise, shot noise, etc., of the photoelectric converter. The fixed noise inside the sensor may be noise caused by non-ideal characteristics of the optical elements, such as lens distortion or mirror non-uniformity, and may also be noise caused by size differences between photoelectric converters, variation in transistor characteristics, or non-uniformity of amplifier gain.
The dark current noise refers to weak current generated by the photoelectric converter in the absence of illumination.
It should be noted that general pixels and image sensors focus more on color detection of the incident light. As shown in fig. 4, however, the light collected by a conventional image sensor from a remote object point is weak. Even if it is replaced by an image sensor based on an asymmetric optical module as shown in fig. 8, which removes light-absorbing elements and effectively reduces the loss of imaging light, the light collected from a remote object point is still relatively weak, and interference from background noise or noise inside the sensor is still not effectively eliminated, which is very likely to cause false signal detection.
At this time, considering that the n photoelectric converters in the pixel and the image sensor of the above embodiment share the same asymmetric optical module, the spatial signals of the n photoelectric converters in the pixel are formed by diffraction and self-interference of the same imaging light, so the spatial signals detected on the n photoelectric converters are highly correlated. As shown in fig. 10, if the spatial signals come from external incident light (i.e., an external light source), the light starts from the same light source and, after diffraction and self-interference, forms a pattern with a specific light intensity distribution in space; that is, correlated spatial signals are generated on the photoelectric converters at different positions, each spatial signal reflecting the light intensity variation at its position. If the spatial signals come from sensor internal noise (i.e., an internal dark source, such as dark current noise or readout noise), no corresponding optical signal exists, and the n corresponding spatial signals do not exhibit the high spatial correlation imposed by the asymmetric optical module; instead, fixed noise inside the sensor produces a lower spatial correlation, and random noise inside the sensor produces spatial signals that are completely uncorrelated.
Therefore, in some embodiments, for each pixel, whether the acquired spatial signal originates from an external light source or from sensor noise can be determined according to the spatial correlation between the spatial signals on the n photoelectric converters, so as to eliminate sensor noise interference and improve the accuracy of signal detection.
The spatial correlation between the spatial signals of the n photoelectric converters in a pixel can be determined from the signal values of the n spatial signals. For example, if the spatial signals originate from an external light source, there is high spatial correlation between the n spatial signals, and the corresponding n signal values are typically non-zero values of varying magnitudes. If the spatial signals are due to sensor noise, which is occasional or follows partially fixed patterns (e.g., stripe noise), there is low spatial correlation or complete uncorrelation between the n spatial signals: some of the n signal values are zero (no external light source and no internal noise at those positions) and some are non-zero (no external light source but random noise), or some of the n signal values are non-zero values of uniform size arranged in rows, columns, or blocks (no external light source but fixed noise).
In the presence of noise inside the sensor, the signal values of the n spatial signals are not all non-zero.
In an embodiment of the present application, as shown in fig. 11, the 4 photoelectric converters in the pixel share the same asymmetric optical module and are arranged in a 2×2 array. When no external light source exists, the dark current acts in only 1 photoelectric converter due to its locally random character, so three of the corresponding 4 spatial signals are zero and the other is non-zero, i.e., S1=S3=S4=0, S2>0. When an external 610nm light source exists, a pattern of asymmetric spatial signals is formed on the 2×2 photoelectric converters through diffraction and self-interference of the light, so that 4 interrelated, non-zero spatial signals are obtained, i.e., S1>0, S2>0, S3>0, and S4>0.
In some embodiments, the spatial correlation includes a first correlation, a second correlation, and a third correlation, and the step S32 of analyzing, for each pixel, the spatial correlation between n spatial signals according to the signal values of the n spatial signals further includes:
S321, for each pixel, constructing a spatial signal matrix based on the signal values of the n spatial signals;
S322, for each pixel, analyzing the spatial correlation among the n spatial signals based on the values and distribution pattern of the signal values in the spatial signal matrix;
Under the condition that n signal values in the spatial signal matrix are different and are all non-zero values, first correlation is presented among the n spatial signals;
In the case that at least one row or one column of signal values in the spatial signal matrix are the same non-zero value, a second correlation is presented among n spatial signals;
In the case where n signal values in the spatial signal matrix are partially zero, partially non-zero and randomly distributed, a third correlation is present between the n spatial signals.
In step S321, the spatial signal matrix is a matrix constructed based on the signal values of n spatial signals in the pixel, and each element in the spatial signal matrix represents the signal value of the spatial signal corresponding to a specific position.
Meanwhile, in step S322, the first correlation is a higher spatial correlation, meaning that the n signal values are distinct non-zero values with no repetition and no zeros; the corresponding source is external incident light. The second correlation is a lower spatial correlation, meaning that at least one column or row of signal values are repeated non-zero values, i.e., signal values of consistent or basically consistent size are concentrated in rows, columns, or blocks; the corresponding source is fixed noise inside the sensor. The third correlation is the complete absence of spatial correlation, meaning that both zero and non-zero values exist among the signal values, with the non-zero values irregularly and randomly distributed; the corresponding source is random noise inside the sensor.
In some embodiments, for each pixel, the step S33 of determining the source of n spatial signals from the spatial correlation further comprises:
S331, in the case where the first correlation is presented among the n spatial signals, determining that the n spatial signals corresponding to the pixel originate from incident light;
S332, in the case where the second correlation is presented among the n spatial signals, determining that the n spatial signals corresponding to the pixel originate from internal noise, where the internal noise is fixed noise such as row stripe noise, column stripe noise, or regional noise;
S333, in the case where the third correlation is presented among the n spatial signals, determining that the n spatial signals corresponding to the pixel originate from internal noise, where the internal noise is random noise.
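A minimal sketch of this three-way classification, assuming the simple zero/non-zero and equal-value rules described above (the function name, tolerance, and return labels are illustrative, not from the specification; diagonal patterns are handled only through the mixed zero/non-zero branch):

```python
import numpy as np

def classify_signals(signals, tol=1e-9):
    """Classify one pixel's spatial signal matrix by its correlation pattern.

    Returns 'incident_light' (first correlation: all values non-zero and varying),
            'fixed_noise'    (second correlation: a constant non-zero row/column,
                              or equal non-zero values in a zero background),
            'random_noise'   (third correlation: irregular mix of zero and
                              non-zero values), or
            'no_signal'      (all values zero).
    """
    s = np.asarray(signals, dtype=float)
    nz = np.abs(s) > tol
    if not nz.any():
        return 'no_signal'
    if nz.all():
        # a constant row or column -> row/column stripe or block noise
        for line in list(s) + list(s.T):
            if np.ptp(line) <= tol:
                return 'fixed_noise'
        return 'incident_light'
    vals = s[nz]
    if vals.size > 1 and np.ptp(vals) <= tol:
        return 'fixed_noise'  # equal non-zero values against a zero background
    return 'random_noise'
```

For example, the 610nm matrix quoted later in the text, [[16.7, 12.7], [12.7, 14.8]], would classify as incident light, while a single non-zero dark current value classifies as random noise.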
In an embodiment of the present application, as shown in fig. 12, when pixel-level sensing is performed based on 2×2 photoelectric converters sharing the same asymmetric optical module, if the signal value of at least one spatial signal is zero (one, two, or three spatial signals being zero), it may be determined that the spatial signals in the pixel are caused by internal noise of the sensor, such as a dark current signal, and no external incident light event exists; if the signal values of the four spatial signals are all non-zero and differ from one another, it may be determined that the source of the spatial signals in the pixel is an external light source, and an external incident light event exists.
When the signal values of all four spatial signals are zero, it is assumed by default that neither an external light source is incident nor internal sensor noise is present.
In some embodiments, for a 2×2 array of photoelectric converters sharing the same asymmetric optical module within a pixel, once the 2×2 spatial signal matrix composed of the corresponding 4 spatial signals is obtained, the correlation between the spatial signals may be determined directly from the determinant value of the 2×2 spatial signal matrix so as to determine the source of the spatial signals, e.g., an external light source or internal sensor noise.
As shown in fig. 13, in the case where the elements of the 2×2 spatial signal matrix are not all zero, whether the determinant is zero is first judged to determine the source of the spatial signals. If the determinant is zero (i.e., three elements are zero; the two elements in a certain row are zero; the two elements in a certain column are zero; the two elements in each row are equal; the two elements in each column are equal; or all four elements are equal), the source of the spatial signals is internal noise of the sensor (which can be recorded as a dark signal), the corresponding inverse matrix does not exist, and there is no associated incident spectrum. If the determinant is not zero (i.e., all four elements are non-zero and of different sizes; the two elements on one diagonal are non-zero; or one element is zero and the other three are non-zero), it is then necessary to further judge whether zero elements exist in the 2×2 spatial signal matrix. If no zero element exists, the source of the spatial signals is an external light source (which can be recorded as an optical signal), the corresponding inverse matrix exists, there is an associated incident spectrum, and the spectral components of the incident light can be further analyzed. If zero elements exist, the source of the spatial signals is internal noise of the sensor, and no associated incident spectrum exists.
When the pixel senses and acquires external incident light, the asymmetric focusing diffraction and self-interference of the asymmetric optical module give the four corresponding spatial signals high spatial correlation: the signal values of the four spatial signals are non-zero values of different magnitudes, and the determinant of the corresponding 2×2 spatial signal matrix is generally non-zero.
In an exemplary embodiment of the present application, when a pixel senses red light in the 610nm band, as shown in fig. 14, the corresponding 2×2 spatial signal matrix has a determinant value of 16.7×14.8−12.7×12.7≈86, which is non-zero; when a pixel senses green light in the 550nm band, as shown in fig. 15, the determinant value is 20.2×17.5−14×16≈130, which is non-zero; when a pixel senses green light in the 490nm band, as shown in fig. 16, the determinant value is 19.8×17.3−15×15.4≈112, which is non-zero; and when a pixel senses blue light in the 430nm band, as shown in fig. 17, the determinant value is 18.1×19.9−16.9×14.4≈117, which is non-zero.
Illustratively, in the embodiment of the present application, as shown in fig. 18, with a 2×2 spatial signal matrix composed of the corresponding 4 spatial signals, the source of the spatial signals is judged based on the determinant value of the matrix (e.g., an external light source, or internal sensor noise); at least the following cases exist:
Case A: only 1 spatial signal is non-zero and the remaining 3 spatial signals are all zero; the corresponding determinant is zero, and the spatial signals are caused by internal noise of the sensor (i.e., a dark signal), more specifically by random noise inside the sensor;
Case B: the 4 spatial signals are all non-zero and differ from one another; the corresponding determinant is non-zero, and the spatial signals are caused by an external light source (i.e., an optical signal);
Case C: the 2 spatial signals in the first column of the spatial signal matrix are zero and the 2 spatial signals in the second column are non-zero; the corresponding determinant is zero, so the spatial signals are caused by internal sensor noise. The values of the 2 spatial signals in the second column can then be compared further: if they are identical, the spatial signals are caused by column stripe noise inside the sensor, and if they differ, by random noise inside the sensor;
Case D: the 2 spatial signals in the first row of the spatial signal matrix are zero and the 2 spatial signals in the second row are non-zero; the corresponding determinant is zero, so the spatial signals are caused by internal sensor noise. The values of the 2 spatial signals in the second row can then be compared further: if they are identical, the spatial signals are caused by row stripe noise inside the sensor, and if they differ, by random noise inside the sensor.
It should be noted that other situations exist for the corresponding 2×2 spatial signal matrix.
For example, as shown in fig. 19, when all 4 spatial signals are non-zero but some of them are equal, the determinant may still be zero; in this case it may be determined that the spatial signals are caused by internal sensor noise, and the type of noise can be further determined from the distribution of the signal values. As shown in 2200a of fig. 19, the two spatial signals in each column are equal while the two columns differ in size; the corresponding determinant is zero, and the spatial signals are caused by column stripe noise. As shown in 2200b of fig. 19, the two spatial signals in each row are equal while the two rows differ in size; the corresponding determinant is zero, and the spatial signals are caused by row stripe noise.
For example, on the basis of fig. 19, there is a special case where all 4 spatial signals are of uniform size, so that the determinant is also zero; in this case it can be determined that the spatial signals are caused by internal sensor noise, specifically by block noise.
In addition, for the case where the determinant of the 2×2 spatial signal matrix is non-zero, there are other situations, such as the two spatial signals on one diagonal being zero while the two on the other diagonal are non-zero, or one spatial signal being zero while the other three are non-zero (as shown in case A of fig. 12). In these situations it may be determined that the spatial signals are caused by internal sensor noise; the relative magnitudes of the non-zero spatial signals can then be compared further: if the non-zero spatial signals are equal in magnitude, the signals are most probably caused by some fixed noise, and if they differ in magnitude, most probably by some random noise.
From the above analysis, in the present application, for a 2×2 spatial signal matrix composed of 4 spatial signals, noise interference filtering can be performed on the spatial signals acquired by a pixel based on the determinant value of the matrix. For example, in the case where the 4 spatial signals are not all zero, if the determinant value is zero, the source of the spatial signals is internal noise of the image sensor. Further, the noise type can be judged from the distribution of the 4 signal values: if the positions and magnitudes of the non-zero spatial signals are randomly distributed, random noise is present; if the 2 spatial signals in a certain column of the spatial signal matrix are equal, column stripe noise is present; if the 2 spatial signals in a certain row are equal, row stripe noise is present; and if 3 or all 4 spatial signals in the spatial signal matrix are equal, block noise or region noise is present.
In addition, if the determinant value is non-zero, it is necessary to judge whether zero elements exist in the 2×2 spatial signal matrix. If no zero element exists, the source of the spatial signals is an external light source, the corresponding inverse matrix exists, and there is an associated incident spectrum. If zero elements exist, the source of the spatial signals is internal sensor noise, and whether the noise type is random noise or fixed noise can be further judged from the relative magnitudes of the non-zero spatial signals.
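The determinant-plus-zero-element decision flow for a 2×2 matrix can be sketched as follows (a simplified reading of the fig. 13 flow; the tolerance and return labels are assumptions):

```python
import numpy as np

def screen_2x2(signals, tol=1e-9):
    """Determinant-based screening of a 2x2 spatial signal matrix.

    'optical_signal': determinant non-zero and no zero element -- the
    inverse matrix exists and spectral decomposition can proceed.
    'dark_signal'   : singular matrix, or zero elements present -- the
    signals are attributed to internal sensor noise.
    """
    s = np.asarray(signals, dtype=float).reshape(2, 2)
    det = s[0, 0] * s[1, 1] - s[0, 1] * s[1, 0]
    if abs(det) <= tol:            # no inverse, no associated incident spectrum
        return 'dark_signal'
    if (np.abs(s) <= tol).any():   # invertible but contains zeros: still noise
        return 'dark_signal'
    return 'optical_signal'
```

For example, the 610nm matrix [[16.7, 12.7], [12.7, 14.8]] has a determinant of about 86 and no zero element, so it screens as an optical signal.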
It can be understood that the foregoing embodiment only takes pixels with n = 4 as an example of the process of identifying the source of the spatial signals in a pixel so as to eliminate internal noise interference of the image sensor. For pixels with other values of n and other matrix arrangements of the n photoelectric converters in the photoelectric conversion module, noise interference analysis can be performed similarly to the 2×2 spatial signal matrix case, although the judgment may not be made directly from a determinant value. In particular, for pixels whose n photoelectric converters are arranged in a matrix with unequal numbers of rows and columns, or in an irregular arrangement, no determinant exists, and the signal values of the spatial signals must be analyzed further: in the case where the spatial signals are not all zero, the spatial distribution pattern and relative magnitudes of the non-zero spatial signals are analyzed. For example, 9 photoelectric converters in a pixel form a 3×3 spatial signal matrix, and 8 photoelectric converters form a 2×4 spatial signal matrix; if some of the spatial signal values are zero and some are non-zero, the source of the spatial signals is internal noise of the image sensor; if the spatial signals in a certain column are all equal, column stripe noise exists; and if the spatial signals in a certain row are all equal, row stripe noise exists. The rest of the analysis can follow the 2×2 spatial signal matrix case and is not repeated here.
When the sources of the spatial signals in the pixels have been identified through steps S31 to S33, if the n spatial signals corresponding to a pixel are identified as originating from internal noise of the image sensor, then in step S34 the n spatial signals corresponding to that pixel may be removed and the pixel ignored, and/or the n spatial signals corresponding to the pixel may be compensated and corrected, calibrating the pixel based on operations such as neighborhood interpolation, so as to effectively eliminate the interference of the internal noise.
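One simple form of the compensation mentioned in step S34 is to replace a noise-flagged pixel's value with the mean of its valid 4-neighbours. The sketch below is illustrative only (the function name is invented; real pipelines may use larger kernels or edge-aware interpolation):

```python
import numpy as np

def repair_noise_pixels(image, noise_mask):
    """Neighborhood-interpolation compensation for noise-flagged pixels.

    image: 2-D array of per-pixel values; noise_mask: boolean array of the
    same shape, True where a pixel's signals were attributed to internal
    noise. Each flagged pixel is replaced by the mean of its non-flagged
    4-neighbours (left as-is if no valid neighbour exists).
    """
    out = image.astype(float).copy()
    h, w = out.shape
    for y, x in zip(*np.nonzero(noise_mask)):
        acc, cnt = 0.0, 0
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not noise_mask[ny, nx]:
                acc += out[ny, nx]
                cnt += 1
        if cnt:
            out[y, x] = acc / cnt
    return out
```

Removing a pixel's signals outright (ignoring the pixel) corresponds to simply masking it out instead of interpolating.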
In some embodiments, after the internal noise interference is eliminated, step S4 is performed to detect, pixel by pixel based on the imaging mode, the color of the incident light or the change in brightness of the incident light from the n spatial signals within each pixel.
In some embodiments, detecting the color of incident light from n spatial signals within a pixel includes:
S41, for each pixel, selecting n calibration color bands for unfolding and decomposing the incident light;
S42, for each pixel, performing pre-calibration based on the n calibration color bands to obtain pre-calibration parameters;
S43, for each pixel, acquiring the n different spatial signals obtained by photoelectrically collecting the incident light;
S44, for each pixel, resolving the incident light using the pre-calibration parameters and the n spatial signals, and calculating the spectral components of the incident light under each calibration color band.
In each pixel, the spatial signal response refers to the photoelectric-effect response of each photoelectric converter to the asymmetric light intensity spatial distribution formed by an incident calibration color band after asymmetric focusing diffraction by the asymmetric optical module, forming a corresponding electrical signal. That is, the plurality of spatial signal responses within each pixel can describe the spatial distribution of the incident light on the photoelectric conversion module.
For example, the spatial signal response may be obtained by illuminating a known calibration band (e.g., red, green, blue, etc.) onto the photoelectric conversion module and recording the electrical signal output by each photoelectric converter.
The spatial signal response may be represented as a vector or matrix in which each element corresponds to the strength of the electrical signal on one photoelectric converter. For example, for a 2×2 photoelectric conversion array, the spatial signal response may be represented as a 4×1 vector.
It will be appreciated that the spectral components may include a contribution ratio of each calibration band to the total incident light intensity, e.g. a contribution ratio of 20% for a red spectral band at 600nm, 30% for a green spectral band at 525nm, and 50% for a blue spectral band at 450 nm.
In some embodiments, the n calibration bands selected in step S41 at least include a red spectrum band, a green spectrum band, and a blue spectrum band, so as to satisfy the requirement of unfolding and decomposing the incident light of unknown color.
The red spectrum color band is used for calibrating and analyzing red components in the incident light, the green spectrum color band is used for calibrating and analyzing green components in the incident light, and the blue spectrum color band is used for calibrating and analyzing blue components in the incident light.
The unfolding decomposition refers to a process of decomposing incident light of an unknown color into different wavelength components. By using the red spectrum band, the green spectrum band and the blue spectrum band as the calibration band, light of an unknown color can be decomposed into three basic components of the red spectrum band, the green spectrum band and the blue spectrum band, so that the spectral characteristics thereof can be accurately described.
In order to ensure that the incident light of the unknown color can be accurately resolved and analyzed, the calibration band selected in step S41 must be capable of covering the spectral range of the unknown color. The red, green and blue spectral bands are commonly used reference bands in spectral analysis that can cover most of the visible range and effectively decompose and describe the spectral characteristics of incident light of unknown colors.
It should be noted that, when selecting the calibration color bands in step S41, the more different color bands the selected calibration bands cover, the finer and more accurate the unfolding decomposition of the incident light becomes. Therefore, on top of covering the red, green, and blue spectrum bands, the selected calibration bands may include other color spectrum bands, such as a yellow spectrum band or a purple spectrum band, and several same-color-system spectrum bands of different wavelengths may also be selected, such as a 490nm green spectrum band together with a 550nm green spectrum band. However, the corresponding calibration computation then becomes larger, which needs to be weighed.
In step S42, the pre-calibration parameters refer to parameters for performing the unfolding decomposition on the incident light obtained through a series of pre-calibration steps, which may be obtained by combining the single-color band spatial signal response calculation results of the respective calibration color bands.
In some embodiments, for each pixel, performing a pre-calibration based on n calibration bands, to obtain a pre-calibration parameter, step S42 further includes:
S421, constructing an n-element linear equation set based on the space signal response of the incident light after the expansion and decomposition of n calibration color bands on n photoelectric converters in each pixel, wherein the n-element linear equation set consists of a calibration coefficient matrix, a color band fraction vector and a space signal vector;
S422, for each pixel, traversing the n calibration color bands, performing reduction calculation on the n-element linear equation set based on the spatial signal response of a single calibration color band to obtain the calibration coefficient vector of that single calibration color band, and combining the calibration coefficient vectors of the n calibration color bands to obtain the calibration coefficient matrix;
S423, performing inversion operation on the calibration coefficient matrix aiming at each pixel to obtain an inverse matrix of the calibration coefficient matrix, wherein the inverse matrix of the calibration coefficient matrix is the pre-calibration parameter.
In step S421, the calibration coefficient matrix represents the calibration coefficients of the spatial signals measured on each photoelectric converter for each calibration color band. Specifically, the elements in each row of the calibration coefficient matrix represent the calibration coefficients of the spatial signal on one photoelectric converter under the respective calibration color bands, and the elements in each column represent the calibration coefficients of the spatial signals on the respective photoelectric converters under one calibration color band; that is, each element Aij in the calibration coefficient matrix represents the calibration coefficient of the i-th spatial signal under the j-th calibration color band.
For example, in the embodiment of the present application, for the asymmetric optical module shown in fig. 8-9, n has a value of 4. When narrow spectrum color bands with wavelengths of 430nm, 490nm, 550nm, and 610nm are selected as the calibration color bands in the visible spectrum range, the asymmetric optical module can effectively perform asymmetric focusing diffraction on the incident light of the 4 calibration color bands (B=430nm, G2=490nm, G1=550nm, and R=610nm), as shown in fig. 20, so that the asymmetric optical module can effectively perform asymmetric diffraction on various incident lights based on the 4 calibration color bands, and different spatial signals can be formed on the 2×2 photoelectric converters.
Based on the different spatial signal responses of the 4 photoelectric converters in the pixel, 4 representative calibration color bands can be selected in the visible spectrum range and a quaternary linear equation set constructed; calibration is performed based on the spatial signal response of a single calibration color band at a time, determining the calibration coefficient matrix of the quaternary linear equation set; finally, the color band components of the incident light are solved based on the calibration coefficient matrix and the 4 spatial signals measured in real time. That is, based on the 4 calibration color bands used to unfold and decompose the incident light, the incident light is resolved from the 4 spatial signals into 4 spectral components corresponding to the 4 calibration color bands.
Illustratively, in an embodiment of the present application, for a pixel or image sensor as shown in fig. 8-9, the spatial response of each pixel to incident light decomposed on the above 4 calibration color bands forms the following quaternary linear equation set:

M11 = A11,b1·Xb1 + A11,b2·Xb2 + A11,b3·Xb3 + A11,b4·Xb4
M12 = A12,b1·Xb1 + A12,b2·Xb2 + A12,b3·Xb3 + A12,b4·Xb4
M21 = A21,b1·Xb1 + A21,b2·Xb2 + A21,b3·Xb3 + A21,b4·Xb4
M22 = A22,b1·Xb1 + A22,b2·Xb2 + A22,b3·Xb3 + A22,b4·Xb4

wherein Aij,bk represents the calibration coefficient of the spatial signal Mij measured at photoelectric converter PDij for the calibration color band bk component of the incident light; Xbk represents the contribution factor of the calibration color band bk component of the incident light to the spatial signal Mij; Mij represents the spatial signal obtained by measuring incident light of unknown color on photoelectric converter PDij; i and j are integers from 1 to 2; and k is an integer from 1 to 4.
In the embodiment of the present application, in the visible spectrum range of 400 to 640nm, as shown in figs. 20-21, blue light with a full width at half maximum (FWHM) of 60nm and a wavelength of 430nm is used as calibration Band 1 (abbreviated b1), green light with an FWHM of 60nm and a wavelength of 490nm as calibration Band 2 (b2), green light with an FWHM of 60nm and a wavelength of 550nm as calibration Band 3 (b3), and red light with an FWHM of 60nm and a wavelength of 610nm as calibration Band 4 (b4).
It will be appreciated that the quaternary linear equation set described above can be written as follows:

M = C·X

wherein C is the 4×4 calibration coefficient matrix, X is the color band fraction vector contributing to the measured spatial signals in the incident light of unknown color, and M is the spatial signal vector obtained by measuring the incident light of unknown color:

C = [[A11,b1, A11,b2, A11,b3, A11,b4], [A12,b1, A12,b2, A12,b3, A12,b4], [A21,b1, A21,b2, A21,b3, A21,b4], [A22,b1, A22,b2, A22,b3, A22,b4]], X = (Xb1, Xb2, Xb3, Xb4)ᵀ, M = (M11, M12, M21, M22)ᵀ.

Multiplying both sides of the matrix equation by the inverse matrix of C yields the following relational expression, i.e., the solution of the linear equation set:

X = C⁻¹·M
According to the above matrix equation, once the pixel structure and the 4 selected calibration color bands are determined, the calibration vector of each calibration color band is determined by the pixel structure, so the calibration coefficient matrix is uniquely determined. The spatial signal response of the pixel structure to incident light of unknown color can thus be understood as "encoding, based on the calibration coefficient matrix, the incident light unfolded and decomposed according to the 4 calibration color bands, to obtain 4 different spatial signals". Meanwhile, according to the matrix equation, the spatial signal response can be reduced by using only one of the 4 calibration color bands at a time, i.e., one element of X is 1 and the other three are 0, which simplifies the matrix equation; part of the elements or vectors of the calibration coefficient matrix C can then be calculated from the correspondingly measured spatial signals. The complete calibration coefficient matrix C is obtained by combining the calibration coefficient elements or vectors solved for the 4 calibration color bands, and the inverse matrix C⁻¹ of the calibration coefficient matrix C can then be solved.
Finally, as can be seen from the above matrix equation, for incident light X of unknown color, once the inverse matrix C⁻¹ of the calibration coefficient matrix C and the spatial signal vector M corresponding to the incident light are known, the incident light X can be calculated directly from C⁻¹ and the measured spatial signal vector M and resolved into the spectral components corresponding to the above 4 calibration color bands, so that color measurement of the incident light is realized while its brightness is detected. This process can be understood as "decoding the encoding result (the 4 different spatial signals) with the inverse operation parameter (the inverse matrix C⁻¹ of the calibration coefficient matrix C), yielding the expansion and decomposition of the incident light over the 4 calibration color bands".
Illustratively, as shown in FIG. 14, in an embodiment of the present application, setting X = (0, 0, 0, 1)ᵀ, red light with an FWHM of 60 nm and a wavelength of 610 nm is used as calibration color Band 4 for single-color-band pre-calibration, and the corresponding system of four linear equations reduces to the fourth column of the calibration coefficient matrix C being equal to the measured spatial signal vector:

c_PD11,b4 = M_PD11; c_PD12,b4 = M_PD12; c_PD21,b4 = M_PD21; c_PD22,b4 = M_PD22;

Meanwhile, according to the measured values of the spatial signal vector M, the quantum efficiencies of the four corresponding photoelectric converters are QE_PD11 = 16.7, QE_PD12 = 12.7, QE_PD21 = 12.7 and QE_PD22 = 14.8, respectively.
Illustratively, as shown in FIG. 15, in an embodiment of the present application, setting X = (0, 0, 1, 0)ᵀ, green light with an FWHM of 60 nm and a wavelength of 550 nm is used as calibration color Band 3 for single-color-band pre-calibration, and the corresponding system of four linear equations reduces to the third column of the calibration coefficient matrix C being equal to the measured spatial signal vector:

c_PD11,b3 = M_PD11; c_PD12,b3 = M_PD12; c_PD21,b3 = M_PD21; c_PD22,b3 = M_PD22;

Meanwhile, according to the measured values of the spatial signal vector M, the quantum efficiencies of the four corresponding photoelectric converters are QE_PD11 = 20.2, QE_PD12 = 14, QE_PD21 = 16 and QE_PD22 = 17.5, respectively.
Illustratively, as shown in FIG. 16, in an embodiment of the present application, setting X = (0, 1, 0, 0)ᵀ, green light with an FWHM of 60 nm and a wavelength of 490 nm is used as calibration color Band 2 for single-color-band pre-calibration, and the corresponding system of four linear equations reduces to the second column of the calibration coefficient matrix C being equal to the measured spatial signal vector:

c_PD11,b2 = M_PD11; c_PD12,b2 = M_PD12; c_PD21,b2 = M_PD21; c_PD22,b2 = M_PD22;

Meanwhile, according to the measured values of the spatial signal vector M, the quantum efficiencies of the four corresponding photoelectric converters are QE_PD11 = 19.8, QE_PD12 = 15, QE_PD21 = 15.4 and QE_PD22 = 17.3, respectively.
Illustratively, as shown in FIG. 17, in an embodiment of the present application, setting X = (1, 0, 0, 0)ᵀ, blue light with an FWHM of 60 nm and a wavelength of 430 nm is used as calibration color Band 1 for single-color-band pre-calibration, and the corresponding system of four linear equations reduces to the first column of the calibration coefficient matrix C being equal to the measured spatial signal vector:

c_PD11,b1 = M_PD11; c_PD12,b1 = M_PD12; c_PD21,b1 = M_PD21; c_PD22,b1 = M_PD22;

Meanwhile, according to the measured values of the spatial signal vector M, the quantum efficiencies of the four corresponding photoelectric converters are QE_PD11 = 18.1, QE_PD12 = 16.9, QE_PD21 = 14.4 and QE_PD22 = 19.9, respectively.
Thus, the pre-calibration simplifications of the four monochromatic calibration bands are combined to obtain the corresponding calibration coefficient matrix C shown in Table 1 below, where the four calibration coefficients of each column correspond to the pre-calibration result of one calibration color band.

TABLE 1

         b1 (430 nm)   b2 (490 nm)   b3 (550 nm)   b4 (610 nm)
PD11     18.1          19.8          20.2          16.7
PD12     16.9          15.0          14.0          12.7
PD21     14.4          15.4          16.0          12.7
PD22     19.9          17.3          17.5          14.8
Meanwhile, for ease of calculation, the four calibration coefficients corresponding to the pre-calibration result of each single calibration color band are normalized, yielding the normalized calibration coefficient matrix C shown in Table 2 below.
TABLE 2
Illustratively, the calibration coefficient matrix C shown in the above table is inverted, yielding the inverse matrix C⁻¹ of the calibration coefficient matrix C shown in Table 3 below.
TABLE 3
In this way, pre-calibration of the corresponding pixel or image sensor is achieved. Subsequently, for incident light of unknown color, based on the formula X = C⁻¹ · M, the incident light X can be calculated directly from the inverse matrix C⁻¹ of the calibration coefficient matrix C and the measured spatial signal vector M, and resolved into the spectral components corresponding to the 4 calibration color bands, so that color measurement of the incident light is realized while its brightness is detected.
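The decoding chain described above (build C column by column from single-band pre-calibration, invert it, then solve X = C⁻¹ · M) can be sketched numerically with the per-band quantum-efficiency responses quoted in the embodiments above; the script and the use of NumPy are illustrative, not part of the original disclosure:

```python
import numpy as np

# Columns of C = measured responses of PD11..PD22 (rows) to the four
# calibration bands b1 (430 nm), b2 (490 nm), b3 (550 nm), b4 (610 nm),
# taken from the single-band pre-calibration values quoted in the text.
C = np.array([
    [18.1, 19.8, 20.2, 16.7],   # PD11
    [16.9, 15.0, 14.0, 12.7],   # PD12
    [14.4, 15.4, 16.0, 12.7],   # PD21
    [19.9, 17.3, 17.5, 14.8],   # PD22
])
C_inv = np.linalg.inv(C)        # pre-calibration "decoding" parameter

# Decoding X = C^-1 . M: if the measured spatial signal vector M equals
# the band-4 calibration column, the recovered fractions are (0, 0, 0, 1).
M = C[:, 3]
X = C_inv @ M
```

Any measured 4-signal vector M from the pixel can be decoded the same way; the recovered X gives the fractions of the four calibration color bands in the unknown incident light.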
In the embodiment of the present application, as shown in fig. 20, the asymmetric distribution patterns of the spatial signals on the 4 photoelectric converters differ for incident light of different colors, so the asymmetric spatial pattern recorded on the 4 photoelectric converters can be regarded as a fingerprint of the incident spectrum/color. The spectral components corresponding to the 4 calibration color bands can therefore be analyzed and decoded by compiling a linear equation system connecting the spatial signals of the photoelectric converters with the spectral components of the measured incident light. To decode the 4 measured spatial signals, a pixel with one asymmetric optical module over a plurality of photoelectric converters can be calibrated in advance by sequentially irradiating it with calibration color bands of different wavelength ranges and recording the corresponding spatial signal responses, thereby obtaining a calibration coefficient matrix; the inverse of the calibration coefficient matrix is then used to solve the linear equation system for incident light of unknown color, each element of the calibration coefficient matrix serving as a coefficient connecting the 4 spatial measurement signals to the 4 calibration color bands.
Therefore, in the embodiment of the present application, the combination of the asymmetric optical module and multi-color-band calibration can effectively replace the microlens and optical filter of an existing pixel while realizing both brightness detection and color detection of the incident light. The flow is shown in fig. 22: after the incident light X is resolved into the spectral components corresponding to the 4 calibration color bands as above, the obtained spectral components are multiplied by the total measured signal of the 4 photoelectric converters, converting them into least significant bits (LSB) of the measured value; finally, the spectral components corresponding to the 4 calibration color bands are converted, on this basis, into the standard red-green-blue color space for display.
For example, in order to convert the measured spectral components into normalized values, each spectral component is divided by the total measured signal of the 4 photoelectric converters; this eliminates the effect of light-source intensity variation and ensures that the relative proportions of the measured values remain consistent.
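A minimal sketch of this normalization step; the function name and the numeric values are illustrative, not from the original:

```python
# Divide each recovered spectral component by the total signal summed over
# the pixel's 4 photoelectric converters, removing the light-source
# intensity dependence while preserving the relative proportions.
def normalize_components(components, pd_signals):
    total = sum(pd_signals)
    return [c / total for c in components]

fractions = normalize_components([8.0, 4.0, 2.0, 2.0], [10.0, 6.0, 5.0, 11.0])
# fractions -> [0.25, 0.125, 0.0625, 0.0625]
```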
It can be understood that the above embodiment only illustrates the expansion and analysis of incident light of unknown color for a pixel with n = 4: 4 calibration color bands are selected to expand and decompose the incident light, and the incident light is resolved, based on the pre-calibration results of the 4 calibration color bands and the 4 actually measured spatial signals, into the spectral components corresponding to the 4 calibration color bands, thereby realizing color detection of the incident light. For pixels with other values of n, such as 3, 5 or 8, the color detection process of the incident light can be analyzed similarly and will not be described herein.
In some embodiments, detecting a change in brightness of incident light from n spatial signals within a pixel includes:
S401, superposing and summing the n spatial signals to obtain a spatial superposition signal;
S402, detecting the brightness change of the incident light based on the spatial superposition signal.
Superposing and summing the n spatial signals means adding up the signal values measured by each photoelectric converter in the pixel to obtain a total spatial superposition signal.
It should be noted that combining the spatial signals of the n photoelectric converters sharing the same asymmetric optical module in a pixel yields a spatial superposition signal that both eliminates internal sensor noise and superimposes multiple spatial signals. This improves the dynamic range of the photoelectric conversion module, making it easier to distinguish external light-source intensities, i.e. bright signals from dark signals, so that brightness changes from weak to strong light can be measured more accurately; when the intensity change of incident light is monitored in real time, the event sensing capability for remote objects based on brightness or light-intensity change is significantly improved. In addition, superimposing multiple spatial signals averages out the influence of random noise, improving the signal-to-noise ratio.
In some embodiments, an increase in the signal value of the spatially superimposed signal is indicative of an increase in the brightness of the incident light, and a decrease in the signal value of the spatially superimposed signal is indicative of a decrease in the brightness of the incident light.
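Steps S401 to S402 and the sign convention above can be sketched as follows; the function names and the zero threshold are illustrative assumptions, not part of the original disclosure:

```python
# S401: sum the n spatial signals of a pixel into a spatial superposition
# signal. S402: compare successive superposition signals to flag a
# brightness increase or decrease (event sensing).
def superpose(spatial_signals):
    return sum(spatial_signals)

def brightness_event(prev_sum, curr_sum, threshold=0.0):
    delta = curr_sum - prev_sum
    if delta > threshold:
        return "increase"
    if delta < -threshold:
        return "decrease"
    return "no_change"

s1 = superpose([3.0, 2.5, 2.5, 3.0])   # frame 1 superposition signal
s2 = superpose([4.0, 3.5, 3.0, 4.5])   # frame 2 superposition signal
event = brightness_event(s1, s2)       # "increase"
```

A non-zero threshold would suppress events caused by residual noise fluctuations rather than real light-intensity changes.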
Illustratively, in some embodiments, as shown in fig. 23, the method of operating an image sensor described above includes the steps of:
Step S2301, determining an imaging mode of the image sensor, where the imaging mode includes a spectral imaging mode and an event imaging mode;
Step S2302, performing imaging processing based on the imaging mode of the image sensor, and performing noise analysis on at least the spatial signal in the event imaging mode.
For example, as shown in fig. 23, if the imaging mode is a spectral imaging mode, in step S2302, the incident light may be resolved into spectral components of a plurality of calibration color bands directly based on the incident light color detection process shown in steps S41 to S44, which is described in detail in the foregoing, and will not be repeated here.
For example, as shown in fig. 23, if the imaging mode is the event imaging mode, then in step S2302 noise identification is first performed on each acquired spatial signal based on the spatial signal source determination process shown in steps S31 to S34, and signal filtering or signal correction is applied on that basis to obtain processed spatial signals. The processed spatial signals within each pixel are then combined and superimposed into a spatial superposition signal, and subsequent image processing is performed on that signal. The resulting spatial superposition signal, with internal sensor noise eliminated and intensities superimposed, makes it easier to distinguish external light-source intensities, i.e. bright signals from dark signals, improving the event sensing capability for remote objects.
For example, as shown in fig. 23, the noise identification of the spatial signals in the event imaging mode may also be fed back to the spectral imaging mode: signal filtering or signal correction is applied on the basis of the noise identification to obtain processed spatial signals, and the incident light is then resolved into the spectral components of the plurality of calibration color bands based on the processed signals, so that internal sensor noise is eliminated and the sensing accuracy of spectral imaging is effectively improved.
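The mode dispatch of fig. 23 can be sketched as below; the helper names are hypothetical, and the all-zero check is only a crude stand-in for the spatial-correlation noise analysis of steps S31 to S34:

```python
# Dispatch on imaging mode after a (simplified) noise screen.
def screen_noise(signals):
    # Stand-in for the spatial-correlation source analysis: treat an
    # all-zero frame as noise-only; otherwise pass the signals through.
    return None if all(s == 0 for s in signals) else list(signals)

def operate(mode, signals, prev_total=0.0):
    cleaned = screen_noise(signals)
    if cleaned is None:
        return ("noise", None)
    if mode == "event":
        # Compare the spatial superposition signal with the previous frame.
        total = sum(cleaned)
        return ("event", "increase" if total > prev_total else "decrease_or_same")
    if mode == "spectral":
        # The cleaned signals would feed the X = C^-1 . M decoding here.
        return ("spectral", cleaned)
    raise ValueError("unknown imaging mode: " + mode)
```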
In addition, an embodiment of the present application further provides an imaging system, the imaging system comprising:
the imaging lens is used for converging and imaging light rays of the object to form incident light;
The image sensor is used for carrying out focusing diffraction on incident light based on pixels in the image sensor so as to generate a three-dimensional asymmetric light intensity pattern, and n different space signals are formed by recording n photoelectric converters in the pixels in a one-to-one correspondence manner, wherein n is a positive integer greater than or equal to 3;
wherein the pixel is configured to:
determining the sources of n spatial signals according to the spatial correlation among the n spatial signals in the photoelectric conversion module so as to eliminate the interference of internal noise;
under the condition that internal noise interference is eliminated, expanding and decomposing the incident light over n calibration color bands, and analytically calibrating the incident light using the pre-calibration parameters obtained from the pre-calibration of the n calibration color bands together with the n spatial signals, to obtain the spectral component of the incident light under each calibration color band;
And/or, under the condition that the internal noise interference is eliminated, carrying out superposition summation on the n spatial signals to obtain a spatial superposition signal, and detecting the brightness change of the incident light based on the spatial superposition signal.
Illustratively, in some embodiments, as shown in fig. 24, the imaging system comprises:
an imaging lens for converging and imaging light rays of an object;
The image sensor performs asymmetric spatial signal acquisition of the incident light based on a plurality of pixels. Within each pixel it determines the sources of the 4 spatial signals based on the spatial correlation among them, to eliminate interference from internal noise. With internal noise eliminated, it resolves the incident light into the spectral components corresponding to 4 calibration color bands, using the pre-calibration of those bands in the visible spectrum range, so as to perform brightness detection and color detection of the incident light in each pixel; and/or, with internal noise eliminated, it superposes and sums the 4 spatial signals into a spatial superposition signal and detects the brightness change of the incident light in each pixel based on that signal.
As shown in fig. 24 and 25, in the embodiment of the present application, since the asymmetric optical module and the subsequent multi-band calibration in the visible spectrum range can effectively replace the microlens and filter structures of related pixels, the image sensor does not need any structural offset to accommodate high chief ray angles, and it can also accommodate imaging lenses of any F-number. As shown in fig. 25, experiments prove that two imaging lenses with different chief ray angles (CRA) and different F-numbers, namely lens 1 and lens 2, can both be effectively adapted to the image sensor structure of the embodiment of the present application.
Therefore, assembly of the imaging system is simplified: imaging lenses and image sensors of different specifications can be flexibly combined according to actual needs, and the assembled imaging system is then calibrated based on the pre-calibration of a plurality of calibration color bands in the visible spectrum range.
It should be noted that, because the pixel and the image sensor are built around the asymmetric optical module and the multi-color-band calibration is based on the spatial correlation of the spatial signals, even if the asymmetric optical module has printing or process errors and its structure is not ideal, the subsequent spectral imaging performance is not affected.
Finally, as shown in fig. 26, an embodiment of the present application further provides an optoelectronic device 2600.
Illustratively, as shown in FIG. 26, the optoelectronic device 2600 includes a processor 2601 having one or more processing cores, a memory 2602 having one or more computer-readable storage media, and a computer program stored on the memory 2602 and executable on the processor 2601. The processor 2601 is electrically connected to the memory 2602. It will be appreciated by those skilled in the art that the optoelectronic device structure shown in the figures does not constitute a limitation of the optoelectronic device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
The processor 2601 is the control center of the optoelectronic device 2600; it connects the various parts of the entire optoelectronic device 2600 using various interfaces and lines, and performs the various functions and data processing of the optoelectronic device 2600 by running or loading software programs and/or units stored in the memory 2602 and invoking data stored in the memory 2602, thereby monitoring the optoelectronic device 2600 as a whole. The processor 2601 may be a central processing unit (CPU), a graphics processing unit (GPU), a network processor (NP), or the like, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present application.
In the embodiment of the present application, the processor 2601 in the optoelectronic device 2600 loads instructions corresponding to the processes of one or more application programs into the memory 2602 according to the corresponding steps, and the processor 2601 executes the application programs stored in the memory 2602, so as to implement various functions, such as executing the steps of the operation method of the image sensor.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
As shown in fig. 26, the optoelectronic device 2600 further includes an image sensor (not shown in the figure), the processor 2601 is electrically connected to the image sensor, and the image sensor may be the image sensor according to any of the foregoing embodiments. It will be appreciated by those skilled in the art that the optoelectronic device structure shown in fig. 26 does not constitute a limitation of the optoelectronic device, and may include more or fewer components than shown, or combine certain components, or use a different arrangement of components.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that all or a portion of the steps of the various methods of the above embodiments may be performed by instructions, or by instructions controlling associated hardware, which may be stored in a computer-readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium in which a plurality of computer programs are stored, the computer programs being capable of being loaded by a processor to perform any one of the methods of operating an image sensor provided by the embodiment of the present application. The computer program may perform the steps of the method of operating an image sensor as described above.
The specific implementation of each operation above may be referred to the previous embodiments, and will not be described herein.
The computer-readable storage medium may include, among others, read-only memory (ROM), random access memory (RAM), magnetic or optical disks, and the like.
Because the computer program stored in the computer readable storage medium can execute any one of the operation methods of the image sensor provided by the embodiments of the present application, the beneficial effects that can be achieved by any one of the operation methods of the image sensor provided by the embodiments of the present application can be achieved, which are detailed in the previous embodiments and are not described herein.
The pixel, image sensor, method for operating an image sensor, optoelectronic device, computer-readable storage medium and imaging system provided in the embodiments of the present application have been described in detail above. The principles and implementations of the embodiments are illustrated with specific examples, and the description of the above examples is only intended to aid understanding of the methods and core ideas of the present application. Meanwhile, those skilled in the art may make changes in the specific implementations and the application scope according to the ideas of the present application; in view of the above, the contents of this specification should not be construed as limiting the embodiments of the present application.
It should be noted that, in the present disclosure, each embodiment is described in a progressive manner, and each embodiment is mainly described as different from other embodiments, and identical and similar parts between the embodiments are all enough to be referred to each other.
It should also be noted that in the present disclosure, relational terms such as first and second are used solely to distinguish one entity or action from another entity or action, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a…" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present disclosure is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (20)

1.一种像素,其特征在于,所述像素包括:1. A pixel, characterized in that the pixel comprises: 光电转换模块,包括呈阵列形式相邻排布的n个光电转换器,其中,n为大于或者等于3的正整数;A photoelectric conversion module, comprising n photoelectric converters arranged adjacent to each other in an array, wherein n is a positive integer greater than or equal to 3; 不对称光学模块,所述不对称光学模块设置在所述光电转换模块上,其中,所述不对称光学模块用于对入射光进行聚焦衍射以产生三维不对称光强图案,进而在所述光电转换模块中的n个所述光电转换器上一一对应记录形成n个不同的空间信号;an asymmetric optical module, the asymmetric optical module being disposed on the photoelectric conversion module, wherein the asymmetric optical module is configured to focus and diffract incident light to generate a three-dimensional asymmetric light intensity pattern, thereby recording n different spatial signals in a one-to-one correspondence on the n photoelectric converters in the photoelectric conversion module; 其中,所述像素被配置为:根据所述光电转换模块中n个所述空间信号之间的空间相关性确定n个所述空间信号的来源,以排除内部噪声的干扰。The pixel is configured to determine sources of the n spatial signals according to spatial correlations among the n spatial signals in the photoelectric conversion module, so as to eliminate interference from internal noise. 2.根据权利要求1所述的像素,其特征在于,所述像素还被配置为:在排除了内部噪声干扰的情况下,以n个校准色带对所述入射光进行展开分解,并通过由n个所述校准色带的预先校准标定所得到的预先校准标定参数及n个所述空间信号对所述入射光进行解析校准,得到所述入射光在每个所述校准色带下的光谱分量。2. The pixel according to claim 1 is characterized in that the pixel is further configured to: decompose the incident light using n calibration color bands after eliminating internal noise interference, and analytically calibrate the incident light using pre-calibration calibration parameters obtained by pre-calibration of the n calibration color bands and the n spatial signals to obtain the spectral components of the incident light under each of the calibration color bands. 3.根据权利要求1或者2所述的像素,其特征在于,所述像素还被配置为:在排除了内部噪声干扰的情况下,对n个所述空间信号进行叠加求和,得到空间叠加信号,并基于所述空间叠加信号对所述入射光的亮度变化进行检测。3. 
The pixel according to claim 1 or 2 is characterized in that the pixel is further configured to: superimpose and sum the n spatial signals after eliminating internal noise interference to obtain a spatial superposition signal, and detect the brightness change of the incident light based on the spatial superposition signal. 4.根据权利要求1所述的像素,其特征在于,所述不对称光学模块包括:4. The pixel according to claim 1, wherein the asymmetric optical module comprises: 背景结构层,所述背景结构层由具有第一折射率的第一材料构成;A background structure layer, wherein the background structure layer is made of a first material having a first refractive index; 嵌入所述背景结构层的衍射结构层,所述衍射结构层包括多个由具有第二折射率的第二材料构成的以对所述入射光进行聚焦衍射的组件,所述第一折射率低于所述第二折射率。A diffraction structure layer is embedded in the background structure layer, wherein the diffraction structure layer includes a plurality of components made of a second material having a second refractive index for focusing and diffracting the incident light, and the first refractive index is lower than the second refractive index. 5.根据权利要求4所述的像素,其特征在于,所述第一材料和所述第二材料为无机材料。The pixel according to claim 4 , wherein the first material and the second material are inorganic materials. 6.根据权利要求5所述的像素,其特征在于,所述组件包括至少两种不同尺寸的衍射圆柱和至少两种不同尺寸的衍射圆环柱,多个所述衍射圆柱与多个所述衍射圆环柱交错且非对称设置,以使所述衍射结构层对所述入射光的衍射为不对称衍射。6. The pixel according to claim 5 is characterized in that the component includes at least two diffraction cylinders of different sizes and at least two diffraction ring cylinders of different sizes, and multiple diffraction cylinders and multiple diffraction ring cylinders are staggered and asymmetrically arranged so that the diffraction structure layer diffracts the incident light asymmetricly. 7.一种图像传感器,其特征在于,所述图像传感器包括多个阵列设置的像素;所述像素包括:7. 
An image sensor, characterized in that the image sensor comprises a plurality of pixels arranged in an array; the pixels comprise: 光电转换模块,包括呈阵列形式相邻排布的n个光电转换器,其中,n为大于或者等于3的正整数;A photoelectric conversion module, comprising n photoelectric converters arranged adjacent to each other in an array, wherein n is a positive integer greater than or equal to 3; 不对称光学模块,所述不对称光学模块设置在所述光电转换模块上,其中,所述不对称光学模块用于对入射光进行聚焦衍射以产生三维不对称光强图案,进而在所述光电转换模块中的n个所述光电转换器上一一对应记录形成n个不同的空间信号;an asymmetric optical module, the asymmetric optical module being disposed on the photoelectric conversion module, wherein the asymmetric optical module is configured to focus and diffract incident light to generate a three-dimensional asymmetric light intensity pattern, thereby recording n different spatial signals in a one-to-one correspondence on the n photoelectric converters in the photoelectric conversion module; 其中,所述像素被配置为:根据所述光电转换模块中n个所述空间信号之间的空间相关性确定n个所述空间信号的来源,以排除内部噪声的干扰。The pixel is configured to determine sources of the n spatial signals according to spatial correlations among the n spatial signals in the photoelectric conversion module, so as to eliminate interference from internal noise. 8.根据权利要求7所述的图像传感器,其特征在于,所述图像传感器包括至少两种不同结构的所述像素,且所述图像传感器中的各个所述像素呈规则阵列设置。8 . The image sensor according to claim 7 , wherein the image sensor comprises pixels of at least two different structures, and the pixels in the image sensor are arranged in a regular array. 9.根据权利要求7或者8所述的图像传感器,其特征在于,所述像素还被配置为:在排除了内部噪声干扰的情况下,以n个校准色带对所述入射光进行展开分解,并通过由n个所述校准色带的预先校准标定所得到的预先校准标定参数及n个所述空间信号对所述入射光进行解析校准,得到所述入射光在每个所述校准色带下的光谱分量。9. 
The image sensor according to claim 7 or 8 is characterized in that the pixel is further configured to: decompose the incident light using n calibration color bands after eliminating internal noise interference, and analytically calibrate the incident light using pre-calibration calibration parameters obtained by pre-calibration of the n calibration color bands and the n spatial signals to obtain the spectral components of the incident light under each of the calibration color bands. 10.根据权利要求7或者8所述的图像传感器,其特征在于,所述像素还被配置为:在排除了内部噪声干扰的情况下,对n个所述空间信号进行叠加求和,得到空间叠加信号,并基于所述空间叠加信号对所述入射光的亮度变化进行检测。10. The image sensor according to claim 7 or 8 is characterized in that the pixel is further configured to: superimpose and sum the n spatial signals after eliminating internal noise interference to obtain a spatial superposition signal, and detect the brightness change of the incident light based on the spatial superposition signal. 11.根据权利要求7所述的图像传感器,其特征在于,所述像素还包括背照式硅衬底,所述光电转换模块设置在所述背照式硅衬底上,所述背照式硅衬底上环绕所述光电转换模块的周围设有沟槽。11. The image sensor according to claim 7, wherein the pixel further comprises a back-illuminated silicon substrate, the photoelectric conversion module is disposed on the back-illuminated silicon substrate, and a groove is provided on the back-illuminated silicon substrate surrounding the photoelectric conversion module. 12.一种成像系统,其特征在于,所述成像系统包括:12. 
An imaging system, characterized in that the imaging system comprises: 成像透镜,用于对物体的光线进行汇聚成像,形成入射光;Imaging lens, used to converge the light of the object into an image to form incident light; 图像传感器,用于基于所述图像传感器中的像素对所述入射光进行聚焦衍射,以产生三维不对称光强图案,进而在所述像素中的n个光电转换器上一一对应记录形成n个不同的空间信号;其中,n为大于或者等于3的正整数;An image sensor configured to focus and diffract the incident light based on pixels in the image sensor to generate a three-dimensional asymmetric light intensity pattern, and then record n different spatial signals on n photoelectric converters in the pixels in a one-to-one correspondence; wherein n is a positive integer greater than or equal to 3; 其中,所述像素被配置为:Wherein, the pixels are configured as follows: 根据n个所述空间信号之间的空间相关性确定n个所述空间信号的来源,以排除内部噪声的干扰;determining sources of the n spatial signals according to spatial correlations between the n spatial signals to eliminate interference from internal noise; 在排除了内部噪声干扰的情况下,以n个校准色带对所述入射光进行展开分解,并通过由n个所述校准色带的预先校准标定所得到的预先校准标定参数及n个所述空间信号对所述入射光进行解析校准,得到所述入射光在每个所述校准色带下的光谱分量;Under the condition of eliminating internal noise interference, the incident light is decomposed by using n calibration color bands, and the incident light is analytically calibrated using pre-calibrated calibration parameters obtained by pre-calibrating the n calibration color bands and the n spatial signals to obtain the spectral components of the incident light in each of the calibration color bands; 和/或,在排除了内部噪声干扰的情况下,对n个所述空间信号进行叠加求和,得到空间叠加信号,并基于所述空间叠加信号对所述入射光的亮度变化进行检测。And/or, when internal noise interference is eliminated, the n spatial signals are superimposed and summed to obtain a spatial superposition signal, and the brightness change of the incident light is detected based on the spatial superposition signal. 13.一种图像传感器的操作方法,其特征在于,应用于多个像素组成的图像传感器上,所述方法包括:13. 
A method for operating an image sensor, characterized in that it is applied to an image sensor composed of multiple pixels, the method comprising: 获取所述图像传感器的成像模式,其中,所述成像模式包括光谱成像模式、事件成像模式和融合成像模式;Acquiring an imaging mode of the image sensor, wherein the imaging mode includes a spectral imaging mode, an event imaging mode, and a fusion imaging mode; 基于所述成像模式,控制所述图像传感器以所述像素为单位对相应的入射光进行采集,得到每个所述像素对应的n个不同的空间信号;Based on the imaging mode, controlling the image sensor to collect corresponding incident light in units of the pixel, to obtain n different spatial signals corresponding to each pixel; 以所述像素为单位进行空间信号来源的判断,以排除内部噪声的干扰;Determining the source of the spatial signal using the pixel as a unit to eliminate interference from internal noise; 在排除了所述内部噪声的干扰的情况下,基于所述成像模式,以所述像素为单位进行检测,根据所述像素内的n个所述空间信号检测所述入射光的颜色,和/或,根据所述像素内的n个所述空间信号检测所述入射光的亮度变化;When interference from the internal noise is eliminated, based on the imaging mode, detection is performed in units of the pixels, and the color of the incident light is detected according to the n spatial signals within the pixels, and/or the brightness change of the incident light is detected according to the n spatial signals within the pixels; 其中,所述像素包括:The pixels include: 光电转换模块,包括呈阵列形式相邻排布的n个光电转换器,其中,n为大于或者等于3的正整数;A photoelectric conversion module, comprising n photoelectric converters arranged adjacent to each other in an array, wherein n is a positive integer greater than or equal to 3; 不对称光学模块,所述不对称光学模块设置在所述光电转换模块上,其中,所述不对称光学模块用于对所述入射光进行聚焦衍射以产生三维不对称光强图案,进而在所述光电转换模块中的n个所述光电转换器上一一对应记录形成n个不同的所述空间信号。An asymmetric optical module is arranged on the photoelectric conversion module, wherein the asymmetric optical module is used to focus and diffract the incident light to produce a three-dimensional asymmetric light intensity pattern, and then record the n different spatial signals on the n photoelectric converters in the photoelectric conversion module in a one-to-one correspondence. 
14. The method according to claim 13, wherein determining the source of the spatial signals in units of pixels to eliminate interference from internal noise comprises:
for each pixel, acquiring the corresponding n different spatial signals;
for each pixel, analyzing the spatial correlation between the n spatial signals according to the signal values of the n spatial signals;
for each pixel, determining the sources of the n spatial signals according to the spatial correlation;
for each pixel, when the n spatial signals corresponding to the pixel originate from the internal noise, removing the n spatial signals corresponding to the pixel, and/or compensating and correcting the n spatial signals corresponding to the pixel, so as to eliminate the interference of the internal noise.
15. The method according to claim 14, wherein the spatial correlation comprises a first correlation, a second correlation, and a third correlation, and wherein, for each pixel, analyzing the spatial correlation between the n spatial signals according to the signal values of the n spatial signals comprises:
for each pixel, constructing a spatial signal matrix based on the signal values of the n spatial signals;
for each pixel, analyzing the spatial correlation between the n spatial signals based on the values and distribution pattern of the signal values in the spatial signal matrix;
wherein, when the n signal values in the spatial signal matrix are all different from one another and all non-zero, the n spatial signals exhibit the first correlation;
when the signal values in at least one row or one column of the spatial signal matrix are the same non-zero value, the n spatial signals exhibit the second correlation;
when some of the n signal values in the spatial signal matrix are zero and the others are non-zero values distributed randomly, the n spatial signals exhibit the third correlation.
16. The method according to claim 15, wherein determining, for each pixel, the sources of the n spatial signals according to the spatial correlation comprises:
when the first correlation exists between the n spatial signals, determining that the n spatial signals corresponding to the pixel originate from the incident light;
when the second correlation exists between the n spatial signals, determining that the n spatial signals corresponding to the pixel originate from the internal noise, and that the internal noise is row stripe noise or column stripe noise;
when the third correlation exists between the n spatial signals, determining that the n spatial signals corresponding to the pixel originate from the internal noise, and that the internal noise is random noise.
17. The method according to claim 13, wherein detecting the color of the incident light according to the n spatial signals within the pixel comprises:
for each pixel, selecting n calibration color bands over which to decompose the incident light;
for each pixel, performing pre-calibration based on the n calibration color bands to obtain pre-calibration parameters;
for each pixel, acquiring the n different spatial signals obtained by the pixel photoelectrically collecting the incident light;
for each pixel, analytically calibrating the incident light using the pre-calibration parameters and the n spatial signals, and calculating the spectral component of the incident light in each calibration color band.
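The correlation test of claims 15 and 16 can be sketched as a small classifier over the pixel's spatial signal matrix. The 2×2 grid in the example and the exact equality checks are illustrative assumptions (a practical implementation would likely use tolerances); the three-way mapping itself follows the claims.

```python
import numpy as np


def classify_signal_source(m: np.ndarray) -> str:
    """Classify the source of one pixel's spatial signal matrix.

    m -- the n spatial signal values arranged as the pixel's converter
         grid (e.g. a 2x2 array for n = 4).
    Returns "incident_light", "stripe_noise", "random_noise", or
    "unclassified" when none of the three claimed patterns matches.
    """
    v = m.ravel()
    # First correlation: all values distinct and non-zero -> the signals
    # originate from the incident light (claim 16).
    if np.all(v != 0) and len(np.unique(v)) == v.size:
        return "incident_light"
    # Second correlation: at least one row or column holds one repeated
    # non-zero value -> row or column stripe noise.
    for line in list(m) + list(m.T):
        if line[0] != 0 and np.all(line == line[0]):
            return "stripe_noise"
    # Third correlation: a mix of zero and non-zero values with the
    # non-zero values scattered randomly -> random noise.
    if np.any(v == 0) and np.any(v != 0):
        return "random_noise"
    return "unclassified"
```

A pixel whose matrix classifies as noise would then be removed or compensated per claim 14 before any color or brightness detection.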
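The analytic calibration of claim 17 can be read as solving a linear system: if pre-calibration yields an n×n response matrix A, whose column j holds the n spatial signals produced by calibration band j alone, then the band components c of unknown incident light with measured signals s satisfy A·c = s. That linear-system reading, and the least-squares solve, are assumptions about how the claimed pre-calibration parameters are used, not statements from the patent.

```python
import numpy as np


def precalibrate(band_responses):
    """Stack the per-band spatial-signal vectors into a response matrix A.

    band_responses -- list of n vectors; entry j is the n spatial signals
    recorded when the pixel is illuminated by calibration band j alone.
    """
    return np.column_stack(band_responses)


def spectral_components(a, s):
    """Recover the incident light's component in each calibration band
    from the measured spatial signals s.

    Least squares is used rather than a direct inverse so the solve
    stays well-behaved when the measured signals carry mild noise.
    """
    c, *_ = np.linalg.lstsq(a, s, rcond=None)
    return c
```

With ideal, perfectly band-separated responses A is diagonal and the recovery is exact; in practice the diffractive pattern mixes bands across converters, which is exactly why the full matrix solve is needed.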
18. The method according to claim 13, wherein detecting the brightness change of the incident light according to the n spatial signals within the pixel comprises:
superimposing and summing the n spatial signals to obtain a spatial superposition signal;
detecting the brightness change of the incident light based on the spatial superposition signal.
19. An optoelectronic device, characterized in that it comprises:
a memory having a computer program stored thereon;
a processor, configured to execute the computer program in the memory to implement the steps of the method according to any one of claims 13 to 18.
20. A computer-readable storage medium having a computer program stored thereon, characterized in that, when the computer program is executed by a processor, the steps of the method according to any one of claims 13 to 18 are implemented.
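The brightness-change detection of claim 18 might be realized as follows: sum the n spatial signals into the spatial superposition signal, then fire an event when the log-intensity step since the last event exceeds a contrast threshold. The log-domain threshold is an assumption borrowed from common event-sensor practice; the claim itself only specifies summation followed by change detection.

```python
import math


class BrightnessChangeDetector:
    """Detect brightness changes from one pixel's summed spatial signals
    (claim 18), firing an ON/OFF event when the log-intensity step since
    the last event exceeds a contrast threshold (assumed mechanism)."""

    def __init__(self, threshold=0.2):
        self.threshold = threshold  # log-domain contrast threshold (assumed)
        self.ref = None             # log superposition at the last event

    def update(self, spatial_signals):
        """Return +1 (ON), -1 (OFF), or 0 (no event) for one readout."""
        # Superimpose (sum) the n spatial signals of the pixel.
        total = sum(spatial_signals)
        level = math.log(total) if total > 0 else float("-inf")
        if self.ref is None:          # first readout sets the reference
            self.ref = level
            return 0
        delta = level - self.ref
        if abs(delta) >= self.threshold:
            self.ref = level          # re-anchor at the new level
            return 1 if delta > 0 else -1
        return 0
```

Summing all n converters before thresholding uses the pixel's full photo-signal for the event decision, so the asymmetric per-converter pattern does not bias the brightness estimate.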
CN202510886893.2A 2025-06-30 2025-06-30 Pixel, image sensor, imaging system, operating method, device and medium Pending CN120475272A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202510886893.2A CN120475272A (en) 2025-06-30 2025-06-30 Pixel, image sensor, imaging system, operating method, device and medium

Publications (1)

Publication Number Publication Date
CN120475272A true CN120475272A (en) 2025-08-12

Family

ID=96644426

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202510886893.2A Pending CN120475272A (en) 2025-06-30 2025-06-30 Pixel, image sensor, imaging system, operating method, device and medium

Country Status (1)

Country Link
CN (1) CN120475272A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205726019U (en) * 2015-04-08 2016-11-23 半导体元件工业有限责任公司 Imaging system, imaging device and image sensor
US20190174120A1 (en) * 2017-05-16 2019-06-06 Samsung Electronics Co., Ltd. Time-resolving sensor using shared ppd+spad pixel and spatial-temporal correlation for range measurement
CN113940058A (en) * 2019-06-26 2022-01-14 索尼半导体解决方案公司 camera
CN218827144U (en) * 2022-09-30 2023-04-07 思特威(上海)电子科技股份有限公司 Pixel structure and image sensor

Similar Documents

Publication Publication Date Title
US7858921B2 (en) Guided-mode-resonance transmission color filters for color generation in CMOS image sensors
US6958862B1 (en) Use of a lenslet array with a vertically stacked pixel array
KR101442313B1 (en) Camera sensor correction
KR101890940B1 (en) Imaging device and imaging apparatus
US11747533B2 (en) Spectral sensor system using optical filter subarrays
US20180109769A1 (en) Imaging apparatus, imaging system, and signal processing method
JP2006237737A (en) Color filter array and solid-state imaging device
KR20170106251A (en) Hyper spectral image sensor and 3D Scanner using it
JP4967427B2 (en) Image sensor
WO2015059897A1 (en) Image pickup device, image pickup method, code type infrared cut filter, and code type particular-color cut filter
US11128819B2 (en) Combined spectral measurement and imaging sensor
EP3450938B1 (en) An image sensor and an imaging apparatus
US20180182798A1 (en) Imaging sensor
US20230387160A1 (en) Pixel with diffractive scattering grating and high color resolution assigning signal processing
US11696043B2 (en) White balance compensation using a spectral sensor system
US20210266431A1 (en) Imaging sensor pixels having built-in grating
CN120475272A (en) Pixel, image sensor, imaging system, operating method, device and medium
US20240151884A1 (en) Pixel having light focusing transparent diffractive grating
US20240063240A1 (en) Light state imaging pixel
US20240151583A1 (en) Hyperspectral sensor with pixel having light focusing transparent diffractive grating
CN107389193B (en) Sensor module, method for determining the brightness and/or color of electromagnetic radiation, and method for producing a sensor module
CN120416684A (en) Pixel, image sensor, imaging system, calibration method, device and medium
CN115497965A (en) Image sensor, manufacturing method thereof, and method for detecting crosstalk and halo between adjacent pixels of image sensor
JP6270277B2 (en) Image sensor, inspection apparatus, and inspection method
US20250301234A1 (en) Hyperspectral sensor with diffractive focusing elements and color filters

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination