
WO2018099009A1 - Control method, control device, electronic device, and computer-readable storage medium - Google Patents


Info

Publication number
WO2018099009A1
WO2018099009A1 · PCT/CN2017/085213 · CN2017085213W
Authority
WO
WIPO (PCT)
Prior art keywords
pixel
unit
image
color
interpolation algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2017/085213
Other languages
English (en)
French (fr)
Inventor
唐城
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Publication of WO2018099009A1
Anticipated expiration: legal status Critical
Current legal status: Ceased

Classifications

    • All classifications fall under H: ELECTRICITY > H04: ELECTRIC COMMUNICATION TECHNIQUE > H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/0135: Conversion of standards processed at pixel level, involving interpolation processes
    • H04N 7/0142: Conversion of standards processed at pixel level, the interpolation being edge adaptive
    • H04N 23/951: Computational photography systems using two or more images to influence resolution, frame rate or aspect ratio
    • H04N 1/3871: Composing, repositioning or otherwise geometrically modifying originals of different kinds, e.g. low- and high-resolution originals
    • H04N 23/62: Control of cameras or camera modules via user interfaces
    • H04N 23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N 23/741: Compensating brightness variation by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H04N 23/843: Camera processing pipelines; demosaicing, e.g. interpolating colour pixel values
    • H04N 23/88: Camera processing pipelines; colour balance, e.g. white-balance circuits or colour temperature control
    • H04N 25/134: Colour filter arrays [CFA] characterised by spectral characteristics, based on three different wavelength filter elements
    • H04N 25/585: Control of the dynamic range involving two or more exposures acquired simultaneously, with pixels having different sensitivities within the sensor
    • H04N 2209/046: Colour interpolation to calculate the missing colour values

Definitions

  • The present invention relates to image processing technology, and more particularly to a control method, a control device, an electronic device, and a computer-readable storage medium.
  • An existing image sensor includes a photosensitive pixel unit array and a filter unit array disposed on the pixel unit array; each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels.
  • The image sensor may be controlled to expose and output a merged image. The merged image includes a merged pixel array, in which the plurality of photosensitive pixels of the same photosensitive pixel unit are combined and output as one merged pixel. This improves the signal-to-noise ratio of the merged image; however, it also lowers the resolution of the merged image.
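The merging step described above amounts to summing each small block of same-color photosensitive pixels into a single merged pixel. A minimal sketch (the function name and the 2x2 block factor are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def merge_pixels(raw, k=2):
    # Sum each k x k block of same-color photosensitive pixels into one
    # merged pixel: the signal grows k*k-fold while resolution drops by k*k.
    h, w = raw.shape
    assert h % k == 0 and w % k == 0, "sensor dimensions must be divisible by k"
    return raw.reshape(h // k, k, w // k, k).sum(axis=(1, 3))

# A 4x4 readout of unit charges becomes a 2x2 merged image of 4-unit pixels.
raw = np.ones((4, 4), dtype=np.int64)
merged = merge_pixels(raw)
```

This is why the merged image trades resolution for signal: the same total charge is concentrated into a quarter as many pixels.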
  • Alternatively, the image sensor may be controlled to output a high-pixel patch image.
  • The patch image includes an original pixel array, in which each photosensitive pixel corresponds to one original pixel.
  • However, because four adjacent original pixels share the same color, the resolution of the patch image cannot be improved directly. It is therefore necessary to convert the high-pixel color patch image into a high-pixel pseudo-original image by interpolation; the pseudo-original image includes pseudo-original pixels arranged in a Bayer array.
  • The pseudo-original image can then be converted into a true-color image by image processing and saved. Interpolation improves the sharpness of the true-color image, but it is resource-intensive and time-consuming, resulting in longer shooting times and a poor user experience. Moreover, in practice, users tend to focus only on the sharpness of the main part of the true-color image.
  • Embodiments of the present invention provide a control method, a control device, an electronic device, and a computer readable storage medium.
  • A control method of an embodiment of the present invention is for controlling an electronic device. The electronic device comprises an imaging device, the imaging device comprises an image sensor, and the image sensor comprises a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array; each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit comprises a plurality of photosensitive pixels. The control method comprises the steps of:
  • controlling the image sensor to output a patch image, the patch image comprising image pixel units arranged in a predetermined array, each image pixel unit comprising a plurality of original pixels, each photosensitive pixel unit corresponding to one image pixel unit, and each photosensitive pixel corresponding to one original pixel;
  • Converting the color patch image into a pseudo original image by using a first interpolation algorithm includes the following steps:
  • when the color of the current pixel is the same as that of the associated pixel, using the pixel value of the associated pixel as the pixel value of the current pixel;
  • when the color of the current pixel differs from that of the associated pixel, calculating the pixel value of the current pixel by the first interpolation algorithm according to the pixel values of the associated pixel unit, where the image pixel units include the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel; and
  • when the associated pixel is located outside the predetermined area, calculating the pixel value of the current pixel by a second interpolation algorithm, the complexity of the second interpolation algorithm being lower than that of the first interpolation algorithm.
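The branches above can be sketched as a per-pixel dispatch between the two interpolation algorithms. This is a hypothetical illustration only: the patent's actual first and second interpolation algorithms are defined later, so toy stand-ins are used here.

```python
import numpy as np

def convert(patch, region, interp_first, interp_second):
    # region = (top, left, bottom, right): the user-selected predetermined area.
    # Inside it the costly first interpolation algorithm runs; outside it the
    # cheaper second interpolation algorithm runs.
    top, left, bottom, right = region
    out = np.empty(patch.shape, dtype=float)
    for y in range(patch.shape[0]):
        for x in range(patch.shape[1]):
            inside = top <= y < bottom and left <= x < right
            algo = interp_first if inside else interp_second
            out[y, x] = algo(patch, y, x)
    return out

# Toy stand-ins: exact copy inside the area, coarse quantisation outside.
first = lambda img, y, x: float(img[y, x])
second = lambda img, y, x: float((img[y, x] // 2) * 2)
result = convert(np.arange(16).reshape(4, 4), (0, 0, 2, 2), first, second)
```

The point of the dispatch is that the expensive path only runs over the area the user cares about, which is exactly the time saving the method claims.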
  • a control device is for controlling an electronic device, and the electronic device includes an imaging device.
  • The imaging device includes an image sensor, which includes a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array; each filter unit covers a corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels.
  • the control device includes an output module, a selected module, and a first conversion module.
  • The output module is configured to control the image sensor to output the color patch image.
  • The color patch image includes image pixel units arranged in a predetermined array; each image pixel unit includes a plurality of original pixels, each photosensitive pixel unit corresponds to one image pixel unit, and each photosensitive pixel corresponds to one original pixel.
  • The selection module is configured to determine the predetermined area on the patch image according to user input.
  • The first conversion module is configured to convert the color patch image into a pseudo-original image by the first interpolation algorithm. The pseudo-original image includes an array of pseudo-original pixels, each photosensitive pixel corresponding to one pseudo-original pixel; the pseudo-original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel. The first conversion module includes a first determining module, a second determining module, a first calculating module, a second calculating module, and a third calculating module; the first determining module is configured to determine whether the associated pixel is located in the predetermined area.
  • An electronic device includes an imaging device, a touch screen, and the above-described control device.
  • An electronic device includes a housing, a processor, a memory, a circuit board, and a power supply circuit.
  • the circuit board is disposed inside a space enclosed by the casing, the processor and the memory are disposed on the circuit board; and the power circuit is configured to supply power to each circuit or device of the electronic device;
  • The memory is for storing executable program code; the processor runs a program corresponding to the executable program code, by reading the executable program code stored in the memory, so as to execute the control method described above.
  • a computer readable storage medium in accordance with an embodiment of the present invention has instructions stored therein.
  • When the processor of the electronic device executes the instructions, the electronic device performs the control method described above.
  • The control method, control device, electronic device and computer-readable storage medium of the embodiments of the present invention apply, to the user-specified area, a first interpolation algorithm that can improve image sharpness and resolution, and, to the image outside the specified area, a second interpolation algorithm of lower complexity than the first. This improves the signal-to-noise ratio, sharpness and resolution of the main part of the image, enhances the user experience, and reduces the time required for image processing.
  • FIG. 1 is a schematic flow chart of a control method according to an embodiment of the present invention.
  • FIG. 2 is another schematic flowchart of a control method according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of functional modules of a control device according to an embodiment of the present invention.
  • FIG. 4 is a schematic block diagram of an image sensor according to an embodiment of the present invention.
  • FIG. 5 is a circuit diagram of an image sensor according to an embodiment of the present invention.
  • FIG. 6 is a schematic view of a filter unit according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of an image sensor according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a merged image state according to an embodiment of the present invention.
  • FIG. 9 is a schematic diagram showing a state of a patch image according to an embodiment of the present invention.
  • FIG. 10 is a schematic diagram showing a state of a control method according to an embodiment of the present invention.
  • FIG. 11 is a schematic flow chart of a control method according to some embodiments of the present invention.
  • FIG. 12 is a schematic diagram of functional modules of a second computing module according to some embodiments of the present invention.
  • FIG. 13 is a schematic flow chart of a control method according to some embodiments of the present invention.
  • FIG. 14 is a schematic diagram of functional blocks of a control device according to some embodiments of the present invention.
  • FIG. 15 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
  • FIG. 16 is a schematic flow chart of a control method according to some embodiments of the present invention.
  • FIG. 17 is a functional block diagram of a selection module according to some embodiments of the present invention.
  • FIG. 19 is a schematic diagram of functional modules of a first processing module according to some embodiments of the present invention.
  • FIG. 20 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
  • FIG. 21 is a schematic flow chart of a control method according to some embodiments of the present invention.
  • FIG. 22 is a schematic diagram of functional modules of a first processing module according to some embodiments of the present invention.
  • FIG. 23 is a schematic flow chart of a control method according to some embodiments of the present invention.
  • FIG. 24 is a schematic diagram of functional modules of an extension unit according to some embodiments of the present invention.
  • FIG. 25 is a schematic diagram showing the state of a control method according to some embodiments of the present invention.
  • FIG. 26 is a schematic diagram of functional modules of an electronic device according to an embodiment of the present invention.
  • FIG. 27 is a schematic diagram of functional modules of an electronic device according to some embodiments of the present invention.
  • a control method of an embodiment of the present invention is used to control an electronic device 100.
  • The electronic device 100 includes an imaging device 20. The imaging device 20 includes an image sensor 21, which includes a photosensitive pixel unit array 212 and a filter unit array 211 disposed on the photosensitive pixel unit array 212.
  • Each filter unit 211a covers a corresponding photosensitive pixel unit 212a.
  • Each photosensitive pixel unit 212a includes a plurality of photosensitive pixels 2121.
  • The control method includes the following steps:
  • In step S11, the image sensor 21 is controlled to output a patch image. The patch image includes image pixel units arranged in a predetermined array; each image pixel unit includes a plurality of original pixels, each photosensitive pixel unit 212a corresponds to one image pixel unit, and each photosensitive pixel 2121 corresponds to one original pixel;
  • In step S13, the color patch image is converted into a pseudo-original image by a first interpolation algorithm. The pseudo-original image includes pseudo-original pixels arranged in an array, each photosensitive pixel 2121 corresponding to one pseudo-original pixel; the pseudo-original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel;
  • step S13 includes the following steps:
  • the pixel value of the current pixel is calculated by the first interpolation algorithm according to the pixel values of the associated pixel unit, where the image pixel units include the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel; and
  • control method of the embodiment of the present invention may be implemented by the control device 10.
  • the control device 10 is for controlling the electronic device 100.
  • The electronic device 100 includes an imaging device 20. The imaging device 20 includes an image sensor 21, which includes a photosensitive pixel unit array 212 and a filter unit array 211 disposed on the photosensitive pixel unit array 212; each filter unit 211a covers a corresponding photosensitive pixel unit 212a, and each photosensitive pixel unit 212a includes a plurality of photosensitive pixels 2121. The control device 10 includes an output module 11, a selection module 12, and a first conversion module 13; the first conversion module 13 includes a first determining module 131, a second determining module 132, a first calculating module 133, a second calculating module 135, and a third calculating module 137.
  • The output module 11 is configured to control the image sensor 21 to output a patch image. The patch image includes image pixel units arranged in a predetermined array; each image pixel unit includes a plurality of original pixels, each photosensitive pixel unit 212a corresponds to one image pixel unit, and each photosensitive pixel 2121 corresponds to one original pixel. The selection module 12 is configured to determine a predetermined area on the color patch image according to user input, and the first conversion module 13 is configured to convert the color patch image into a pseudo-original image by the first interpolation algorithm.
  • The pseudo-original image includes pseudo-original pixels arranged in an array, and each photosensitive pixel 2121 corresponds to one pseudo-original pixel.
  • The pseudo-original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel.
  • The first conversion module 13 includes a first determining module 131, a second determining module 132, a first calculating module 133, a second calculating module 135, and a third calculating module 137.
  • The first determining module 131 is configured to determine whether the associated pixel is located in the predetermined area.
  • The second determining module 132 is configured to determine, when the associated pixel is located in the predetermined area, whether the color of the current pixel is the same as that of the associated pixel.
  • The first calculating module 133 is configured to use, when the color of the current pixel is the same as that of the associated pixel, the pixel value of the associated pixel as the pixel value of the current pixel.
  • The second calculating module 135 is configured to calculate, when the color of the current pixel differs from that of the associated pixel, the pixel value of the current pixel by the first interpolation algorithm according to the pixel values of the associated pixel unit. The third calculating module 137 is configured to calculate, when the associated pixel is located outside the predetermined area, the pixel value of the current pixel by the second interpolation algorithm, whose complexity is lower than that of the first interpolation algorithm.
  • Step S11 can be implemented by the output module 11, step S12 by the selection module 12, step S13 by the first conversion module 13, step S131 by the first determining module 131, step S132 by the second determining module 132, step S133 by the first calculating module 133, step S135 by the second calculating module 135, and step S137 by the third calculating module 137.
  • the control method of the embodiment of the present invention processes the image in the predetermined area and the image outside the predetermined area by using the first interpolation algorithm and the second interpolation algorithm, respectively.
  • the predetermined area can be input and selected by the user.
  • The complexity of the first interpolation algorithm includes both time complexity and space complexity; compared with the second interpolation algorithm, the first interpolation algorithm is more complex. In actual shooting, therefore, the more complex first interpolation algorithm is applied only to the image within the user-specified area, which effectively reduces the data volume and time required for image processing while still improving the sharpness of the image in the predetermined area that the user focuses on, enhancing the user experience.
  • the image sensor 21 of the embodiment of the present invention includes a photosensitive pixel unit array 212 and a filter unit array 211 disposed on the photosensitive pixel unit array 212.
  • the photosensitive pixel unit array 212 includes a plurality of photosensitive pixel units 212a, each of which includes a plurality of adjacent photosensitive pixels 2121.
  • Each of the photosensitive pixels 2121 includes a photosensitive device 21211 and a transfer tube 21212, wherein the photosensitive device 21211 can be a photodiode, and the transfer tube 21212 can be a MOS transistor.
  • the filter unit array 211 includes a plurality of filter units 211a, each of which covers a corresponding one of the photosensitive pixel units 212a.
  • In some embodiments, the filter unit array 211 forms a Bayer array; that is, each group of four adjacent filter units 211a consists of one red filter unit, one blue filter unit, and two green filter units.
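The Bayer layout just described (one red, one blue, and two green filters per 2x2 group) can be generated programmatically; the function name and the RGGB tile order are illustrative assumptions:

```python
import numpy as np

def bayer_cfa(h, w):
    # Classic Bayer color filter array: each 2x2 tile holds one red, one
    # blue and two green filter elements (RGGB order assumed here).
    cfa = np.empty((h, w), dtype='<U1')
    cfa[0::2, 0::2] = 'R'
    cfa[0::2, 1::2] = 'G'
    cfa[1::2, 0::2] = 'G'
    cfa[1::2, 1::2] = 'B'
    return cfa

cfa = bayer_cfa(4, 4)
```

Twice as many green elements are used because human vision is most sensitive to green, which is the usual rationale for the Bayer pattern.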
  • Each photosensitive pixel unit 212a corresponds to a filter unit 211a of a single color: if one photosensitive pixel unit 212a includes n adjacent photosensitive devices 21211 in total, then one filter unit 211a covers the n photosensitive devices 21211 of that photosensitive pixel unit 212a.
  • The filter unit 211a may have an integral structure, or may be assembled from n independent sub-filters.
  • In some embodiments, each photosensitive pixel unit 212a includes four adjacent photosensitive pixels 2121, and every two adjacent photosensitive pixels 2121 together constitute one photosensitive pixel sub-unit 2120.
  • Each photosensitive pixel sub-unit 2120 further includes a source follower 21213 and an analog-to-digital converter 21214.
  • The photosensitive pixel unit 212a further includes an adder 2122. Within each photosensitive pixel sub-unit 2120, one electrode of each transfer tube 21212 is connected to the cathode electrode of the corresponding photosensitive device 21211, the other electrode of each transfer tube 21212 is connected in common to the gate electrode of the source follower 21213, and the source electrode of the source follower 21213 is connected to the analog-to-digital converter 21214.
  • the source follower 21213 may be a MOS transistor.
  • the two photosensitive pixel subunits 2120 are connected to the adder 2122 through respective source followers 21213 and analog to digital converters 21214.
  • In the image sensor 21 of the embodiment of the present invention, the four adjacent photosensitive devices 21211 of one photosensitive pixel unit 212a share a filter unit 211a of the same color, and each photosensitive device 21211 is connected to its own transfer tube 21212.
  • the adjacent two photosensitive devices 21211 share a source follower 21213 and an analog to digital converter 21214, and the adjacent four photosensitive devices 21211 share an adder 2122.
  • adjacent four photosensitive devices 21211 are arranged in a 2*2 array.
  • the two photosensitive devices 21211 in one photosensitive pixel subunit 2120 may be in the same column.
  • During imaging, the photosensitive pixels may be combined to output a merged image.
  • The photosensitive device 21211 is configured to convert illumination into electric charge, the generated charge being proportional to the illumination intensity; the transfer tube 21212 is configured to switch the circuit on or off according to a control signal.
  • the source follower 21213 is configured to convert the charge signal generated by the light-sensing device 21211 into a voltage signal.
  • Analog to digital converter 21214 is used to convert the voltage signal to a digital signal.
  • the adder 2122 is for summing the two digital signals for common output for processing by the image processing module connected to the image sensor 21.
  • The image sensor 21 of the embodiment of the present invention can merge 16M photosensitive pixels into 4M, i.e. output a merged image.
  • The effective size of each photosensitive pixel thereby becomes 4 times the original size, which improves the sensitivity of the photosensitive pixels.
  • Since the noise in the image sensor 21 is mostly random noise, noise that would appear in one or two of the photosensitive pixels before merging has a reduced influence once the four photosensitive pixels are merged into one large photosensitive pixel; that is, the noise is attenuated and the signal-to-noise ratio is improved.
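The signal-to-noise argument can be checked numerically: summing four pixels quadruples the signal, while independent random noise grows only by the square root of 4, so the SNR roughly doubles. A quick sketch (the signal level, noise scale, and sample count are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
signal, sigma, n = 100.0, 10.0, 200_000

# n trials of four independent photosensitive pixels with random noise.
pixels = signal + sigma * rng.standard_normal((n, 4))
merged = pixels.sum(axis=1)  # four pixels merged into one large pixel

snr_single = signal / pixels[:, 0].std()       # roughly signal / sigma
snr_merged = (4 * signal) / merged.std()       # roughly 4*signal / (2*sigma)
```

The ratio `snr_merged / snr_single` comes out close to 2, matching the sqrt(N) noise-averaging behavior the patent relies on.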
  • However, the resolution of the merged image also decreases as the pixel count decreases.
  • the patch image can be output through image processing.
  • The photosensitive device 21211 is for converting illumination into electric charge, the generated charge being proportional to the illumination intensity; the transfer tube 21212 is for switching the circuit on or off according to a control signal.
  • the source follower 21213 is configured to convert the charge signal generated by the light-sensing device 21211 into a voltage signal.
  • Analog to digital converter 21214 is used to convert the voltage signal to a digital signal for processing by an image processing module coupled to image sensor 21.
  • The image sensor 21 of the embodiment of the present invention can also keep the full 16M photosensitive pixel output, i.e. output a patch image. The patch image includes image pixel units.
  • The original pixels of each image pixel unit are arranged in a 2*2 array, and the size of an original pixel is the same as that of a photosensitive pixel. However, since the filter unit 211a covering four adjacent photosensitive devices 21211 has a single color, the four photosensitive devices 21211 are exposed separately but behind filters of the same color. The four adjacent original pixels output by each image pixel unit therefore share the same color, and the resolution of the image cannot be improved directly.
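The color layout of the patch image, in which each filter color covers a 2x2 block of photosensitive pixels, is often called a quad-Bayer pattern. A sketch of that atypical array (the helper name is an assumption):

```python
import numpy as np

def quad_bayer_cfa(h, w):
    # Each filter unit covers a 2x2 block of photosensitive pixels, so four
    # adjacent original pixels of the patch image share one color.
    base = np.empty((h // 2, w // 2), dtype='<U1')
    base[0::2, 0::2] = 'R'
    base[0::2, 1::2] = 'G'
    base[1::2, 0::2] = 'G'
    base[1::2, 1::2] = 'B'
    return base.repeat(2, axis=0).repeat(2, axis=1)

cfa = quad_bayer_cfa(8, 8)
```

Compare this with a standard Bayer array: the colors alternate every pixel there, but only every second pixel here, which is exactly why a demosaicing pipeline expecting a Bayer input cannot consume the patch image directly.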
  • the control method of the embodiment of the present invention is configured to process the output patch image to obtain a pseudo original image.
  • When the merged image is output, four adjacent photosensitive pixels of the same color are output as one merged pixel. The four adjacent merged pixels of the merged image can thus still be regarded as a typical Bayer array and processed directly by the image processing module to output a true-color image.
  • When the color patch image is output, each photosensitive pixel is output separately. Since every four adjacent photosensitive pixels share one color, the four adjacent original pixels of an image pixel unit have the same color, which constitutes an atypical Bayer array.
  • The image processing module cannot directly process an atypical Bayer array. That is, for the image sensor 21 to be compatible, under a unified image-processing pipeline, with true-color image output in both modes (true-color output in merge mode and true-color output in patch mode), the patch image must be converted into a pseudo-original image; in other words, the image pixel units of the atypical Bayer array must be converted into the pixel arrangement of a typical Bayer array.
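The conversion logic boils down to a per-pixel dispatch: if the patch pixel already carries the color that the target Bayer position needs, copy its value; otherwise interpolate it from same-color neighbors. A minimal sketch (the interpolator is a caller-supplied stand-in, not the patent's actual algorithm):

```python
def remosaic_pixel(patch, patch_cfa, bayer_cfa, y, x, interpolate):
    # If the associated pixel already has the current pixel's color, reuse
    # its value directly; otherwise fall back to interpolation from
    # same-color neighbors.
    if patch_cfa[y][x] == bayer_cfa[y][x]:
        return patch[y][x]
    return interpolate(patch, y, x)

patch = [[10, 20], [30, 40]]
patch_cfa = [['R', 'R'], ['R', 'R']]   # atypical: the whole block is red
bayer_cfa = [['R', 'G'], ['G', 'B']]   # target typical Bayer layout
mean_interp = lambda img, y, x: sum(sum(row) for row in img) / 4.0
a = remosaic_pixel(patch, patch_cfa, bayer_cfa, 0, 0, mean_interp)
b = remosaic_pixel(patch, patch_cfa, bayer_cfa, 0, 1, mean_interp)
```

Position (0, 0) already matches the target color and is copied unchanged, while position (0, 1) needs a green value and is interpolated.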
  • the original image includes imitation original pixels arranged in a Bayer array.
  • the pseudo original pixel includes a current pixel, and the original pixel includes an associated pixel corresponding to the current pixel.
  • a predetermined area of the patch image is first converted into a Bayer image array, and image processing is performed using a first interpolation algorithm.
  • The current pixels are R3'3' and R5'5', and the corresponding associated pixels are R33 and B55, respectively.
  • Here and below, "pixel value" should be understood broadly as the color attribute value of a pixel, such as a color value.
  • the associated pixel unit includes a plurality of, for example, four, original pixels in the image pixel unit that are the same color as the current pixel and are adjacent to the current pixel.
  • The associated pixel corresponding to R5'5' is B55; the associated pixel unit consists of original pixels that are adjacent to the image pixel unit where B55 is located and have the same color as R5'5'.
  • the image pixel units in which the associated pixel unit is located are image pixel units in which R44, R74, R47, and R77 are located, and are not other red image pixel units that are spatially farther from the image pixel unit in which B55 is located.
  • The red original pixels spatially closest to B55 are R44, R74, R47 and R77; that is, the associated pixel unit of R5'5' is composed of R44, R74, R47 and R77, and R5'5' has the same color as, and is adjacent to, R44, R74, R47 and R77.
  • In this way, for current pixels in different situations, the original pixels are converted into pseudo-original pixels in different ways, thereby converting the color block image into a pseudo-original image. Because a filter with a special Bayer array structure is adopted when the image is captured, the image signal-to-noise ratio is improved.
  • In the image processing process, the color block image is interpolated by the first interpolation algorithm, thereby improving the resolution of the image.
  • step S135 includes the following steps:
  • S1351: Calculate the gradient in each direction of the associated pixel;
  • S1352: Calculate the weight in each direction of the associated pixel; and
  • S1353: Calculate the pixel value of the current pixel according to the gradients and the weights.
  • the second calculation module 135 includes a second calculation unit 1351, a third calculation unit 1352, and a fourth calculation unit 1353.
  • The second calculating unit 1351 is configured to calculate the gradient in each direction of the associated pixel;
  • the third calculating unit 1352 is configured to calculate the weight in each direction of the associated pixel;
  • and the fourth calculating unit 1353 is configured to calculate the pixel value of the current pixel according to the gradients and the weights.
  • step S1351 can be implemented by the second calculation unit 1351
  • step S1352 can be implemented by the third calculation unit 1352
  • step S1353 can be implemented by the fourth calculation unit 1353.
  • Specifically, the first interpolation algorithm refers to the energy gradient of the image in different directions: the associated pixel unit, which has the same color as the current pixel and is adjacent to it, is weighted by the gradients in the different directions, and the pixel value of the current pixel is calculated by linear interpolation.
  • In the direction in which the energy changes less, the reference proportion is larger, and therefore the weight in the interpolation calculation is larger.
  • For example, R5'5' is interpolated from R44, R74, R47 and R77. There are no original pixels of the same color in the exact horizontal and vertical directions, so the components of this color in the horizontal and vertical directions are first calculated from the associated pixel unit.
  • The components in the horizontal direction are R45 and R75;
  • the components in the vertical direction are R54 and R57; each can be calculated from R44, R74, R47 and R77.
  • R45 = 2/3*R44 + 1/3*R47
  • R75 = 2/3*R74 + 1/3*R77
  • R54 = 2/3*R44 + 1/3*R74
  • R57 = 2/3*R47 + 1/3*R77
  • Then, the gradients and the weights in the horizontal and vertical directions are calculated respectively; that is, the gradients of the color in the different directions determine the reference weights of the different directions in the interpolation: the weight is larger in the direction with the smaller gradient, and smaller in the direction with the larger gradient.
  • the gradient in the horizontal direction is X1
  • the gradient in the vertical direction is X2
  • W1 = X1/(X1+X2)
  • W2 = X2/(X1+X2)
  • R5'5' = (2/3*R45 + 1/3*R75)*W2 + (2/3*R54 + 1/3*R57)*W1. It can be understood that if X1 is greater than X2, then W1 is greater than W2; therefore, the weight applied to the horizontal components in the calculation is W2 and the weight applied to the vertical components is W1, and vice versa.
  • the pixel value of the current pixel can be calculated according to the first interpolation algorithm.
  • In this way, the original pixels can be converted into pseudo-original pixels arranged in a typical Bayer array; that is, every four adjacent pseudo-original pixels in a 2*2 array include one red pseudo-original pixel, two green pseudo-original pixels and one blue pseudo-original pixel.
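As a concrete illustration of the first interpolation algorithm described above, the sketch below computes R5'5' from its associated pixel unit R44, R47, R74 and R77. The function name and the exact gradient definitions (absolute differences between the directional components) are assumptions for illustration; the text fixes only the component formulas and the weights W1 = X1/(X1+X2), W2 = X2/(X1+X2).

```python
def interpolate_current_pixel(r44, r47, r74, r77):
    # Directional components estimated from the associated pixel unit
    # (the 2/3 and 1/3 weights follow the component formulas in the text).
    r45 = 2/3 * r44 + 1/3 * r47   # horizontal component, upper row
    r75 = 2/3 * r74 + 1/3 * r77   # horizontal component, lower row
    r54 = 2/3 * r44 + 1/3 * r74   # vertical component, left column
    r57 = 2/3 * r47 + 1/3 * r77   # vertical component, right column

    # Gradients in the two directions (one plausible definition; the text
    # does not fix the exact formula).
    x1 = abs(r45 - r75)  # horizontal direction
    x2 = abs(r54 - r57)  # vertical direction
    if x1 + x2 == 0:
        w1 = w2 = 0.5    # flat region: equal weights
    else:
        w1 = x1 / (x1 + x2)
        w2 = x2 / (x1 + x2)

    # The direction with the smaller gradient receives the larger weight:
    # horizontal components are scaled by W2, vertical components by W1.
    return (2/3 * r45 + 1/3 * r75) * w2 + (2/3 * r54 + 1/3 * r57) * w1
```

In a flat region all four inputs are equal and the result collapses to that common value, which is a quick sanity check on the weighting.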
  • the first interpolation algorithm includes, but is not limited to, a manner in which only pixel values of the same color in both the vertical and horizontal directions are considered in the calculation, and for example, reference may also be made to pixel values of other colors.
  • In some embodiments, the following step is included before step S135:
  • S134a: Perform white balance compensation on the color block image;
  • and the following step is included after step S135:
  • S136a: Perform white balance compensation restoration on the pseudo-original image.
  • the first conversion module 13 includes a white balance compensation module 134a and a white balance compensation reduction module 136a.
  • the white balance compensation module 134a is configured to perform white balance compensation on the patch image
  • the white balance compensation and restoration module 136a is configured to perform white balance compensation and restoration on the original image.
  • step S134a can be implemented by the white balance compensation module 134a
  • step S136a can be implemented by the white balance compensation restoration module 136a.
  • Specifically, in the interpolation calculation, the red and blue pseudo-original pixels often refer not only to the colors of the original pixels of the channel with the same color but also to the color weights of the original pixels of the green channel. Therefore, white balance compensation is required before interpolation to eliminate the influence of white balance on the interpolation calculation.
  • In order not to destroy the white balance of the color block image, white balance compensation restoration is required after the interpolation, restoring according to the red, green and blue gain values used in the compensation.
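A minimal sketch of the compensate-then-restore round trip described above, assuming per-channel multiplicative gains (the gain representation and function names are illustrative, not from the text):

```python
def white_balance_compensate(value, color, gains):
    # Divide out the per-channel white balance gain before interpolation,
    # so cross-channel references are not biased by the gains.
    return value / gains[color]

def white_balance_restore(value, color, gains):
    # Re-apply the same gain after interpolation, restoring according to
    # the red, green and blue gain values used in the compensation.
    return value * gains[color]
```

Because restoration multiplies by exactly the gain that compensation divided by, the round trip leaves an untouched pixel value unchanged.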
  • In some embodiments, the following step is included before step S135:
  • S134b: Perform dead pixel compensation on the color block image.
  • the first conversion module 13 includes a dead point compensation module 134b.
  • step S134b can be implemented by the dead point compensation module 134b.
  • the image sensor 21 may have a dead pixel.
  • A dead pixel usually does not always present the same color as the sensitivity changes, and its presence affects image quality. Therefore, to ensure that the interpolation is accurate and unaffected by dead pixels, dead pixel compensation is required before interpolation.
  • Specifically, the original pixels may be detected during dead pixel compensation.
  • When an original pixel is detected as a dead pixel, pixel compensation may be performed according to the pixel values of the other original pixels in the image pixel unit where it is located.
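The per-unit compensation just described can be sketched as follows. Detection is assumed to have happened upstream (dead pixels arrive flagged as None), an illustrative simplification since the text does not specify the detection criterion:

```python
def compensate_dead_pixel(unit):
    """unit: list of pixel values from one image pixel unit; a dead pixel
    is flagged as None. Each dead pixel is replaced by the average of the
    remaining (live) original pixels in the same unit."""
    live = [p for p in unit if p is not None]
    avg = sum(live) / len(live)
    return [avg if p is None else p for p in unit]
```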
  • In some embodiments, the following step is included before step S135: S134c: Perform crosstalk compensation on the color block image.
  • the first conversion module 13 includes a crosstalk compensation module 134c.
  • step S134c can be implemented by the crosstalk compensation module 134c.
  • Specifically, the four photosensitive pixels in one photosensitive pixel unit are covered by filters of the same color, and there may be differences in sensitivity among the photosensitive pixels, so that fixed-pattern noise appears in the solid-color regions of the true-color image converted from the pseudo-original image, affecting image quality. Therefore, crosstalk compensation needs to be performed on the color block image.
  • In some embodiments, the following step is included after step S135:
  • S136b Perform lens shading correction, demosaicing, noise reduction, and edge sharpening on the original image.
  • the first conversion module 13 further includes a second processing module 136b.
  • step S136b can be implemented by the second processing module 136b.
  • It can be understood that after the color block image is converted into the pseudo-original image, the pseudo-original pixels are arranged in a typical Bayer array, which can be processed by the second processing module 136b; the processing includes lens shading correction, demosaicing, noise reduction and edge sharpening. In this way, the processed true-color image can be output to the user.
  • For the image outside the predetermined area of a frame of the color block image, image processing is performed using the second interpolation algorithm.
  • The interpolation process of the second interpolation algorithm is: take the average value of the pixel values of all the original pixels in each image pixel unit outside the predetermined area, and then determine whether the current pixel and the associated pixel have the same color.
  • When the current pixel and the associated pixel have the same color, the pixel value of the associated pixel is taken as the pixel value of the current pixel.
  • When the current pixel and the associated pixel have different colors, the pixel value (i.e., the unit average) of the original pixels in the nearest image pixel unit with the same color as the current pixel is taken as the pixel value of the current pixel.
  • the pixel values of R11, R12, R21, and R22 are all Ravg, and the pixel values of Gr31, Gr32, Gr41, and Gr42 are all Gravg, and the pixel values of Gb13, Gb14, Gb23, and Gb24 are all Gbavg, B33, B34, and B43.
  • the pixel value of B44 is Bavg.
  • The associated pixel corresponding to the current pixel B22 is R22. Since the color of the current pixel B22 differs from the color of the associated pixel R22, the pixel value of the current pixel B22 should take the value of the nearest blue image pixel unit,
  • that is, the value Bavg shared by B33, B34, B43 and B44.
  • other colors are also calculated using a second interpolation algorithm to obtain pixel values for individual pixels.
  • In this way, the original pixels are converted into pseudo-original pixels by the second interpolation algorithm, thereby converting the color block image into the pseudo-original image. Since the time complexity and the space complexity of the second interpolation algorithm are both smaller than those of the first interpolation algorithm, using the second interpolation algorithm to process the color block image outside the predetermined area reduces the time required for image processing and improves the user experience.
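The two rules of the second interpolation algorithm can be sketched as below. The helper names are illustrative; `nearest_same_color_avg` stands for the unit average of the nearest image pixel unit with the current pixel's color (how "nearest" is resolved is left open by the text):

```python
def average_unit(unit):
    # First step: every original pixel in an image pixel unit outside the
    # predetermined area is replaced by the unit mean.
    avg = sum(unit) / len(unit)
    return [avg] * len(unit)

def current_pixel_value(current_color, assoc_color, assoc_value,
                        nearest_same_color_avg):
    # Second step: reuse the associated pixel when the colors match;
    # otherwise fall back to the average of the nearest image pixel unit
    # that has the current pixel's color.
    if current_color == assoc_color:
        return assoc_value
    return nearest_same_color_avg
```

Compared with the gradient-weighted first algorithm, this is a constant-time lookup per pixel, which is why it is reserved for the region the user is not focused on.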
  • The control method of the embodiment of the present invention processes the images within and outside a predetermined area of a frame image with different interpolation algorithms.
  • the predetermined area can be selected by the user, and the predetermined area has multiple selection methods.
  • step S12 includes the following steps:
  • S121: Convert the color block image into a preview image by using a third interpolation algorithm, where the third interpolation algorithm includes the second interpolation algorithm;
  • S122: Control the touch screen 30 to display the preview image; and
  • S123: Process the user input on the touch screen 30 to determine the predetermined area.
  • the selected module 12 includes a second conversion module 121 , a display module 122 , and a first processing module 123 .
  • the second conversion module 121 is configured to convert the color patch image into a preview image by using a third interpolation algorithm
  • the third interpolation algorithm includes a second interpolation algorithm
  • the display module 122 is configured to control the touch screen 30 to display a preview image
  • the first processing module 123 is configured to process the user input on the touch screen 30 to determine the predetermined area.
  • step S121 can be implemented by the second conversion module 121
  • step S122 can be implemented by the display module 122
  • step S123 can be implemented by the first processing module 123.
  • The second conversion module 121 converts the color block image into a preview image, which is displayed by the display module 122.
  • the process of converting the patch image into the preview image is performed by using a third interpolation algorithm.
  • the third interpolation algorithm includes a second interpolation algorithm and a bilinear interpolation method.
  • The second interpolation algorithm is first used to convert the color block image into a pseudo-original image in a Bayer array, and then the bilinear interpolation method is used to convert the pseudo-original image into a true-color image. In this way, the user can preview the image, which facilitates selecting the predetermined area.
  • the algorithm for converting the original image into a true color image is not limited to the bilinear interpolation method, and other interpolation algorithms may be used for calculation.
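A minimal sketch of the bilinear interpolation mentioned above, sampling a single-channel grid at fractional coordinates (the function name and grid layout are illustrative; converting a full Bayer pseudo-original image to true color would apply such sampling per channel):

```python
def bilinear(im, x, y):
    # Weight the four samples surrounding the fractional position (x, y)
    # by their distance, the standard bilinear scheme.
    x0, y0 = int(x), int(y)
    dx, dy = x - x0, y - y0
    return (im[y0][x0] * (1 - dx) * (1 - dy) +
            im[y0][x0 + 1] * dx * (1 - dy) +
            im[y0 + 1][x0] * (1 - dx) * dy +
            im[y0 + 1][x0 + 1] * dx * dy)
```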
  • step S123 includes the following steps:
  • S1231: Divide the preview image into an array of expansion units;
  • S1232: Process the user input to identify a touch position;
  • S1233: Determine the origin expansion unit where the touch position is located, the expansion units including the origin expansion unit;
  • S1234: Calculate the contrast value of each expansion unit expanded outward in sequence from the origin expansion unit;
  • S1235: Determine the expansion unit whose contrast value exceeds a predetermined threshold as an edge expansion unit, the expansion units including edge expansion units; and
  • S1236: Determine the area enclosed by the edge expansion units as the predetermined area.
  • the first processing module 123 includes a dividing unit 1231, a first identifying unit 1232, a pointing unit 1233, a first calculating unit 1234, a first processing unit 1235, and a second processing unit 1236.
  • the dividing unit 1231 is configured to divide the preview image into an array of expansion units
  • the first identifying unit 1232 is for identifying a user input to identify a touch position
  • the pointing unit 1233 is configured to determine the origin expansion unit where the touch position is located, the expansion units including the origin expansion unit
  • the first calculation unit 1234 is configured to calculate the contrast value of each expansion unit expanded outward in sequence from the origin expansion unit
  • the first processing unit 1235 is configured to determine the expansion unit whose contrast value exceeds a predetermined threshold as an edge expansion unit, the expansion units including edge expansion units
  • and the second processing unit 1236 is configured to determine that the area enclosed by the edge expansion units is the predetermined area.
  • step S1231 can be implemented by the dividing unit 1231
  • step S1232 can be implemented by the first identifying unit 1232
  • step S1233 can be implemented by the pointing unit 1233
  • step S1234 can be implemented by the first calculating unit 1234
  • Step S1235 can be implemented by the first processing unit 1235
  • step S1236 can be implemented by the second processing unit 1236.
  • The black dot is the touch position of the user, and the expansion proceeds outward from the origin expansion unit where the touch position is located; each block in the figure is an expansion unit.
  • The contrast value of each expansion unit is compared with the preset threshold, and an expansion unit whose contrast value is greater than the preset threshold is an edge expansion unit.
  • The contrast values of the expansion units where the edge of the face shown in FIG. 19 is located are greater than the preset threshold; that is, the expansion units where the edge of the face is located are the edge expansion units, shown as the gray boxes in the figure.
  • The area enclosed by the plurality of edge expansion units is the predetermined area, that is, the processing area designated by the user, namely the main body part that the user pays attention to.
  • The image in the predetermined area is processed by the first interpolation algorithm, which improves the resolution of the main body part and enhances the user experience.
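One way to realize the outward expansion from the origin expansion unit is a flood fill over the grid of expansion units that stops at units whose contrast value exceeds the threshold (the edge expansion units). The contrast metric itself and the 4-neighbour expansion order are assumptions; the text only requires expanding outward and thresholding the contrast:

```python
from collections import deque

def grow_region(contrast, origin, threshold):
    """contrast: 2D grid of per-unit contrast values; origin: (row, col)
    of the expansion unit containing the touch position. Units are added
    outward from the origin until edge expansion units (contrast above
    the threshold) are reached; the returned set is the enclosed area."""
    rows, cols = len(contrast), len(contrast[0])
    region, queue = {origin}, deque([origin])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                if contrast[nr][nc] <= threshold:  # stop at edge units
                    region.add((nr, nc))
                    queue.append((nr, nc))
    return region
```

With a high-contrast bottom row acting as the face edge, the region grows over the low-contrast units and never crosses the edge units.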
  • step S123 includes the following steps:
  • S1237: Process the user input to identify a touch position; and
  • S1238: Determine an area of a predetermined shape expanded outward centered on the touch position as the predetermined area.
  • the first processing module 123 includes a second identification unit 1237 and an expansion unit 1238.
  • the second identification unit 1237 is for processing a user input to identify a touched position
  • the expansion unit 1238 is configured to determine an area of a predetermined shape expanded outward centered on the touch position as the predetermined area.
  • step S1237 can be implemented by the second identifying unit 1237
  • step S1238 can be implemented by the expanding unit 1238.
  • step S1238 includes the following steps:
  • S12381 Process user input to determine a predetermined shape.
  • the expansion unit 1238 includes a third processing unit 12381 for processing user input to determine a predetermined shape.
  • step S12381 can be implemented by the third processing unit 12381.
  • the black dot in the figure is the touch position of the user. Centering on the touch position, the circular area is expanded outward to generate a predetermined area.
  • the predetermined area expanded in a circle as shown in the figure includes the entire face portion, and the first interpolation algorithm can be applied to the face portion to improve the resolution of the face portion.
  • the expanded predetermined shape may also be a rectangle, a square, or other shapes, and the user may adjust and drag the size of the expanded predetermined shape according to actual needs.
  • The manner in which the user specifies the predetermined area further includes: the user directly draws an arbitrary shape on the touch screen 30 as the predetermined area, or the user selects several points on the touch screen 30, and the area enclosed by those points serves as the predetermined area for the image processing of the first interpolation algorithm. In this way, the user can independently select the main body part of interest to improve its resolution, further improving the user experience.
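For the circular predetermined shape described above, membership of a pixel in the predetermined area reduces to a distance test against the touch position (function name illustrative):

```python
def in_circular_region(px, py, cx, cy, radius):
    # A pixel (px, py) belongs to the predetermined area if it lies within
    # the circle of the given radius expanded outward from the touch
    # position (cx, cy); squared distances avoid the square root.
    return (px - cx) ** 2 + (py - cy) ** 2 <= radius ** 2
```

During conversion, this predicate would decide per associated pixel whether the first or the second interpolation algorithm applies.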
  • An electronic device 100 according to an embodiment of the present invention includes a control device 10, a touch screen 30, and an imaging device 20.
  • electronic device 100 includes a cell phone and a tablet.
  • Both the mobile phone and the tablet computer have a camera, that is, an imaging device 20.
  • the control method of the embodiment of the present invention can be used to obtain a high-resolution picture.
  • the electronic device 100 also includes other electronic devices having a photographing function.
  • The control method of the embodiment of the present invention is one of the designated processing modes in which the electronic device 100 performs image processing. That is to say, when the user takes a photograph with the electronic device 100, the user needs to select from the various designated processing modes included in the electronic device 100.
  • the user selects the designated processing mode of the embodiment of the present invention, the user can select the predetermined region autonomously.
  • the electronic device 100 performs image processing using the control method of the embodiment of the present invention.
  • In some embodiments, the imaging device 20 includes a front camera and a rear camera. Both the front camera and the rear camera can implement image processing by using the control method of the embodiment of the present invention to enhance the user experience.
  • an electronic device 100 includes a processor 40, a memory 50, a circuit board 60, a power supply circuit 70, and a housing 80.
  • The circuit board 60 is disposed inside the space enclosed by the housing 80; the processor 40 and the memory 50 are disposed on the circuit board 60; the power supply circuit 70 is used to supply power to the various circuits or devices of the electronic device 100; the memory 50 is used for storing executable program code;
  • and the processor 40 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 50, to implement the control method of any of the above embodiments of the present invention.
  • processor 40 can be used to perform the following steps:
  • the patch image includes an image pixel unit arranged in a predetermined array, the image pixel unit includes a plurality of original pixels, and each of the photosensitive pixel units 212a corresponds to one image pixel unit, each The photosensitive pixel 2121 corresponds to one original pixel;
  • Step S13: Convert the color block image into a pseudo-original image by using a first interpolation algorithm, where the pseudo-original image includes pseudo-original pixels arranged in an array, each photosensitive pixel 2121 corresponds to one pseudo-original pixel, the pseudo-original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel. Step S13 includes the following steps:
  • the pixel value of the current pixel is calculated according to the pixel values of an associated pixel unit by using the first interpolation algorithm, where the image pixel units include the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel; and
  • a computer readable storage medium in accordance with an embodiment of the present invention has instructions stored in a computer readable storage medium.
  • the processor 40 of the electronic device 100 executes an instruction, the electronic device 100 performs the control method of any of the embodiments of the present invention described above.
  • the electronic device 100 can perform the following steps:
  • the patch image includes an image pixel unit arranged in a predetermined array, the image pixel unit includes a plurality of original pixels, and each of the photosensitive pixel units 212a corresponds to one image pixel unit, each The photosensitive pixel 2121 corresponds to one original pixel;
  • Step S13: Convert the color block image into a pseudo-original image by using a first interpolation algorithm, where the pseudo-original image includes pseudo-original pixels arranged in an array, each photosensitive pixel 2121 corresponds to one pseudo-original pixel, the pseudo-original pixels include a current pixel, and the original pixels include an associated pixel corresponding to the current pixel. Step S13 includes the following steps:
  • the pixel value of the current pixel is calculated according to the pixel values of an associated pixel unit by using the first interpolation algorithm, where the image pixel units include the associated pixel unit, and the associated pixel unit has the same color as the current pixel and is adjacent to the current pixel; and
  • a "computer-readable medium” can be any apparatus that can contain, store, communicate, propagate, or transport a program for use in an instruction execution system, apparatus, or device, or in conjunction with the instruction execution system, apparatus, or device.
  • computer readable media include the following: electrical connections (electronic devices) having one or more wires, portable computer disk cartridges (magnetic devices), random access memory (RAM), Read only memory (ROM), erasable editable read only memory (EPROM or flash memory), fiber optic devices, and portable compact disk read only memory (CDROM).
  • The computer readable medium may even be paper or another suitable medium on which the program can be printed, because the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting or, if necessary, processing in another suitable manner, and then stored in a computer memory.
  • portions of the invention may be implemented in hardware, software, firmware or a combination thereof.
  • Multiple steps or methods may be implemented by software or firmware stored in a memory and executed by a suitable instruction execution system.
  • For example, if implemented in hardware, as in another embodiment, any one of the following techniques known in the art, or a combination thereof, may be used: a discrete logic circuit having logic gates for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gates, a programmable gate array (PGA), a field programmable gate array (FPGA), and the like.
  • each functional unit in each embodiment of the present invention may be integrated into one processing module, or each unit may exist physically separately, or two or more units may be integrated into one module.
  • The above integrated modules can be implemented in the form of hardware or in the form of software functional modules.
  • The integrated modules, if implemented in the form of software functional modules and sold or used as independent products, may also be stored in a computer readable storage medium.
  • the above mentioned storage medium may be a read only memory, a magnetic disk or an optical disk or the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Computing Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Color Television Image Signal Generators (AREA)
  • Studio Devices (AREA)

Abstract

The present application discloses a control method, a control device, an electronic device and a computer readable storage medium. The control method includes: controlling an image sensor to output a color block image; determining a predetermined area on the color block image according to a user input; converting the color block image into a pseudo-original image by using a first interpolation algorithm: determining whether an associated pixel is located within the predetermined area; when the associated pixel is located within the predetermined area, determining whether the color of the current pixel is the same as that of the associated pixel; when the color of the current pixel is the same as that of the associated pixel, taking the pixel value of the associated pixel as the pixel value of the current pixel; when the color of the current pixel is different from that of the associated pixel, calculating the pixel value of the current pixel by the first interpolation algorithm according to the pixel values of an associated pixel unit; and when the associated pixel is located outside the predetermined area, calculating the pixel value of the current pixel by a second interpolation algorithm. The control method of the present application processes the images within and outside the predetermined area with the first and second interpolation algorithms respectively, reducing processing time.

Description

Control method, control device, electronic device and computer readable storage medium
Priority Information
This application claims priority to and the benefit of Chinese Patent Application No. 201611079543.2, filed with the State Intellectual Property Office of China on November 29, 2016, the entire contents of which are incorporated herein by reference.
Technical Field
The present invention relates to image processing technology, and in particular to a control method, a control device, an electronic device and a computer readable storage medium.
Background
An existing image sensor includes a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array. Each filter unit array covers one corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels. In operation, the image sensor can be controlled to be exposed and output a merged image; the merged image includes a merged pixel array, and the plurality of photosensitive pixels of the same pixel unit are merged and output as one merged pixel. In this way, the signal-to-noise ratio of the merged image can be improved; however, the resolution of the merged image is reduced. The image sensor can also be controlled to be exposed and output a high-pixel color block image; the color block image includes an original pixel array, and each photosensitive pixel corresponds to one original pixel. However, since the plurality of original pixels corresponding to the same filter unit have the same color, the resolution of the color block image likewise cannot be improved. Therefore, the high-pixel color block image needs to be converted into a high-pixel pseudo-original image by interpolation calculation, and the pseudo-original image may include pseudo-original pixels arranged in a Bayer array. The pseudo-original image can be converted into a true-color image by an image processing method and saved. Interpolation calculation can improve the definition of the true-color image, but it is resource-consuming and time-consuming, lengthening the shooting time and giving a poor user experience. On the other hand, in practical applications, the user often only pays attention to the definition of the main body part of the true-color image.
Summary of the Invention
Embodiments of the present invention provide a control method, a control device, an electronic device and a computer readable storage medium.
The control method of the embodiments of the present invention is used to control an electronic device. The electronic device includes an imaging device, and the imaging device includes an image sensor. The image sensor includes a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array; each filter unit array covers one corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels. The control method includes the following steps:
controlling the image sensor to output a color block image, the color block image including image pixel units arranged in a predetermined array, the image pixel units including a plurality of original pixels, each photosensitive pixel unit corresponding to one image pixel unit, and each photosensitive pixel corresponding to one original pixel;
determining a predetermined area on the color block image according to a user input;
converting the color block image into a pseudo-original image by using a first interpolation algorithm, the pseudo-original image including pseudo-original pixels arranged in an array, each photosensitive pixel corresponding to one pseudo-original pixel, the pseudo-original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel, wherein the converting the color block image into a pseudo-original image by using a first interpolation algorithm includes the following steps:
determining whether the associated pixel is located within the predetermined area;
when the associated pixel is located within the predetermined area, determining whether the color of the current pixel is the same as the color of the associated pixel;
when the color of the current pixel is the same as the color of the associated pixel, taking the pixel value of the associated pixel as the pixel value of the current pixel;
when the color of the current pixel is different from the color of the associated pixel, calculating the pixel value of the current pixel by the first interpolation algorithm according to the pixel values of an associated pixel unit, the image pixel units including the associated pixel unit, and the associated pixel unit having the same color as the current pixel and being adjacent to the current pixel; and
when the associated pixel is located outside the predetermined area, calculating the pixel value of the current pixel by a second interpolation algorithm, the complexity of the second interpolation algorithm being smaller than that of the first interpolation algorithm.
The control device of the embodiments of the present invention is used to control an electronic device. The electronic device includes an imaging device. The imaging device includes an image sensor; the image sensor includes a photosensitive pixel unit array and a filter unit array disposed on the photosensitive pixel unit array, each filter unit array covers one corresponding photosensitive pixel unit, and each photosensitive pixel unit includes a plurality of photosensitive pixels. The control device includes an output module, a selecting module and a first conversion module. The output module is configured to control the image sensor to output a color block image, the color block image including image pixel units arranged in a predetermined array, the image pixel units including a plurality of original pixels, each photosensitive pixel unit corresponding to one image pixel unit, and each photosensitive pixel corresponding to one original pixel. The selecting module is configured to determine a predetermined area on the color block image according to a user input. The first conversion module is configured to convert the color block image into a pseudo-original image by using a first interpolation algorithm, the pseudo-original image including pseudo-original pixels arranged in an array, each photosensitive pixel corresponding to one pseudo-original pixel, the pseudo-original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel. The first conversion module includes a first judging module, a second judging module, a first calculation module, a second calculation module and a third calculation module. The first judging module is configured to determine whether the associated pixel is located within the predetermined area. The second judging module is configured to determine, when the associated pixel is located within the predetermined area, whether the color of the current pixel is the same as the color of the associated pixel. The first calculation module is configured to take the pixel value of the associated pixel as the pixel value of the current pixel when the color of the current pixel is the same as the color of the associated pixel. The second calculation module is configured to calculate, when the color of the current pixel is different from the color of the associated pixel, the pixel value of the current pixel by the first interpolation algorithm according to the pixel values of an associated pixel unit, the image pixel units including the associated pixel unit, and the associated pixel unit having the same color as the current pixel and being adjacent to the current pixel. The third calculation module is configured to calculate, when the associated pixel is located outside the predetermined area, the pixel value of the current pixel by a second interpolation algorithm, the complexity of the second interpolation algorithm being smaller than that of the first interpolation algorithm.
The electronic device of the embodiments of the present invention includes an imaging device, a touch screen and the above control device.
The electronic device of the embodiments of the present invention includes a housing, a processor, a memory, a circuit board and a power supply circuit. The circuit board is disposed inside a space enclosed by the housing; the processor and the memory are disposed on the circuit board; the power supply circuit is configured to supply power to the various circuits or devices of the electronic device; the memory is configured to store executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the above control method.
The computer readable storage medium of the embodiments of the present invention has instructions stored therein. When the processor of an electronic device executes the instructions, the electronic device performs the above control method.
The control method, control device, electronic device and computer readable storage medium of the embodiments of the present invention adopt, for the area designated by the user, the first interpolation algorithm capable of improving image resolution, and adopt, for the image outside the designated area, the second interpolation algorithm whose complexity is smaller than that of the first interpolation algorithm. On the one hand, the signal-to-noise ratio and resolution of the main body part of the image are improved, enhancing the user experience; on the other hand, the time of image processing is reduced.
Additional aspects and advantages of the embodiments of the present invention will be given in part in the following description, become apparent in part from the following description, or be learned from the practice of the embodiments of the present invention.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and easy to understand from the description of the embodiments in conjunction with the following drawings, in which:
FIG. 1 is a schematic flowchart of a control method according to an embodiment of the present invention;
FIG. 2 is another schematic flowchart of a control method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of functional modules of a control device according to an embodiment of the present invention;
FIG. 4 is a schematic block diagram of an image sensor according to an embodiment of the present invention;
FIG. 5 is a schematic circuit diagram of an image sensor according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a filter unit according to an embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an image sensor according to an embodiment of the present invention;
FIG. 8 is a schematic state diagram of a merged image according to an embodiment of the present invention;
FIG. 9 is a schematic state diagram of a color block image according to an embodiment of the present invention;
FIG. 10 is a schematic state diagram of a control method according to an embodiment of the present invention;
FIG. 11 is a schematic flowchart of a control method according to some embodiments of the present invention;
FIG. 12 is a schematic diagram of functional modules of a second calculation module according to some embodiments of the present invention;
FIG. 13 is a schematic flowchart of a control method according to some embodiments of the present invention;
FIG. 14 is a schematic diagram of functional modules of a control device according to some embodiments of the present invention;
FIG. 15 is a schematic state diagram of a control method according to some embodiments of the present invention;
FIG. 16 is a schematic flowchart of a control method according to some embodiments of the present invention;
FIG. 17 is a schematic diagram of functional modules of a selecting module according to some embodiments of the present invention;
FIG. 18 is a schematic flowchart of a control method according to some embodiments of the present invention;
FIG. 19 is a schematic diagram of functional modules of a first processing module according to some embodiments of the present invention;
FIG. 20 is a schematic state diagram of a control method according to some embodiments of the present invention;
FIG. 21 is a schematic flowchart of a control method according to some embodiments of the present invention;
FIG. 22 is a schematic diagram of functional modules of a first processing module according to some embodiments of the present invention;
FIG. 23 is a schematic flowchart of a control method according to some embodiments of the present invention;
FIG. 24 is a schematic diagram of functional modules of an expansion unit according to some embodiments of the present invention;
FIG. 25 is a schematic state diagram of a control method according to some embodiments of the present invention;
FIG. 26 is a schematic diagram of functional modules of an electronic device according to an embodiment of the present invention;
FIG. 27 is a schematic diagram of functional modules of an electronic device according to some embodiments of the present invention.
Detailed Description of the Embodiments
The embodiments of the present invention are described in detail below. Examples of the embodiments are shown in the drawings, in which the same or similar reference numerals throughout denote the same or similar elements or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary, are only used to explain the present invention, and cannot be construed as limiting the present invention.
Referring to FIGS. 1 to 5 together, the control method of the embodiments of the present invention is used to control an electronic device 100. The electronic device 100 includes an imaging device 20, the imaging device 20 includes an image sensor 21, the image sensor 21 includes a photosensitive pixel unit array 212 and a filter unit array 211 disposed on the photosensitive pixel unit array 212, each filter unit array 211 covers one corresponding photosensitive pixel unit 212a, and each photosensitive pixel unit 212a includes a plurality of photosensitive pixels 2121. The control method includes the following steps:
S11: Control the image sensor 21 to output a color block image;
S12: Determine a predetermined area on the color block image according to a user input, the color block image including image pixel units arranged in a predetermined array, the image pixel units including a plurality of original pixels, each photosensitive pixel unit 212a corresponding to one image pixel unit, and each photosensitive pixel 2121 corresponding to one original pixel;
S13: Convert the color block image into a pseudo-original image by using a first interpolation algorithm, the pseudo-original image including pseudo-original pixels arranged in an array, each photosensitive pixel 2121 corresponding to one pseudo-original pixel, the pseudo-original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel. Step S13 includes the following steps:
S131: Determine whether the associated pixel is located within the predetermined area;
S132: When the associated pixel is located within the predetermined area, determine whether the color of the current pixel is the same as the color of the associated pixel;
S133: When the color of the current pixel is the same as the color of the associated pixel, take the pixel value of the associated pixel as the pixel value of the current pixel;
S135: When the color of the current pixel is different from the color of the associated pixel, calculate the pixel value of the current pixel by the first interpolation algorithm according to the pixel values of an associated pixel unit, the image pixel units including the associated pixel unit, and the associated pixel unit having the same color as the current pixel and being adjacent to the current pixel; and
S136: When the associated pixel is located outside the predetermined area, calculate the pixel value of the current pixel by a second interpolation algorithm, the complexity of the second interpolation algorithm being smaller than that of the first interpolation algorithm.
Referring again to FIG. 3, the control method of the embodiments of the present invention can be implemented by a control device 10.
The control device 10 is used to control the electronic device 100. The electronic device 100 includes the imaging device 20, the imaging device 20 includes the image sensor 21, the image sensor 21 includes the photosensitive pixel unit array 212 and the filter unit array 211 disposed on the photosensitive pixel unit array 212, each filter unit array 211 covers one corresponding photosensitive pixel unit 212a, and each photosensitive pixel unit 212a includes a plurality of photosensitive pixels 2121. The control device 10 includes an output module 11, a selecting module 12, a first conversion module 13, a first judging module 131, a second judging module 132, a first calculation module 133, a second calculation module 135 and a third calculation module 137. The output module 11 is configured to control the image sensor 21 to output the color block image, the color block image including image pixel units arranged in a predetermined array, the image pixel units including a plurality of original pixels, each photosensitive pixel unit 212a corresponding to one image pixel unit, and each photosensitive pixel 2121 corresponding to one original pixel. The selecting module 12 is configured to determine the predetermined area on the color block image according to a user input. The first conversion module 13 is configured to convert the color block image into a pseudo-original image by using the first interpolation algorithm, the pseudo-original image including pseudo-original pixels arranged in an array, each photosensitive pixel 2121 corresponding to one pseudo-original pixel, the pseudo-original pixels including a current pixel, and the original pixels including an associated pixel corresponding to the current pixel. The first conversion module 13 includes the first judging module 131, the second judging module 132, the first calculation module 133, the second calculation module 135 and the third calculation module 137. The first judging module 131 is configured to determine whether the associated pixel is located within the predetermined area; the second judging module 132 is configured to determine, when the associated pixel is located within the predetermined area, whether the color of the current pixel is the same as the color of the associated pixel; the first calculation module 133 is configured to take the pixel value of the associated pixel as the pixel value of the current pixel when the color of the current pixel is the same as the color of the associated pixel; the second calculation module 135 is configured to calculate, when the color of the current pixel is different from the color of the associated pixel, the pixel value of the current pixel by the first interpolation algorithm according to the pixel values of the associated pixel unit, the image pixel units including the associated pixel unit, and the associated pixel unit having the same color as the current pixel and being adjacent to the current pixel; and the third calculation module 137 is configured to calculate, when the associated pixel is located outside the predetermined area, the pixel value of the current pixel by the second interpolation algorithm, the complexity of the second interpolation algorithm being smaller than that of the first interpolation algorithm.
That is to say, step S11 can be implemented by the output module 11, step S12 by the selecting module 12, step S13 by the first conversion module 13, step S131 by the first judging module 131, step S132 by the second judging module 132, step S133 by the first calculation module 133, step S135 by the second calculation module 135, and step S136 by the third calculation module 137.
It can be understood that the control method of the embodiments of the present invention processes the image within the predetermined area and the image outside the predetermined area with the first interpolation algorithm and the second interpolation algorithm respectively, wherein the predetermined area can be input and selected by the user. The complexity of the first interpolation algorithm includes time complexity and space complexity, both of which are larger than those of the second interpolation algorithm. In this way, in actual shooting, the more complex first interpolation algorithm is applied only to the image within the area designated by the user, which effectively reduces the data volume and time required for image processing, while improving the resolution of the main body part that the user pays attention to, i.e., the image within the predetermined area, thereby enhancing the user experience.
Referring to FIGS. 4 to 7 together, the image sensor 21 of the embodiments of the present invention includes a photosensitive pixel unit array 212 and a filter unit array 211 disposed on the photosensitive pixel unit array 212.
Further, the photosensitive pixel unit array 212 includes a plurality of photosensitive pixel units 212a, and each photosensitive pixel unit 212a includes a plurality of adjacent photosensitive pixels 2121. Each photosensitive pixel 2121 includes a photosensitive device 21211 and a transfer transistor 21212, wherein the photosensitive device 21211 may be a photodiode and the transfer transistor 21212 may be a MOS transistor.
The filter unit array 211 includes a plurality of filter units 211a, and each filter unit 211a covers one corresponding photosensitive pixel unit 212a.
Specifically, in some examples, the filter unit array 211 includes a Bayer array; that is to say, four adjacent filter units 211a are respectively one red filter unit, one blue filter unit and two green filter units.
Each photosensitive pixel unit 212a corresponds to a filter unit 211a of a single color. If a photosensitive pixel unit 212a includes n adjacent photosensitive devices 21211 in total, then one filter unit 211a covers the n photosensitive devices 21211 in that photosensitive pixel unit 212a. The filter unit 211a may be of an integral construction, or may be assembled from n independent sub-filters connected together.
In some embodiments, each photosensitive pixel unit 212a includes four adjacent photosensitive pixels 2121, and every two adjacent photosensitive pixels 2121 together form one photosensitive pixel subunit 2120. The photosensitive pixel subunit 2120 further includes a source follower 21213 and an analog-to-digital converter 21214, and the photosensitive pixel unit 212a further includes an adder 2122. One electrode of each transfer transistor 21212 in one photosensitive pixel subunit 2120 is connected to the cathode electrode of the corresponding photosensitive device 21211, the other end of each transfer transistor 21212 is commonly connected to the gate electrode of the source follower 21213, and the source electrode of the source follower 21213 is connected to an analog-to-digital converter 21214. The source follower 21213 may be a MOS transistor. The two photosensitive pixel subunits 2120 are connected to the adder 2122 through their respective source followers 21213 and analog-to-digital converters 21214.
That is to say, four adjacent photosensitive devices 21211 in one photosensitive pixel unit 212a of the image sensor 21 of the embodiments of the present invention share one filter unit 211a of the same color; each photosensitive device 21211 is connected to a transfer transistor 21212, two adjacent photosensitive devices 21211 share one source follower 21213 and one analog-to-digital converter 21214, and the four adjacent photosensitive devices 21211 share one adder 2122.
Further, the four adjacent photosensitive devices 21211 are arranged in a 2*2 array, wherein the two photosensitive devices 21211 in one photosensitive pixel subunit 2120 may be located in the same column.
During imaging, when the two photosensitive pixel subunits 2120, or in other words the four photosensitive devices 21211, covered by the same filter unit 211a are exposed simultaneously, the pixels can be merged and a merged image can be output.
Specifically, the photosensitive device 21211 is used to convert light into electric charge, the generated charge being proportional to the light intensity, and the transfer transistor 21212 is used to control the conduction or cutoff of the circuit according to a control signal. When the circuit is conducting, the source follower 21213 is used to convert the charge signal generated by the photosensitive device 21211 under illumination into a voltage signal, the analog-to-digital converter 21214 is used to convert the voltage signal into a digital signal, and the adder 2122 is used to add the two digital signals together for output, for processing by the image processing module connected to the image sensor 21.
Referring to FIG. 8, taking a 16M image sensor 21 as an example, the image sensor 21 of the embodiments of the present invention can merge the 16M photosensitive pixels into 4M, in other words output a merged image. After merging, the size of a photosensitive pixel becomes equivalent to 4 times the original size, which improves the sensitivity of the photosensitive pixel. In addition, since the noise in the image sensor 21 is mostly random noise, one or two of the photosensitive pixels before merging may contain noise; after the four photosensitive pixels are merged into one large photosensitive pixel, the influence of the noise on the large pixel is reduced, that is, the noise is attenuated and the signal-to-noise ratio is improved.
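The 4-into-1 merging described above can be sketched as a 2*2 sum over same-color original values (summation mirrors the adder 2122; whether the sensor outputs the raw sum or a normalized average is an implementation detail the text leaves open):

```python
def merge_image(raw):
    # raw: 2D grid of same-color original pixel values with even
    # dimensions. Each 2*2 block is summed into one merged pixel,
    # quartering the pixel count while improving the signal-to-noise
    # ratio of each merged pixel.
    h, w = len(raw), len(raw[0])
    return [[raw[2*i][2*j] + raw[2*i][2*j + 1] +
             raw[2*i + 1][2*j] + raw[2*i + 1][2*j + 1]
             for j in range(w // 2)]
            for i in range(h // 2)]
```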
但在感光像素大小变大的同时,由于可输出的像素数量降低,合并图像的解析度也将降低。
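上述合并输出的过程可以用如下Python代码草图示意。需要说明的是,这仅是一个示意性草图:此处假设合并方式为对同一感光像素单元内四个像素值求和(实际在芯片内由加法器2122以模拟/数字方式完成),函数名与示例数据均为假设。

```python
def bin_2x2(raw):
    """将 2*2 相邻同色感光像素的像素值求和,合并为一个大像素(合并方式为假设)。

    raw: 形状为 (2m)*(2n) 的二维像素值列表,返回 m*n 的合并图像。
    """
    h, w = len(raw), len(raw[0])
    assert h % 2 == 0 and w % 2 == 0
    merged = []
    for r in range(0, h, 2):
        row = []
        for c in range(0, w, 2):
            # 同一感光像素单元内四个像素值相加,等效感光面积变为原来的 4 倍
            row.append(raw[r][c] + raw[r][c + 1] + raw[r + 1][c] + raw[r + 1][c + 1])
        merged.append(row)
    return merged

# 示例:4*4 的像素阵列合并为 2*2 的合并图像,像素数量降为原来的 1/4
raw = [[0, 1, 2, 3],
       [4, 5, 6, 7],
       [8, 9, 10, 11],
       [12, 13, 14, 15]]
merged = bin_2x2(raw)
```

由该草图也可以直观看出合并图像解析度降低的原因:输出像素数量只有原来的四分之一。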
在成像时,当同一滤光片单元211a覆盖的四个感光器件21211依次曝光时,经过图像处理可以输出色块图像。
具体地,感光器件21211用于将光照转换为电荷,且产生的电荷与光照的强度成比例关系,传输管21212用于根据控制信号来控制电路的导通或断开。当电路导通时,源极跟随器21213用于将感光器件21211经光照产生的电荷信号转换为电压信号。模数转换器21214用于将电压信号转换为数字信号以供与图像传感器21相连的图像处理模块处理。
请参阅图9,以16M的图像传感器21举例来说,本发明实施方式的图像传感器21还可以保持16M的感光像素输出,或者说输出色块图像,色块图像包括图像像素单元,图像像素单元包括2*2阵列排布的原始像素,该原始像素的大小与感光像素大小相同,然而由于覆盖相邻四个感光器件21211的滤光片单元211a为同一颜色,也即是说,虽然四个感光器件21211分别曝光,但其覆盖的滤光片单元211a颜色相同,因此,输出的每个图像像素单元的相邻四个原始像素颜色相同,仍然无法提高图像的解析度。
本发明实施方式的控制方法,用于对输出的色块图像进行处理,以得到仿原图像。
可以理解,合并图像在输出时,四个相邻的同色的感光像素以合并像素输出,如此,合并图像中的四个相邻的合并像素仍可看作是典型的拜耳阵列,可以直接被图像处理模块接收进行处理以输出真彩图像。而色块图像在输出时每个感光像素分别输出,由于相邻四个感光像素颜色相同,因此,一个图像像素单元的四个相邻原始像素的颜色相同,是非典型的拜耳阵列。而图像处理模块无法对非典型拜耳阵列直接进行处理,也即是说,在图像传感器21采用统一图像处理模式时,为兼容两种模式的真彩图像输出即合并模式下的真彩图像输出及色块模式下的真彩图像输出,需将色块图像转换为仿原图像,或者说将非典型拜耳阵列的图像像素单元转换为典型的拜耳阵列的像素排布。
仿原图像包括呈拜耳阵列排布的仿原像素。仿原像素包括当前像素,原始像素包括与当前像素对应的关联像素。
对于一帧色块图像的预定区域内的图像,先将该色块图像的预定区域转换成拜耳图像阵列,再利用第一插值算法进行图像处理。具体地,请参阅图10,以图10为例,当前像素为R3’3’和R5’5’,对应的关联像素分别为R33和B55。
在获取当前像素R3’3’时,由于R3’3’与对应的关联像素R33的颜色相同,因此在转换时直接将R33的像素值作为R3’3’的像素值。
在获取当前像素R5’5’时,由于R5’5’与对应的关联像素B55的颜色不相同,显然不能直接将B55的像素值作为R5’5’的像素值,需要根据R5’5’的关联像素单元通过插值的方式计算得到。
需要说明的是,以上及下文中的像素值应当广义理解为该像素的颜色属性数值,例如色彩值。
关联像素单元包括多个(例如4个)颜色与当前像素相同且与当前像素相邻的图像像素单元中的原始像素。
需要说明的是,此处相邻应做广义理解,以图10为例,R5’5’对应的关联像素为B55,与B55所在的图像像素单元相邻的且与R5’5’颜色相同的关联像素单元所在的图像像素单元分别为R44、R74、R47、R77所在的图像像素单元,而并非在空间上距离B55所在的图像像素单元更远的其他的红色图像像素单元。其中,与B55在空间上距离最近的红色原始像素分别为R44、R74、R47和R77,也即是说,R5’5’的关联像素单元由R44、R74、R47和R77组成,R5’5’与R44、R74、R47和R77的颜色相同且相邻。
如此,针对不同情况的当前像素,采用不同方式将原始像素转换为仿原像素,从而将色块图像转换为仿原图像,由于拍摄图像时,采用了特殊的拜耳阵列结构的滤光片,提高了图像信噪比,并且在图像处理过程中,通过第一插值算法对色块图像进行插值处理,提高了图像的分辨率及解析度。
请参阅图11,在某些实施方式中,步骤S135包括以下步骤:
S1351:计算关联像素各个方向上的渐变量;
S1352:计算关联像素各个方向上的权重;和
S1353:根据渐变量及权重计算当前像素的像素值。
请参阅图12,在某些实施方式中,第二计算模块135包括第二计算单元1351、第三计算单元1352、第四计算单元1353。第二计算单元1351用于计算关联像素各个方向上的渐变量,第三计算单元1352用于计算关联像素各个方向上的权重,第四计算单元1353用于根据渐变量及权重计算当前像素的像素值。
也即是说,步骤S1351可以由第二计算单元1351实现,步骤S1352可以由第三计算单元1352实现,步骤S1353可以由第四计算单元1353实现。
具体地,第一插值算法是参考图像在不同方向上的能量渐变,将与当前像素对应的颜色相同且相邻的关联像素单元依据在不同方向上的渐变权重大小,通过线性插值的方式计算得到当前像素的像素值。其中,在能量变化量较小的方向上,参考比重较大,因此,在插值计算时的权重较大。
在某些示例中,为方便计算,仅考虑水平和垂直方向。
R5’5’由R44、R74、R47和R77插值得到,而在水平和垂直方向上并不存在颜色相同的原始像素,因此需根据关联像素单元计算在水平和垂直方向上该颜色的分量。其中,水平方向上的分量为R45和R75,垂直方向上的分量为R54和R57,它们可以分别通过R44、R74、R47和R77计算得到。
具体地,R45=2/3*R44+1/3*R47,R75=2/3*R74+1/3*R77,R54=2/3*R44+1/3*R74,R57=2/3*R47+1/3*R77。
然后,分别计算在水平和垂直方向的渐变量及权重,也即是说,根据该颜色在不同方向的渐变量,以确定在插值时不同方向的参考权重,在渐变量小的方向,权重较大,而在渐变量较大的方向,权重较小。其中,在水平方向的渐变量X1=|R45-R75|,在垂直方向上的渐变量X2=|R54-R57|,W1=X1/(X1+X2),W2=X2/(X1+X2)。
如此,根据上述可计算得到,R5’5’=(2/3*R45+1/3*R75)*W2+(2/3*R54+1/3*R57)*W1。可以理解,若X1大于X2,则W1大于W2,因此计算时水平方向的权重为W2,而垂直方向的权重为W1,反之亦反。
如此,可根据第一插值算法计算得到当前像素的像素值。依据上述对关联像素的处理方式,可将原始像素转换为呈典型拜耳阵列排布的仿原像素,也即是说,相邻的四个2*2阵列的仿原像素包括一个红色仿原像素,两个绿色仿原像素和一个蓝色仿原像素。
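上述R5’5’的插值计算可以整理为如下Python代码草图。该草图仅为示意:函数名与示例像素值均为假设,且原文未规定两个方向渐变量同时为零时的处理方式,此处假设取两方向等权。

```python
def interp_first(r44, r47, r74, r77):
    """按上文第一插值算法,由四个相邻同色原始像素计算当前像素 R5'5' 的像素值。"""
    # 先按 2/3、1/3 的距离权重计算水平方向分量 R45、R75 与垂直方向分量 R54、R57
    r45 = 2 / 3 * r44 + 1 / 3 * r47
    r75 = 2 / 3 * r74 + 1 / 3 * r77
    r54 = 2 / 3 * r44 + 1 / 3 * r74
    r57 = 2 / 3 * r47 + 1 / 3 * r77
    # 再计算两个方向上的渐变量 X1、X2
    x1 = abs(r45 - r75)
    x2 = abs(r54 - r57)
    if x1 + x2 == 0:
        w1 = w2 = 0.5  # 渐变量均为零时两方向等权(原文未规定此边界情况,属假设)
    else:
        w1 = x1 / (x1 + x2)
        w2 = x2 / (x1 + x2)
    # 渐变量小的方向权重大:水平分量乘 W2,垂直分量乘 W1
    return (2 / 3 * r45 + 1 / 3 * r75) * w2 + (2 / 3 * r54 + 1 / 3 * r57) * w1
```

例如,当四个关联像素的像素值全部相等时,插值结果与其相等;当某一方向渐变量为零时,结果完全取自该方向的分量。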
需要说明的是,第一插值算法包括但不限于本实施例中公开的在计算时仅考虑垂直和水平两个方向相同颜色的像素值的方式,例如还可以参考其他颜色的像素值。
请参阅图13,在某些实施方式中,在步骤S135前包括步骤:
S134a:对色块图像做白平衡补偿;
步骤S135后包括步骤:
S136a:对仿原图像做白平衡补偿还原。
请参阅图14,在某些实施方式中,第一转换模块13包括白平衡补偿模块134a和白平衡补偿还原模块136a。白平衡补偿模块134a用于对色块图像做白平衡补偿,白平衡补偿还原模块136a用于对仿原图像做白平衡补偿还原。
也即是说,步骤S134a可以由白平衡补偿模块134a实现,步骤S136a可以由白平衡补偿还原模块136a实现。
具体地,在一些示例中,在将色块图像转换为仿原图像的过程中,在插值时,红色和蓝色仿原像素往往不仅参考与其颜色相同的通道的原始像素的颜色,还会参考绿色通道的原始像素的颜色权重,因此,在插值前需要进行白平衡补偿,以在插值计算中排除白平衡的影响。为了不破坏色块图像的白平衡,因此,在插值之后需要将仿原图像进行白平衡补偿还原,还原时根据在补偿中红色、绿色及蓝色的增益值进行还原。
如此,可排除在插值过程中白平衡的影响,并且能够使得插值后得到的仿原图像保持色块图像的白平衡。
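白平衡补偿及其还原可以理解为按颜色通道增益做乘、除运算,如下Python草图所示。其中函数名与增益数值均为示意性假设,实际增益值由白平衡算法给出。

```python
def wb_compensate(pixel, gain):
    """插值前按该像素所属颜色通道的增益做白平衡补偿(乘以增益)。"""
    return pixel * gain

def wb_restore(pixel, gain):
    """插值后按补偿时所用的同一增益做白平衡补偿还原(除以增益),保持原有白平衡。"""
    return pixel / gain

# 示例:假设红色通道增益为 1.8(增益值仅为示意)
r_gain = 1.8
r_comp = wb_compensate(120.0, r_gain)  # 补偿后的像素值参与插值计算
r_back = wb_restore(r_comp, r_gain)    # 插值完成后还原
```

由于还原时使用与补偿时相同的红、绿、蓝增益值,插值后得到的仿原图像可保持色块图像的白平衡。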
请再参阅图13,在某些实施方式中,步骤S135前包括步骤:
S134b:对色块图像做坏点补偿。
请再参阅图14,在某些实施方式中,第一转换模块13包括坏点补偿模块134b。
也即是说,步骤S134b可以由坏点补偿模块134b实现。
可以理解,受限于制造工艺,图像传感器21可能会存在坏点,坏点通常不随感光度变化而始终呈现同一颜色,坏点的存在将影响图像质量,因此,为保证插值的准确,不受坏点的影响,需要在插值前进行坏点补偿。
具体地,坏点补偿过程中,可以对原始像素进行检测,当检测到某一原始像素为坏点时,可根据其所在的图像像素单元的其他原始像素的像素值进行坏点补偿。
如此,可排除坏点对插值处理的影响,提高图像质量。
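坏点补偿的一种最简化实现可示意如下:用同一图像像素单元内其余原始像素的均值替换坏点。该草图中的函数名与数据均为示例假设,坏点的检测逻辑不在本文限定范围内,此处假定坏点下标已知。

```python
def compensate_dead_pixel(unit, dead_index):
    """用图像像素单元内其余原始像素的均值替换已检测出的坏点。

    unit: 同一图像像素单元中 4 个同色原始像素的像素值列表。
    dead_index: 已检测出的坏点下标。
    """
    others = [v for i, v in enumerate(unit) if i != dead_index]
    fixed = list(unit)
    fixed[dead_index] = sum(others) / len(others)  # 以其余像素均值作为补偿值
    return fixed

# 示例:第 3 个像素不随感光度变化而始终偏亮,视作坏点
unit = [100.0, 102.0, 255.0, 98.0]
fixed = compensate_dead_pixel(unit, 2)
```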
请再参阅图13,在某些实施方式中,步骤S135前包括步骤:
S134c:对色块图像做串扰补偿。
请再参阅图14,在某些实施方式中,第一转换模块13包括串扰补偿模块134c。
也即是说,步骤S134c可以由串扰补偿模块134c实现。
具体地,一个感光像素单元中的四个感光像素覆盖同一颜色的滤光片,而感光像素之间可能存在感光度的差异,以至于以仿原图像转换输出的真彩图像中的纯色区域会出现固定型谱噪声,影响图像的质量。因此,需要对色块图像进行串扰补偿。
请再参阅图13,在某些实施方式中,步骤S135后包括步骤:
S136b:对仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
请再参阅图14,在某些实施方式中,第一转换模块13还包括第二处理模块136b。
也即是说,步骤S136b可以由第二处理模块136b实现。
可以理解,将色块图像转换为仿原图像后,仿原像素排布为典型的拜耳阵列,可采用第二处理模块136b进行处理,处理过程中包括镜片阴影校正、去马赛克、降噪和边缘锐化处理,如此,处理后即可得到真彩图像输出给用户。
对于一帧色块图像的预定区域外的图像,需利用第二插值算法进行图像处理。第二插值算法的插值过程是:对预定区域外的每一个图像像素单元中所有的原始像素的像素值取均值,随后判断当前像素与关联像素的颜色是否相同,在当前像素与关联像素颜色相同时,取关联像素的像素值作为当前像素的像素值,在当前像素与关联像素颜色不同时,取最邻近的与当前像素颜色相同的图像像素单元中的原始像素的像素值作为当前像素的像素值。
具体地,请参阅图15,以图15为例,先计算各个图像像素单元中原始像素的像素均值:Ravg=(R1+R2+R3+R4)/4,Gravg=(Gr1+Gr2+Gr3+Gr4)/4,Gbavg=(Gb1+Gb2+Gb3+Gb4)/4,Bavg=(B1+B2+B3+B4)/4。此时,R11、R12、R21、R22的像素值均为Ravg,Gr31、Gr32、Gr41、Gr42的像素值均为Gravg,Gb13、Gb14、Gb23、Gb24的像素值均为Gbavg,B33、B34、B43、B44的像素值均为Bavg。以当前像素B22为例,当前像素B22对应的关联像素为R22,由于当前像素B22的颜色与关联像素R22的颜色不同,因此当前像素B22的像素值应取最邻近的蓝色滤光片对应的像素值,即取B33、B34、B43、B44中任一Bavg的值。同样地,其他颜色也采用第二插值算法进行计算,以得到各个像素的像素值。
如此,对于色块图像的预定区域外的图像,采用第二插值算法将原始像素转换为仿原像素,从而将色块图像转换为仿原图像。由于第二插值算法的时间复杂度和空间复杂度相较于第一插值算法较小,因此,采用第二插值算法处理预定区域外的色块图像,减少了图像处理所需的时间,提升了用户体验。
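第二插值算法中对图像像素单元取均值这一步可以示意如下。草图仅为示意,函数名与示例数据为假设;不同色情形下取最邻近同色单元的均值,本质上是直接读取已算好的该单元统一值。

```python
def second_interp_unit(unit):
    """对一个图像像素单元内的 4 个原始像素取均值,并以该均值作为单元内所有像素的统一值。"""
    avg = sum(unit) / len(unit)
    return [avg] * len(unit)

# 示例:以一个红色图像像素单元 R1~R4 为例
r_unit = [96.0, 100.0, 104.0, 100.0]
r_filled = second_interp_unit(r_unit)
```

可以看出,第二插值算法每个单元只需一次求均值运算,计算量远小于第一插值算法按方向加权的线性插值,这正是其复杂度较小的原因。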
本发明实施方式的控制方法对于一帧图像的预定区域内和预定区域外的图像采用不同的插值算法进行处理。预定区域可由用户自主选取,预定区域有多种选取方式。
请参阅图16,在某些实施方式中,步骤S12包括以下步骤:
S121:利用第三插值算法将色块图像转换成预览图像,第三插值算法包括第二插值算法;
S122:控制触摸屏30显示预览图像;和
S123:处理触摸屏30上的用户输入确定预定区域。
请参阅图17,在某些实施方式中,选定模块12包括第二转换模块121、显示模块122及第一处理模块123。第二转换模块121用于利用第三插值算法将色块图像转换成预览图像,第三插值算法包括第二插值算法,显示模块122用于控制触摸屏30显示预览图像,第一处理模块123用于处理触摸屏30上的用户输入确定预定区域。
也即是说,步骤S121可以由第二转换模块121实现,步骤S122可以由显示模块122实现,步骤S123可以由第一处理模块123实现。
可以理解,用户选择预定区域时,需要在预览图像中进行选取。第二转换模块121将色块图像转换成预览图像并由显示模块122显示。其中,色块图像转换成预览图像的过程采用第三插值算法进行计算。第三插值算法包括第二插值算法和双线性插值法。转换过程中,首先利用第二插值算法将色块图像转换成拜耳阵列的仿原图像,再利用双线性插值法将仿原图像转换成真彩图像。如此,用户可进行预览,便于用户选取预定区域。
需要说明的是,仿原图像转换成真彩图像的算法不限定于双线性插值法,也可采用其他插值算法进行计算。
请参阅图18,在某些实施方式中,步骤S123包括以下步骤:
S1231:将预览图像划分为阵列排布的扩展单元;
S1232:处理用户输入以识别触摸位置;
S1233:确定触摸位置所在的原点扩展单元,扩展单元包括原点扩展单元;
S1234:计算以原点扩展单元为中心向外依次扩展的每个扩展单元的反差值;
S1235:当反差值超过预定阈值时确定对应的扩展单元为边缘扩展单元,扩展单元包括边缘扩展单元;和
S1236:确定边缘扩展单元围成的区域为预定区域。
请参阅图19,在某些实施方式中,第一处理模块123包括划分单元1231、第一识别单元1232、定点单元1233、第一计算单元1234、第一处理单元1235、第二处理单元1236。划分单元1231用于将预览图像划分为阵列排布的扩展单元,第一识别单元1232用于处理用户输入以识别触摸位置,定点单元1233用于确定触摸位置所在的原点扩展单元,扩展单元包括原点扩展单元,第一计算单元1234用于计算以原点扩展单元为中心依次向外扩展的每个扩展单元的反差值,第一处理单元1235用于当反差值超过预定阈值时确定对应的扩展单元为边缘扩展单元,扩展单元包括边缘扩展单元,第二处理单元1236用于确定边缘扩展单元围成的区域为预定区域。
也即是说,步骤S1231可以由划分单元1231实现,步骤S1232可以由第一识别单元1232实现,步骤S1233可以由定点单元1233实现,步骤S1234可以由第一计算单元1234实现,步骤S1235可以由第一处理单元1235实现,步骤S1236可以由第二处理单元1236实现。
具体地,请参阅图20,以图20为例,黑色圆点为用户的触摸位置,以该触摸位置所在的扩展单元为原点扩展单元向外扩展,图中每个方框即为一个扩展单元。随后比较每个扩展单元的反差值与预设阈值的大小,反差值大于预设阈值的扩展单元即为边缘扩展单元。图20中所示人脸的边缘部分所在的扩展单元的反差值均大于预设阈值,也即是说,图中人脸边缘部分所在的扩展单元即为边缘扩展单元,图中灰色方框为边缘扩展单元。如此,将多个边缘扩展单元围成的区域作为预定区域,预定区域是由用户指定的处理区域即用户关注的主体部分,采用第一插值算法对该区域内的图像进行处理,提升用户关注的主体部分的图像的解析度,提升用户体验。
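逐圈向外扩展并按反差值筛选边缘扩展单元的过程可用如下Python草图示意。草图做了简化假设:各扩展单元的反差值直接以矩阵给出,且一旦某一圈出现边缘扩展单元即停止扩展(原文未规定停止条件的细节);函数名与示例数据均为假设。

```python
def find_edge_units(contrast, origin, threshold):
    """以原点扩展单元为中心逐圈向外扩展,返回反差值超过阈值的边缘扩展单元坐标集合。

    contrast: 每个扩展单元的反差值矩阵(此处直接给定,为示意)。
    origin: 触摸位置所在的原点扩展单元的 (行, 列) 坐标。
    threshold: 预定阈值。
    """
    rows, cols = len(contrast), len(contrast[0])
    r0, c0 = origin
    edges = set()
    ring = 1
    while True:
        found_in_ring = False
        for r in range(rows):
            for c in range(cols):
                if max(abs(r - r0), abs(c - c0)) != ring:
                    continue  # 只考察当前这一圈的扩展单元
                found_in_ring = True
                if contrast[r][c] > threshold:
                    edges.add((r, c))  # 反差值超过阈值,记为边缘扩展单元
        if edges or not found_in_ring:  # 已找到边缘单元,或已扩展到图像边界,停止
            break
        ring += 1
    return edges

# 示例:5*5 的扩展单元反差值矩阵,中心 (2, 2) 为触摸位置,阈值取 5
contrast = [[1, 1, 1, 1, 1],
            [1, 9, 9, 9, 1],
            [1, 9, 2, 9, 1],
            [1, 9, 9, 9, 1],
            [1, 1, 1, 1, 1]]
edges = find_edge_units(contrast, (2, 2), 5)
```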
请参阅图21,在某些实施方式中,步骤S123包括以下步骤:
S1237:处理用户输入以识别触摸位置;和
S1238:以触摸位置为中心向外扩展预定形状的区域作为预定区域。
请参阅图22,在某些实施方式中,第一处理模块123包括第二识别单元1237和扩展单元1238。第二识别单元1237用于处理用户输入以识别触摸位置,扩展单元1238用于以触摸位置为中心向外扩展预定形状的区域作为预定区域。
也即是说,步骤S1237可以由第二识别单元1237实现,步骤S1238可以由扩展单元1238实现。
请参阅图23,在某些实施方式中,步骤S1238包括以下步骤:
S12381:处理用户输入以确定预定形状。
请参阅图24,在某些实施方式中,扩展单元1238包括第三处理单元12381,第三处理单元12381用于处理用户输入以确定预定形状。
也即是说,步骤S12381可以由第三处理单元12381实现。
具体地,请参阅图25,图中的黑色圆点为用户的触摸位置。以该触摸位置为中心,向外扩展圆形的区域以生成预定区域。图中所示以圆形扩展的预定区域包括了整个人脸部分,可以对人脸部分采用第一插值算法以提高人脸部分的解析度。
需要说明的是,在其它具体实施例中,扩展的预定形状还可以是矩形、方形或其他形状,用户可以根据实际所需对扩展的预定形状进行大小的调整以及拖动。此外,用户指定预定区域的方式还包括用户直接在触摸屏上勾画出一个任意形状作为预定区域,或者用户在触摸屏上选取几个点,这几个点连线后围成的区域作为预定区域后进行第一插值算法的图像处理。如此,便于用户自主选择关注的主体部分以提高主体部分的清晰度,从而进一步提升用户体验。
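以触摸位置为中心向外扩展预定形状区域(以圆形为例)可用如下草图示意,函数名、半径与图像尺寸均为示例假设:

```python
def circular_region(center, radius, width, height):
    """以触摸位置为中心向外扩展半径为 radius 的圆形区域,返回区域内的像素坐标集合。"""
    cx, cy = center
    region = set()
    for y in range(height):
        for x in range(width):
            # 到触摸位置的距离不超过半径的像素属于预定区域
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                region.add((x, y))
    return region

# 示例:在 5*5 的预览图像上,以触摸位置 (2, 2) 为中心、半径 1 向外扩展
region = circular_region((2, 2), 1, 5, 5)
```

若要改为矩形、方形等其他预定形状,只需替换其中的判定条件即可。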
请参阅图26,本发明实施方式的电子装置100包括控制装置10、触摸屏30和成像装置20。
在某些实施方式中,电子装置100包括手机和平板电脑。
手机和平板电脑均带有摄像头即成像装置20,用户使用手机或平板电脑进行拍摄时,可以采用本发明实施方式的控制方法,以得到高解析度的图片。
需要说明的是,电子装置100也包括其他具有拍摄功能的电子设备。本发明实施方式的控制方法是电子装置100进行图像处理的指定处理模式之一。也即是说,用户利用电子装置100进行拍摄时,需要对电子装置100中包含的各种指定处理模式进行选择,当用户选择本发明实施方式的指定处理模式时,用户可以自主选择预定区域,电子装置100采用本发明实施方式的控制方法进行图像处理。
在某些实施方式中,成像装置20包括前置相机和后置相机。
可以理解,许多成像装置20包括前置相机和后置相机,前置相机和后置相机均可采用本发明实施方式的控制方法实现图像处理,以提升用户体验。
请参阅图27,本发明实施方式的电子装置100包括处理器40、存储器50、电路板60、电源电路70和壳体80。其中,电路板60安置在壳体80围成的空间内部,处理器40和存储器50设置在电路板60上;电源电路70用于为电子装置100的各个电路或器件供电;存储器50用于存储可执行程序代码;处理器40通过读取存储器50中存储的可执行程序代码来运行与可执行程序代码对应的程序,以实现上述本发明任一实施方式的控制方法。
例如,处理器40可以用于执行以下步骤:
S11:控制图像传感器21输出色块图像;
S12:根据用户输入在色块图像上确定预定区域,色块图像包括预定阵列排布的图像像素单元,图像像素单元包括多个原始像素,每个感光像素单元212a对应一个图像像素单元,每个感光像素2121对应一个原始像素;
S13:利用第一插值算法将色块图像转换成仿原图像,仿原图像包括阵列排布的仿原像素,每个感光像素2121对应一个仿原像素,仿原像素包括当前像素,原始像素包括与当前像素对应的关联像素,步骤S13包括以下步骤:
S131:判断关联像素是否位于预定区域内;
S132:在关联像素位于预定区域内时判断当前像素的颜色与关联像素的颜色是否相同;
S133:在当前像素的颜色与关联像素的颜色相同时,将关联像素的像素值作为当前像素的像素值;
S135:在当前像素的颜色与关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算当前像素的像素值,图像像素单元包括关联像素单元,关联像素单元的颜色与当前像素相同且与当前像素相邻;和
S137:在关联像素位于预定区域外时,通过第二插值算法计算当前像素的像素值,第二插值算法的复杂度小于第一插值算法。
本发明实施方式的计算机可读存储介质,具有存储于计算机可读存储介质中的指令。当电子装置100的处理器40执行指令时,电子装置100执行上述中本发明任一实施方式的控制方法。
例如,电子装置100可以执行以下步骤:
S11:控制图像传感器21输出色块图像;
S12:根据用户输入在色块图像上确定预定区域,色块图像包括预定阵列排布的图像像素单元,图像像素单元包括多个原始像素,每个感光像素单元212a对应一个图像像素单元,每个感光像素2121对应一个原始像素;
S13:利用第一插值算法将色块图像转换成仿原图像,仿原图像包括阵列排布的仿原像素,每个感光像素2121对应一个仿原像素,仿原像素包括当前像素,原始像素包括与当前像素对应的关联像素,步骤S13包括以下步骤:
S131:判断关联像素是否位于预定区域内;
S132:在关联像素位于预定区域内时判断当前像素的颜色与关联像素的颜色是否相同;
S133:在当前像素的颜色与关联像素的颜色相同时,将关联像素的像素值作为当前像素的像素值;
S135:在当前像素的颜色与关联像素的颜色不同时,根据关联像素单元的像素值通过第一插值算法计算当前像素的像素值,图像像素单元包括关联像素单元,关联像素单元的颜色与当前像素相同且与当前像素相邻;和
S137:在关联像素位于预定区域外时,通过第二插值算法计算当前像素的像素值,第二插值算法的复杂度小于第一插值算法。
在本说明书的描述中,参考术语“一个实施方式”、“一些实施方式”、“示意性实施方式”、“示例”、“具体示例”、或“一些示例”等的描述意指结合所述实施方式或示例描述的具体特征、结构、材料或者特点包含于本发明的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施方式或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施方式或示例中以合适的方式结合。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于执行特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本发明的优选实施方式的范围包括另外的执行,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本发明的实施例所属技术领域的技术人员所理解。
在流程图中表示或在此以其他方式描述的逻辑和/或步骤,例如,可以被认为是用于执行逻辑功能的可执行指令的定序列表,可以具体执行在任何计算机可读介质中,以供指令执行系统、装置或设备(如基于计算机的系统、包括处理器的系统或其他可以从指令执行系统、装置或设备取指令并执行指令的系统)使用,或结合这些指令执行系统、装置或设备而使用。就本说明书而言,"计算机可读介质"可以是任何可以包含、存储、通信、传播或传输程序以供指令执行系统、装置或设备或结合这些指令执行系统、装置或设备而使用的装置。计算机可读介质的更具体的示例(非穷尽性列表)包括以下:具有一个或多个布线的电连接部(电子装置),便携式计算机盘盒(磁装置),随机存取存储器(RAM),只读存储器(ROM),可擦除可编辑只读存储器(EPROM或闪速存储器),光纤装置,以及便携式光盘只读存储器(CDROM)。另外,计算机可读介质甚至可以是可在其上打印所述程序的纸或其他合适的介质,因为可以例如通过对纸或其他介质进行光学扫描,接着进行编辑、解译或必要时以其他合适方式进行处理来以电子方式获得所述程序,然后将其存储在计算机存储器中。
应当理解,本发明的各部分可以用硬件、软件、固件或它们的组合来执行。在上述实施方式中,多个步骤或方法可以用存储在存储器中且由合适的指令执行系统执行的软件或固件来执行。例如,如果用硬件来执行,和在另一实施方式中一样,可用本领域公知的下列技术中的任一项或他们的组合来执行:具有用于对数据信号执行逻辑功能的逻辑门电路的离散逻辑电路,具有合适的组合逻辑门电路的专用集成电路,可编程门阵列(PGA),现场可编程门阵列(FPGA)等。
本技术领域的普通技术人员可以理解执行上述实施方法携带的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,该程序在执行时,包括方法实施例的步骤之一或其组合。
此外,在本发明各个实施例中的各功能单元可以集成在一个处理模块中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个模块中。上述集成的模块既可以采用硬件的形式执行,也可以采用软件功能模块的形式执行。所述集成的模块如果以软件功能模块的形式执行并作为独立的产品销售或使用时,也可以存储在一个计算机可读取存储介质中。
上述提到的存储介质可以是只读存储器,磁盘或光盘等。尽管上面已经示出和描述了本发明的实施例,可以理解的是,上述实施例是示例性的,不能理解为对本发明的限制,本领域的普通技术人员在本发明的范围内可以对上述实施例进行变化、修改、替换和变型。

Claims (29)

  1. 一种控制方法,用于控制电子装置,其特征在于,所述电子装置包括成像装置,所述成像装置包括图像传感器,所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列,所述滤光片单元阵列中的每个滤光片单元覆盖对应一个所述感光像素单元,每个所述感光像素单元包括多个感光像素,所述控制方法包括以下步骤:
    控制所述图像传感器输出色块图像,所述色块图像包括预定阵列排布的图像像素单元,所述图像像素单元包括多个原始像素,每个所述感光像素单元对应一个所述图像像素单元,每个所述感光像素对应一个所述原始像素;
    根据用户输入在所述色块图像上确定预定区域;
    利用第一插值算法将所述色块图像转换成仿原图像,所述仿原图像包括阵列排布的仿原像素,每个所述感光像素对应一个所述仿原像素,所述仿原像素包括当前像素,所述原始像素包括与所述当前像素对应的关联像素,所述利用第一插值算法将所述色块图像转换成仿原图像包括以下步骤:
    判断所述关联像素是否位于所述预定区域内;
    在所述关联像素位于所述预定区域内时判断所述当前像素的颜色与所述关联像素的颜色是否相同;
    在所述当前像素的颜色与所述关联像素的颜色相同时,将所述关联像素的像素值作为所述当前像素的像素值;
    在所述当前像素的颜色与所述关联像素的颜色不同时,根据关联像素单元的像素值通过所述第一插值算法计算所述当前像素的像素值,所述图像像素单元包括所述关联像素单元,所述关联像素单元的颜色与所述当前像素相同且与所述当前像素相邻;和
    在所述关联像素位于所述预定区域外时,通过第二插值算法计算所述当前像素的像素值,所述第二插值算法的复杂度小于所述第一插值算法。
  2. 根据权利要求1所述的控制方法,其特征在于,所述预定阵列包括拜耳阵列。
  3. 根据权利要求1所述的控制方法,其特征在于,所述图像像素单元包括2*2阵列的所述原始像素。
  4. 根据权利要求1所述的控制方法,其特征在于,所述电子装置包括触摸屏,所述根据用户输入在所述色块图像上确定所述预定区域包括以下步骤:
    利用第三插值算法将所述色块图像转换成预览图像,所述第三插值算法包括所述第二插值算法;
    控制所述触摸屏显示所述预览图像;和
    处理所述触摸屏上的用户输入确定所述预定区域。
  5. 根据权利要求4所述的控制方法,其特征在于,所述处理所述触摸屏上的用户输入确定所述预定区域的步骤包括以下步骤:
    将所述预览图像划分为阵列排布的扩展单元;
    处理所述用户输入以识别触摸位置;
    确定所述触摸位置所在的原点扩展单元,所述扩展单元包括所述原点扩展单元;
    计算以所述原点扩展单元为中心向外依次扩展的每个所述扩展单元的反差值;
    当所述反差值超过预定阈值时确定对应的所述扩展单元为边缘扩展单元,所述扩展单元包括所述边缘扩展单元;和
    确定所述边缘扩展单元围成的区域为所述预定区域。
  6. 根据权利要求4所述的控制方法,其特征在于,所述处理所述触摸屏上的所述用户输入确定所述预定区域的步骤包括以下步骤:
    处理所述用户输入以识别触摸位置;和
    以所述触摸位置为中心向外扩展预定形状的区域作为所述预定区域。
  7. 根据权利要求6所述的控制方法,其特征在于,所述以所述触摸位置为中心向外扩展预定形状的区域作为所述预定区域的步骤包括以下步骤:
    处理所述用户输入以确定所述预定形状。
  8. 根据权利要求1所述的控制方法,其特征在于,所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤包括以下步骤:
    计算所述关联像素各个方向上的渐变量;
    计算所述关联像素各个方向上的权重;和
    根据所述渐变量及所述权重计算所述当前像素的像素值。
  9. 根据权利要求1所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤前还包括以下步骤:
    对所述色块图像做白平衡补偿;
    所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤后还包括以下步骤:
    对所述仿原图像做白平衡补偿还原。
  10. 根据权利要求1所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤前还包括以下步骤:
    对所述色块图像做坏点补偿。
  11. 根据权利要求1所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤前还包括以下步骤:
    对所述色块图像做串扰补偿。
  12. 根据权利要求1所述的控制方法,其特征在于,所述控制方法在所述根据关联像素单元的像素值通过第一插值算法计算所述当前像素的像素值的步骤后还包括以下步骤:
    对所述仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
  13. 一种控制装置,用于控制电子装置,其特征在于,所述电子装置包括成像装置,所述成像装置包括图像传感器,所述图像传感器包括感光像素单元阵列和设置在所述感光像素单元阵列上的滤光片单元阵列,所述滤光片单元阵列中的每个滤光片单元覆盖对应一个所述感光像素单元,每个所述感光像素单元包括多个感光像素,所述控制装置包括:
    输出模块,所述输出模块用于控制所述图像传感器输出色块图像,所述色块图像包括预定阵列排布的图像像素单元,所述图像像素单元包括多个原始像素,每个所述感光像素单元对应一个所述图像像素单元,每个所述感光像素对应一个所述原始像素;
    选定模块,所述选定模块用于根据用户输入在所述色块图像上确定预定区域;
    第一转换模块,所述第一转换模块用于用第一插值算法将所述色块图像转换成仿原图像,所述仿原图像包括阵列排布的仿原像素,每个所述感光像素对应一个所述仿原像素,所述仿原像素包括当前像素,所述原始像素包括与所述当前像素对应的关联像素,所述第一转换模块包括:
    第一判断模块,所述第一判断模块用于判断所述关联像素是否位于所述预定区域内;
    第二判断模块,所述第二判断模块用于在所述关联像素位于所述预定区域内时判断所述当前像素的颜色与所述关联像素的颜色是否相同;
    第一计算模块,所述第一计算模块用于在所述当前像素的颜色与所述关联像素的颜色相同时,将所述关联像素的像素值作为所述当前像素的像素值;
    第二计算模块,所述第二计算模块用于在所述当前像素的颜色与所述关联像素的颜色不同时,根据关联像素单元的像素值通过所述第一插值算法计算所述当前像素的像素值,所述图像像素单元包括所述关联像素单元,所述关联像素单元的颜色与所述当前像素相同且与所述当前像素相邻;和
    第三计算模块,所述第三计算模块用于在所述关联像素位于所述预定区域外时,通过第二插值算法计算所述当前像素的像素值,所述第二插值算法的复杂度小于所述第一插值算法。
  14. 根据权利要求13所述的控制装置,其特征在于,所述预定阵列包括拜耳阵列。
  15. 根据权利要求13所述的控制装置,其特征在于,所述图像像素单元包括2*2阵列的所述原始像素。
  16. 根据权利要求13所述的控制装置,其特征在于,所述电子装置包括触摸屏,所述选定模块包括:
    第二转换模块,所述第二转换模块用于利用第三插值算法将所述色块图像转换成预览图像,所述第三插值算法包括所述第二插值算法;
    显示模块,所述显示模块用于控制所述触摸屏显示所述预览图像;和
    第一处理模块,所述第一处理模块用于处理所述触摸屏上的用户输入确定所述预定区域。
  17. 根据权利要求16所述的控制装置,其特征在于,所述第一处理模块包括:
    划分单元,所述划分单元用于将所述预览图像划分为阵列排布的扩展单元;
    第一识别单元,所述第一识别单元用于处理所述用户输入以识别触摸位置;
    定点单元,所述定点单元用于确定所述触摸位置所在的原点扩展单元,所述扩展单元包括所述原点扩展单元;
    第一计算单元,所述第一计算单元用于计算以所述原点扩展单元为中心依次向外扩展的每个所述扩展单元的反差值;
    第一处理单元,所述第一处理单元用于当所述反差值超过预定阈值时确定对应的所述扩展单元为边缘扩展单元,所述扩展单元包括所述边缘扩展单元;和
    第二处理单元,所述第二处理单元用于确定所述边缘扩展单元围成的区域为所述预定区域。
  18. 根据权利要求16所述的控制装置,其特征在于,所述第一处理模块包括:
    第二识别单元,所述第二识别单元用于处理所述用户输入以识别触摸位置;和
    扩展单元,所述扩展单元用于以所述触摸位置为中心向外扩展预定形状的区域作为所述预定区域。
  19. 根据权利要求18所述的控制装置,其特征在于,所述扩展单元包括:
    第三处理单元,所述第三处理单元用于处理所述用户输入以确定所述预定形状。
  20. 根据权利要求13所述的控制装置,其特征在于,所述第二计算模块包括:
    第二计算单元,所述第二计算单元用于计算所述关联像素各个方向上的渐变量;
    第三计算单元,所述第三计算单元用于计算所述关联像素各个方向上的权重;和
    第四计算单元,所述第四计算单元用于根据所述渐变量及所述权重计算所述当前像素的像素值。
  21. 根据权利要求13所述的控制装置,其特征在于,所述第一转换模块包括:
    白平衡补偿模块,所述白平衡补偿模块用于对所述色块图像做白平衡补偿;
    白平衡补偿还原模块,所述白平衡补偿还原模块用于对所述仿原图像做白平衡补偿还原。
  22. 根据权利要求13所述的控制装置,其特征在于,所述第一转换模块包括:
    坏点补偿模块,所述坏点补偿模块用于对所述色块图像做坏点补偿。
  23. 根据权利要求13所述的控制装置,其特征在于,所述第一转换模块包括:
    串扰补偿模块,所述串扰补偿模块用于对所述色块图像做串扰补偿。
  24. 根据权利要求13所述的控制装置,其特征在于,所述第一转换模块包括:
    第二处理模块,所述第二处理模块用于对所述仿原图像进行镜片阴影校正、去马赛克、降噪和边缘锐化处理。
  25. 一种电子装置,其特征在于包括:
    成像装置;
    触摸屏;和
    权利要求13-24任意一项所述的控制装置。
  26. 根据权利要求25所述的电子装置,其特征在于,所述电子装置包括手机和平板电脑。
  27. 根据权利要求25所述的电子装置,其特征在于,所述成像装置包括前置相机和后置相机。
  28. 一种电子装置,包括壳体、处理器、存储器、电路板和电源电路,其特征在于,所述电路板安置在所述壳体围成的空间内部,所述处理器和所述存储器设置在所述电路板上;所述电源电路用于为所述电子装置的各个电路或器件供电;所述存储器用于存储可执行程序代码;所述处理器通过读取所述存储器中存储的可执行程序代码来运行与所述可执行程序代码对应的程序,以用于执行权利要求1至12中任意一项所述的控制方法。
  29. 一种计算机可读存储介质,具有存储于其中的指令,当电子装置的处理器执行所述指令时,所述电子装置执行权利要求1至12中任意一项所述的控制方法。
PCT/CN2017/085213 2016-11-29 2017-05-19 控制方法、控制装置、电子装置和计算机可读存储介质 Ceased WO2018099009A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201611079543.2A CN106507019B (zh) 2016-11-29 2016-11-29 控制方法、控制装置、电子装置
CN201611079543.2 2016-11-29

Publications (1)

Publication Number Publication Date
WO2018099009A1 true WO2018099009A1 (zh) 2018-06-07

Family

ID=58328273

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/085213 Ceased WO2018099009A1 (zh) 2016-11-29 2017-05-19 控制方法、控制装置、电子装置和计算机可读存储介质

Country Status (5)

Country Link
US (1) US10356315B2 (zh)
EP (1) EP3328076B1 (zh)
CN (1) CN106507019B (zh)
ES (1) ES2761401T3 (zh)
WO (1) WO2018099009A1 (zh)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017149932A1 (ja) * 2016-03-03 2017-09-08 ソニー株式会社 医療用画像処理装置、システム、方法及びプログラム
CN106604001B (zh) * 2016-11-29 2018-06-29 广东欧珀移动通信有限公司 图像处理方法、图像处理装置、成像装置及电子装置
CN106507019B (zh) * 2016-11-29 2019-05-10 Oppo广东移动通信有限公司 控制方法、控制装置、电子装置
CN106507068B (zh) 2016-11-29 2018-05-04 广东欧珀移动通信有限公司 图像处理方法及装置、控制方法及装置、成像及电子装置
WO2018123801A1 (ja) * 2016-12-28 2018-07-05 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ 三次元モデル配信方法、三次元モデル受信方法、三次元モデル配信装置及び三次元モデル受信装置
CN107808361B (zh) * 2017-10-30 2021-08-10 努比亚技术有限公司 图像处理方法、移动终端及计算机可读存储介质
CN107743199B (zh) * 2017-10-30 2020-05-15 努比亚技术有限公司 图像处理方法、移动终端及计算机可读存储介质
CN111126568B (zh) * 2019-12-09 2023-08-08 Oppo广东移动通信有限公司 图像处理方法及装置、电子设备及计算机可读存储介质
KR102880112B1 (ko) * 2020-03-04 2025-11-03 에스케이하이닉스 주식회사 이미지 센싱 장치 및 그의 동작 방법
CN112019775B (zh) * 2020-09-04 2023-03-24 成都微光集电科技有限公司 一种坏点检测校正方法及装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0981245A2 (en) * 1998-08-20 2000-02-23 Canon Kabushiki Kaisha Solid-state image sensing apparatus, control method therefor, basic layout of photoelectric conversion cell and storage medium
US8102435B2 (en) * 2007-09-18 2012-01-24 Stmicroelectronics S.R.L. Method for acquiring a digital image with a large dynamic range with a sensor of lesser dynamic range
CN105609516A (zh) * 2015-12-18 2016-05-25 广东欧珀移动通信有限公司 图像传感器及输出方法、相位对焦方法、成像装置和终端
CN106488203A (zh) * 2016-11-29 2017-03-08 广东欧珀移动通信有限公司 图像处理方法、图像处理装置、成像装置及电子装置
CN106507019A (zh) * 2016-11-29 2017-03-15 广东欧珀移动通信有限公司 控制方法、控制装置、电子装置
CN106507068A (zh) * 2016-11-29 2017-03-15 广东欧珀移动通信有限公司 图像处理方法及装置、控制方法及装置、成像及电子装置
CN106604001A (zh) * 2016-11-29 2017-04-26 广东欧珀移动通信有限公司 图像处理方法、图像处理装置、成像装置及电子装置

Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006140594A (ja) 2004-11-10 2006-06-01 Pentax Corp デジタルカメラ
KR100843087B1 (ko) 2006-09-06 2008-07-02 삼성전자주식회사 영상 생성 장치 및 방법
CN101150733A (zh) * 2006-09-22 2008-03-26 华邦电子股份有限公司 影像像素干扰的补偿方法
CN101227621A (zh) * 2008-01-25 2008-07-23 炬力集成电路设计有限公司 在cmos传感器中对cfa进行插值的方法及电路
US7745779B2 (en) * 2008-02-08 2010-06-29 Aptina Imaging Corporation Color pixel arrays having common color filters for multiple adjacent pixels for use in CMOS imagers
JP5633518B2 (ja) * 2009-11-27 2014-12-03 株式会社ニコン データ処理装置
CN102073986A (zh) 2010-12-28 2011-05-25 冠捷显示科技(厦门)有限公司 实现显示装置画面放大的方法
CN103430552B (zh) * 2011-03-11 2015-04-08 富士胶片株式会社 摄像装置
JP5822508B2 (ja) 2011-04-05 2015-11-24 キヤノン株式会社 撮像装置及びその制御方法
JP5818514B2 (ja) 2011-05-27 2015-11-18 キヤノン株式会社 画像処理装置および画像処理方法、プログラム
JP2013066146A (ja) * 2011-08-31 2013-04-11 Sony Corp 画像処理装置、および画像処理方法、並びにプログラム
US20130202191A1 (en) 2012-02-02 2013-08-08 Himax Technologies Limited Multi-view image generating method and apparatus using the same
CN102630019B (zh) * 2012-03-27 2014-09-10 上海算芯微电子有限公司 去马赛克的方法和装置
WO2014118868A1 (ja) 2013-01-30 2014-08-07 パナソニック株式会社 撮像装置及び固体撮像装置
US20140267701A1 (en) 2013-03-12 2014-09-18 Ziv Aviv Apparatus and techniques for determining object depth in images
US9692992B2 (en) 2013-07-01 2017-06-27 Omnivision Technologies, Inc. Color and infrared filter array patterns to reduce color aliasing
US9894281B2 (en) * 2013-09-30 2018-02-13 Nikon Corporation Electronic apparatus, method for controlling electronic apparatus, and control program for setting image-capture conditions of image sensor
EP2887642A3 (en) 2013-12-23 2015-07-01 Nokia Corporation Method, apparatus and computer program product for image refocusing for light-field images
DE102014209197B4 (de) 2014-05-15 2024-09-19 Continental Autonomous Mobility Germany GmbH Vorrichtung und Verfahren zum Erkennen von Niederschlag für ein Kraftfahrzeug
US9479695B2 (en) 2014-07-31 2016-10-25 Apple Inc. Generating a high dynamic range image using a temporal filter
CN105120248A (zh) 2015-09-14 2015-12-02 北京中科慧眼科技有限公司 像素阵列及相机传感器
CN105592303B (zh) 2015-12-18 2018-09-11 广东欧珀移动通信有限公司 成像方法、成像装置及电子装置
CN106454289B (zh) 2016-11-29 2018-01-23 广东欧珀移动通信有限公司 控制方法、控制装置及电子装置
CN106507069B (zh) 2016-11-29 2018-06-05 广东欧珀移动通信有限公司 控制方法、控制装置及电子装置


Also Published As

Publication number Publication date
US20180152634A1 (en) 2018-05-31
US10356315B2 (en) 2019-07-16
CN106507019A (zh) 2017-03-15
EP3328076A1 (en) 2018-05-30
ES2761401T3 (es) 2020-05-19
CN106507019B (zh) 2019-05-10
EP3328076B1 (en) 2019-10-23

Similar Documents

Publication Publication Date Title
WO2018099009A1 (zh) 控制方法、控制装置、电子装置和计算机可读存储介质
WO2018098978A1 (zh) 控制方法、控制装置、电子装置和计算机可读存储介质
CN106454289B (zh) 控制方法、控制装置及电子装置
WO2018098982A1 (zh) 图像处理方法、图像处理装置、成像装置及电子装置
CN106604001B (zh) 图像处理方法、图像处理装置、成像装置及电子装置
WO2018098983A1 (zh) 图像处理方法及装置、控制方法及装置、成像及电子装置
CN106412592B (zh) 图像处理方法、图像处理装置、成像装置及电子装置
WO2018099010A1 (zh) 控制方法、控制装置和电子装置
CN106341670B (zh) 控制方法、控制装置及电子装置
CN106454288B (zh) 控制方法、控制装置、成像装置及电子装置
WO2017101451A1 (zh) 成像方法、成像装置及电子装置
WO2018098981A1 (zh) 控制方法、控制装置、电子装置和计算机可读存储介质
CN106507069B (zh) 控制方法、控制装置及电子装置
WO2018099007A1 (zh) 控制方法、控制装置及电子装置
CN107370917B (zh) 控制方法、电子装置和计算机可读存储介质
WO2018099031A1 (zh) 控制方法和电子装置
WO2018099006A1 (zh) 控制方法、控制装置及电子装置
WO2018098977A1 (zh) 图像处理方法、图像处理装置、成像装置、制造方法和电子装置
CN106507067B (zh) 控制方法、控制装置及电子装置
CN105611257B (zh) 成像方法、图像传感器、成像装置及电子装置
CN106534822B (zh) 控制方法、控制装置及电子装置
CN106506984A (zh) 图像处理方法及装置、控制方法及装置、成像及电子装置
CN106504217A (zh) 图像处理方法、图像处理装置、成像装置及电子装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17875332

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17875332

Country of ref document: EP

Kind code of ref document: A1