
WO2019072222A1 - Image processing method, apparatus, and device - Google Patents

Image processing method, apparatus, and device

Info

Publication number
WO2019072222A1
WO2019072222A1 · PCT/CN2018/109951 · CN2018109951W
Authority
WO
WIPO (PCT)
Prior art keywords
image
target image
camera
ghost
sensitivity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2018/109951
Other languages
English (en)
Chinese (zh)
Inventor
王银廷
胡碧莹
张熙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201710959936.0A external-priority patent/CN109671106B/zh
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to EP18866515.2A priority Critical patent/EP3686845B1/fr
Publication of WO2019072222A1 publication Critical patent/WO2019072222A1/fr
Priority to US16/847,178 priority patent/US11445122B2/en
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration

Definitions

  • the present invention relates to the field of terminal technologies, and in particular, to an image processing method, apparatus, and device.
  • the present invention proposes a set of photographing methods for capturing motion scenes.
  • the embodiment of the invention provides an image processing method, apparatus, and device, which can provide a capture mechanism for the user and can capture high-definition images in motion scenes, thereby improving the user's photographing experience.
  • an embodiment of the present invention provides an image processing method, including: obtaining N frames of images; determining a reference image among the N frames, the remaining N-1 frames being to-be-processed images; obtaining N-1 de-ghosted frames according to the N-1 to-be-processed images; and performing a mean operation on the reference image and the N-1 de-ghosted frames to obtain a first target image. Obtaining the N-1 de-ghosted frames according to the N-1 to-be-processed images includes performing steps 1 to 4 below for the i-th frame of the to-be-processed images, i taking every positive integer not greater than N-1 (a code sketch follows the list):
  • Step 1: register the i-th frame image with the reference image to obtain an i-th registration image;
  • Step 2: obtain an i-th difference image according to the i-th registration image and the reference image;
  • Step 3: obtain an i-th ghost weight image according to the i-th difference image;
  • Step 4: fuse the i-th registration image with the reference image according to the i-th ghost weight image to obtain the i-th de-ghosted frame.
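As an illustration of steps 1 to 4 plus the mean operation, the following is a minimal Python/NumPy/OpenCV sketch. It assumes 8-bit single-channel frames that are already registered against the reference (registration itself is sketched later in this document); the threshold and smoothing kernel are illustrative assumptions, not values fixed by the patent.

```python
import cv2
import numpy as np

def deghost_and_average(frames, thresh=10):
    """frames: list of N equally sized uint8 images; frames[0] is the reference."""
    ref = frames[0].astype(np.float32)
    deghosted = []
    for img in frames[1:]:
        reg = img.astype(np.float32)                  # step 1 (assumed done): i-th registration image
        diff = np.abs(reg - ref)                      # step 2: i-th difference image
        w = np.where(diff > thresh, 255.0, 0.0)       # step 3: binarize the difference...
        w = cv2.GaussianBlur(w, (5, 5), 0) / 255.0    # ...then Gaussian-smooth -> ghost weight
        deghosted.append(w * ref + (1.0 - w) * reg)   # step 4: prefer the reference where ghosting is likely
    stack = np.stack([ref] + deghosted, axis=0)       # mean over reference + N-1 de-ghosted frames
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)
```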
  • an embodiment of the present invention provides an image processing apparatus, where the apparatus includes: an acquiring module, configured to obtain N frames of images; a determining module, configured to determine a reference image among the N frames, the remaining N-1 frames being to-be-processed images; a de-ghosting module, configured to perform the above steps 1 to 4 on the i-th frame of the N-1 to-be-processed images to obtain N-1 de-ghosted frames, i taking every positive integer not greater than N-1; and a mean operation module, configured to perform a mean operation on the reference image and the N-1 de-ghosted frames to obtain a first target image.
  • with this design, the user can still capture moving subjects in a motion scene and obtain a high-definition picture.
  • in a possible design, before the N frames are obtained, the method further includes: generating a control signal when the following three situations are detected simultaneously, the control signal being used to indicate acquisition of the N frames. Case 1: the framing image of the camera is detected as a moving image; Case 2: the current exposure time of the camera is detected to exceed the safe duration; Case 3: the camera is detected to be in a very bright scene, that is, the current sensitivity of the camera is less than a first preset threshold and the current exposure duration is less than a second preset threshold.
  • alternatively, detecting at least one of the above three situations may generate the corresponding control signal.
  • the terminal may then switch intelligently to the first capture mode mentioned below, and the control signal triggers acquisition of the N frames in the first capture mode.
  • detection of the above situations can be performed by the detection module.
  • in a possible design, obtaining the N frames includes: keeping the product of the camera's current sensitivity and exposure duration constant, decreasing the exposure duration and increasing the sensitivity according to a preset ratio to obtain a first exposure duration and a first sensitivity; and setting the camera's exposure duration and sensitivity to the first exposure duration and the first sensitivity, respectively, and capturing N frames.
  • in a possible design, before the N frames are obtained, the method further includes: generating a control signal when the following three situations are detected simultaneously, the control signal being used to indicate acquisition of the N frames. Case 1: the viewfinder image of the camera is detected as a moving image; Case 2: the current exposure time of the camera is detected to exceed the safe duration; Case 3: the camera is detected to be in a moderately bright scene, that is, the current sensitivity of the camera is in a first preset threshold interval and the current exposure duration is in a second preset threshold interval.
  • alternatively, detecting at least one of the above three situations may generate the corresponding control signal.
  • the terminal may then switch intelligently to the second capture mode or the third capture mode mentioned below, and the control signal triggers acquisition of the N frames in the second or third capture mode.
  • detection of the above situations can be performed by the detection module.
  • in a possible design, obtaining the N frames includes: keeping the product of the camera's current sensitivity and exposure duration constant, decreasing the exposure duration and increasing the sensitivity according to the preset ratio to obtain a second exposure duration and a second sensitivity; and setting the camera's exposure duration and sensitivity to the second exposure duration and the second sensitivity, respectively, and capturing N frames. The method further includes: capturing one frame of a first new image at the camera's current sensitivity and exposure duration; and obtaining a second target image according to the first target image and the first new image.
  • in a possible design, obtaining the second target image according to the first target image and the first new image includes: registering the first new image with the reference image or with the first target image to obtain a first (second) registration image; obtaining a first (second) difference image according to the first (second) registration image and the first target image; obtaining a first (second) ghost weight image according to the first (second) difference image; fusing the first (second) registration image with the first target image according to the first (second) ghost weight image to obtain a first (second) de-ghosted image; and performing weighted fusion of pixel values on the first (second) de-ghosted image and the first target image to obtain the second target image. Here "first (second)" denotes the two parallel alternatives of registering against the reference image or against the first target image.
  • in a possible design, obtaining the N frames includes: keeping the camera's current sensitivity unchanged, setting the current exposure duration to a lower third exposure duration, and capturing N frames. The method further includes: capturing one frame of a second new image at the camera's current sensitivity and exposure duration; and obtaining a third target image according to the first target image and the second new image.
  • in a possible design, obtaining the third target image according to the first target image and the second new image includes: processing the first target image according to the second new image using a preset brightness correction algorithm to obtain a fourth target image; registering the second new image with the reference image or with the fourth target image to obtain a third (fourth) registration image; obtaining a third (fourth) difference image according to the third (fourth) registration image and the fourth target image; obtaining a third (fourth) ghost weight image according to the third (fourth) difference image; fusing the third (fourth) registration image with the fourth target image according to the third (fourth) ghost weight image to obtain a third (fourth) de-ghosted image; performing weighted fusion of pixel values on the third (fourth) de-ghosted image and the fourth target image to obtain a fifth (sixth) target image; and performing pyramid fusion processing on the fifth (sixth) target image and the first target image to obtain the third target image.
  • the possible technical implementations above may be carried out by the processor invoking programs and instructions in the memory.
  • in another design, the user directly enters a capture mode of his own choice, such as the first capture mode, the second capture mode, or the third capture mode mentioned above;
  • in that case the terminal does not need to detect the framing environment, because each capture mode has a preset parameter rule (pre-stored locally in the terminal or in a cloud server): each capture mode has a corresponding sensitivity and exposure duration, and may of course also include other performance parameters. Once a particular capture mode is entered, the camera automatically adjusts to the corresponding sensitivity and exposure duration for shooting. Therefore, if the user directly adopts a capture mode, the N pictures are taken at that mode's sensitivity and exposure duration, and subsequent image processing is performed in the corresponding mode.
  • the action of photographing can be triggered by the user pressing the shutter button.
  • in one case, by the time the user presses the shutter, the camera's sensitivity and exposure duration have already been adjusted to the first sensitivity and the first exposure duration, and N photos are taken at those settings for subsequent processing; in another case, the camera keeps the current sensitivity and current exposure duration until the user presses the shutter, at which point they are adjusted to the first exposure duration and the first sensitivity, and N pictures are taken at those settings for subsequent processing.
  • in the preview image data stream, the display image may be shown either at the current sensitivity and current exposure duration, or at the first exposure duration and the first sensitivity.
  • the action of photographing can be triggered by the user pressing the shutter button.
  • in one case, one frame of the first new image is first captured at the current sensitivity and current exposure duration, then the settings are adjusted to the second exposure duration and the second sensitivity and N pictures are taken under those conditions, giving N+1 pictures in total for subsequent processing;
  • in another case, the settings are first adjusted to the second exposure duration and the second sensitivity and N pictures are taken under those conditions, and the camera is then restored to the current sensitivity and current exposure duration, under which one frame of the first new image is captured; again, N+1 pictures in total are obtained for subsequent processing. Further, in the preview image data stream, the display image may be shown either at the current sensitivity and current exposure duration, or at the second exposure duration and the second sensitivity.
  • the action of photographing can be triggered by the user pressing the shutter button.
  • in one case, one frame of the second new image is first captured at the current sensitivity and current exposure duration; then, keeping the camera's current sensitivity unchanged, the current exposure duration is set to a lower third exposure duration and N pictures are taken under that condition, giving N+1 pictures in total for subsequent processing;
  • in another case, the current exposure duration is first set to the lower third exposure duration and N pictures are taken under that condition, and the exposure duration is then restored, with the current sensitivity unchanged, to capture one frame of the second new image; again, N+1 pictures in total are obtained for subsequent processing.
  • in the preview image data stream, the display image may be shown either at the current sensitivity and current exposure duration, or at the third exposure duration and the current sensitivity.
  • an embodiment of the present invention provides a terminal device, where the terminal device includes a memory, a processor, a bus, and a camera, and the memory, the camera, and the processor are connected via the bus; the camera is configured to acquire image signals under the control of the processor; the memory is configured to store computer programs and instructions; and the processor is configured to invoke the computer programs and instructions stored in the memory so that the terminal device performs any of the above possible design methods.
  • in a possible design, the terminal device further includes an antenna system, which, under the control of the processor, transmits and receives wireless communication signals to implement wireless communication with a mobile communication network;
  • the mobile communication network includes one or more of the following: GSM, CDMA, 3G, 4G, FDMA, TDMA, PDC, TACS, AMPS, WCDMA, TD-SCDMA, Wi-Fi, and LTE networks.
  • the above method, apparatus, and device can be applied to scenes in which the camera software shipped with the terminal is used for shooting, or to scenes in which third-party camera software in the terminal is used for shooting; the shooting includes normal shooting, self-timer, video telephony, video conferencing, VR shooting, aerial photography, and other shooting modes.
  • the terminal in the embodiment of the present invention may include multiple camera modes, such as a direct capture mode, or a camera mode that decides whether to capture only after the scene conditions are detected; when the terminal is in the capture mode, this solution can capture high-definition photos even in motion scenes or in scenes with a poor signal-to-noise ratio where a clear photo is otherwise difficult to take, greatly improving the user's photographing experience.
  • FIG. 1 is a schematic structural diagram of a terminal.
  • FIG. 2 is a flowchart of an image processing method according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of a method for de-ghosting an image according to an embodiment of the present invention.
  • FIG. 4 is a flowchart of a capture system according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of another image processing method according to an embodiment of the present invention.
  • FIG. 6 is a schematic diagram of another image processing method according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention.
  • the terminal may be a device that provides photographing and/or data connectivity to the user: a handheld device with a wireless connection function, or another processing device connected to a wireless modem, such as a digital camera, an SLR camera, a mobile phone (or "cellular" phone), which may be portable, pocket-sized, or handheld; a wearable device (such as a smart watch); a tablet; a personal computer (PC); a PDA (Personal Digital Assistant); a POS (Point of Sales) terminal; an on-board computer; a drone; an aerial camera; and the like.
  • FIG. 1 shows an alternative hardware structure diagram of the terminal 100.
  • the terminal 100 may include a radio frequency unit 110, a memory 120, an input unit 130, a display unit 140, a camera 150, an audio circuit 160, a speaker 161, a microphone 162, a processor 170, an external interface 180, a power supply 190, and the like.
  • there may be at least two cameras 150.
  • the camera 150 is used for capturing images or videos, and can be triggered by an application instruction to implement a photographing or video-recording function.
  • the camera may include an imaging lens, a filter, an image sensor, a focus anti-shake motor, and the like.
  • the light emitted or reflected by the object enters the imaging lens, passes through the filter, and finally converges on the image sensor.
  • the imaging lens is mainly used for collecting the light emitted or reflected by all objects in the photographing angle of view (also referred to as objects to be photographed) and converging it into an image; the filter is mainly used to remove unneeded light waves (for example, light waves other than visible light);
  • the image sensor is mainly used for photoelectrically converting the received optical signal into an electrical signal and inputting it to the processor 170 for subsequent processing.
  • FIG. 1 is merely an example of a portable multi-function device and does not constitute a limitation on it; the device may include more or fewer components than those illustrated, combine some components, or arrange the components differently.
  • the input unit 130 can be configured to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the portable multifunction device.
  • the input unit 130 may include a touch screen 131 and other input devices 132.
  • the touch screen 131 can collect touch operations by the user on or near it (such as operations performed on or near the touch screen using a finger, a knuckle, a stylus, or any other suitable object) and drive the corresponding connection device according to a preset program.
  • the touch screen can detect a user's touch action on the touch screen, convert the touch action into a touch signal, send the signal to the processor 170, and receive and execute commands sent by the processor 170; the touch signal includes at least touch point coordinate information.
  • the touch screen 131 can provide an input interface and an output interface between the terminal 100 and a user.
  • touch screens can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • the input unit 130 may also include other input devices.
  • other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control button 132, switch button 133, etc.), trackball, mouse, joystick, and the like.
  • the display unit 140 can be used to display information input by a user or information provided to a user and various menus of the terminal 100.
  • the display unit is further configured to display an image acquired by the device using the camera 150, including a preview image, an initial image captured, and a target image processed by a certain algorithm after the shooting.
  • the touch screen 131 may cover the display panel 141.
  • when the touch screen 131 detects a touch operation on or near it, it transmits the operation to the processor 170 to determine the type of the touch event, and the processor 170 then provides a corresponding visual output on the display panel 141 according to the type of the touch event.
  • the touch screen and the display unit can be integrated into one component to implement the input, output, and display functions of the terminal 100.
  • in this case the touch display screen represents the combined function set of the touch screen and the display unit; in some embodiments, the touch screen and the display unit can also serve as two separate components.
  • the memory 120 can be used to store instructions and data; it can mainly include an instruction storage area and a data storage area, where the data storage area can store the association between joint touch gestures and application functions, and the instruction storage area can store software units such as the operating system, applications, and the instructions required for at least one function, or their subsets and extension sets.
  • a non-volatile random access memory can also be included, providing the processor 170 with the hardware, software, and data resources for managing the computing device and supporting the control software and applications. The memory 120 is also used for storing multimedia files, as well as running programs and applications.
  • the processor 170 is the control center of the terminal 100; it connects the various parts of the entire mobile phone through various interfaces and lines, and executes the various functions of the terminal 100 and processes data by running or executing instructions stored in the memory 120 and calling data stored in the memory 120, thereby performing overall monitoring of the mobile phone.
  • the processor 170 may include one or more processing units; preferably, the processor 170 may integrate an application processor and a modem processor, where the application processor mainly processes an operating system, a user interface, an application, and the like.
  • the modem processor primarily handles wireless communications. It can be understood that the above modem processor may not be integrated into the processor 170.
  • the processors, memories can be implemented on a single chip, and in some embodiments, they can also be implemented separately on separate chips.
  • the processor 170 can also be configured to generate corresponding operation control signals and send them to the corresponding components of the computing and processing device, and to read and process data in software, in particular the data and programs in the memory 120, so that each functional module performs its corresponding function, thereby controlling the corresponding components to act as required by the instructions.
  • the radio frequency unit 110 can be used for transmitting and receiving signals during information transmission and reception or during a call; specifically, it receives downlink information from the base station and passes it to the processor 170 for processing, and sends uplink data to the base station.
  • RF circuits include, but are not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
  • the radio unit 110 can also communicate with network devices and other devices through wireless communication.
  • the wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), etc.
  • the audio circuit 160, the speaker 161, and the microphone 162 can provide an audio interface between the user and the terminal 100.
  • on one hand, the audio circuit 160 can convert received audio data into an electrical signal and transmit it to the speaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 collects sound signals and converts them into electrical signals, which the audio circuit 160 receives and converts into audio data; after being processed by the processor 170, the audio data is transmitted, for example, via the radio frequency unit 110 to another terminal, or output to the memory 120 for further processing.
  • the audio circuit can also include a headphone jack 163 for providing a connection interface between the audio circuit and the earphone.
  • the terminal 100 also includes a power source 190 (such as a battery) for powering various components.
  • the power source can be logically coupled to the processor 170 through a power management system to manage functions such as charging, discharging, and power management through the power management system.
  • the terminal 100 further includes an external interface 180, which may be a standard Micro USB interface or a multi-pin connector, and which may be used to connect the terminal 100 for communication with other devices, or to connect a charger for charging the terminal 100.
  • the terminal 100 may further include a flash, a wireless fidelity (WiFi) module, a Bluetooth module, various sensors, and the like, and details are not described herein. All of the methods described below can be applied to the terminal shown in FIG. 1.
  • an embodiment of the present invention provides an image processing method.
  • the specific processing method includes the following steps:
  • Step 31 Obtain an N frame image, where N is a positive integer greater than 2;
  • Step 32 Determine a reference image in the N frames; the remaining N-1 frames are to-be-processed images. For example, if N is 20, the first frame may be the reference image and the remaining 19 frames the to-be-processed images; i in step 33 may then be any integer from 1 to 19.
  • Step 33 Obtain N-1 de-ghosted frames according to the N-1 to-be-processed images; specifically, steps s331-s334 may be performed on the i-th of the N-1 frames, where i may take every positive integer not greater than N-1. In some embodiments, only M of the to-be-processed frames may be used, yielding M de-ghosted frames, with M a positive integer smaller than N-1; see Figure 3.
  • S331 register an ith frame image and the reference image to obtain an i-th registration image
  • Step 34 Obtain a first target image according to the reference image and the N-1 de-ghosted frames; specifically, a mean operation is performed on the reference image and the N-1 de-ghosted frames to obtain the first target image. The mean operation can also include some corrections to the average, an average of absolute values, and the like.
  • in step 31, a shooting instruction is received under the current parameter settings and N pictures are taken continuously; this applies to the first capture mode, the second capture mode, and the third capture mode.
  • An alternative to step 31 is, specifically, that the user directly enters a capture mode of his own choice, such as the first capture mode, the second capture mode, or the third capture mode mentioned below. In that case the terminal does not need to detect the framing environment, because each capture mode has a preset parameter rule (pre-stored locally in the terminal or in a cloud server): each capture mode has a corresponding sensitivity and exposure duration, and may of course include other performance parameters. Once a particular capture mode is entered, the camera automatically adjusts to the corresponding sensitivity and exposure duration for shooting. Therefore, if the user directly adopts a capture mode, the N pictures are taken at the sensitivity and exposure duration corresponding to that mode, and subsequent image processing is performed in the corresponding mode.
  • when the camera of the terminal is in automatic mode or smart mode, the camera needs to detect the framing environment. If the framing image of the camera is detected as a moving image, the current exposure time of the camera is detected to exceed the safe duration, and the framing environment is detected to be extremely bright, the first capture mode proposed in the present invention is adopted. If the framing image of the camera is detected as a moving image, the current exposure time of the camera is detected to exceed the safe duration, and the framing environment is detected to be of medium-high brightness, the second capture mode or the third capture mode proposed in the present invention is adopted. If none of the above scenarios is detected, any camera mode supported by the camera can be used for shooting. A specific photographing process can be seen in Figure 4.
  • the "camera” in this document generally refers to a system capable of performing a photographing function in a terminal device, including a camera, and a necessary processing module and a storage module to complete image acquisition and transmission, and may also include some processing function modules.
  • the "current exposure duration" and "current sensitivity" respectively refer to the exposure duration and sensitivity corresponding to the preview data stream of the framing image under initial conditions, and are usually related to the camera's own properties and initial settings. In a possible design, if the terminal does not detect the camera's framing environment, or detects the framing environment but does not detect any of the following three situations, the exposure duration and sensitivity corresponding to the camera's preview of the framing image data stream remain the "current exposure duration" and "current sensitivity".
  • Case 1: the framing image of the camera is a moving image;
  • Case 2: the current exposure time of the camera is detected to exceed the safe duration;
  • Case 3: the framing environment is detected as a very bright environment or a moderately bright environment.
  • to detect whether the framing image is a moving image, motion detection is performed on the preview data stream: the photo preview stream is analyzed, with a detection every x frames (the interval x is adjustable, x a positive integer);
  • at each detection, the currently detected frame is compared against the previously detected frame.
  • specifically, the two images may be divided into several regions in the same way, for example 64 regions per image; if one or more regions differ significantly, the scene is regarded as a motion scene (a sketch of this check follows below).
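A minimal sketch of this region-wise check, assuming grayscale preview frames; the 8×8 grid (64 regions) follows the example above, while the per-region threshold is an illustrative assumption.

```python
import numpy as np

def is_motion_scene(prev_frame, curr_frame, grid=8, region_thresh=12.0):
    """prev_frame, curr_frame: uint8 grayscale frames of equal size."""
    h, w = prev_frame.shape
    bh, bw = h // grid, w // grid
    for r in range(grid):
        for c in range(grid):
            a = prev_frame[r*bh:(r+1)*bh, c*bw:(c+1)*bw].astype(np.float32)
            b = curr_frame[r*bh:(r+1)*bh, c*bw:(c+1)*bw].astype(np.float32)
            if np.abs(a - b).mean() > region_thresh:  # one strongly differing region suffices
                return True
    return False
```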
  • the current exposure time and the safety shutter can be obtained by reading the camera parameters.
  • the safety shutter is a property of the terminal's camera.
  • when the current exposure time is greater than the safety shutter, the capture mode is considered.
  • scene brightness is divided by thresholds on the sensitivity (ISO) and the exposure duration (expo). Very bright scene definition: ISO < iso_th1 and expo < expo_th1, where iso_th1 and expo_th1 can be determined according to the user's specific needs. Medium highlight scene definition: iso_th1 ≤ ISO < iso_th2 and expo_th1 ≤ expo < expo_th2, where iso_th2 and expo_th2 can likewise be determined according to the user's specific needs. Low-light scene definition: iso_th2 ≤ ISO and expo_th2 ≤ expo. It should be understood that the division of these intervals is determined by the user's needs, and discontinuities or overlaps between these value intervals are allowed. A classification sketch follows below.
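Putting the brightness intervals and the mode choice together, a hedged sketch follows; the threshold values are left as parameters since the patent leaves them to the user's needs, and the helper names are ours.

```python
def classify_scene(iso, expo, iso_th1, expo_th1, iso_th2, expo_th2):
    """Classify scene brightness from current sensitivity (ISO) and exposure duration."""
    if iso < iso_th1 and expo < expo_th1:
        return "very_bright"
    if iso_th1 <= iso < iso_th2 and expo_th1 <= expo < expo_th2:
        return "medium_bright"
    if iso >= iso_th2 and expo >= expo_th2:
        return "low_light"
    return "other"  # the intervals may be discontinuous or overlap

def choose_capture_mode(is_moving, expo, safe_shutter, scene):
    """Returns 1 for the first capture mode, 2 for the second/third, 0 otherwise."""
    if is_moving and expo > safe_shutter:
        if scene == "very_bright":
            return 1
        if scene == "medium_bright":
            return 2  # second or third capture mode
    return 0  # any ordinary camera mode supported by the camera
```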
  • the first capture mode, the second capture mode, and the third capture mode are described in detail below.
  • Step 31 is specifically: obtain parameters such as the camera's current sensitivity and current exposure duration; keeping the product of the current sensitivity and exposure duration constant, decrease the exposure duration according to the preset ratio and increase the sensitivity, obtaining the first exposure duration and the first sensitivity. For example, the first exposure duration is 1/2 or 1/4 of the original exposure duration, and the first sensitivity is correspondingly 2 or 4 times the original sensitivity; the specific ratio may be set according to the user's needs or adjusted by rule. Then set the camera's exposure duration and sensitivity to the first exposure duration and the first sensitivity, respectively, and capture N frames. The subsequent steps perform noise reduction on these N frames. A parameter sketch follows below.
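As a toy illustration of this parameter change (using the 1/4 example above; the helper name is an assumption):

```python
def first_mode_params(iso, expo, ratio=4):
    """Shorten exposure by `ratio` and raise sensitivity by `ratio`,
    so that the product iso * expo (overall exposure level) is preserved."""
    first_expo = expo / ratio   # e.g. 1/30 s -> 1/120 s
    first_iso = iso * ratio     # e.g. ISO 100 -> ISO 400
    return first_expo, first_iso
```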
  • the action of taking a picture can be triggered by the user pressing the shutter button.
  • in one case, by the time the user presses the shutter, the camera's sensitivity and exposure duration have already been adjusted to the first sensitivity and the first exposure duration, and N photos are taken at those settings for subsequent processing; in another case, the camera keeps the current sensitivity and current exposure duration until the user presses the shutter, at which point they are adjusted to the first exposure duration and the first sensitivity, and N pictures are taken at those settings for subsequent processing.
  • in the preview image data stream, the display image may be shown either at the current sensitivity and current exposure duration, or at the first exposure duration and the first sensitivity.
  • Step 32 is specifically: determining one reference image in the N frame image, and the remaining N-1 frame images are to be processed images.
  • the first frame image or the middle frame image of the N frame images is taken as a reference image.
  • the subsequent steps are described by taking the first frame image as an example.
  • Step 33 is specifically: obtain N-1 de-ghosted frames according to the N-1 to-be-processed images. This step can be subdivided into several substeps: steps s331-s334 may be performed on the i-th of the remaining N-1 frames, where i may take every positive integer not greater than N-1; in a specific implementation, a subset of the frames may also be used, yielding de-ghosted images for that subset. For convenience of explanation, in the present embodiment the de-ghosted images are obtained from all of the N-1 frames.
  • S331 is specifically: registering the ith frame image with the reference image to obtain an i-th registration image.
  • the specific registration method may be: (1) perform feature extraction on the i-th frame image and the reference image in the same manner, obtaining a series of feature points, and compute a descriptor for each feature point; (2) match the feature points of the i-th frame image against those of the reference image to obtain a series of feature point pairs, and use the RANSAC algorithm (prior art) to cull bad pairs; (3) solve the transformation matrix between the two images (a homography matrix, affine matrix, etc.) from the matched feature point pairs, and warp the i-th frame image so that it is aligned with the reference image, obtaining the registration map of the i-th frame.
  • mature open-source algorithms can be invoked for this step, so it is not expanded in detail here; a sketch follows below.
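A sketch of S331 built from common open-source pieces (ORB features, brute-force matching, RANSAC homography, perspective warp via OpenCV); the patent only requires mature open-source algorithms, so these particular choices and parameters are our assumptions.

```python
import cv2
import numpy as np

def register_to_reference(frame_i, reference):
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(frame_i, None)    # (1) feature points + descriptors
    k2, d2 = orb.detectAndCompute(reference, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)                 # (2) feature point pairs
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)  # (2)-(3) RANSAC culling + homography
    h, w = reference.shape[:2]
    return cv2.warpPerspective(frame_i, H, (w, h))  # (3) i-th registration image
```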
  • S332 is specifically: obtain the i-th difference image according to the i-th registration image and the reference image. Specifically, the i-th frame registration map and the reference image are differenced pixel by pixel, and the difference image of the two is obtained from the absolute value of each difference.
  • S333 is specifically: obtain the i-th ghost weight image according to the i-th difference image. Specifically, pixels in the difference map exceeding a preset threshold are set to M (e.g., 255), pixels not exceeding the threshold are set to N (e.g., 0), and Gaussian smoothing of the re-assigned difference map then yields the i-th ghost weight image.
  • S334 is specifically: fuse the i-th registration image with the reference image according to the i-th ghost weight image to obtain the i-th de-ghosted frame.
  • Specifically, the i-th frame registration map (image_i in the following formula) and the reference image (image_1 in the following formula) are fused pixel by pixel according to the i-th ghost weight image (ghost_mask_i), giving the i-th de-ghosted frame (no_ghost_i).
  • The fusion is as follows, where m, n represent pixel coordinates and w_i = ghost_mask_i / 255 is the normalized i-th ghost weight image: no_ghost_i(m, n) = w_i(m, n) × image_1(m, n) + (1 − w_i(m, n)) × image_i(m, n), so that the reference image dominates wherever the ghost weight is high.
  • Step 34 is specifically: perform a mean operation on the reference image and the N-1 de-ghosted frames to obtain the first target image.
  • In the first capture mode, the first target image is the final image obtained when the terminal executes this photographing mode.
  • the second capture mode is more complicated than the first capture mode; some of its steps are the same as those of the first capture mode. A flowchart of the second capture mode can be seen in FIG. 5.
  • Step 41 Capture one frame of a first new image at the camera's current sensitivity and exposure duration; then decrease the exposure duration according to the preset ratio and increase the sensitivity to obtain the second exposure duration and the second sensitivity, set the camera's exposure duration and sensitivity to the second exposure duration and the second sensitivity, respectively, and capture N frames;
  • the action of taking a picture can be triggered by the user pressing the shutter button.
  • in one case, one frame of the first new image is first captured at the current sensitivity and current exposure duration, then the settings are adjusted to the second exposure duration and the second sensitivity and N pictures are taken under those conditions, giving N+1 pictures in total for subsequent processing;
  • in another case, the settings are first adjusted to the second exposure duration and the second sensitivity and N pictures are taken under those conditions, and the camera is then restored to the current sensitivity and current exposure duration, under which one frame of the first new image is captured; again, N+1 pictures in total are obtained for subsequent processing. Further, in the preview image data stream, the display image may be shown either at the current sensitivity and current exposure duration, or at the second exposure duration and the second sensitivity.
  • Step 42 Apply the first capture mode scheme (steps 31-34) to the N frames obtained in the previous step to obtain the first target image; it should be understood that the second sensitivity, the second exposure duration, and some of the adjustable thresholds above may change correspondingly with the scene;
  • Step 43 Obtain a second target image according to the first target image and the first new image.
  • it may include, but is not limited to, the following two implementations:
  • S4312 obtaining a first difference image according to the first registration image and the first target image
  • S4315 Perform weighted fusion of pixel values on the first de-ghosted image and the first target image to obtain the second target image; specifically, this includes four implementations: time domain fusion s4315(1) and s4315(3), and frequency domain fusion s4315(2) and s4315(4).
  • Time domain fusion s4315(1): guided-filter the first target image and the first de-ghosted image respectively, filtering out the fine detail information (an existing mature algorithm), and record the results as fusion_gf and noghost_gf.
  • the fusion_gf and noghost_gf are pixel-weighted and fused.
  • the specific fusion formula weights fusion_gf and noghost_gf pixel by pixel, where v is a constant noise value corresponding to the current ISO gear position, and W is a weight value with range [0, 1).
  • for any pixel, the target detail is the larger of the detail values filtered out of the first target image and the first de-ghosted image by the guided filtering; adding this detail back to the fused image yields the second target image. A sketch follows below.
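A hedged sketch of s4315(1). A Gaussian blur stands in for the guided filter (which the patent treats as an existing mature algorithm), and the weight formula W = diff / (diff + v) is a plausible reading consistent with the stated constraints (v an ISO-dependent noise constant, W in [0, 1)); the exact formula is not spelled out in this text.

```python
import cv2
import numpy as np

def time_domain_fuse(fusion, noghost, v=5.0):
    f = fusion.astype(np.float32)
    n = noghost.astype(np.float32)
    fusion_gf = cv2.GaussianBlur(f, (7, 7), 0)    # stand-in for guided filtering
    noghost_gf = cv2.GaussianBlur(n, (7, 7), 0)
    diff = np.abs(fusion_gf - noghost_gf)
    W = diff / (diff + v)                         # assumed weight, range [0, 1)
    fused = W * noghost_gf + (1.0 - W) * fusion_gf
    # target detail: the larger of the two filtered-out detail layers, per pixel
    detail = np.maximum(f - fusion_gf, n - noghost_gf)
    return np.clip(fused + detail, 0, 255).astype(np.uint8)
```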
  • Time domain fusion s4315(3): downsample the first target image (denoted fusion) and the first de-ghosted image (denoted noghost) by a factor of 2 in width and height, obtaining the downsampled map of the first target image
  • and the downsampled map of the first de-ghosted image, recorded as fusionx4 and noghostx4.
  • Upsample fusionx4 and noghostx4 by a factor of 2 in width and height, obtaining two images of the same size as before the downsampling, denoted fusion' and noghost'.
  • The pixel-by-pixel difference between fusion and fusion' gives the sampling error map of the first target image, denoted fusion_se; the pixel-by-pixel difference between noghost and noghost' gives the sampling error map of the first de-ghosted image, recorded as noghost_se.
  • Guided filtering (an existing mature algorithm) is applied to fusionx4 and noghostx4, and the results are recorded as fusion_gf and noghost_gf.
  • fusion_gf and noghost_gf are fused with pixel-value weights to obtain a fused image, denoted Fusion;
  • the fusion formula is the same as in s4315(1).
  • the detail filtered out of the first target image by the guided filtering is added back to the fused image pixel by pixel.
  • the resulting image is upsampled by a factor of 2 in width and height, and recorded as FusionUp.
  • the larger of the two sampling error maps fusion_se and noghost_se is selected point by point and added to FusionUp point by point to increase the image detail, giving the second target image.
  • Frequency domain fusion s4315(2): guided-filter the first target image and the first de-ghosted image respectively (an existing mature algorithm); perform a Fourier transform on each of the two filtered images and obtain the corresponding amplitudes; using the amplitude ratio as the weight, fuse the Fourier spectra of the two images.
  • the specific fusion formula is similar to that of the time domain fusion.
  • an inverse Fourier transform of the fused spectrum yields the fused image, to which the target detail is added back pixel by pixel; for any pixel, the target detail is the larger of the detail values filtered out of the first target image and the first de-ghosted image by the guided filtering. Adding the detail gives the second target image. A sketch follows below.
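A sketch of s4315(2) along the same lines; the Gaussian blur again stands in for guided filtering, and the exact spectral weighting is an assumption (the amplitude ratio as the weight, as stated above).

```python
import cv2
import numpy as np

def frequency_domain_fuse(fusion, noghost):
    f = cv2.GaussianBlur(fusion.astype(np.float32), (7, 7), 0)
    n = cv2.GaussianBlur(noghost.astype(np.float32), (7, 7), 0)
    F, G = np.fft.fft2(f), np.fft.fft2(n)
    amp_f, amp_g = np.abs(F), np.abs(G)
    W = amp_g / (amp_f + amp_g + 1e-8)            # amplitude ratio as per-frequency weight
    fused = np.real(np.fft.ifft2(W * G + (1.0 - W) * F))
    detail = np.maximum(fusion.astype(np.float32) - f,
                        noghost.astype(np.float32) - n)  # larger detail layer, per pixel
    return np.clip(fused + detail, 0, 255).astype(np.uint8)
```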
  • Frequency domain fusion s4315(4): downsample the first target image (denoted fusion) and the first de-ghosted image (denoted noghost) by a factor of 2 in width and height, obtaining the downsampled maps, recorded as fusionx4 and noghostx4.
  • Upsample fusionx4 and noghostx4 by a factor of 2 in width and height, obtaining two images of the same size as before the downsampling, denoted fusion' and noghost'.
  • The pixel-by-pixel difference between fusion and fusion' gives the sampling error map of the first target image, denoted fusion_se; the pixel-by-pixel difference between noghost and noghost' gives the sampling error map of the first de-ghosted image, recorded as noghost_se.
  • Guided filtering of fusionx4 and noghostx4 denoted as fusion_gf and noghost_gf.
  • Fourier transform is performed on the two filtered images respectively, and the corresponding amplitude is obtained.
  • the amplitude ratio is used as the weight to fuse the Fourier spectrum corresponding to the two images.
  • the specific fusion formula is similar to the time domain fusion.
  • the inverse spectrum of the fused spectrum is inversely transformed to obtain a fused image.
  • the detail filtered out of the first target image is added back to the fused image pixel by pixel, and the resulting image is upsampled by a factor of 2 in width and height, recorded as FusionUp.
  • the two sampling error maps of fusion_se and noghost_se are selected point by point, and added to FusionUp point by point to increase the image detail to obtain the second target image.
  • s4311-s4314 use the same specific algorithms as s331-s334, with only the input images replaced, and are not described again here.
  • S4325 Perform weighted fusion of pixel values on the second de-ghosted image and the first target image to obtain the second target image.
  • The method may include time domain fusion and frequency domain fusion implementations; refer to time domain fusion s4315(1), s4315(3) and frequency domain fusion s4315(2), s4315(4) above. The algorithms are the same, with only the input images replaced, and are not repeated here.
  • the third capture mode is more complicated than the first capture mode, and can be understood, to some extent, as a replacement for the second capture mode; the second and third capture modes are both commonly used in medium-highlight scenes.
  • a flowchart of the third capture mode can be seen in FIG. 6.
  • Step 51 Capture one frame of a second new image at the camera's current sensitivity and current exposure duration; keep the camera's current sensitivity unchanged, set the current exposure duration to a lower third exposure duration, and capture N frames;
  • the action of taking a picture can be triggered by the user pressing the shutter button.
  • in one case, one frame of the second new image is first captured at the current sensitivity and current exposure duration; then, keeping the camera's current sensitivity unchanged, the current exposure duration is set to a lower third exposure duration and N pictures are taken under that condition, giving N+1 pictures in total for subsequent processing;
  • in another case, the current exposure duration is first set to the lower third exposure duration and N pictures are taken under that condition, and the exposure duration is then restored, with the current sensitivity unchanged, to capture one frame of the second new image; again, N+1 pictures in total are obtained for subsequent processing.
  • in the preview image data stream, the display image may be shown either at the current sensitivity and current exposure duration, or at the third exposure duration and the current sensitivity.
  • Step 52 Apply the first capture mode scheme (steps 31-34) to the N frames obtained in the previous step to obtain the first target image; it should be understood that the third exposure duration and some adjustable thresholds may change correspondingly with the scene.
  • Step 53 Obtain a third target image according to the first target image and the second new image.
  • it may include, but is not limited to, the following two implementations:
  • S5316 Perform weighted fusion of pixel values on the third de-ghosted image and the fourth target image to obtain a fifth target image; for the fusion algorithm, refer to any one of time domain fusion s4315(1), s4315(3) and frequency domain fusion s4315(2), s4315(4);
  • S5317 Perform pyramid fusion processing on the fifth target image and the first target image to obtain the third target image. Specifically, construct a Laplacian pyramid for each of the fifth target image and the first target image; construct a weight map for the image fusion, normalize and smooth it, and build a Gaussian pyramid on the normalized and smoothed weight map; according to the weight settings of each pyramid layer, fuse the pyramids of all images layer by layer to obtain a synthetic pyramid; finally, starting from the top of the Laplacian pyramid, reconstruct the synthetic pyramid by the inverse of the pyramid-generation process, adding each layer's information back one by one to restore the fused image.
  • the pyramid fusion process is an existing mature algorithm and is not described in detail here; a minimal sketch follows below.
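A minimal Laplacian pyramid fusion sketch for S5317; a uniform 0.5 weight per level stands in for the constructed-and-smoothed weight maps, whose values the patent does not specify.

```python
import cv2
import numpy as np

def pyramid_fuse(img_a, img_b, levels=4):
    a, b = img_a.astype(np.float32), img_b.astype(np.float32)
    lap_a, lap_b = [], []
    for _ in range(levels):                        # build the Laplacian pyramids
        da, db = cv2.pyrDown(a), cv2.pyrDown(b)
        lap_a.append(a - cv2.pyrUp(da, dstsize=a.shape[1::-1]))
        lap_b.append(b - cv2.pyrUp(db, dstsize=b.shape[1::-1]))
        a, b = da, db
    fused = 0.5 * (a + b)                          # fuse the pyramid tops
    for la, lb in zip(reversed(lap_a), reversed(lap_b)):
        # inverse of pyramid generation: upsample and add each layer's detail back
        fused = cv2.pyrUp(fused, dstsize=la.shape[1::-1]) + 0.5 * (la + lb)
    return np.clip(fused, 0, 255).astype(np.uint8)
```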
  • s5312-s5316 can refer to s4311-s4315 and are not described again here.
  • S5326 Perform weighted fusion of pixel values on the third de-ghosted image and the fourth target image to obtain a fifth target image; for the fusion algorithm, refer to any one of time domain fusion s4315(1), s4315(3) and frequency domain fusion s4315(2), s4315(4);
  • S5327 Perform pyramid fusion processing on the fifth target image and the first target image to obtain the third target image; the specific process is the same as in S5317 and is not repeated here.
  • the present invention provides an image processing method capable of providing a snap mode for a camera.
  • with it, the user can capture clear images in different scenes, satisfying the desire to snap a shot and to capture and record life scenes anytime and anywhere, greatly improving the user experience.
  • the embodiment of the present invention provides an image processing apparatus 700.
  • the apparatus 700 can be applied to various types of photographing devices. As shown in FIG. 7, the apparatus 700 includes an obtaining module 701, a determining module 702, a de-ghosting module 703, and a mean operation module 704, wherein:
  • the obtaining module 701 is configured to obtain an N frame image.
  • the obtaining module 701 can be implemented by the processor invoking a program instruction in the memory to control the camera to acquire an image.
  • the determining module 702 is configured to determine one reference image in the N frame image, and the remaining N-1 frame images are to be processed images.
  • the determining module 702 can be implemented by a processor invoking a program instruction in a memory or an externally input program instruction.
  • the de-ghosting module 703 is configured to perform the following steps 1-4 on the i-th frame of the N-1 to-be-processed images to obtain N-1 de-ghosted frames, where i takes every positive integer not greater than N-1;
  • Step 1: register the i-th frame image with the reference image to obtain an i-th registration image;
  • Step 2: obtain an i-th difference image according to the i-th registration image and the reference image;
  • Step 3: obtain an i-th ghost weight image according to the i-th difference image;
  • Step 4: fuse the i-th registration image with the reference image according to the i-th ghost weight image to obtain the i-th de-ghosted frame;
  • the de-ghosting module 703 can be implemented by a processor, and can perform corresponding calculations by calling data and algorithms in the local storage or the cloud server.
  • the mean operation module 704 is configured to perform a mean operation on the reference image and the N-1 de-ghosted frames to obtain the first target image.
  • the mean operation module 704 can be implemented by a processor, which performs the corresponding calculations by calling the data and algorithms in local memory or in a cloud server.
  • the obtaining module 701 is specifically configured to perform the method mentioned in step 31 and equivalent alternatives; the determining module 702 is specifically configured to perform the method mentioned in step 32 and equivalent alternatives; the de-ghosting module 703 is specifically configured to perform the method mentioned in step 33 and equivalent alternatives; and the mean operation module 704 is specifically configured to perform the method mentioned in step 34 and equivalent alternatives.
  • the specific method embodiments above, and the explanations and expressions in those embodiments, also apply to the method execution in the apparatus.
  • the apparatus 700 further includes a detection module 705, configured to, when detecting that the following three situations exist simultaneously, control the acquisition module to acquire N frames according to the first acquisition manner below:
  • Case 1: the framing image of the camera is detected as a moving image;
  • Case 2: the current exposure time of the camera is detected to exceed the safe duration;
  • Case 3: the camera is detected to be in an extremely bright environment, that is, the current sensitivity is less than the first preset threshold and the current exposure duration is less than the second preset threshold.
  • The first acquisition manner: keep the product of the camera's current sensitivity and exposure duration constant, decrease the exposure duration and increase the sensitivity according to the preset ratio, obtaining the first exposure duration and the first sensitivity; set the camera's exposure duration and sensitivity to the first exposure duration and the first sensitivity, respectively, and capture N frames.
  • the detection module 705 is further configured to, when detecting that the following three situations exist simultaneously, control the acquisition module to acquire N frames according to the second acquisition manner or the third acquisition manner below:
  • Case 1: the framing image of the camera is detected as a moving image;
  • Case 2: the current exposure time of the camera is detected to exceed the safe duration;
  • Case 3: the camera is detected to be in a medium-high brightness environment, that is, the current sensitivity is in the first preset threshold interval and the current exposure duration is in the second preset threshold interval.
  • the second acquisition mode keeping the product of the current sensitivity of the camera and the exposure duration constant, decreasing the exposure duration and increasing the sensitivity according to the preset ratio, obtaining the second exposure duration and the second sensitivity; and taking the exposure time and sensitivity of the camera Setting the second exposure duration and the second sensitivity respectively, and capturing N frames of images;
  • Third acquisition manner: capture a second new image at the camera's current sensitivity and exposure duration; then, keeping the current sensitivity unchanged, set the exposure duration to a shorter third exposure duration and capture N frames of images.
  • The apparatus 700 may further include a fusion module 706, configured to obtain a second target image according to the first target image and a first new image, or to obtain a third target image according to the first target image and a second new image.
  • In one implementation, the first new image is registered with the reference image to obtain a first registration image; a first difference image is obtained according to the first registration image and the first target image; a first ghost weight image is obtained according to the first difference image; the first registration image is fused with the first target image according to the first ghost weight image to obtain a first de-ghost image; and weighted fusion of pixel values is performed on the first de-ghost image and the first target image to obtain the second target image.
  • In this case, the fusion module 706 is specifically configured to perform the method mentioned in manner (1) of step 43, or any equivalent replacement; a sketch of the weighted fusion follows below.
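  • For illustration, the concluding weighted fusion of pixel values could take a form like the following, where the fixed weight w is an assumed parameter rather than one specified by the method:

```python
import numpy as np

def weighted_fusion(deghost_img, target_img, w=0.5):
    """Pixel-wise weighted fusion of a de-ghost image and a target image."""
    fused = (w * deghost_img.astype(np.float32)
             + (1.0 - w) * target_img.astype(np.float32))
    return np.clip(fused, 0, 255).astype(np.uint8)
```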
  • In another implementation, the first new image is registered with the first target image to obtain a second registration image; a second difference image is obtained according to the second registration image and the first target image; a second ghost weight image is obtained according to the second difference image; the second registration image is fused with the first target image according to the second ghost weight image to obtain a second de-ghost image; and weighted fusion of pixel values is performed on the second de-ghost image and the first target image to obtain the second target image.
  • In this case, the fusion module 706 is specifically configured to perform the method mentioned in manner (2) of step 43, or any equivalent replacement.
  • In a further implementation, a third registration image is fused with a fourth target image to obtain a third de-ghost image; weighted fusion of pixel values is performed on the third de-ghost image and the fourth target image to obtain a fifth target image; and pyramid fusion is performed on the fifth target image and the first target image to obtain the third target image. In this case, the fusion module 706 is specifically configured to perform the method mentioned in manner (1) of step 53, or any equivalent replacement (a generic pyramid-fusion sketch follows below).
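  • Pyramid fusion is a standard multi-scale blending technique. The generic Laplacian-pyramid sketch below uses equal per-level weights and four levels (both assumptions); the exact pyramid fusion procedure of step 53 may differ:

```python
import cv2
import numpy as np

def pyramid_fuse(img_a, img_b, levels=4):
    """Laplacian-pyramid blend of two same-sized images (generic sketch)."""
    a, b = img_a.astype(np.float32), img_b.astype(np.float32)
    # Build Gaussian pyramids.
    gp_a, gp_b = [a], [b]
    for _ in range(levels):
        gp_a.append(cv2.pyrDown(gp_a[-1]))
        gp_b.append(cv2.pyrDown(gp_b[-1]))
    # Build Laplacian pyramids (coarsest Gaussian level as the base).
    lp_a, lp_b = [gp_a[-1]], [gp_b[-1]]
    for i in range(levels, 0, -1):
        size = (gp_a[i - 1].shape[1], gp_a[i - 1].shape[0])
        lp_a.append(gp_a[i - 1] - cv2.pyrUp(gp_a[i], dstsize=size))
        lp_b.append(gp_b[i - 1] - cv2.pyrUp(gp_b[i], dstsize=size))
    # Average the two pyramids level by level (equal weights assumed).
    fused = [(la + lb) / 2.0 for la, lb in zip(lp_a, lp_b)]
    # Collapse the fused pyramid back into a single image.
    out = fused[0]
    for level in fused[1:]:
        size = (level.shape[1], level.shape[0])
        out = cv2.pyrUp(out, dstsize=size) + level
    return np.clip(out, 0, 255).astype(np.uint8)
```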
  • It should be noted that each capture mode has preset parameter rules (stored in advance locally in the terminal or in a cloud server); that is, each capture mode has a corresponding sensitivity and exposure duration, and may of course include other performance parameters as well. Once a particular capture mode is entered, the obtaining module automatically adjusts to the corresponding sensitivity and exposure duration for shooting. Therefore, if the user directly selects a capture mode, the obtaining module takes N pictures at the corresponding sensitivity and exposure duration for subsequent image processing in that mode (see the hypothetical parameter table below).
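  • The sketch below illustrates such preset parameter rules; every mode name, parameter value, and camera method is invented for illustration and is not taken from the patent:

```python
# Hypothetical preset parameter rules per capture mode (placeholder values).
CAPTURE_MODE_PRESETS = {
    "extremely_bright": {"iso": 100, "exposure_ms": 2,  "n_frames": 8},
    "medium_high":      {"iso": 400, "exposure_ms": 10, "n_frames": 6},
}

def enter_capture_mode(camera, mode):
    """Adjust the camera to the mode's presets and take N pictures."""
    preset = CAPTURE_MODE_PRESETS[mode]  # could equally be fetched from a cloud server
    camera.set_sensitivity(preset["iso"])          # hypothetical camera API
    camera.set_exposure_ms(preset["exposure_ms"])  # hypothetical camera API
    return [camera.capture() for _ in range(preset["n_frames"])]
```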
  • The above detection module 705 and fusion module 706 may be implemented by a processor calling program instructions stored in a memory or in the cloud.
  • In summary, the present invention provides an image processing apparatus 700. With the apparatus, the user can capture clear images in different scenes, satisfying the desire to snap pictures and to capture and record scenes of daily life anytime and anywhere, thereby greatly improving the user experience.
  • It should be understood that the division of the above apparatus 700 into modules is merely a division of logical functions; in an actual implementation, the modules may be wholly or partly integrated into one physical entity, or may be physically separate.
  • Each of the above modules may be a separately configured processing element, may be integrated in a chip of the terminal, or may be stored in a storage element of the controller in the form of program code, with a processing element of the processor calling and executing the functions of the modules.
  • The individual modules may be integrated together or implemented independently.
  • The processing element described here may be an integrated circuit chip with signal processing capability.
  • In an implementation process, each step of the above method, or each of the above modules, may be completed by an integrated logic circuit of hardware in the processing element or by instructions in the form of software.
  • The processing element may be a general-purpose processor, such as a central processing unit (CPU), or may be one or more integrated circuits configured to implement the above method, for example one or more application-specific integrated circuits (ASIC), one or more digital signal processors (DSP), or one or more field-programmable gate arrays (FPGA).
  • Embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, the invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) containing computer-usable program code.
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, and the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)

Abstract

Disclosed are an image processing method, device, and apparatus. The method comprises: acquiring N frames of images; determining a reference image among the N frames of images, the remaining N-1 frames being images to be processed; obtaining N-1 frames of de-ghost images according to the N-1 frames of images to be processed; and performing a mean operation on the reference image and the N-1 frames of de-ghost images to obtain a first target image. The method can provide a capture mode for a camera so that a user can capture clear images in different scenes, thereby improving the user experience.
PCT/CN2018/109951 2017-10-13 2018-10-12 Procédé, dispositif et appareil de traitement d'image Ceased WO2019072222A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18866515.2A EP3686845B1 (fr) 2017-10-13 2018-10-12 Procédé, dispositif et appareil de traitement d'image
US16/847,178 US11445122B2 (en) 2017-10-13 2020-04-13 Image processing method and apparatus, and device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN201710954301.1 2017-10-13
CN201710954301 2017-10-13
CN201710959936.0 2017-10-16
CN201710959936.0A CN109671106B (zh) 2017-10-13 2017-10-16 一种图像处理方法、装置与设备

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/847,178 Continuation US11445122B2 (en) 2017-10-13 2020-04-13 Image processing method and apparatus, and device

Publications (1)

Publication Number Publication Date
WO2019072222A1 (fr)

Family

ID=66100399

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/109951 Ceased WO2019072222A1 (fr) 2017-10-13 2018-10-12 Procédé, dispositif et appareil de traitement d'image

Country Status (1)

Country Link
WO (1) WO2019072222A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2816527A1 (fr) * 2012-02-15 2014-12-24 Intel Corporation Procédé et dispositif de traitement d'une image numérique, et support d'enregistrement lisible par un ordinateur
CN104349066A (zh) * 2013-07-31 2015-02-11 华为终端有限公司 一种生成高动态范围图像的方法、装置
CN105264567A (zh) * 2013-06-06 2016-01-20 苹果公司 用于图像稳定化的图像融合方法
CN105931213A (zh) * 2016-05-31 2016-09-07 南京大学 基于边缘检测和帧差法的高动态范围视频去鬼影的方法
CN106506981A (zh) * 2016-11-25 2017-03-15 阿依瓦(北京)技术有限公司 生成高动态范围图像的设备和方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3686845A4 *

Similar Documents

Publication Publication Date Title
CN109671106B (zh) 一种图像处理方法、装置与设备
JP6945744B2 (ja) 撮影方法、装置、およびデバイス
US10810720B2 (en) Optical imaging method and apparatus
CN112840634B (zh) 用于获得图像的电子装置及方法
WO2019071613A1 (fr) Procédé et dispositif de traitement d'image
CN110493538A (zh) 图像处理方法、装置、存储介质及电子设备
CN102883104A (zh) 自动图像捕捉
CN113099122A (zh) 拍摄方法、装置、设备和存储介质
WO2017124899A1 (fr) Procédé, appareil et dispositif électronique de traitement d'informations
CN106231200A (zh) 一种拍照方法及装置
CN110213484A (zh) 一种拍照方法、终端设备及计算机可读存储介质
CN114143471B (zh) 图像处理方法、系统、移动终端及计算机可读存储介质
CN109784327B (zh) 边界框确定方法、装置、电子设备及存储介质
CN105391940A (zh) 一种图像推荐方法及装置
CN108427938A (zh) 图像处理方法、装置、存储介质和电子设备
WO2018219274A1 (fr) Procédé et appareil de traitement de débruitage, support d'informations et terminal
WO2019072222A1 (fr) Procédé, dispositif et appareil de traitement d'image
CN110677581A (zh) 一种镜头切换方法、装置、存储介质及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18866515

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018866515

Country of ref document: EP

Effective date: 20200423