CN111028192A - Image synthesis method and electronic equipment - Google Patents
- Publication number: CN111028192A
- Application number: CN201911309384.4A
- Authority
- CN
- China
- Legal status: Granted
Classifications
- G—PHYSICS › G06—COMPUTING OR CALCULATING; COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T5/00—Image enhancement or restoration › G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G—PHYSICS › G06—COMPUTING OR CALCULATING; COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T5/00—Image enhancement or restoration › G06T5/70—Denoising; Smoothing
Abstract
The invention provides an image synthesis method and an electronic device, and relates to the technical field of image processing. The image synthesis method comprises the following steps: acquiring at least three images and ordering the at least three images; selecting a first image and a second image in order, starting from a first end of the ordering; comparing the contents of the first image and the second image, determining whichever of the two meets a preset condition as the reference image, and aligning and fusing it with the other image to obtain a first fused image; and obtaining a target fused image according to the first fused image and a third image among the at least three images. By synthesizing the images in an align-as-you-fuse manner, the scheme guarantees the accuracy of fusion, improves the accuracy of ghost removal, and improves the fusion result.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image synthesis method and an electronic device.
Background
With the increasing performance of mobile phones and the popularity of mobile phone photography, users' expectations for photo quality keep rising. However, hardware limitations make it difficult for a single picture to cover the full brightness range of a scene; in a backlit scene, for example, the subject usually comes out dark. The HDR (High Dynamic Range) mode addresses shooting in scenes with a large light ratio. Its basic principle is to capture pictures of the same scene at different exposures: the brighter picture preserves the details of dark regions, the darker picture preserves the content of bright regions, and the normally exposed picture preserves the content of the mid-brightness regions. These pictures are then fused into one image, and a final tone adjustment yields an HDR picture with richer detail.
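The multi-exposure fusion principle above can be sketched as a per-pixel weighted average, where pixels near mid-gray receive the highest weight so each frame contributes the regions it exposed well. This is only an illustrative sketch (the `well_exposedness` weighting and all names are assumptions, not the patent's algorithm), and it assumes the frames are already aligned:

```python
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Weight pixels near mid-gray highest (img is float in [0, 1])."""
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(images):
    """Per-pixel weighted average of already-aligned exposures."""
    weights = np.stack([well_exposedness(im) for im in images])
    weights /= weights.sum(axis=0) + 1e-8   # normalize weights per pixel
    return (weights * np.stack(images)).sum(axis=0)

dark = np.full((2, 2), 0.2)     # underexposed frame: keeps highlight detail
mid = np.full((2, 2), 0.5)      # normally exposed frame: keeps mid-tones
bright = np.full((2, 2), 0.8)   # overexposed frame: keeps shadow detail
hdr = fuse_exposures([dark, mid, bright])
```

In this toy case the dark and bright frames receive equal weight by symmetry, so the fused value lands at 0.5.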
In the existing mobile phone shooting process, pictures at different exposures are usually captured frame by frame. When there are moving objects in the scene, or the hand shakes, the pictures cannot be aligned directly, and fusing them with the existing fusion de-ghosting algorithm leaves ghosts in the result image.
Disclosure of Invention
Embodiments of the present invention provide an image synthesis method and an electronic device, aiming to solve the problem that the existing fusion de-ghosting algorithm cannot remove fusion ghosts, resulting in low fusion accuracy and a poor fusion result.
In order to solve the above technical problem, an embodiment of the present invention provides an image synthesis method, including:
acquiring at least three images and sequencing the at least three images;
selecting a first image and a second image in order starting from a first end of the ordering;
comparing the contents of the first image and the second image, determining one image meeting preset conditions in the first image and the second image as a reference image, aligning and fusing the reference image to the other image to obtain a first fused image;
and obtaining a target fusion image according to the first fusion image and a third image in the at least three images.
An embodiment of the present invention further provides an electronic device, including:
the sorting module is used for acquiring at least three images and sorting the at least three images;
a selection module for selecting a first image and a second image in order starting from a first end of said ordering;
the first fusion module is used for comparing the contents of the first image and the second image, determining one image meeting preset conditions in the first image and the second image as a reference image, aligning and fusing the reference image to the other image to obtain a first fused image;
and the processing module is used for obtaining a target fusion image according to the first fusion image and a third image in the at least three images.
The invention has the beneficial effects that:
according to the scheme, the images are synthesized by using the mode of aligning and fusing, so that the accuracy of fusion is guaranteed, the accuracy of removing ghost images is improved, and the fusion effect is improved.
Drawings
FIG. 1 shows a schematic diagram of a conventional HDR fusion process;
FIG. 2 is a schematic flow chart of an image synthesis method according to an embodiment of the present invention;
FIG. 3 is a detailed flow chart of an image synthesis method according to an embodiment of the present invention;
FIG. 4 shows one of the block diagrams of an electronic device according to an embodiment of the invention;
FIG. 5 is a second block diagram of an electronic device according to an embodiment of the invention;
FIG. 6 is a third block diagram of an electronic device according to an embodiment of the invention;
fig. 7 is a schematic diagram showing a hardware configuration of an electronic device according to an embodiment of the present invention.
Detailed Description
In the prior art, as shown in fig. 1, the fusion de-ghosting algorithm currently in use selects a reference frame (EV0) of suitable brightness, which may also be called the reference map, and aligns a frame brighter than it (EV +1) to it; if a motion region is detected between the two frames, EV +1 is filled with the information of EV0. EV +2 is then aligned with the aligned EV +1 in the same way, and in the dark-frame direction EV -1 and EV -2 are likewise aligned with EV0.
However, the main drawback of this method is as follows:
If an error occurs when EV +1 is aligned with EV0 and abnormal information is filled into EV +1, then when EV +2 is aligned with EV +1 the erroneous area is inevitably detected as motion, and by the logic of the algorithm the earlier erroneous fusion information keeps being filled into the current frame, so errors accumulate and propagate. In other words, if even one frame goes wrong during alignment, the result is wrong, and the error is amplified step by step.
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 2, an embodiment of the present invention provides an image synthesis method, including:
Step 201: acquiring at least three images and ordering the at least three images.
The at least three images may be a plurality of images with different exposure levels obtained by continuous shooting, or a plurality of images with the same exposure level obtained by continuous shooting.
Step 202: selecting a first image and a second image in order, starting from a first end of the ordering.
Step 203: comparing the contents of the first image and the second image, determining whichever of the two meets a preset condition as the reference image, and aligning and fusing it with the other image to obtain a first fused image.
Step 204: obtaining a target fused image according to the first fused image and a third image among the at least three images.
It should be noted that, in the embodiment of the present invention, fusing multiple images in an align-as-you-fuse manner effectively reduces fusion ghosts and improves the fusion result.
It should also be noted that, besides fusing images of different exposure levels while reducing ghosts, the same approach can fuse images of the same exposure level, thereby achieving multi-frame noise reduction.
Further, the specific implementation of step 201 differs across processing scenarios. Specifically:
First, reducing fusion ghosts
Specifically, in this case, step 201 is implemented as:
when the at least three images differ in brightness, ordering the at least three images by brightness value.
In this case, the first end of the ordering may be either the high-brightness end or the low-brightness end; that is, the images may be processed one by one starting from the brightest end of the ordered images, or starting from the darkest end.
Second, multi-frame noise reduction
Specifically, in this case, step 201 is implemented as:
when the at least three images have the same brightness, ordering the at least three images by their noise level.
In this case, the first end of the ordering may be either the highest-noise end or the lowest-noise end; that is, the images may be processed one by one starting from the noisiest end of the ordered images, or starting from the least noisy end.
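The two orderings described above for step 201 can be sketched as follows. The `estimate_noise` helper is a crude illustrative proxy (the patent does not specify a noise estimator), and this sketch places the brightest image at the "first end" when exposures differ, which is one of the two options the text allows:

```python
import numpy as np

def mean_brightness(img):
    return float(img.mean())

def estimate_noise(img):
    """Crude noise proxy: std-dev of horizontal first differences."""
    return float(np.diff(img.astype(np.float64), axis=1).std())

def order_images(images, same_exposure):
    """Order by noise for equal exposures, by brightness otherwise."""
    if same_exposure:
        return sorted(images, key=estimate_noise)           # least noisy first
    return sorted(images, key=mean_brightness, reverse=True)  # brightest first

# Different exposures: brightest frame ends up at the first end.
frames = [np.full((4, 4), v) for v in (0.2, 0.8, 0.5)]
ordered = order_images(frames, same_exposure=False)
```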
The following mainly describes embodiments of the present invention from the perspective of reducing fusion ghosts.
Specifically, the implementation manner of step 203 is:
determining a motion area according to the first image and the second image;
if the motion area of the first image does not have the target object and the motion area of the second image has the target object, determining the second image as a reference image, aligning the second image with the first image, and filling the target object in the second image into the first image to obtain a first fusion image.
The motion area mentioned above is the region where the position of the same feature differs between the two images. In general image processing, the brighter image is considered to have weaker noise, and using it as the reference image improves alignment accuracy; but when the motion region of the chosen reference image carries no target information, the other image cannot be filled from it. In that case the reference image must be swapped, which avoids highlight ghosts.
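One simple way to realize the motion area defined above is to mark, after the frames are aligned and exposure-compensated, the pixels where the two frames disagree by more than a threshold. This is an illustrative sketch only; the threshold value and function name are assumptions, not the patent's detection method:

```python
import numpy as np

def motion_mask(img_a, img_b, thresh=0.1):
    """Boolean mask of pixels where two aligned frames disagree."""
    diff = np.abs(img_a.astype(np.float64) - img_b.astype(np.float64))
    return diff > thresh

a = np.zeros((3, 3))
b = np.zeros((3, 3))
b[1, 1] = 0.5                     # an object moved into this pixel
mask = motion_mask(a, b)
```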
The specific implementation process is: take the brighter of the two images as the reference image; if the motion region of the reference image contains information (features), fill that information into the motion region of the darker image and fuse the images to obtain the first fused image; if no information is detected in the motion region of the reference image, take the darker image as the reference image instead, fill its information into the motion region of the other image, and fuse the images to obtain the first fused image.
It should be noted that, optionally, when performing alignment, if the darker of the two images has the weaker noise, the embodiment of the present invention defaults to that lower-noise image as the reference image: if the motion region of the reference image contains information (features), that information is filled into the motion region of the brighter image and the images are fused to obtain the first fused image. If instead the brighter of the two images has the weaker noise, the brighter image is taken as the reference image; if its motion region contains information, that information is filled into the motion region of the darker image and the images are fused to obtain the first fused image.
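The reference-selection rule described above can be sketched as: default to the brighter (assumed lower-noise) frame, but fall back to the darker frame when the bright frame's motion region is blown out and carries no usable information. The overexposure threshold and all names here are illustrative assumptions:

```python
import numpy as np

def pick_reference(bright, dark, mask, overexposed=0.98):
    """Return 'bright' or 'dark': which frame should be the reference,
    given a boolean motion mask over the (aligned) frames."""
    if mask.any() and (bright[mask] >= overexposed).all():
        return 'dark'     # bright frame has no information in the motion area
    return 'bright'       # default: brighter frame assumed less noisy

bright = np.full((3, 3), 0.99)    # saturated highlights
dark = np.full((3, 3), 0.4)
mask = np.zeros((3, 3), dtype=bool)
mask[0, 0] = True                 # one moving pixel
ref = pick_reference(bright, dark, mask)
```

With the saturated bright frame above, the rule swaps the reference to the darker frame, which is exactly the fallback that avoids highlight ghosts.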
It should be further noted that, after the first fused image is obtained from the first align-and-fuse pass, a second pass is needed to obtain a second fused image, and so on until the final target fused image is obtained. Specifically, step 204 may be implemented as:
selecting a third image after the second image in order from the first end;
comparing the contents of the first fused image and the third image, taking the reference image used for obtaining the first fused image as the reference image for aligning the third image, and fusing the aligned third image with the first fused image to obtain a second fused image;
and sequentially executing the alignment and fusion processes until the alignment and fusion of the last image in the sequence are completed to obtain the target fusion image.
In this case, the reference image used in the previous alignment is normally reused as the reference image for the current alignment. But if the confidence of the previous alignment is low and the reference image needs to be changed, it is changed to the third image, and the content of the motion region in the third image is filled into the motion region of the first fused image to obtain the final second fused image; alternatively, the reference image is changed to the first fused image, and the content of the motion region in the first fused image is filled into the motion region of the third image to obtain the final second fused image. Determining whether the reference image should be the first fused image or the third image is similar to the process of fusing the first and second images, and is not repeated here.
The latest fused image obtained in this way is fused in turn with the next image selected in order, finally yielding the target fused image.
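The overall loop of steps 202-204 can be sketched as follows: start from the first end of the ordering, fuse the first two frames, then repeatedly fold in the next frame. Here `align` and `fuse_pair` are deliberate placeholders standing in for the alignment and de-ghosting fusion the patent describes; only the loop structure is the point:

```python
import numpy as np

def align(moving, reference):
    """Placeholder: assume frames are already geometrically aligned."""
    return moving

def fuse_pair(fused, nxt):
    """Placeholder fusion: simple average of the running result and next frame."""
    return (fused + nxt) / 2.0

def fuse_sequence(ordered_images):
    """Align-as-you-fuse: fold each subsequent frame into the running result."""
    fused = ordered_images[0]
    for nxt in ordered_images[1:]:
        fused = fuse_pair(fused, align(nxt, fused))
    return fused

frames = [np.full((2, 2), v) for v in (0.8, 0.4, 0.2)]  # brightest first
target = fuse_sequence(frames)
```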
After the final target fused image is obtained through multiple passes, because an ordinary display supports only 8 bits per pixel, while fusing multiple 8-bit frames of different exposures usually yields more than 10 bits, the pixels of the final target fused image need to be brought back to 8 bits with a tone adjustment algorithm (toneMapping/DRC), losing as little detail and contrast as possible, to obtain a final composite image that can be displayed normally.
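As a minimal illustration of this last step, a global Reinhard-style operator can compress an arbitrarily wide range of fused values into 8 bits per pixel. This is only one well-known example of toneMapping/DRC, not the tone adjustment the patent uses:

```python
import numpy as np

def tonemap_to_8bit(hdr):
    """Compress linear HDR values (any positive range) to 8-bit output."""
    x = hdr / (1.0 + hdr)          # Reinhard operator: maps [0, inf) -> [0, 1)
    return np.clip(np.round(x * 255.0), 0, 255).astype(np.uint8)

hdr = np.array([[0.0, 1.0],
                [3.0, 1000.0]])    # fused values exceeding the 8-bit range
ldr = tonemap_to_8bit(hdr)
```

Very bright fused values are compressed toward 255 instead of clipping abruptly, which is what preserves highlight contrast in the displayed result.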
The following describes a specific implementation of the embodiments of the present invention.
Specifically, in the image processing procedure shown in fig. 3, EV0 denotes a normally exposed image, EV +X denotes an image brighter than EV0 (the larger X, the brighter), and conversely EV -X denotes an image darker than EV0. The multi-exposure align-and-fuse algorithm provided by the embodiment of the invention starts from the brightest image, aligns it with the second-brightest image, detects and compensates the motion region, and then fuses the two to obtain an HDR high bitmap. The obtained HDR high bitmap is align-fused in the same way with the next-brightest image, until all images are fused. Finally, the tone adjustment (toneMapping) step of the HDR algorithm produces the final HDR output.
It should be noted that the two-way arrow in fig. 3 indicates that the reference image (i.e., reference frame) is not fixed during alignment: when aligning EV +2 and EV +1, the reference frame may be either EV +2 on the left or EV +1. In a specific implementation, the brighter image (EV +2) is the default reference frame; if the detected motion region is overexposed (contains no information), the motion region of EV +1 cannot be filled from the reference frame, so the reference frame is switched to EV +1 and the motion region of EV +2 is filled with the information of EV +1. Because EV +1 is darker than EV +2, the probability that its motion region is overexposed is far lower than with EV +2 as the reference frame, which largely avoids the highlight ghost problem.
After EV +2 and EV +1 are aligned, they are fused into an HDR high bitmap. The resulting intermediate HDR high bitmap is then aligned with EV0, using the reference frame of the previous step (EV +2 or EV +1). If highlight motion is detected, or the confidence of the previous alignment is low and the reference frame needs to be adjusted, the motion region of the intermediate high bitmap is filled with the information of EV0 and the reference frame is replaced by EV0; otherwise the previous reference frame (EV +2 or EV +1) continues to be used and the motion region of EV0 is filled from the intermediate high bitmap. The next HDR high bitmap obtained is then processed with EV -1 in the same way, until all images are processed; the final HDR high bitmap undergoes tone adjustment, compressing the dynamic range to obtain the final HDR result image.
It should be noted that the embodiment above aligns while fusing: each time a darker frame is aligned, it is fused with the most recently obtained HDR high bitmap. Because the reference frame is not fixed, errors from the previous stage are eliminated when the reference frame is replaced during fusion. Whether the current fusion should switch the reference frame to the darker frame is decided from the confidence of the previous alignment and the highlight motion condition, which prevents errors from propagating and being amplified step by step, ensuring processing accuracy.
It should be noted that, in the embodiment of the present invention, the reference frame is not fixed during alignment and the images are aligned while being fused, which guarantees the accuracy of fusion, improves the accuracy of ghost removal, improves the fusion result, and thereby improves the user experience.
As shown in fig. 4 to 6, an embodiment of the present invention further provides an electronic device 400, including:
a sorting module 401, configured to obtain at least three images and sort the at least three images;
a selection module 402 for selecting a first image and a second image in order starting from a first end of said ordering;
a first fusion module 403, configured to compare the contents of the first image and the second image, determine one of the first image and the second image that meets a preset condition as a reference image, align and fuse the reference image to another image, so as to obtain a first fused image;
a processing module 404, configured to obtain a target fusion image according to the first fusion image and a third image of the at least three images.
Specifically, the sorting module 401 implements one of the following:
when the brightness of the at least three images is different, sequencing the at least three images according to the sequence of the brightness values;
when the brightness of the at least three images is the same, the at least three images are sorted according to the order of the noise of the images.
Optionally, the first fusion module 403 includes:
a determining unit 4031 configured to determine a motion region from the first image and the second image;
a first fusion unit 4032, configured to determine the second image as a reference image if the motion region of the first image does not have a target object and the motion region of the second image has a target object, align the second image with the first image, and fill the target object in the second image in the first image to obtain a first fusion image.
Optionally, the first fusing module 403 is configured to:
and if the noise of the image with the minimum brightness in the first image and the second image is weakest, taking the image with the minimum brightness as a reference image, and otherwise, taking the image with the maximum brightness as the reference image.
Optionally, the processing module 404 includes:
a selecting unit 4041 configured to sequentially select, from the first end, a third image after the second image;
a second fusion unit 4042, configured to compare the contents of the first fusion image and the third image, use the reference image used for obtaining the first fusion image as a reference image used for aligning the third image, perform alignment of the third image, and fuse the aligned third image with the first fusion image to obtain a second fusion image;
the processing unit 4043 is configured to sequentially perform the alignment and fusion processes until the last image in the sequence is aligned and fused, so as to obtain the target fusion image.
The electronic device provided in the embodiment of the present invention can implement each process implemented by the electronic device in the method embodiment of fig. 2, and the description is not repeated here to avoid repetition. The electronic device of the embodiment of the invention synthesizes images in an align-as-you-fuse manner, thereby guaranteeing the accuracy of fusion, improving the accuracy of ghost removal, improving the fusion result, and improving the user experience.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present invention.
The electronic device 70 includes, but is not limited to: radio frequency unit 710, network module 720, audio output unit 730, input unit 740, sensor 750, display unit 760, user input unit 770, interface unit 780, memory 790, processor 711, and power supply 712. Those skilled in the art will appreciate that the electronic device configuration shown in fig. 7 does not constitute a limitation of the electronic device, and that the electronic device may include more or fewer components than shown, or some components may be combined, or a different arrangement of components. In the embodiment of the present invention, the electronic device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted electronic device, a wearable device, a pedometer, and the like.
The processor 711 is configured to obtain at least three images and sort the at least three images; selecting a first image and a second image in order starting from a first end of the ordering; comparing the contents of the first image and the second image, determining one image meeting preset conditions in the first image and the second image as a reference image, aligning and fusing the reference image to the other image to obtain a first fused image; and obtaining a target fusion image according to the first fusion image and a third image in the at least three images.
The electronic device of the embodiment of the invention synthesizes images in an align-as-you-fuse manner, thereby guaranteeing the accuracy of fusion, improving the accuracy of ghost removal, improving the fusion result, and improving the user experience.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 710 may be used for receiving and sending signals during a message transmission or call process; specifically, it receives downlink data from a base station and forwards it to the processor 711 for processing, and transmits uplink data to the base station. Generally, the radio frequency unit 710 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 710 may also communicate with a network and other devices through a wireless communication system.
The electronic device provides wireless broadband internet access to the user via the network module 720, such as assisting the user in sending and receiving e-mails, browsing web pages, and accessing streaming media.
The audio output unit 730 may convert audio data received by the radio frequency unit 710 or the network module 720 or stored in the memory 790 into an audio signal and output as sound. Also, the audio output unit 730 may also provide audio output related to a specific function performed by the electronic apparatus 70 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 730 includes a speaker, a buzzer, a receiver, and the like.
The input unit 740 is used to receive an audio or video signal. The input Unit 740 may include a Graphics Processing Unit (GPU) 741 and a microphone 742; the graphics processor 741 processes image data of a still picture or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 760. The image frames processed by the graphics processor 741 may be stored in the memory 790 (or other storage medium) or transmitted via the radio frequency unit 710 or the network module 720. The microphone 742 may receive sounds and process them into audio data. In a phone call mode, the processed audio data may be converted into a format transmittable to a mobile communication base station via the radio frequency unit 710 for output.
The electronic device 70 also includes at least one sensor 750, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor includes an ambient light sensor that adjusts the brightness of the display panel 761 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 761 and/or a backlight when the electronic device 70 is moved to the ear. As one type of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the posture of an electronic device (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), and vibration identification related functions (such as pedometer, tapping); the sensor 750 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described in detail herein.
The display unit 760 is used to display information input by a user or information provided to the user. The Display unit 760 may include a Display panel 761, and the Display panel 761 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 770 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the electronic device. Specifically, the user input unit 770 includes a touch panel 771 and other input devices 772. The touch panel 771, also referred to as a touch screen, may collect touch operations by a user on or near the touch panel 771 (e.g., operations by a user on or near the touch panel 771 using a finger, stylus, or any suitable object or attachment). The touch panel 771 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 711, and receives and executes commands sent by the processor 711. In addition, the touch panel 771 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 771, the user input unit 770 may also include other input devices 772. In particular, other input devices 772 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
Further, the touch panel 771 may be overlaid on the display panel 761, and when the touch panel 771 detects a touch operation thereon or nearby, the touch operation is transmitted to the processor 711 to determine the type of the touch event, and then the processor 711 provides a corresponding visual output on the display panel 761 according to the type of the touch event. Although the touch panel 771 and the display panel 761 are shown as two separate components in fig. 7 to implement the input and output functions of the electronic device, in some embodiments, the touch panel 771 and the display panel 761 may be integrated to implement the input and output functions of the electronic device, and is not limited herein.
The interface unit 780 is an interface for connecting an external device to the electronic apparatus 70. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 780 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the electronic apparatus 70 or may be used to transmit data between the electronic apparatus 70 and the external device.
The memory 790 may be used to store software programs as well as various data. The memory 790 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 790 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The processor 711 is a control center of the electronic device, connects various parts of the entire electronic device using various interfaces and lines, and performs various functions of the electronic device and processes data by operating or executing software programs and/or modules stored in the memory 790 and calling data stored in the memory 790, thereby performing overall monitoring of the electronic device. The processor 711 may include one or more processing units; preferably, the processor 711 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 711.
The electronic device 70 may further include a power source 712 (e.g., a battery) for supplying power to the various components, and preferably, the power source 712 may be logically connected to the processor 711 via a power management system, such that functions of managing charging, discharging, and power consumption are performed via the power management system.
In addition, the electronic device 70 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides an electronic device, including a processor 711, a memory 790, and a computer program stored in the memory 790 and executable on the processor 711. When executed by the processor 711, the computer program implements each process of the embodiments of the image synthesis method and can achieve the same technical effects; to avoid repetition, the details are not repeated here.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the embodiments of the image synthesis method and can achieve the same technical effects; to avoid repetition, the details are not repeated here. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus the necessary general hardware platform, and certainly also by hardware alone, but in many cases the former is the better implementation. Based on this understanding, the technical solutions of the present invention may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, magnetic disk, or optical disk) and including instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods according to the embodiments of the present invention.
While the preferred embodiments of the present invention have been described, it will be understood by those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention as defined in the following claims.
Claims (10)
1. An image synthesis method, comprising:
acquiring at least three images and sorting the at least three images;
selecting a first image and a second image in order starting from a first end of the ordering;
comparing the contents of the first image and the second image, determining, as a reference image, the one of the first image and the second image that satisfies a preset condition, and aligning and fusing the reference image with the other image to obtain a first fused image;
and obtaining a target fusion image according to the first fusion image and a third image in the at least three images.
2. The image synthesis method of claim 1, wherein the sorting the at least three images comprises one of:
when the brightnesses of the at least three images differ, sorting the at least three images in order of their brightness values;
when the brightnesses of the at least three images are the same, sorting the at least three images in order of their image noise.
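The ordering step of claim 2 can be sketched as follows; this is a minimal illustration, assuming each image carries precomputed `brightness` and `noise` statistics (these field names are illustrative and not defined in the patent):

```python
# Hypothetical sketch of the claim-2 ordering step. Each image is
# represented by precomputed "brightness" and "noise" statistics;
# the field names are illustrative, not taken from the patent.
def sort_images(images):
    brightnesses = [img["brightness"] for img in images]
    if len(set(brightnesses)) > 1:
        # Brightnesses differ: order the images by brightness value.
        return sorted(images, key=lambda img: img["brightness"])
    # Brightnesses are identical: order the images by noise instead.
    return sorted(images, key=lambda img: img["noise"])

frames = [
    {"id": "a", "brightness": 0.8, "noise": 0.1},
    {"id": "b", "brightness": 0.2, "noise": 0.3},
    {"id": "c", "brightness": 0.5, "noise": 0.2},
]
print([f["id"] for f in sort_images(frames)])  # → ['b', 'c', 'a']
```

Either branch yields a total order over the images, which is what the later claims rely on when they speak of "a first end of the ordering".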
3. The image synthesis method according to claim 1, wherein comparing the contents of the first image and the second image, determining the one of the first image and the second image that satisfies the preset condition as the reference image, and aligning and fusing the reference image with the other image to obtain the first fused image comprises:
determining a motion area according to the first image and the second image;
if the target object is absent from the motion area of the first image and present in the motion area of the second image, determining the second image as the reference image, aligning the second image with the first image, and filling the target object from the second image into the first image to obtain a first fusion image.
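A minimal sketch of the claim-3 fill step, under the simplifying assumptions that alignment has already been performed and that images are flat lists of pixel values; the boolean `motion_mask` (a hypothetical input, not named in the patent) marks where the target object appears in the reference image:

```python
# Minimal sketch of the claim-3 fusion step (alignment assumed done).
# Images are flat lists of pixel values; motion_mask is True where the
# target object appears in the reference (second) image.
def fill_from_reference(first, reference, motion_mask):
    return [r if moving else f
            for f, r, moving in zip(first, reference, motion_mask)]

first = [10, 10, 10, 10]       # motion area empty in the first image
reference = [10, 99, 99, 10]   # target object present in the reference
mask = [False, True, True, False]
print(fill_from_reference(first, reference, mask))  # → [10, 99, 99, 10]
```

Copying pixels only inside the motion mask is what keeps the moving subject intact while avoiding the ghost that naive averaging would produce.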
4. The image synthesis method according to claim 1, wherein determining one of the first image and the second image that satisfies a preset condition as a reference image comprises:
and if, of the first image and the second image, the image with the lower brightness has the weaker noise, taking the lower-brightness image as the reference image; otherwise, taking the higher-brightness image as the reference image.
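The claim-4 selection rule reduces to a single comparison; in this sketch the `brightness` and `noise` fields are assumed precomputed per-image statistics (illustrative names, not from the patent):

```python
# Sketch of the claim-4 reference selection (illustrative field names).
def select_reference(img_a, img_b):
    dark, bright = sorted((img_a, img_b), key=lambda i: i["brightness"])
    # The darker frame is preferred only when its noise is the weaker
    # of the two; otherwise the brighter frame becomes the reference.
    return dark if dark["noise"] < bright["noise"] else bright

a = {"id": "dark", "brightness": 0.2, "noise": 0.1}
b = {"id": "bright", "brightness": 0.9, "noise": 0.3}
print(select_reference(a, b)["id"])  # → 'dark'
```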
5. The image synthesis method according to claim 1, wherein obtaining the target fused image from the first fused image and a third image of the at least three images comprises:
selecting, in order from the first end, a third image following the second image;
comparing the contents of the first fusion image and the third image, aligning the third image using the reference image that was used to obtain the first fusion image, and fusing the aligned third image with the first fusion image to obtain a second fusion image;
and repeating the alignment and fusion process in sequence until the last image in the ordering has been aligned and fused, to obtain the target fusion image.
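The iterative process of claim 5 can be sketched as a fold over the ordered images that reuses one reference for every alignment. Here `align` and `fuse` are caller-supplied stand-ins for the alignment and fusion operations, which the patent does not specify in detail:

```python
# Sketch of the claim-5 loop (illustrative signatures): fuse the first
# two ordered images, then fold each remaining image into the running
# result, reusing the same reference image for every alignment step.
def synthesize(ordered_images, reference_idx, align, fuse):
    reference = ordered_images[reference_idx]
    fused = fuse(ordered_images[0], align(ordered_images[1], reference))
    for image in ordered_images[2:]:
        fused = fuse(fused, align(image, reference))
    return fused

# Toy demonstration with one-pixel "images", identity alignment and a
# running average as the fusion operator.
identity = lambda img, ref: img
average = lambda a, b: [(x + y) / 2 for x, y in zip(a, b)]
print(synthesize([[0.0], [4.0], [8.0]], 0, identity, average))  # → [5.0]
```

Keeping a single reference across all iterations is what the claim relies on for consistency: every image is warped into the same coordinate frame before being folded into the accumulated fusion result.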
6. An electronic device, comprising:
the sorting module is used for acquiring at least three images and sorting the at least three images;
a selection module for selecting a first image and a second image in order starting from a first end of said ordering;
the first fusion module is used for comparing the contents of the first image and the second image, determining, as a reference image, the one of the first image and the second image that satisfies a preset condition, and aligning and fusing the reference image with the other image to obtain a first fused image;
and the processing module is used for obtaining a target fusion image according to the first fusion image and a third image in the at least three images.
7. The electronic device of claim 6, wherein the ranking module implements one of:
when the brightnesses of the at least three images differ, sorting the at least three images in order of their brightness values;
when the brightnesses of the at least three images are the same, sorting the at least three images in order of their image noise.
8. The electronic device of claim 6, wherein the first fusion module comprises:
a determining unit for determining a motion region from the first image and the second image;
and the first fusion unit is used for determining the second image as the reference image if the target object is absent from the motion region of the first image and present in the motion region of the second image, aligning the second image with the first image, and filling the target object from the second image into the first image to obtain a first fusion image.
9. The electronic device of claim 6, wherein the first fusion module is configured to:
and if, of the first image and the second image, the image with the lower brightness has the weaker noise, taking the lower-brightness image as the reference image; otherwise, taking the higher-brightness image as the reference image.
10. The electronic device of claim 6, wherein the processing module comprises:
a selection unit configured to select, in order from the first end, a third image after the second image;
a second fusion unit, configured to compare the contents of the first fusion image and the third image, align the third image using the reference image that was used to obtain the first fusion image, and fuse the aligned third image with the first fusion image to obtain a second fusion image;
and the processing unit is used for performing the alignment and fusion processes in sequence until the last image in the ordering has been aligned and fused, to obtain the target fusion image.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911309384.4A CN111028192B (en) | 2019-12-18 | 2019-12-18 | Image synthesis method and electronic equipment |
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN201911309384.4A CN111028192B (en) | 2019-12-18 | 2019-12-18 | Image synthesis method and electronic equipment |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| CN111028192A true CN111028192A (en) | 2020-04-17 |
| CN111028192B CN111028192B (en) | 2023-08-08 |
Family
ID=70210354
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN201911309384.4A Active CN111028192B (en) | 2019-12-18 | 2019-12-18 | Image synthesis method and electronic equipment |
Country Status (1)
| Country | Link |
|---|---|
| CN (1) | CN111028192B (en) |
Cited By (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112288664A (en) * | 2020-09-25 | 2021-01-29 | 北京迈格威科技有限公司 | High dynamic range image fusion method and device and electronic equipment |
| CN112288642A (en) * | 2020-09-21 | 2021-01-29 | 北京迈格威科技有限公司 | Ghost detection method, image fusion method and corresponding device |
| CN112700393A (en) * | 2020-12-29 | 2021-04-23 | 维沃移动通信(杭州)有限公司 | Image fusion method and device and electronic equipment |
| CN114511487A (en) * | 2022-02-16 | 2022-05-17 | 展讯通信(上海)有限公司 | Image fusion method and device, computer-readable storage medium, and terminal |
Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1657679A1 (en) * | 2004-11-10 | 2006-05-17 | Agfa-Gevaert | Method of superimposing images |
| EP2075756A1 (en) * | 2007-12-29 | 2009-07-01 | Morpho Inc. | Block-based image blending for camera shake compensation |
| CN102663768A (en) * | 2012-05-10 | 2012-09-12 | 合肥工业大学 | Method for adaptively determining optimal polarization orientation in polarization vision system |
| CN103530861A (en) * | 2013-10-22 | 2014-01-22 | 天津普达软件技术有限公司 | Method for splicing and fusing core images |
| CN103973958A (en) * | 2013-01-30 | 2014-08-06 | 阿里巴巴集团控股有限公司 | Image processing method and image processing equipment |
| CN105431883A (en) * | 2013-07-08 | 2016-03-23 | 株式会社电装 | Image generating device, image generating method, and non-transitory computer-readable storage medium for image generating |
| CN106204513A (en) * | 2016-08-15 | 2016-12-07 | 厦门美图之家科技有限公司 | The methods, devices and systems of image procossing |
| CN106612397A (en) * | 2016-11-25 | 2017-05-03 | 努比亚技术有限公司 | Image processing method and terminal |
| CN109863742A (en) * | 2017-01-25 | 2019-06-07 | 华为技术有限公司 | Image processing method and terminal device |
| CN110062160A (en) * | 2019-04-09 | 2019-07-26 | Oppo广东移动通信有限公司 | Image processing method and device |
| CN110072061A (en) * | 2019-05-31 | 2019-07-30 | 努比亚技术有限公司 | A kind of interactive mode image pickup method, mobile terminal and storage medium |
- 2019-12-18: CN CN201911309384.4A, granted as CN111028192B (status: Active)
Patent Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| EP1657679A1 (en) * | 2004-11-10 | 2006-05-17 | Agfa-Gevaert | Method of superimposing images |
| EP2075756A1 (en) * | 2007-12-29 | 2009-07-01 | Morpho Inc. | Block-based image blending for camera shake compensation |
| CN102663768A (en) * | 2012-05-10 | 2012-09-12 | 合肥工业大学 | Method for adaptively determining optimal polarization orientation in polarization vision system |
| CN103973958A (en) * | 2013-01-30 | 2014-08-06 | 阿里巴巴集团控股有限公司 | Image processing method and image processing equipment |
| CN105431883A (en) * | 2013-07-08 | 2016-03-23 | 株式会社电装 | Image generating device, image generating method, and non-transitory computer-readable storage medium for image generating |
| CN103530861A (en) * | 2013-10-22 | 2014-01-22 | 天津普达软件技术有限公司 | Method for splicing and fusing core images |
| CN106204513A (en) * | 2016-08-15 | 2016-12-07 | 厦门美图之家科技有限公司 | The methods, devices and systems of image procossing |
| CN106612397A (en) * | 2016-11-25 | 2017-05-03 | 努比亚技术有限公司 | Image processing method and terminal |
| CN109863742A (en) * | 2017-01-25 | 2019-06-07 | 华为技术有限公司 | Image processing method and terminal device |
| CN110062160A (en) * | 2019-04-09 | 2019-07-26 | Oppo广东移动通信有限公司 | Image processing method and device |
| CN110072061A (en) * | 2019-05-31 | 2019-07-30 | 努比亚技术有限公司 | A kind of interactive mode image pickup method, mobile terminal and storage medium |
Non-Patent Citations (1)
| Title |
|---|
| XIAO, Zhaolin; ZHOU, Guoqing: "Scene-depth-independent ghost removal algorithm for virtual aperture images", Journal of Northwestern Polytechnical University, vol. 35, no. 1, pages 154 - 159 * |
Cited By (5)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112288642A (en) * | 2020-09-21 | 2021-01-29 | 北京迈格威科技有限公司 | Ghost detection method, image fusion method and corresponding device |
| CN112288664A (en) * | 2020-09-25 | 2021-01-29 | 北京迈格威科技有限公司 | High dynamic range image fusion method and device and electronic equipment |
| CN112288664B (en) * | 2020-09-25 | 2025-03-07 | 原力图新(重庆)科技有限公司 | High dynamic range image fusion method, device and electronic device |
| CN112700393A (en) * | 2020-12-29 | 2021-04-23 | 维沃移动通信(杭州)有限公司 | Image fusion method and device and electronic equipment |
| CN114511487A (en) * | 2022-02-16 | 2022-05-17 | 展讯通信(上海)有限公司 | Image fusion method and device, computer-readable storage medium, and terminal |
Also Published As
| Publication number | Publication date |
|---|---|
| CN111028192B (en) | 2023-08-08 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| CN109688322B (en) | Method and device for generating high dynamic range image and mobile terminal | |
| CN107770438B (en) | A kind of photographic method and mobile terminal | |
| CN110602473B (en) | White balance calibration method and device | |
| CN108307109B (en) | High dynamic range image preview method and terminal equipment | |
| CN107566730B (en) | A panoramic image shooting method and mobile terminal | |
| CN109167917B (en) | Image processing method and terminal equipment | |
| CN111083386B (en) | Image processing method and electronic device | |
| CN110445984B (en) | Shooting prompting method and electronic equipment | |
| CN111028192B (en) | Image synthesis method and electronic equipment | |
| CN110708475B (en) | A kind of exposure parameter determination method, electronic device and storage medium | |
| CN107623818B (en) | Image exposure method and mobile terminal | |
| CN108449541B (en) | Panoramic image shooting method and mobile terminal | |
| CN113888447A (en) | Image processing method, terminal and storage medium | |
| CN109474784B (en) | A preview image processing method and terminal device | |
| CN109729280A (en) | A kind of image processing method and mobile terminal | |
| CN110290331B (en) | Screen control method and terminal | |
| CN111131722A (en) | Image processing method, electronic device and medium | |
| CN108307123B (en) | Exposure adjusting method and mobile terminal | |
| CN110636225B (en) | Photographing method and electronic device | |
| CN109348212B (en) | Image noise determination method and terminal equipment | |
| CN108848321B (en) | Exposure optimization method, apparatus, and computer-readable storage medium | |
| CN108965701B (en) | A jitter correction method and terminal device | |
| CN108377297B (en) | Shooting parameter adjusting method, mobile terminal and computer readable storage medium | |
| CN107798662B (en) | Image processing method and mobile terminal | |
| CN108259808B (en) | Video frame compression method and mobile terminal |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| PB01 | Publication | ||
| SE01 | Entry into force of request for substantive examination | ||
| GR01 | Patent grant | ||