
WO2015081870A1 - 一种图像处理方法、装置及终端 - Google Patents


Info

Publication number
WO2015081870A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
translation
amount
compensation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
PCT/CN2014/093024
Other languages
English (en)
French (fr)
Inventor
陈刚
张兼
罗巍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Device Co Ltd
Original Assignee
Huawei Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Device Co Ltd filed Critical Huawei Device Co Ltd
Priority to US15/101,759 priority Critical patent/US9870602B2/en
Priority to EP14867399.9A priority patent/EP3068124A4/en
Publication of WO2015081870A1 publication Critical patent/WO2015081870A1/zh
Anticipated expiration legal-status Critical
Ceased legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/20Linear translation of whole images or parts thereof, e.g. panning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/21Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • H04N5/211Ghost signal cancellation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • GPHYSICS
    • G06COMPUTING OR CALCULATING; COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image

Definitions

  • the present invention relates to the field of image applications, and in particular, to an image processing method, apparatus, and terminal.
  • in many image applications, a high-resolution image can be obtained by a super-resolution algorithm, which fuses multiple low-resolution frames into one high-resolution image.
  • because there is a time difference when collecting the multiple low-resolution frames, there are differences of local motion between frames acquired at different times; the local motion is caused by movement of objects in the scene within the time interval between the acquisition of two frames.
  • Embodiments of the present invention provide an image processing method, apparatus, and terminal to solve the technical problem of "ghosting" in synthesizing a high-resolution image through a multi-frame low-resolution image in the prior art.
  • an image processing method is provided, which is applied to a terminal including a first camera and a second camera, where the first camera and the second camera are located on the same side of the terminal.
  • the method includes: acquiring a first image captured by the first camera of a first area and a second image captured by the second camera of a second area at the same moment; performing translation compensation on the second image by using the first image as a reference image; and synthesizing the first image and the translation-compensated second image into a third image, wherein the resolution of the third image is higher than the resolution of the first image and the second image.
  • the performing the translation compensation on the second image by using the first image as a reference image includes: determining a translation amount between the first image and the second image; and performing translation compensation on the second image acquired by the second camera according to the translation amount.
  • the combining the first image and the translation-compensated second image into a third image includes: determining a common area of the first image and the second image according to a result of the translation compensation; and synthesizing the common area of the first image and the second image into the third image.
  • an image processing apparatus is provided, including: an acquiring module, configured to acquire a first image captured by a first camera of a first area and a second image captured by a second camera of a second area at the same moment, wherein the first camera and the second camera are located in the same plane of the image processing apparatus; a translation compensation module, connected to the acquiring module and configured to perform translation compensation on the second image by using the first image as a reference image after the first image and the second image are obtained; and an image synthesis module, connected to the translation compensation module and configured to synthesize the first image and the translation-compensated second image into a third image after the translation compensation module performs the translation compensation on the second image, the resolution of the third image being higher than the resolution of the first image and the second image.
  • the translation compensation module includes: a determining unit, configured to determine a translation amount between the first image and the second image;
  • and a compensation unit, connected to the determining unit and configured to perform translation compensation on the second image acquired by the second camera according to the translation amount after the translation amount is determined by the determining unit.
  • the image synthesis module includes: a determining unit, configured to determine a common area of the first image and the second image according to a result of the translation compensation; and a synthesizing unit, connected to the determining unit and configured to synthesize the common area of the first image and the second image into the third image after the common area is determined by the determining unit.
  • a terminal includes: a first camera, configured to capture a first image of a first area; a second camera, configured to capture a second image of a second area at the same moment at which the first camera captures the first image, the first camera and the second camera being located on the same side of the terminal; and a processor, connected to the first camera and the second camera and configured to perform translation compensation on the second image by using the first image as a reference image, and to synthesize the first image and the translation-compensated second image into a third image, the resolution of the third image being higher than the resolution of the first image and the second image.
  • the optical axes of the first camera and the second camera are parallel and/or the first camera and the second camera are fixedly disposed at the terminal.
  • the processor's performing translation compensation on the second image by using the first image as a reference image is specifically: determining a translation amount between the first image and the second image; and performing translation compensation on the second image acquired by the second camera according to the translation amount.
  • the processor's combining the first image and the translation-compensated second image into a third image specifically includes: determining a common area of the first image and the second image according to a result of the translation compensation; and synthesizing the common area of the first image and the second image into the third image.
  • because, at the same moment, the first image is acquired by the first camera and the second image is acquired by the second camera, translation compensation is performed on the second image with the first image as the reference image, and the first image and the second image are finally combined into a third image whose resolution is higher than the resolution of the first image and the second image; since the first image and the second image are acquired at the same moment, the compensated second image overlaps the same object positions in the first image.
  • because the first image and the second image are acquired simultaneously, the shaking direction of the user's hand is the same for both, so the ghost generated by the user's hand shake can be prevented.
  • further, the time taken to acquire the first image and the second image may be reduced, and when the third image is synthesized no algorithm is needed to correct the "ghost" problem caused by local motion and the user's hand shake, which in turn improves the speed of acquiring the third image and can improve the user's experience.
  • FIG. 1 is a schematic diagram of a "ghosting" problem in the prior art when synthesizing a high resolution image through two low resolution images;
  • FIG. 2 is a schematic diagram of a first camera and a second camera disposed on the same side of a terminal in an image processing method according to an embodiment of the present invention
  • FIG. 3 is a flowchart of an image processing method according to an embodiment of the present invention.
  • FIG. 4 is a flowchart of performing translation compensation on a second image in an image processing method according to an embodiment of the present invention
  • FIG. 5 is a schematic diagram showing a positional relationship between d, B, f, and Z in a calculation formula of a translation amount in an image processing method according to an embodiment of the present invention
  • FIG. 6 is a flowchart of synthesizing a first image and a second image after translation compensation in an image processing method according to an embodiment of the present invention
  • FIG. 7a is a schematic diagram of a first image and a second image obtained by acquiring an image capturing method according to an embodiment of the present invention
  • FIG. 7b is a schematic diagram of performing a translation compensation on a second image and determining a common area of the first image and the second image in the image processing method according to an embodiment of the present invention
  • FIG. 7c is a schematic diagram of a combined area and a common area of a first image and a second image determined in an image processing method according to an embodiment of the present invention.
  • FIG. 8 is a flowchart of an image processing method according to Embodiment 1 of the present invention.
  • FIG. 9 is a flowchart of an image processing method according to Embodiment 2 of the present invention.
  • FIG. 10 is a structural diagram of an image collection device according to an embodiment of the present invention.
  • FIG. 11 is a structural diagram of a terminal according to an embodiment of the present invention.
  • an image processing method is provided in the embodiment of the present invention, and the method is applied to the terminal including the first camera and the second camera.
  • the first camera and the second camera are located on the same side of the terminal, and the method includes: acquiring a first image captured by the first camera of the first region and a second image captured by the second camera of the second region at the same moment;
  • performing translation compensation on the second image with the first image as a reference image; and combining the first image and the translation-compensated second image into a third image, the resolution of the third image being higher than the resolution of the first image and the second image.
  • because the first image and the second image are images acquired at the same moment, there is no object motion between the two frames, and the second image is compensated with the first image as a reference image, so that the same objects in the two images overlap at the same positions; and since the first image and the second image are simultaneously acquired, the shaking direction of the user's hand is the same when the two images are acquired, thereby preventing the "ghosting" produced by the user's hand shake and solving the "ghosting" problem that occurs when synthesizing high-resolution images through multi-frame low-resolution images;
  • further, the time taken to acquire the first image and the second image may be reduced, and when the third image is synthesized no algorithm is needed to correct the "ghost problem" generated by local motion and the user's hand shake, which increases the speed of acquiring the third image and can improve the user experience.
  • an embodiment of the present invention provides an image processing method. Referring to FIG. 2, the method is applied to a terminal including a first camera 10 and a second camera 11, where the first camera 10 and the second camera 11 are located at the terminal. On the same side, the first camera 10 and the second camera 11 can be connected by a connector 12.
  • the method specifically includes the following steps:
  • Step S301: Acquire a first image captured by the first camera 10 of the first region and a second image captured by the second camera 11 of the second region at the same moment, wherein the first image and the second image are each a single frame.
  • when acquiring the first image captured by the first camera 10 of the first area and the second image captured by the second camera 11 of the second area at the same moment, a preview first image of the first camera 10 photographing the first area and a preview second image of the second camera 11 photographing the second area at the same moment may be obtained; alternatively, a first image photographed by the first camera 10 of the first area and a second image photographed by the second camera 11 of the second area at the same moment may be obtained.
  • the focal lengths of the first camera 10 and the second camera 11 may be the same.
  • Step S302 using the first image as a reference image, and performing translation compensation on the second image;
  • Step S303 synthesize the first image and the second image after the translation compensation into a third image, and the resolution of the third image is higher than the resolution of the first image and the second image.
  • the first camera 10 and the second camera 11 can be completely independent cameras whose simultaneous capture is controlled by software, so that an object in the second image is absolutely stationary relative to the same object in the first image.
  • for example, in a scene in which user A is in motion, if the images were acquired at different times as in the prior art, the location of user A in the first image and the location of user A in the second image would be different, and a "ghost" would then be formed after the third image is synthesized; in the present invention, although user A is moving, after the translation compensation user A is located at the same position in the first image and the second image, thus avoiding the "ghosting" problem caused by the movement of objects between two frames of images.
  • the first camera 10 and the second camera 11 can be arranged in various manners. Three preferred arrangements are listed below. Of course, in the specific implementation process, the following three situations are not limited.
  • the optical axes of the first camera 10 and the second camera 11 are parallel.
  • the optical axis refers to the direction perpendicular to the plane of the lens of the camera, that is, the axis of symmetry of the optical system; the optical axes of the first camera 10 and the second camera 11 being parallel means that the perpendiculars to the lens planes of the first camera 10 and the second camera 11 are parallel. If the optical axes of the first camera 10 and the second camera 11 are parallel, it is possible to prevent distortion, occlusion, and the like between the first image and the second image, thereby making the calculated amount of translation more accurate.
  • the first camera 10 and the second camera 11 are fixedly disposed at a terminal.
  • the optical axes of the first camera 10 and the second camera 11 are parallel and the first camera 10 and the second camera 11 are fixedly disposed at a terminal.
  • in this case, because the relative position and posture of the first camera 10 and the second camera 11 are prevented from changing, the optical axes of the first camera 10 and the second camera 11 are prevented from becoming non-parallel as a result of such changes, so that the calculated amount of translation of the second image relative to the first image is more accurate, thereby further preventing the "ghosting" problem.
  • step S302 the first image is used as a reference image, and the second image is subjected to translation compensation.
  • the method further includes the following steps:
  • Step S401 determining a translation amount between the first image and the second image
  • Step S402 The second image acquired by the second camera 11 is subjected to translation compensation according to the amount of shift.
  • in step S401, the amount of translation between the first image and the second image may be determined by the following formula: d = B*f/Z;
  • d represents the amount of translation, in the second image relative to the first image, of an object at a distance Z from the plane in which the first camera 10 and the second camera 11 are located;
  • B represents the distance between the first camera 10 and the second camera 11
  • Z represents the vertical distance of the object from the plane in which the first camera 10 and the second camera 11 are located, that is, the depth of the object
  • f represents the focal length of the first camera 10 or the focal length of the second camera 11.
  • FIG. 5 it is a schematic diagram of the positional relationship between d, B, f, and Z, wherein after the first image is acquired by the first camera 10 and the second image is acquired by the second camera 11, The depths generated by the first camera 10 and the second camera 11 are then used to determine the amount of translation of different objects by the above-described translation amount calculation formula.
  • the depth generated by the first camera 10 and the second camera 11 and the corresponding translation amount in the depth may be calibrated in advance by the above-mentioned translation amount calculation formula, and the calibration method may be as follows:
  • N groups of specific images (such as checkerboard images) are collected at N discrete depth levels, each group containing two images, one from the first camera 10 and one from the second camera 11; the translation amount between each group of images is then calibrated, so that N groups of translation amounts are obtained, which are the calibrated translation amounts between pixels at the N depths.
  • the N depths and the corresponding N translation amounts can be pre-stored in ROM for use in actual photographing.
  • according to the depth information of the captured scene, the translation amount corresponding to the depth is queried in the ROM; this translation amount is the translation amount of the second image relative to the first image at that depth (assuming that the first image is the reference image).
  • for example, if the depth of a point A in the scene is D and the translation amount corresponding to D in the ROM is M, then the amount of translation of pixel A in the second image relative to pixel A in the first image is M.
  • the calibration can be performed for each product at the time of shipment, so the accuracy of the determined translation amount is higher.
  • step S402 the second image acquired by the second camera 11 is compensated according to the translation amount. For example, if the first image is unchanged, and the coordinate of each point of the second image is subtracted from the translation amount of the corresponding depth, the second image after the translation compensation is obtained.
  • step S303 the first image and the second image after the translation compensation are combined into a third image.
  • the method includes the following steps:
  • Step S601 determining a common area of the first image and the second image according to a result of the translation compensation
  • Step S602 Combine the common areas of the first image and the second image into a third image.
  • in step S601, for a captured first image 70a and second image 70b, the amount of translation corresponding to the depth of each pixel in the second image 70b may first be determined,
  • and the translation-compensated second image 70b can then be obtained by subtracting the translation amount of the corresponding depth from each pixel in the second image 70b, as shown in FIG. 7b.
  • in this case, the content of the identical portions of the first image 70a and the second image 70b serves as the common area 71.
  • in step S602, synthesizing the common area of the first image 70a and the second image 70b into a third image can be done in multiple ways; two of them are introduced below, and the specific implementation is of course not limited to the following two cases.
  • because the same objects of the second image 70b and the first image 70a are located at the same positions after the translation compensation of the second image 70b, the area at the same coordinates in the first image 70a and the second image 70b can be directly determined as the common area 71.
  • alternatively, the area at the same coordinates in the first image 70a and the second image 70b can be determined as the common area 71 and stored, and the maximum area covered by the coordinates of the first image 70a and the second image 70b is determined as the combined area.
  • the first image 70a and the translation-compensated second image 70b may be synthesized into a third image by an interpolation method, such as a kernel regression interpolation method or an edge-based kernel regression interpolation method,
  • which is not limited in the embodiments of the invention.
  • Embodiment 1 is described with the terminal being a mobile phone as an example.
  • the mobile phone includes two cameras.
  • the two cameras are located on the same side of the mobile phone, have parallel optical axes, and are fixedly disposed on the mobile phone.
  • FIG. 8 is a flowchart of the image processing method according to Embodiment 1 of the present invention.
  • Step S801a The first camera 10 acquires and obtains a first image, and the first image resolution is: 3264px*2448px;
  • Step S801b at the same time as the first camera 10 acquires the first image, the second camera 11 acquires the second image, and the second image resolution is: 3264px*2448px;
  • Step S802 Perform translation compensation on the first image and the second image according to the scene depth information and the corresponding relationship between the pre-stored depth and the translation amount;
  • the first camera 10 can transmit the first image to a translation compensation module in the mobile phone,
  • and the second camera 11 transmits the second image to the translation compensation module in the mobile phone; the translation compensation module then performs translation compensation on the first image and the second image.
  • the specific steps are as follows: according to the depth information of the scene, the translation amount corresponding to the pixels of each depth is determined from the correspondence between depth and translation amount pre-stored in the mobile phone; the translation amount of the corresponding depth is then subtracted from the coordinates of the pixels of the second image, which yields the translation-compensated second image; finally, the common area of the first image and the translation-compensated second image is determined;
  • Step S803: After determining the common area of the first image and the translation-compensated second image, synthesize the first image and the translation-compensated second image.
  • specifically, the first image, the translation-compensated second image, and the coordinate information corresponding to the common area are transmitted to the image synthesis module of the mobile phone;
  • the image synthesis module crops the common area of the first image and the translated second image, and finally fuses the cropped first image and second image into one high-resolution image, that is, the third image, by using an interpolation algorithm;
  • the resolution of the third image is, for example, 4160px*3120px.
  • the resolution sizes of the first image, the second image, and the third image are merely an example and are not intended to be limiting.
  • the terminal is a tablet computer.
  • the tablet computer includes a first camera 10 and a second camera 11.
  • the first camera 10 and the second camera 11 are located on the same side of the tablet.
  • the image processing method includes the following steps:
  • Step S901a the first camera 10 acquires and obtains the first image
  • Step S901b at the same time as the first camera 10 acquires the first image, the second camera 11 acquires and obtains the second image;
  • Step S902: The first camera 10 passes the first image, and the second camera 11 passes the second image, to a translation compensation module in the tablet, and the translation compensation module then performs translation compensation on the first image and the second image: it divides the first image into N regions according to the depth information of the captured scene and calculates the translation amount of each region according to the formula d = B*f/Z.
  • Step S903: After determining the common area of the first image and the translation-compensated second image, transfer the first image, the translation-compensated second image, and the coordinate information of the common area to the image synthesis module of the tablet; the image synthesis module determines a combined area of the first image and the second image and combines the combined area into a high-resolution image, and finally the common area of the first image and the second image is cropped from the high-resolution image to obtain the third image.
  • an embodiment of the present invention provides an image processing apparatus. Referring to FIG. 10, the following specifically includes the following structure:
  • the acquiring module 100 is configured to acquire a first image collected by the first camera 10 for the first region and a second image captured by the second camera 11 for the second region at the same time, where the first camera 10 and the second camera 11 are located in the image. Processing the same plane of the device;
  • the translation compensation module 101 is connected to the acquisition module 100, and after the first image and the second image are obtained by the acquisition module 100, the first image is used as a reference image, and the second image is subjected to translation compensation;
  • the image synthesis module 102 is coupled to the translation compensation module 101 and configured to synthesize the first image and the translation-compensated second image into a third image after the translation compensation is performed by the translation compensation module 101,
  • the resolution of the third image being higher than the resolution of the first image and the second image.
  • the optical axes of the first camera 10 and the second camera 11 are parallel and/or
  • the first camera 10 and the second camera 11 are fixedly disposed at a terminal.
  • the translation compensation module 101 specifically includes:
  • a determining unit configured to determine a translation amount between the first image and the second image
  • the compensation unit is connected to the determining unit, and after determining the shift amount based on the determining unit, the second image acquired by the second camera 11 is subjected to translation compensation according to the shift amount.
  • the determining unit is specifically configured to determine the translation amount by using the following formula: d = B*f/Z;
  • d represents the amount of translation of the object at a distance Z from the plane of the first camera 10 and the second camera 11 relative to the first image in the second image;
  • B represents the distance between the first camera 10 and the second camera 11
  • Z represents the vertical distance of the object from the plane in which the first camera 10 and the second camera 11 are located
  • f represents the focal length of the first camera or the focal length of the second camera.
  • the image synthesizing module 102 specifically includes:
  • a determining unit configured to determine a common area of the first image and the second image according to a result of the translation compensation
  • a synthesizing unit coupled to the determining unit, configured to synthesize the common area of the first image and the second image into a third image after determining the common area according to the determining unit.
  • because the image processing apparatus described in the embodiment of the present invention is the image processing apparatus used in the image processing method in the embodiment of the present invention, those skilled in the art can understand, based on the image processing method described herein, the specific structure and variations of the image processing apparatus, which are therefore not described in detail herein.
  • an embodiment of the present invention provides a terminal, such as a mobile phone, a tablet computer, or a digital camera.
  • the terminal includes:
  • a first camera 10 configured to acquire a first image from the first area
  • the second camera 11 is configured to acquire the second image at the same time when the first camera 10 captures the first image, and the first camera 10 and the second camera 11 are located on the same side of the terminal;
  • the processor 13 is connected to the first camera 10 and the second camera 11 for performing translation compensation on the second image with the first image as a reference image;
  • the first image and the panned compensated second image are combined into a third image, the resolution of the third image being higher than the resolution of the first image and the second image.
  • the first camera 10 and the second camera 11 can be connected by a connector 12 (as shown in FIG. 2).
  • the optical axes of the first camera 10 and the second camera 11 are parallel and/or
  • the first camera 10 and the second camera 11 are fixedly disposed at the terminal.
  • the processor 13's performing the translation compensation on the second image by using the first image as a reference image specifically includes:
  • determining the amount of translation between the first image and the second image; and performing translation compensation on the second image acquired by the second camera 11 according to the amount of translation.
  • the processor 13's determining the amount of translation between the first image and the second image is specifically: determining the amount of translation by the following formula: d = B*f/Z;
  • where d represents the amount of translation, in the second image relative to the first image, of an object at a distance Z from the plane in which the first camera 10 and the second camera 11 are located;
  • B represents the distance between the first camera 10 and the second camera 11
  • Z represents the vertical distance of the object from the plane of the first camera 10 and the second camera 11, and f represents the focal length of the first camera or the focal length of the second camera.
  • the focal length of the first camera may be the same as the focal length of the second camera.
  • the processor 13's combining the first image and the translation-compensated second image into a third image specifically includes:
  • determining a common area of the first image and the second image according to a result of the translation compensation; and synthesizing the common area of the first image and the second image into a third image.
  • first camera and the second camera are located on the same side of the terminal, and the first camera and the second camera may be located at the back of the terminal, and the pixels of the first camera and the second camera may be the same or different.
  • first camera and the second camera can also be in front of the terminal.
  • the terminal can be a mobile phone, a tablet, a wearable device, a wristband device, a digital camera, or glasses.
  • because the terminal described in the embodiment of the present invention is the terminal used in the image processing method in the embodiment of the present invention, a person skilled in the art can understand, based on the image processing method introduced herein, the specific structure and variations of the terminal, which are therefore not described in detail here.
  • because, at the same moment, the first image is acquired by the first camera and the second image is acquired by the second camera, translation compensation is performed on the second image with the first image as the reference image, and the first image and the second image are combined into a third image whose resolution is higher than the resolution of the first image and the second image; since the first image and the second image are images acquired at the same moment, there is no object motion between the two frames, and the second image compensated with the first image as a reference image overlaps the same object positions in the first image.
  • this solves the "ghosting" problem that arises when synthesizing high-resolution images from multi-frame low-resolution images.
  • further, the time taken to acquire the first image and the second image may be reduced, and when the third image is synthesized no algorithm is needed to correct the "ghost problem" generated by local motion and the user's hand shake, which increases the speed of acquiring the third image and can improve the user experience.
  • because the optical axes of the first camera and the second camera are parallel, problems such as distortion and occlusion between the first image and the second image can be prevented, thereby making the calculated translation amount more accurate, which in turn further prevents the "ghosting" problem.
  • because the first camera and the second camera are fixedly disposed, the relative positions and postures of the first camera and the second camera can be prevented from changing, thereby ensuring that objects at the same depth in the scene have the same translation amount of the second image relative to the first image.
  • in this case, the correspondence between depth and translation amount can be stored in advance, and the corresponding translation amount can be determined directly from the actual depth of the scene when photographing, without calculating it after the two images are acquired, thereby increasing the speed at which the third image is obtained; this also prevents the first camera and the second camera from shaking in different directions when the user's hand shakes, so it can further prevent the "ghosting" problem;
  • because the first camera and the second camera can have parallel optical axes and can be fixedly disposed on a terminal, the optical axes are prevented from becoming non-parallel due to relative positional changes of the first camera and the second camera; non-parallel optical axes would make the pre-stored translation amounts insufficiently accurate, so this makes the translation compensation of the second image more accurate and can further prevent the "ghosting" problem.
  • embodiments of the present invention can be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment or an embodiment combining software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) that contain computer-usable program code.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computing Systems (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to the field of image applications and discloses an image processing method, apparatus, and terminal, to solve the prior-art technical problem of "ghosting" produced when a high-resolution image is synthesized from multiple low-resolution frames. The method is applied to a terminal that includes a first camera and a second camera located on the same side of the terminal, and includes: obtaining a first image captured by the first camera of a first area and a second image captured by the second camera of a second area at the same moment; performing translation compensation on the second image with the first image as a reference image; and synthesizing the first image and the translation-compensated second image into a third image whose resolution is higher than that of the first image and the second image.

Description

Image processing method, apparatus, and terminal
This application claims priority to Chinese Patent Application No. 201310658550.8, entitled "Image processing method, apparatus, and terminal", filed with the Chinese Patent Office on December 6, 2013, which is incorporated herein by reference in its entirety.
Technical Field
The present invention relates to the field of image applications, and in particular, to an image processing method, apparatus, and terminal.
Background
In many image applications, a high-resolution image can be obtained by a super-resolution algorithm, which fuses multiple low-resolution frames into one high-resolution image. Because there is a time difference when the multiple low-resolution frames are captured, there are differences of local motion between frames captured at different times; the local motion is caused by movement of objects in the scene within the time interval between the capture of two frames.
Because objects in the scene move within the interval between the capture of two frames, a ghosting problem appears in the high-resolution image when the two low-resolution frames are synthesized into it; as shown in FIG. 1, the synthesized photograph contains a "ghost".
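The following is a minimal numerical sketch, not taken from the patent, of why fusing frames captured at different moments produces ghosting: averaging two one-dimensional "frames" in which an object has moved between captures leaves two half-intensity copies of the object. All names and numbers here are illustrative assumptions.

```python
# Minimal sketch (illustrative only): ghosting from naive fusion of frames
# captured at different times, when an object moves between the captures.
import numpy as np

def frame_with_object(position, width=20):
    """Return a 1-D 'image' with a bright 3-pixel object at the given position."""
    frame = np.zeros(width)
    frame[position:position + 3] = 1.0
    return frame

frame_t0 = frame_with_object(position=5)   # object at pixel 5 in the first capture
frame_t1 = frame_with_object(position=9)   # object has moved by the second capture

fused = (frame_t0 + frame_t1) / 2           # naive fusion of the two frames

# The fused signal contains two half-intensity copies of the object: a "ghost".
print(np.round(fused, 2))
```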
Summary
Embodiments of the present invention provide an image processing method, apparatus, and terminal, to solve the prior-art technical problem of "ghosting" when a high-resolution image is synthesized from multiple low-resolution frames.
According to a first aspect of the embodiments of the present invention, an image processing method is provided, applied to a terminal that includes a first camera and a second camera located on the same side of the terminal. The method includes: acquiring a first image captured by the first camera of a first area and a second image captured by the second camera of a second area at the same moment; performing translation compensation on the second image with the first image as a reference image; and synthesizing the first image and the translation-compensated second image into a third image, where the resolution of the third image is higher than the resolution of the first image and the second image.
With reference to the first aspect, in a first possible implementation, performing translation compensation on the second image with the first image as a reference image specifically includes: determining a translation amount between the first image and the second image; and performing translation compensation on the second image captured by the second camera according to the translation amount.
With reference to the first possible implementation of the first aspect, in a second possible implementation, the translation amount is determined by the following formula: d=B*f/Z, where d is the translation amount, in the second image relative to the first image, of an object at a distance Z from the plane in which the first camera and the second camera are located; B is the distance between the first camera and the second camera; Z is the perpendicular distance between the object and the plane in which the first camera and the second camera are located; and f is the focal length of the first camera or of the second camera.
With reference to the first possible implementation of the first aspect, in a third possible implementation, synthesizing the first image and the translation-compensated second image into a third image specifically includes: determining a common area of the first image and the second image according to a result of the translation compensation; and synthesizing the common area of the first image and the second image into the third image.
According to a second aspect of the embodiments of the present invention, an image processing apparatus is provided, including: an acquiring module, configured to acquire a first image captured by a first camera of a first area and a second image captured by a second camera of a second area at the same moment, where the first camera and the second camera are located in the same plane of the image processing apparatus; a translation compensation module, connected to the acquiring module and configured to perform translation compensation on the second image with the first image as a reference image after the first image and the second image are obtained by the acquiring module; and an image synthesis module, connected to the translation compensation module and configured to synthesize the first image and the translation-compensated second image into a third image after the translation compensation module performs translation compensation on the second image, where the resolution of the third image is higher than the resolution of the first image and the second image.
With reference to the second aspect, in a first possible implementation, the translation compensation module specifically includes: a determining unit, configured to determine a translation amount between the first image and the second image; and a compensation unit, connected to the determining unit and configured to perform translation compensation on the second image captured by the second camera according to the translation amount after the translation amount is determined by the determining unit.
With reference to the first possible implementation of the second aspect, in a second possible implementation, the determining unit is specifically configured to determine the translation amount by the following formula: d=B*f/Z, where d is the translation amount, in the second image relative to the first image, of an object at a distance Z from the plane in which the first camera and the second camera are located; B is the distance between the first camera and the second camera; Z is the perpendicular distance between the object and the plane in which the first camera and the second camera are located; and f is the focal length of the first camera or of the second camera.
With reference to the first possible implementation of the second aspect, in a third possible implementation, the image synthesis module specifically includes: a determining unit, configured to determine a common area of the first image and the second image according to a result of the translation compensation; and a synthesis unit, connected to the determining unit and configured to synthesize the common area of the first image and the second image into the third image after the common area is determined by the determining unit.
According to a third aspect of the embodiments of the present invention, a terminal is provided, including: a first camera, configured to capture a first image of a first area; a second camera, configured to capture a second image of a second area at the same moment at which the first camera captures the first image, where the first camera and the second camera are located on the same side of the terminal; and a processor, connected to the first camera and the second camera and configured to perform translation compensation on the second image with the first image as a reference image, and to synthesize the first image and the translation-compensated second image into a third image, where the resolution of the third image is higher than the resolution of the first image and the second image.
With reference to the third aspect, in a first possible implementation, the optical axes of the first camera and the second camera are parallel and/or the first camera and the second camera are fixedly disposed on the terminal.
With reference to the third aspect, in a second possible implementation, the processor's performing translation compensation on the second image with the first image as a reference image specifically includes: determining a translation amount between the first image and the second image; and performing translation compensation on the second image captured by the second camera according to the translation amount.
With reference to the second possible implementation of the third aspect, in a third possible implementation, the processor's determining the translation amount between the first image and the second image is specifically: determining the translation amount by the following formula: d=B*f/Z, where d is the translation amount, in the second image relative to the first image, of an object at a distance Z from the plane in which the first camera and the second camera are located; B is the distance between the first camera and the second camera; Z is the perpendicular distance between the object and the plane in which the first camera and the second camera are located; and f is the focal length of the first camera or of the second camera.
With reference to the second possible implementation of the third aspect, in a fourth possible implementation, the processor's synthesizing the first image and the translation-compensated second image into a third image specifically includes: determining a common area of the first image and the second image according to a result of the translation compensation; and synthesizing the common area of the first image and the second image into the third image.
The beneficial effects of the present invention are as follows:
In the embodiments of the present invention, at the same moment a first image of a first area is captured by the first camera and a second image of a second area is captured by the second camera, translation compensation is then performed on the second image with the first image as a reference image, and finally the first image and the second image are synthesized into a third image whose resolution is higher than that of the first image and the second image. Because the first image and the second image are captured at the same moment, there is no object motion between the two frames, and because the second image is translation-compensated with the first image as the reference, identical objects in the second image overlap their positions in the first image. In addition, because the first image and the second image are captured simultaneously, the user's hand shakes in the same direction for both, so ghosting caused by hand shake is prevented, which solves the "ghosting" problem produced when a high-resolution image is synthesized from multiple low-resolution frames.
Further, because the first image and the second image are captured at the same moment, the time taken to capture them is reduced, and when the third image is synthesized no algorithm is needed to correct the "ghosting" caused by local motion and hand shake, which increases the speed of obtaining the third image and improves the user experience.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the "ghosting" problem in the prior art when a high-resolution image is synthesized from two low-resolution images;
FIG. 2 is a schematic diagram of a first camera and a second camera disposed on the same side of a terminal in an image processing method according to an embodiment of the present invention;
FIG. 3 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 4 is a flowchart of performing translation compensation on a second image in an image processing method according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of the positional relationship among d, B, f, and Z in the translation-amount formula in an image processing method according to an embodiment of the present invention;
FIG. 6 is a flowchart of synthesizing a first image and a translation-compensated second image in an image processing method according to an embodiment of the present invention;
FIG. 7a is a schematic diagram of a first image and a second image captured in an image processing method according to an embodiment of the present invention;
FIG. 7b is a schematic diagram of performing translation compensation on the second image and determining the common area of the first image and the second image in an image processing method according to an embodiment of the present invention;
FIG. 7c is a schematic diagram of the combined area and the common area of the first image and the second image determined in an image processing method according to an embodiment of the present invention;
FIG. 8 is a flowchart of an image processing method according to Embodiment 1 of the present invention;
FIG. 9 is a flowchart of an image processing method according to Embodiment 2 of the present invention;
FIG. 10 is a structural diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 11 is a structural diagram of a terminal according to an embodiment of the present invention.
Detailed Description of the Embodiments
To solve the prior-art technical problem of "ghosting" when an image is synthesized from multiple frames, an embodiment of the present invention provides an image processing method. The method is applied to a terminal that includes a first camera and a second camera located on the same side of the terminal, and includes: acquiring a first image captured by the first camera of a first area and a second image captured by the second camera of a second area at the same moment; performing translation compensation on the second image with the first image as a reference image; and synthesizing the first image and the translation-compensated second image into a third image whose resolution is higher than that of the first image and the second image.
Because the first image and the second image are captured at the same moment, there is no object motion between the two frames, and because the second image is translation-compensated with the first image as the reference, identical objects in the second image overlap their positions in the first image. In addition, because the two images are captured simultaneously, the user's hand shakes in the same direction for both, which prevents ghosting caused by hand shake and thereby solves the "ghosting" problem produced when a high-resolution image is synthesized from multiple low-resolution frames.
Further, because the first image and the second image are captured at the same moment, the time taken to capture them is reduced, and when the third image is synthesized no algorithm is needed to correct the "ghosting" caused by local motion and hand shake, which increases the speed of obtaining the third image and improves the user experience.
To make the objectives, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the protection scope of the present invention.
In a first aspect, an embodiment of the present invention provides an image processing method. Referring to FIG. 2, the method is applied to a terminal that includes a first camera 10 and a second camera 11 located on the same side of the terminal; the first camera 10 and the second camera 11 may be connected by a connector 12.
Referring to FIG. 3, the method specifically includes the following steps:
Step S301: Acquire a first image captured by the first camera 10 of a first area and a second image captured by the second camera 11 of a second area at the same moment, where the first image and the second image are each a single frame.
It can be understood that acquiring the first image captured by the first camera 10 of the first area and the second image captured by the second camera 11 of the second area at the same moment may be acquiring a preview first image when the first camera 10 photographs the first area and a preview second image when the second camera 11 photographs the second area at the same moment, or may be acquiring a first image photographed by the first camera 10 of the first area and a second image photographed by the second camera 11 of the second area at the same moment.
The focal lengths of the first camera 10 and the second camera 11 may be the same.
Step S302: Perform translation compensation on the second image with the first image as a reference image.
Step S303: Synthesize the first image and the translation-compensated second image into a third image, where the resolution of the third image is higher than the resolution of the first image and the second image.
In step S301, the first camera 10 and the second camera 11 may be completely independent cameras whose simultaneous capture is controlled by software, so an object in the second image is absolutely stationary relative to the same object in the first image. For example, in a scene in which user A is moving, if the images were captured at different times as in the prior art, the position of user A in the first image would differ from the position of user A in the second image, and a "ghost" would form after the third image is synthesized; in the present invention, although user A is moving, after translation compensation user A is located at the same position in the first image and the second image, which avoids the "ghosting" problem caused by object motion between the two frames.
In specific implementation, the first camera 10 and the second camera 11 may be arranged in multiple manners; three preferred arrangements are listed below, and the specific implementation is of course not limited to these three cases.
First arrangement: the optical axes of the first camera 10 and the second camera 11 are parallel.
For example, the optical axis is the direction perpendicular to the plane of the camera lens, that is, the axis of symmetry of the optical system; the optical axes of the first camera 10 and the second camera 11 being parallel means that the perpendiculars to the lens planes of the first camera 10 and the second camera 11 are parallel. If the optical axes of the first camera 10 and the second camera 11 are parallel, problems such as distortion and occlusion between the first image and the second image are prevented, which makes the calculated translation amount more accurate.
Second arrangement: the first camera 10 and the second camera 11 are fixedly disposed on a terminal.
In this case, the relative position and posture of the first camera 10 and the second camera 11 are guaranteed not to change, even if the terminal is dropped or squeezed during use. This prevents the first camera 10 and the second camera 11 from shaking in different directions when the user's hand shakes, which further prevents the "ghosting" problem caused by hand shake.
Third arrangement: the optical axes of the first camera 10 and the second camera 11 are parallel, and the first camera 10 and the second camera 11 are fixedly disposed on a terminal.
In this case, because the relative position and posture of the first camera 10 and the second camera 11 are prevented from changing, the optical axes of the first camera 10 and the second camera 11 are prevented from becoming non-parallel as a result of such changes, so the calculated translation amount of the second image relative to the first image is more accurate, which further prevents the "ghosting" problem.
In step S302, translation compensation is performed on the second image with the first image as the reference image; as shown in FIG. 4, this specifically includes the following steps:
Step S401: Determine a translation amount between the first image and the second image.
Step S402: Perform translation compensation on the second image captured by the second camera 11 according to the translation amount.
Optionally, in step S401 the translation amount between the first image and the second image may be determined by the following formula:
d=B*f/Z;
where d is the translation amount, in the second image relative to the first image, of an object at a distance Z from the plane in which the first camera 10 and the second camera 11 are located;
B is the distance between the first camera 10 and the second camera 11;
Z is the perpendicular distance between the object and the plane in which the first camera 10 and the second camera 11 are located, that is, the depth of the object; and f is the focal length of the first camera 10 or of the second camera 11.
FIG. 5 is a schematic diagram of the positional relationship among d, B, f, and Z. After the first image is captured by the first camera 10 and the second image is captured by the second camera 11, the depth produced by the first camera 10 and the second camera 11 can be obtained, and the translation amounts of different objects are then determined by the above translation-amount formula.
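As an illustration of how the formula d=B*f/Z might be applied in practice, the following is a minimal sketch (not part of the patent) that converts a depth value or depth map into a per-pixel translation amount. The function name, the unit conventions (baseline in metres, focal length in pixels), and the sample numbers are assumptions made for the example.

```python
import numpy as np

def translation_from_depth(depth_map, baseline, focal_length_px):
    """Per-pixel shift d = B * f / Z of the second image relative to the first,
    for objects at depth Z (same length unit as the baseline B).
    focal_length_px is the focal length expressed in pixels."""
    depth = np.asarray(depth_map, dtype=np.float64)
    return baseline * focal_length_px / depth

# Example: an object 2.0 m away, cameras 25 mm apart, focal length 2800 px
# (illustrative numbers, not taken from the patent).
print(translation_from_depth(2.0, baseline=0.025, focal_length_px=2800.0))  # 35.0 px
```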
Further, the depths produced by the first camera 10 and the second camera 11 and the corresponding translation amounts at those depths may also be calibrated in advance using the above translation-amount formula. The calibration method may be as follows:
N groups of specific images (such as checkerboard images) are captured at N discrete depth levels, each group containing two images, one from the first camera 10 and one from the second camera 11; the translation amount between each group of images is then calibrated, which yields N groups of translation amounts, namely the calibrated translation amounts between pixels at the N depths. The N depths and the corresponding N translation amounts can be pre-stored in ROM for use when photographs are actually taken.
If the translation amounts have been calibrated in advance in this way, the process of determining the translation amount between the first image and the second image is as follows:
According to the depth information of the captured scene, the translation amount corresponding to the depth is looked up in ROM; this translation amount is the translation amount of the second image relative to the first image at that depth (assuming the first image is the reference image). For example, if the depth of a point A in the scene is D and the translation amount corresponding to D in ROM is M, the translation amount of pixel A in the second image relative to pixel A in the first image is M.
In this case, because the translation amount does not need to be recalculated when the third image is generated, the speed of obtaining the third image is further increased.
Moreover, because objects at the same depth in the scene have the same translation amount of the second image relative to the first image, the calibration can be performed for each product at the time of shipment, so the accuracy of the determined translation amount is higher.
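A minimal sketch of the calibration idea described above is given below: one translation amount per discrete depth level is stored in a table (standing in for the values that would be pre-stored in ROM), and the table is queried at shooting time with the nearest depth level. In a real device the shifts would be measured from calibration image pairs rather than computed from the formula; the helper names and numbers here are assumptions.

```python
import numpy as np

def build_shift_table(depth_levels_m, baseline_m, focal_px):
    """Offline calibration sketch: one shift per discrete depth level.
    Here the shifts are computed with d = B*f/Z for simplicity; in practice
    they would be measured from calibration image pairs (e.g. checkerboards)."""
    return {round(z, 3): baseline_m * focal_px / z for z in depth_levels_m}

def lookup_shift(table, depth_m):
    """Return the pre-calibrated shift for the depth level closest to depth_m."""
    nearest = min(table, key=lambda z: abs(z - depth_m))
    return table[nearest]

shift_table = build_shift_table(depth_levels_m=[0.5, 1.0, 2.0, 4.0, 8.0],
                                baseline_m=0.025, focal_px=2800.0)
print(lookup_shift(shift_table, depth_m=1.8))   # uses the 2.0 m calibration entry
```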
In step S402, translation compensation is performed on the second image captured by the second camera 11 according to the translation amount, for example: the first image is kept unchanged, and the translation amount of the corresponding depth is subtracted from the coordinates of each point of the second image, which yields the translation-compensated second image.
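The following sketch, under the assumption of purely horizontal shifts and a per-pixel shift map already expressed in pixels, illustrates step S402: the first image is left unchanged and the second image is resampled so that each pixel is moved by its corresponding translation amount. It is an illustrative implementation, not the patent's own code.

```python
import numpy as np

def compensate_second_image(second_image, shift_map):
    """Translation-compensation sketch: move the content of the second image
    left by its (rounded) per-pixel shift, leaving the first image untouched.
    Positions with no source sample stay 0 (handled later as non-common area)."""
    h, w = second_image.shape
    out = np.zeros_like(second_image)
    shifts = np.rint(shift_map).astype(int)
    for y in range(h):
        for x in range(w):
            src_x = x + shifts[y, x]          # sample that lands at x after the shift
            if 0 <= src_x < w:
                out[y, x] = second_image[y, src_x]
    return out

second = np.arange(25, dtype=float).reshape(5, 5)
shifted = compensate_second_image(second, shift_map=np.full((5, 5), 2.0))
print(shifted)
```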
In step S303, the first image and the translation-compensated second image are synthesized into a third image; referring to FIG. 6, this specifically includes the following steps:
Step S601: Determine a common area of the first image and the second image according to a result of the translation compensation.
Step S602: Synthesize the common area of the first image and the second image into the third image.
In step S601, referring to FIG. 7a, for a captured first image 70a and second image 70b, the translation amount corresponding to the depth of each pixel in the second image 70b may first be determined, and the translation amount of the corresponding depth is then subtracted from each pixel in the second image 70b, which yields the translation-compensated second image 70b, as shown in FIG. 7b. In this case, the content of the identical portions of the first image 70a and the second image 70b can be taken as the common area 71.
In step S602, synthesizing the common area of the first image 70a and the second image 70b into the third image can be done in multiple ways; two of them are introduced below, and the specific implementation is of course not limited to these two cases.
(1) Synthesizing the first image 70a and the translation-compensated second image 70b into the third image is specifically: cropping the common area 71 out of the first image 70a and the second image 70b, and synthesizing the cropped first image 70a and second image 70b into the third image.
Because the identical objects of the second image 70b and the first image 70a are located at the same positions after the translation compensation of the second image 70b, the region at the same coordinates in the first image 70a and the second image 70b can be directly determined as the common area 71.
In this case, because only the common area 71 of the first image 70a and the second image 70b needs to be synthesized and no other regions need to be processed, the processing speed of the terminal is increased and its processing burden is reduced.
(2) Synthesizing the common area of the first image and the second image into the third image is specifically: determining a combined area 72 of the first image 70a and the second image 70b (that is, the union of the regions of the first image 70a and the second image 70b), as shown in FIG. 7c; synthesizing the combined area 72 of the first image 70a and the second image 70b; and finally cropping the common area 71 of the first image 70a and the second image 70b out of the synthesis result as the third image.
Because the identical objects of the second image 70b and the first image 70a are located at the same positions after the translation compensation of the second image 70b, the region at the same coordinates in the first image 70a and the second image 70b can be determined as the common area 71 and stored, and the maximum region covered by the coordinates of the first image 70a and the second image 70b is determined as the combined area.
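As an illustration of determining the common area, the sketch below, assuming purely horizontal shifts, marks a pixel of the translation-compensated second image as belonging to the common area only if its source sample fell inside the second image; the remaining columns form the non-common area. The function name and array shapes are assumptions for the example.

```python
import numpy as np

def common_area_mask(first_shape, shift_map):
    """Sketch of determining the common area: a position of the (compensated)
    second image belongs to the common area only if its shifted source
    coordinate still fell inside the second image."""
    h, w = first_shape
    xs = np.arange(w)[None, :] + np.rint(shift_map).astype(int)
    return (xs >= 0) & (xs < w)

mask = common_area_mask((4, 6), shift_map=np.full((4, 6), 2.0))
print(mask.astype(int))   # the rightmost 2 columns fall outside the common area
```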
In specific implementation, in step S303 the first image 70a and the translation-compensated second image 70b may be synthesized into the third image by an interpolation method, for example a kernel regression interpolation method or an edge-based kernel regression interpolation method, which is not limited in the embodiments of the present invention.
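The kernel regression interpolation named above is not reproduced here; instead, the following toy sketch shows the simplest possible fusion of two aligned low-resolution images into a higher-resolution one, under the assumption that the compensated second image samples the scene half a pixel to the right of the first, so that interleaving columns doubles the horizontal sampling rate.

```python
import numpy as np

def fuse_interleave(first, second_compensated):
    """Toy fusion sketch: assume the compensated second image samples the scene
    half a pixel to the right of the first, so interleaving the columns doubles
    the horizontal sampling rate.  The patent instead names kernel regression
    interpolation; this is only a minimal stand-in."""
    h, w = first.shape
    fused = np.empty((h, 2 * w), dtype=np.float64)
    fused[:, 0::2] = first
    fused[:, 1::2] = second_compensated
    return fused

a = np.array([[0.0, 2.0, 4.0]])
b = np.array([[1.0, 3.0, 5.0]])
print(fuse_interleave(a, b))   # [[0. 1. 2. 3. 4. 5.]]
```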
In the above solution, when the first image and the translation-compensated second image are synthesized into the third image, the common area of the first image and the second image needs to be determined first. When a high-resolution image is synthesized, only synthesis of the common area of the first image and the second image can achieve the high-resolution effect, while the resolution of the non-common area cannot be increased by image synthesis; in this case, the synthesized third image is therefore more accurate.
The image processing method of the present invention is described below by means of several specific embodiments, which mainly describe several possible implementations of the method. It should be noted that the embodiments are only intended to explain the present invention and not to limit it; all embodiments that conform to the idea of the present invention fall within its protection scope, and a person skilled in the art will know how to make variations based on that idea.
Embodiment 1 of the present invention is described with the terminal being a mobile phone. The mobile phone includes two cameras that are located on the same side of the phone, have parallel optical axes, and are fixedly disposed on the phone. FIG. 8 is a flowchart of the image processing method according to Embodiment 1 of the present invention.
Step S801a: The first camera 10 captures a first image with a resolution of 3264px*2448px.
Step S801b: At the same moment at which the first camera 10 captures the first image, the second camera 11 captures a second image with a resolution of 3264px*2448px.
Step S802: Perform translation compensation on the first image and the second image according to the scene depth information and the pre-stored correspondence between depth and translation amount.
Specifically, the first camera 10 may pass the first image to a translation compensation module in the mobile phone, and the second camera 11 passes the second image to the translation compensation module, which then performs translation compensation on the first image and the second image. The specific steps are as follows: according to the depth information of the scene, the translation amount corresponding to the pixels of each depth is determined from the correspondence between depth and translation amount pre-stored in the phone; the translation amount of the corresponding depth is then subtracted from the coordinates of the pixels of the second image, which yields the translation-compensated second image; finally, the common area of the first image and the translation-compensated second image is determined.
Step S803: After the common area of the first image and the translation-compensated second image is determined, synthesize the first image and the translation-compensated second image.
Specifically, after the common area of the first image and the translation-compensated second image is determined, the first image, the translation-compensated second image, and the coordinate information corresponding to the common area are passed to an image synthesis module of the phone; the image synthesis module crops the common area of the first image and the translated second image, and finally fuses the cropped first image and second image into one high-resolution image, namely the third image, by an interpolation algorithm; the resolution of the third image is, for example, 4160px*3120px. The resolutions of the first image, the second image, and the third image listed in this embodiment are merely examples and are not limiting.
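To tie the steps of Embodiment 1 together, the following condensed sketch (illustrative only, with a uniform shift standing in for the per-depth shifts and column interleaving standing in for kernel-regression fusion) runs through compensation, cropping of the common area, and synthesis on toy arrays.

```python
import numpy as np

def fuse_pair(first, second, shift_px):
    """Condensed sketch of the Embodiment-1 flow for a scene at a single depth:
    compensate the second image by a uniform horizontal shift, keep only the
    common columns of both images, and interleave them into a wider result."""
    shift = int(round(shift_px))
    common_w = first.shape[1] - shift
    first_c = first[:, :common_w]                   # common area in the first image
    second_c = second[:, shift:shift + common_w]    # compensated second image
    fused = np.empty((first.shape[0], 2 * common_w))
    fused[:, 0::2] = first_c
    fused[:, 1::2] = second_c
    return fused

img1 = np.arange(12, dtype=float).reshape(3, 4)
img2 = img1 + 0.5
print(fuse_pair(img1, img2, shift_px=1))
```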
Embodiment 2 is described with the terminal being a tablet computer. The tablet computer includes a first camera 10 and a second camera 11 located on the same side of the tablet. Referring to FIG. 9, the image processing method includes the following steps:
Step S901a: The first camera 10 captures a first image.
Step S901b: At the same moment at which the first camera 10 captures the first image, the second camera 11 captures a second image.
Step S902: The first camera 10 passes the first image, and the second camera 11 passes the second image, to a translation compensation module in the tablet, and the translation compensation module then performs translation compensation on the first image and the second image. The specific steps are as follows: the translation compensation module divides the first image into N regions according to the depth information of the captured scene, and then calculates the translation amount of each region according to the formula d=B*f/Z.
Step S903: After the common area of the first image and the translation-compensated second image is determined, the first image, the translation-compensated second image, and the coordinate information of the common area are passed to an image synthesis module of the tablet; the image synthesis module determines the combined area of the first image and the second image and synthesizes it into one high-resolution image, and the common area of the first image and the second image is finally cropped out of this high-resolution image to obtain the third image.
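A minimal sketch of the per-region computation described in step S902 above is given below: the depth map is quantised into N depth bands and one translation amount d=B*f/Z is computed per band from the band's mean depth. The band-splitting rule (quantiles) and all numbers are assumptions for illustration; the patent does not specify how the N regions are chosen.

```python
import numpy as np

def per_region_shifts(depth_map, n_regions, baseline, focal_px):
    """Embodiment-2-style sketch: quantise the depth map into n_regions depth
    bands and compute one shift d = B*f/Z per band from the band's mean depth."""
    depth = np.asarray(depth_map, dtype=np.float64)
    edges = np.quantile(depth, np.linspace(0.0, 1.0, n_regions + 1))
    labels = np.clip(np.searchsorted(edges, depth, side="right") - 1, 0, n_regions - 1)
    shifts = np.zeros_like(depth)
    for r in range(n_regions):
        band = labels == r
        if band.any():
            shifts[band] = baseline * focal_px / depth[band].mean()
    return shifts

depth = np.array([[1.0, 1.1, 4.0, 4.2]])   # two depth bands in a toy depth map
print(np.round(per_region_shifts(depth, n_regions=2, baseline=0.025, focal_px=2800.0), 1))
```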
In a second aspect, an embodiment of the present invention provides an image processing apparatus; referring to FIG. 10, it specifically includes the following structure:
an acquiring module 100, configured to acquire a first image captured by a first camera 10 of a first area and a second image captured by a second camera 11 of a second area at the same moment, where the first camera 10 and the second camera 11 are located in the same plane of the image processing apparatus;
a translation compensation module 101, connected to the acquiring module 100 and configured to perform translation compensation on the second image with the first image as a reference image after the first image and the second image are obtained by the acquiring module 100; and
an image synthesis module 102, connected to the translation compensation module 101 and configured to synthesize the first image and the translation-compensated second image into a third image after the translation compensation module 101 performs translation compensation on the second image, where the resolution of the third image is higher than the resolution of the first image and the second image.
Optionally, the optical axes of the first camera 10 and the second camera 11 are parallel and/or
the first camera 10 and the second camera 11 are fixedly disposed on a terminal.
Optionally, the translation compensation module 101 specifically includes:
a determining unit, configured to determine a translation amount between the first image and the second image; and
a compensation unit, connected to the determining unit and configured to perform translation compensation on the second image captured by the second camera 11 according to the translation amount after the translation amount is determined by the determining unit.
Optionally, the determining unit is specifically configured to determine the translation amount by the following formula:
d=B*f/Z;
where d is the translation amount, in the second image relative to the first image, of an object at a distance Z from the plane in which the first camera 10 and the second camera 11 are located;
B is the distance between the first camera 10 and the second camera 11; and
Z is the perpendicular distance between the object and the plane in which the first camera 10 and the second camera 11 are located, and f is the focal length of the first camera or of the second camera.
Optionally, the image synthesis module 102 specifically includes:
a determining unit, configured to determine a common area of the first image and the second image according to a result of the translation compensation; and
a synthesis unit, connected to the determining unit and configured to synthesize the common area of the first image and the second image into the third image after the common area is determined by the determining unit.
Because the image processing apparatus described in this embodiment of the present invention is the image processing apparatus used in the image processing method of the embodiments of the present invention, a person skilled in the art can understand the specific structure and variations of the apparatus based on the image processing method described herein, and details are not described again.
In a third aspect, an embodiment of the present invention provides a terminal, for example a mobile phone, a tablet computer, or a digital camera; referring to FIG. 11, the terminal includes:
a first camera 10, configured to capture a first image of a first area;
a second camera 11, configured to capture a second image of a second area at the same moment at which the first camera 10 captures the first image, where the first camera 10 and the second camera 11 are located on the same side of the terminal; and
a processor 13, connected to the first camera 10 and the second camera 11 and configured to perform translation compensation on the second image with the first image as a reference image, and
to synthesize the first image and the translation-compensated second image into a third image, where the resolution of the third image is higher than the resolution of the first image and the second image.
The first camera 10 and the second camera 11 may be connected by a connector 12 (as shown in FIG. 2).
Optionally, the optical axes of the first camera 10 and the second camera 11 are parallel and/or
the first camera 10 and the second camera 11 are fixedly disposed on the terminal.
Optionally, the processor 13's performing translation compensation on the second image with the first image as a reference image specifically includes:
determining a translation amount between the first image and the second image; and
performing translation compensation on the second image captured by the second camera 11 according to the translation amount.
Optionally, the processor 13's determining the translation amount between the first image and the second image is specifically:
determining the translation amount by the following formula:
d=B*f/Z;
where d is the translation amount, in the second image relative to the first image, of an object at a distance Z from the plane in which the first camera 10 and the second camera 11 are located;
B is the distance between the first camera 10 and the second camera 11; and
Z is the perpendicular distance between the object and the plane in which the first camera 10 and the second camera 11 are located, and f is the focal length of the first camera or of the second camera.
The focal length of the first camera may be the same as the focal length of the second camera.
Optionally, the processor 13's synthesizing the first image and the translation-compensated second image into a third image specifically includes:
determining a common area of the first image and the second image according to a result of the translation compensation; and
synthesizing the common area of the first image and the second image into the third image.
It can be understood that the first camera and the second camera being located on the same side of the terminal may mean that the first camera and the second camera are located on the back of the terminal; the pixel counts of the first camera and the second camera may be the same or different. Of course, the first camera and the second camera may also be located on the front of the terminal. The terminal may be a mobile phone, a tablet computer, a wearable device, a wristband device, a digital camera, glasses, or the like.
Because the terminal described in this embodiment of the present invention is the terminal used in the image processing method of the embodiments of the present invention, a person skilled in the art can understand the specific structure and variations of the terminal based on the image processing method described herein, and details are not described again.
The one or more technical solutions provided in this application have at least the following technical effects or advantages:
(1) In the embodiments of the present invention, at the same moment a first image of a first area is captured by the first camera and a second image of a second area is captured by the second camera, translation compensation is then performed on the second image with the first image as a reference image, and finally the first image and the second image are synthesized into a third image whose resolution is higher than that of the first image and the second image. Because the first image and the second image are captured at the same moment, there is no object motion between the two frames, and because the second image is translation-compensated with the first image as the reference, identical objects in the second image overlap their positions in the first image; and because the first image and the second image are captured simultaneously, the user's hand shakes in the same direction for both, so ghosting caused by hand shake is prevented, which solves the "ghosting" problem produced when a high-resolution image is synthesized from multiple low-resolution frames.
Further, because the first image and the second image are captured at the same moment, the time taken to capture them is reduced, and when the third image is synthesized no algorithm is needed to correct the "ghosting" caused by local motion and hand shake, which increases the speed of obtaining the third image and improves the user experience.
(2) In the embodiments of the present invention, because the optical axes of the first camera and the second camera are parallel, problems such as distortion and occlusion between the first image and the second image are prevented, which makes the calculated translation amount more accurate and further prevents the "ghosting" problem.
Because the first camera and the second camera are fixedly disposed, the relative position and posture of the first camera and the second camera are prevented from changing, which guarantees that objects at the same depth in the scene have the same translation amount of the second image relative to the first image. In this case, the correspondence between depth and translation amount can be stored in advance, and the corresponding translation amount can be determined directly from the actual depth of the scene at the time of photographing without computing it after the two images are captured, which increases the speed of obtaining the third image; it also prevents the first camera and the second camera from shaking in different directions when the user's hand shakes, which further prevents the "ghosting" problem.
Because the first camera and the second camera can have parallel optical axes and also be fixedly disposed on a terminal, the optical axes are prevented from becoming non-parallel as a result of changes of the relative position of the two cameras (non-parallel optical axes would make the pre-stored translation amounts insufficiently accurate), which guarantees that the translation compensation of the second image is more accurate and further prevents the "ghosting" problem.
(3) In the embodiments of the present invention, when the first image and the translation-compensated second image are synthesized into the third image, the common area of the first image and the second image needs to be determined; when a high-resolution image is synthesized, only synthesis of the common area of the first image and the second image can achieve the technical effect of obtaining a high-resolution image, while the non-common area cannot achieve the resolution-increasing effect through image synthesis, so in this case the synthesized third image is more accurate.
(4) In the embodiments of the present invention, when the first image and the second image are synthesized into the third image, only the common area of the first image and the second image may be synthesized and no other regions need to be processed, which has the technical effects of increasing the processing speed of the terminal and reducing its processing burden.
A person skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment or an embodiment combining software and hardware. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to disk memory, CD-ROM, and optical memory) that contain computer-usable program code.
Obviously, a person skilled in the art can make various modifications and variations to the embodiments of the present invention without departing from the spirit and scope of the embodiments of the present invention. The present invention is intended to cover these modifications and variations provided that they fall within the scope of the claims of the present invention and their equivalent technologies.

Claims (13)

  1. An image processing method, applied to a terminal that includes a first camera and a second camera, wherein the first camera and the second camera are located on the same side of the terminal, and the method comprises:
    acquiring a first image captured by the first camera of a first area and a second image captured by the second camera of a second area at the same moment;
    performing translation compensation on the second image with the first image as a reference image; and
    synthesizing the first image and the translation-compensated second image into a third image, wherein the resolution of the third image is higher than the resolution of the first image and the second image.
  2. The method according to claim 1, wherein performing translation compensation on the second image with the first image as a reference image specifically comprises:
    determining a translation amount between the first image and the second image; and
    performing translation compensation on the second image captured by the second camera according to the translation amount.
  3. The method according to claim 2, wherein the translation amount is determined by the following formula:
    d=B*f/Z;
    wherein d represents the translation amount, in the second image relative to the first image, of an object at a distance Z from the plane in which the first camera and the second camera are located;
    B represents the distance between the first camera and the second camera; and
    Z represents the perpendicular distance between the object and the plane in which the first camera and the second camera are located, and f represents the focal length of the first camera or the focal length of the second camera.
  4. The method according to claim 3, wherein synthesizing the first image and the translation-compensated second image into a third image specifically comprises:
    determining a common area of the first image and the second image according to a result of the translation compensation; and
    synthesizing the common area of the first image and the second image into the third image.
  5. An image processing apparatus, comprising:
    an acquiring module, configured to acquire a first image captured by a first camera of a first area and a second image captured by a second camera of a second area at the same moment, wherein the first camera and the second camera are located in the same plane of the image processing apparatus;
    a translation compensation module, connected to the acquiring module and configured to perform translation compensation on the second image with the first image as a reference image after the first image and the second image are obtained by the acquiring module; and
    an image synthesis module, connected to the translation compensation module and configured to synthesize the first image and the translation-compensated second image into a third image after the translation compensation module performs translation compensation on the second image, wherein the resolution of the third image is higher than the resolution of the first image and the second image.
  6. The apparatus according to claim 5, wherein the translation compensation module specifically comprises:
    a determining unit, configured to determine a translation amount between the first image and the second image; and
    a compensation unit, connected to the determining unit and configured to perform translation compensation on the second image captured by the second camera according to the translation amount after the translation amount is determined by the determining unit.
  7. The apparatus according to claim 6, wherein the determining unit is specifically configured to determine the translation amount by the following formula:
    d=B*f/Z;
    wherein d represents the translation amount, in the second image relative to the first image, of an object at a distance Z from the plane in which the first camera and the second camera are located;
    B represents the distance between the first camera and the second camera; and
    Z represents the perpendicular distance between the object and the plane in which the first camera and the second camera are located, and f represents the focal length of the first camera or the focal length of the second camera.
  8. The apparatus according to claim 6, wherein the image synthesis module specifically comprises:
    a determining unit, configured to determine a common area of the first image and the second image according to a result of the translation compensation; and
    a synthesis unit, connected to the determining unit and configured to synthesize the common area of the first image and the second image into the third image after the common area is determined by the determining unit.
  9. A terminal, comprising:
    a first camera, configured to capture a first image of a first area;
    a second camera, configured to capture a second image of a second area at the same moment at which the first camera captures the first image, wherein the first camera and the second camera are located on the same side of the terminal; and
    a processor, connected to the first camera and the second camera and configured to perform translation compensation on the second image with the first image as a reference image, and
    to synthesize the first image and the translation-compensated second image into a third image, wherein the resolution of the third image is higher than the resolution of the first image and the second image.
  10. The terminal according to claim 9, wherein the optical axes of the first camera and the second camera are parallel and/or
    the first camera and the second camera are fixedly disposed on the terminal.
  11. The terminal according to claim 9, wherein the processor's performing translation compensation on the second image with the first image as a reference image specifically comprises:
    determining a translation amount between the first image and the second image; and
    performing translation compensation on the second image captured by the second camera according to the translation amount.
  12. The terminal according to claim 11, wherein the processor's determining the translation amount between the first image and the second image is specifically:
    determining the translation amount by the following formula:
    d=B*f/Z;
    wherein d represents the translation amount, in the second image relative to the first image, of an object at a distance Z from the plane in which the first camera and the second camera are located;
    B represents the distance between the first camera and the second camera; and
    Z represents the perpendicular distance between the object and the plane in which the first camera and the second camera are located, and f represents the focal length of the first camera or the focal length of the second camera.
  13. The terminal according to claim 11, wherein the processor's synthesizing the first image and the translation-compensated second image into a third image specifically comprises:
    determining a common area of the first image and the second image according to a result of the translation compensation; and
    synthesizing the common area of the first image and the second image into the third image.
PCT/CN2014/093024 2013-12-06 2014-12-04 一种图像处理方法、装置及终端 Ceased WO2015081870A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/101,759 US9870602B2 (en) 2013-12-06 2014-12-04 Method and apparatus for fusing a first image and a second image
EP14867399.9A EP3068124A4 (en) 2013-12-06 2014-12-04 Image processing method, device and terminal

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201310658550.8 2013-12-06
CN201310658550.8A CN103685951A (zh) 2013-12-06 2013-12-06 一种图像处理方法、装置及终端

Publications (1)

Publication Number Publication Date
WO2015081870A1 true WO2015081870A1 (zh) 2015-06-11

Family

ID=50322103

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2014/093024 Ceased WO2015081870A1 (zh) 2013-12-06 2014-12-04 一种图像处理方法、装置及终端

Country Status (4)

Country Link
US (1) US9870602B2 (zh)
EP (1) EP3068124A4 (zh)
CN (1) CN103685951A (zh)
WO (1) WO2015081870A1 (zh)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103685951A (zh) * 2013-12-06 2014-03-26 华为终端有限公司 一种图像处理方法、装置及终端
CN103888672A (zh) * 2014-03-31 2014-06-25 宇龙计算机通信科技(深圳)有限公司 一种终端及终端拍摄方法
US20150286878A1 (en) * 2014-04-08 2015-10-08 Bendix Commercial Vehicle Systems Llc Generating an Image of the Surroundings of an Articulated Vehicle
CN105556935B (zh) 2014-05-15 2019-04-19 华为技术有限公司 用于多帧降噪的方法和终端
KR101991754B1 (ko) * 2014-08-29 2019-09-30 후아웨이 테크놀러지 컴퍼니 리미티드 이미지 처리 방법 및 장치, 그리고 전자 기기
KR101921672B1 (ko) * 2014-10-31 2019-02-13 후아웨이 테크놀러지 컴퍼니 리미티드 이미지 처리 방법 및 장치
JP6685995B2 (ja) * 2015-03-05 2020-04-22 ソニー株式会社 画像処理装置および画像処理方法
CN105049558B (zh) * 2015-07-03 2017-10-03 广东欧珀移动通信有限公司 终端
CN106612392A (zh) * 2015-10-21 2017-05-03 中兴通讯股份有限公司 一种基于双摄像头拍摄图像的方法和装置
CN105472245B (zh) * 2015-12-21 2019-09-24 联想(北京)有限公司 一种拍照方法、电子设备
EP3429186B1 (en) * 2016-03-30 2022-08-24 Huawei Technologies Co., Ltd. Image registration method and device for terminal
CN106161997A (zh) * 2016-06-30 2016-11-23 上海华力微电子有限公司 提高cmos图像传感器像素的方法及系统
CN106027912A (zh) * 2016-07-15 2016-10-12 深圳市金立通信设备有限公司 一种拍摄模式选择方法及终端
CN107347139B (zh) * 2017-06-30 2019-01-29 维沃移动通信有限公司 一种图像数据的处理方法和移动终端
CN108470327B (zh) * 2018-03-27 2022-05-17 成都西纬科技有限公司 图像增强方法、装置、电子设备及存储介质
CN108492325B (zh) * 2018-03-27 2020-06-30 长春理工大学 一种非共轴成像的图像配准装置、图像配准方法及系统
CN109410130B (zh) * 2018-09-28 2020-12-04 华为技术有限公司 图像处理方法和图像处理装置
CN110035206B (zh) * 2019-03-26 2020-12-11 Oppo广东移动通信有限公司 图像处理方法和装置、电子设备、计算机可读存储介质
CN109963082B (zh) * 2019-03-26 2021-01-08 Oppo广东移动通信有限公司 图像拍摄方法、装置、电子设备、计算机可读存储介质
CN112954251B (zh) * 2019-12-10 2023-03-21 RealMe重庆移动通信有限公司 视频处理方法、视频处理装置、存储介质与电子设备
CN112351271A (zh) * 2020-09-22 2021-02-09 北京迈格威科技有限公司 一种摄像头的遮挡检测方法、装置、存储介质和电子设备
WO2022103429A1 (en) * 2020-11-12 2022-05-19 Innopeak Technology, Inc. Image fusion with base-detail decomposition and flexible color and details adjustment
WO2021184029A1 (en) 2020-11-12 2021-09-16 Innopeak Technology, Inc. Systems and methods for fusing color image and near-infrared image
WO2021184027A1 (en) * 2020-11-12 2021-09-16 Innopeak Technology, Inc. Tuning color image fusion towards original input color with adjustable details

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1879401A (zh) * 2003-11-11 2006-12-13 精工爱普生株式会社 图像处理装置、图像处理方法、其程序以及记录媒体
CN102314678A (zh) * 2011-09-06 2012-01-11 苏州科雷芯电子科技有限公司 图像分辨率提高装置及方法
CN202143153U (zh) * 2011-07-27 2012-02-08 天津三星光电子有限公司 数码相机
CN102496158A (zh) * 2011-11-24 2012-06-13 中兴通讯股份有限公司 一种图像信息处理方法及装置
CN103685951A (zh) * 2013-12-06 2014-03-26 华为终端有限公司 一种图像处理方法、装置及终端

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6208765B1 (en) * 1998-06-19 2001-03-27 Sarnoff Corporation Method and apparatus for improving image resolution
JP2002524937A (ja) 1998-08-28 2002-08-06 サーノフ コーポレイション 高解像度カメラと低解像度カメラとを用いて高解像度像を合成する方法および装置
US7525576B2 (en) * 2003-02-17 2009-04-28 Axis, Ab Method and apparatus for panning and tilting a camera
KR100597587B1 (ko) * 2004-04-13 2006-07-06 한국전자통신연구원 보정 영상 신호 처리를 이용한 주시각 제어 장치 및 그방법과 그를 이용한 평행축 입체 카메라 시스템
US20070103544A1 (en) * 2004-08-26 2007-05-10 Naofumi Nakazawa Panorama image creation device and panorama image imaging device
US20100103175A1 (en) * 2006-10-25 2010-04-29 Tokyo Institute Of Technology Method for generating a high-resolution virtual-focal-plane image
US20090290033A1 (en) * 2007-11-16 2009-11-26 Tenebraex Corporation Systems and methods of creating a virtual window
KR100985003B1 (ko) * 2008-03-10 2010-10-04 (주)세미솔루션 멀티 ccd 센서의 영상처리 장치 및 그 영상처리 방법
CN102037717B (zh) * 2008-05-20 2013-11-06 派力肯成像公司 使用具有异构成像器的单片相机阵列的图像拍摄和图像处理
CN101930602B (zh) * 2009-06-18 2012-05-30 张云超 一种高分辨率图像的生成方法、系统及电子终端
CN102438153B (zh) * 2010-09-29 2015-11-25 华为终端有限公司 多摄像机图像校正方法和设备
US9204026B2 (en) * 2010-11-01 2015-12-01 Lg Electronics Inc. Mobile terminal and method of controlling an image photographing therein
CN102075679A (zh) * 2010-11-18 2011-05-25 无锡中星微电子有限公司 一种图像采集方法和装置
US8878950B2 (en) * 2010-12-14 2014-11-04 Pelican Imaging Corporation Systems and methods for synthesizing high resolution images using super-resolution processes
US9516225B2 (en) * 2011-12-02 2016-12-06 Amazon Technologies, Inc. Apparatus and method for panoramic video hosting
CN103167223A (zh) * 2011-12-09 2013-06-19 富泰华工业(深圳)有限公司 具有广角拍摄功能的移动装置及其影像撷取方法
US10681304B2 (en) * 2012-06-08 2020-06-09 Apple, Inc. Capturing a panoramic image using a graphical user interface having a scan guidance indicator

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1879401A (zh) * 2003-11-11 2006-12-13 精工爱普生株式会社 图像处理装置、图像处理方法、其程序以及记录媒体
CN202143153U (zh) * 2011-07-27 2012-02-08 天津三星光电子有限公司 数码相机
CN102314678A (zh) * 2011-09-06 2012-01-11 苏州科雷芯电子科技有限公司 图像分辨率提高装置及方法
CN102496158A (zh) * 2011-11-24 2012-06-13 中兴通讯股份有限公司 一种图像信息处理方法及装置
CN103685951A (zh) * 2013-12-06 2014-03-26 华为终端有限公司 一种图像处理方法、装置及终端

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3068124A4 *

Also Published As

Publication number Publication date
US20160307300A1 (en) 2016-10-20
EP3068124A1 (en) 2016-09-14
EP3068124A4 (en) 2017-01-04
US9870602B2 (en) 2018-01-16
CN103685951A (zh) 2014-03-26

Similar Documents

Publication Publication Date Title
WO2015081870A1 (zh) 一种图像处理方法、装置及终端
EP3067746B1 (en) Photographing method for dual-camera device and dual-camera device
US10616549B2 (en) Application processor for disparity compensation between images of two cameras in digital photographing apparatus
JP5683025B2 (ja) 立体画像撮影装置および立体画像撮影方法
JP5725975B2 (ja) 撮像装置及び撮像方法
JP5230013B2 (ja) 撮像装置
WO2015161698A1 (zh) 一种图像拍摄终端和图像拍摄方法
JP2019030007A (ja) 複数のカメラを用いて映像を取得するための電子装置及びこれを用いた映像処理方法
WO2015081563A1 (zh) 一种生成图片的方法及一种双镜头设备
US10349040B2 (en) Storing data retrieved from different sensors for generating a 3-D image
WO2016184131A1 (zh) 基于双摄像头拍摄图像的方法、装置及计算机存储介质
TWI502548B (zh) 即時影像處理方法及其裝置
CN103488039A (zh) 一种3d摄像模组及具有该摄像模组的电子设备
JP6021489B2 (ja) 撮像装置、画像処理装置およびその方法
EP3190566A1 (en) Spherical virtual reality camera
TW201824178A (zh) 全景即時影像處理方法
JP5509986B2 (ja) 画像処理装置、画像処理システム、及び画像処理プログラム
CN119011796B (zh) 摄像透视vst中环境图像数据的处理方法、头显设备和存储介质
JP6732440B2 (ja) 画像処理装置、画像処理方法、及びそのプログラム
JPWO2019082415A1 (ja) 画像処理装置、撮像装置、画像処理装置の制御方法、画像処理プログラムおよび記録媒体
JP2019047145A (ja) 画像処理装置、撮像装置、画像処理装置の制御方法およびプログラム
CN119071462A (zh) 使用具有不同视场的相机的立体捕获
US9832377B2 (en) Data acquiring method and electronic device thereof
JP6645949B2 (ja) 情報処理装置、情報処理システム、および情報処理方法
JP2020086651A (ja) 画像処理装置および画像処理方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 14867399

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2014867399

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2014867399

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 15101759

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE