US20160142627A1 - Image capturing device and digital zooming method thereof - Google Patents
- Publication number
- US20160142627A1 (application US 14/571,021)
- Authority
- US
- United States
- Prior art keywords
- image
- primary
- rectified
- generate
- rectified image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H04N5/23229—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- G06T3/0093—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/18—Image warping, e.g. rearranging pixels individually
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
-
- G06T7/0051—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/69—Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/95—Computational photography systems, e.g. light-field imaging systems
- H04N23/951—Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2628—Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20228—Disparity calculation for image-based rendering
Definitions
- image blur and distortion may occur.
- digital zoom is performed on a single image via an image processing technique.
- after the image is enlarged via the image processing technique, its details may not be preserved.
- as the zooming factor increases, the image appears more blurred and distorted.
- the wide-angle lens may be used for capturing a wide-angle image
- the telephoto lens may be used for capturing a narrow-angle image.
- Either one of the wide-angle image and the narrow-angle image would be set as a target image for digital zoom.
- if the target image needs to be switched to the other image, the viewed image may flicker or appear unsmooth.
- the invention is directed to an image capturing device and a digital zooming method thereof, where a digital zoomed image with high quality would be provided throughout a digital zooming process.
- the invention is directed to a digital zooming method of an image capturing device, adapted to an image capturing device having a primary lens and a secondary lens.
- the method includes the following steps: capturing a scene by using the primary lens and the secondary lens to generate a primary image and a secondary image; performing image rectification on the primary image and the secondary image to generate a primary rectified image and a secondary rectified image; performing feature point detection on the primary rectified image and the secondary rectified image so as to detect overlapping regions respectively in the primary rectified image and the secondary rectified image, and further obtaining a plurality of pixel displacements and a depth map of the overlapping regions respectively in the primary rectified image and the secondary rectified image; when a zooming factor is between 1 and a primary-secondary image factor, performing image zooming and image warping on the primary rectified image and the secondary rectified image to generate a primary warped image and a secondary warped image according to the zooming factor, the pixel displacements, and the depth map, wherein the primary-secondary image factor is a ratio of the secondary rectified image to the primary rectified image; and performing image fusion on overlapping regions respectively in the primary warped image and the secondary warped image to generate a digital zoomed image.
- the step of performing feature point detection on the primary rectified image and the secondary rectified image so as to detect the overlapping regions respectively in the primary rectified image and the secondary rectified image, and further obtaining the pixel displacements and the depth map of the overlapping regions respectively in the primary rectified image and the secondary rectified image includes: detecting a plurality of feature points from the primary rectified image and the secondary rectified image; identifying a plurality of feature point correspondences to calculate a homography matrix according to color information of a plurality of neighboring points of each of the feature points in the primary rectified image and the secondary rectified image; obtaining the overlapping regions respectively in the primary rectified image and the secondary rectified image according to the homography matrix and accordingly obtaining each of the pixel displacements; and performing stereo matching on each of the feature points in the primary rectified image and the secondary rectified image to obtain the depth map corresponding to each of the feature points.
- the primary lens and the secondary lens have different fields of view and same distortion levels.
- the field of view of the primary lens is greater than the field of view of the secondary lens.
- the primary-secondary image factor is fixed and prior known.
- the step of performing image zooming and image warping on the primary rectified image and the secondary rectified image to generate the primary warped image and the secondary warped image according to the zooming factor, the pixel displacements, and the depth map includes: enlarging the primary rectified image to generate an enlarged primary rectified image according to the zooming factor; shrinking the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor; and performing image warping on the enlarged primary rectified image and the shrunken secondary rectified image to generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map.
- when the zooming factor is less than 1, only the primary rectified image would be shrunken; when the zooming factor is greater than the primary-secondary image factor, only the secondary rectified image would be enlarged.
- the step of performing image zooming and image warping on the primary rectified image and the secondary rectified image to generate the primary warped image and the secondary warped image according to the zooming factor, the pixel displacements, and the depth map includes: enlarging the primary rectified image to generate an enlarged primary rectified image according to the zooming factor; shrinking the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor; and performing image warping on the enlarged primary rectified image and the shrunken secondary rectified image to generate the primary warped image and the secondary warped image according to the pixel displacements and the pixel depth map, wherein a warping level is associated with the depth map.
- the zooming factor is less than 1, only the primary rectified image would be shrunken.
- the zooming factor is greater than the primary-secondary image factor, only the secondary rectified image would be enlarged.
- the primary lens and the secondary lens have same fields of view and different distortion levels, and the distortion level of the primary lens is much less than the distortion level of the secondary lens.
- the method further includes: performing image cropping on a center region of the secondary rectified image to generate a cropped secondary rectified image; and setting the cropped secondary rectified image as the secondary rectified image.
- the step of performing image zooming and image warping on the primary rectified image and the secondary rectified image to generate the primary warped image and the secondary warped image according to the zooming factor, the pixel displacements, and the depth map includes: enlarging the primary rectified image to generate an enlarged primary rectified image according to the zooming factor; shrinking the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor; and performing image warping on the enlarged primary rectified image and the shrunken secondary rectified image to generate the primary warped image and the secondary warped image according to the pixel displacements and the pixel depth map, wherein a warping level is associated with the depth map.
- the zooming factor is less than 1, only the primary rectified image would be shrunken.
- the zooming factor is greater than the primary-secondary image factor, only the secondary rectified image would be enlarged.
- the step of performing image fusion on the overlapping regions respectively in the primary warped image and the secondary warped image to generate the digital zoomed image includes: setting a first weight and a second weight respectively corresponding to the primary warped image and the secondary warped image according to the zooming factor; performing image fusion on the overlapping regions respectively in the primary warped image and the secondary warped image to generate a fused overlapping image based on the first weight and the second weight; and substituting the overlapping regions respectively in the primary warped image and the secondary warped image by the fused overlapping image to generate the digital zoomed image.
- the invention is also directed to an image capturing device including a primary lens, a secondary lens, a storage unit, and at least one processing unit, where the processing unit is coupled to the primary lens, the secondary lens, and the storage unit.
- the storage unit is configured to record a plurality of modules.
- the processing unit is configured to access and execute the modules recorded in the storage unit.
- the modules include an image capturing module, an image preprocessing module, a feature analyzing module, an image zooming-warping module, and an image fusion module.
- the image capturing module is configured to capture a scene by using the primary lens and the secondary lens to generate a primary image and a secondary image.
- the image preprocessing module is configured to perform image rectification on the primary image and the secondary image to generate a primary rectified image and a secondary rectified image.
- the feature analyzing module is configured to perform feature point detection on the primary rectified image and the secondary rectified image so as to detect overlapping regions respectively in the primary rectified image and the secondary rectified image, and further obtain a plurality of pixel displacements and a depth map of the overlapping regions respectively in the primary rectified image and the secondary rectified image.
- the image zooming-warping module is configured to perform image zooming and image warping on the primary rectified image and the secondary rectified image to generate a primary warped image and a secondary warped image according to the zooming factor, the pixel displacements, and the depth map.
- the image fusion module is configured to perform image fusion on overlapping regions respectively in the primary warped image and the secondary warped image to generate a digital zoomed image.
- the image preprocessing module obtains a plurality of rectification parameters associated with the primary lens and the secondary lens, and rectifies the primary image and the secondary image to generate the primary rectified image and the secondary rectified image according to the rectification parameters.
- the feature analyzing module detects a plurality of feature points from the primary rectified image and the secondary rectified image, identifies a plurality of feature point correspondences to calculate a homography matrix according to color information of a plurality of neighboring points of each of the feature points in the primary rectified image and the secondary rectified image, obtains the overlapping regions respectively in the primary rectified image and the secondary rectified image according to the homography matrix and accordingly obtains each of the pixel displacements, and performs stereo matching on each of the feature point correspondences in the primary rectified image and the secondary rectified image to obtain the depth map.
- the primary lens and the secondary lens have different fields of view and same distortion levels.
- the field of view of the primary lens is greater than the field of view of the secondary lens.
- the primary-secondary image factor is fixed and prior known.
- the image zooming-warping module enlarges the primary rectified image to generate an enlarged primary rectified image according to the zooming factor, shrinks the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor, and performs image warping on the enlarged primary rectified image and the shrunken secondary rectified image to respectively generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map.
- the primary lens and the secondary lens have same fields of view and same distortion levels.
- the image preprocessing module further performs image binning on the primary rectified image to generate a binned primary rectified image, performs image cropping on the secondary rectified image to generate a cropped secondary rectified image, and sets the binned primary rectified image and the cropped secondary rectified image respectively as the primary rectified image and the secondary rectified image, where a size of the binned primary rectified image and a size of the cropped secondary rectified image are the same.
- the image zooming-warping module enlarges the primary rectified image to generate an enlarged primary rectified image according to the zooming factor, shrinks the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor, and performs image warping on the enlarged primary rectified image and the shrunken secondary rectified image to respectively generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map.
- the primary lens and the secondary lens have same fields of view and different distortion levels, and the distortion level of the primary lens is much less than the distortion level of the secondary lens.
- a primary-secondary center image factor between a center region of the primary rectified image and a center region of the secondary rectified image is fixed and prior known.
- the image preprocessing module further performs image cropping on the center region of the secondary rectified image to generate a cropped secondary rectified image, and sets the cropped secondary rectified image as the secondary rectified image.
- the image zooming-warping module enlarges the primary rectified image to generate an enlarged primary rectified image according to the zooming factor, shrinks the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor, and performs image warping on the enlarged primary rectified image and the shrunken secondary rectified image to respectively generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map.
- the image fusion module sets a first weight and a second weight respectively corresponding to the primary warped image and the secondary warped image according to the zooming factor, performs image fusion on the overlapping regions respectively in the primary warped image and the secondary warped image to generate a fused overlapping image based on the first weight and the second weight, and substitutes the overlapping regions respectively in the primary warped image and the secondary warped image by the fused overlapping image to generate the digital zoomed image.
- in the image capturing device and the digital zooming method proposed in the invention, by analyzing different imaging properties and distortion levels of the dual lenses, image zooming and image warping are automatically performed on images captured by the dual lenses according to a zooming factor to generate two warped images with similar focal lengths, sizes, and fields of view.
- the two warped images are fused by their weighted sum, and a digital zoomed image corresponding to the zooming factor would be obtained thereafter.
- the image capturing device and the digital zooming method proposed in the invention may provide a digital zoomed image with high quality throughout a digital zooming process.
- FIG. 1 illustrates a block diagram of an image capturing device according to an embodiment of the invention.
- FIG. 2 illustrates a flowchart of a digital zooming method of an image capturing device according to an embodiment of the invention.
- FIG. 3 is a schematic diagram of a primary rectified image and a secondary rectified image according to an embodiment of the invention.
- FIG. 4 is a schematic diagram of a primary rectified image and a secondary rectified image according to another embodiment of the invention.
- FIG. 5 is a schematic diagram of a primary image and a secondary image according to an embodiment of the invention.
- FIG. 6 illustrates a functional block diagram of a digital zooming method of an image capturing device according to an embodiment of the invention.
- FIG. 1 illustrates a block diagram of an image capturing device according to an embodiment of the invention. It should, however, be noted that this is merely an illustrative example and the invention is not limited in this regard. All components of the image capturing device and their configurations are first introduced in FIG. 1 . The detailed functionalities of the components are disclosed along with FIG. 2 .
- an image capturing device 100 includes a primary lens 10 a , a secondary lens 10 b , a storage unit 20 , and at least one processing unit 30 .
- the image capturing device 100 is, for example, a digital camera, a digital camcorder, a digital single lens reflex camera or other devices provided with an image capturing feature such as a smart phone, a tablet computer, a personal digital assistant, and so on.
- the invention is not limited herein.
- the primary lens 10 a and the secondary lens 10 b include optical sensing elements for sensing light intensity entering the primary lens 10 a and the secondary lens 10 b to thereby generate images.
- the optical sensing elements are, for example, charge-coupled-device (CCD) elements, complementary metal-oxide semiconductor (CMOS) elements, and yet the invention is not limited thereto.
- focal lengths, sensing sizes, fields of view, and distortion levels of the primary lens 10 a and the secondary lens 10 b may be the same or different. The invention is not limited herein.
- the storage unit 20 may be one or a combination of a stationary or mobile random access memory (RAM), a read-only memory (ROM), a flash memory, a hard drive or other similar devices.
- the storage unit 20 is configured to record a plurality of modules executable by the processing unit 30 , where the modules may be loaded into the processing unit 30 for performing digital zoom on an image captured by the image capturing device 100 .
- the processing unit 30 may be, for example, a central processing unit (CPU) or other programmable devices for general purpose or special purpose such as a microprocessor and a digital signal processor (DSP), a programmable controller, an application specific integrated circuit (ASIC), a programmable logic device (PLD) or other similar devices or a combination of above-mentioned devices.
- the processing unit 30 is coupled to the primary lens 10 a , the secondary lens 10 b , and the storage unit 20 , and capable of accessing and executing the modules recorded in the storage unit 20 .
- the aforesaid modules include an image capturing module 121 , an image preprocessing module 122 , a feature analyzing module 123 , an image zooming-warping module 124 , and an image fusion module 125 , where the modules may be loaded into the processing unit 30 for digital zoom.
- FIG. 2 illustrates a flowchart of a digital zooming method of an image capturing device according to an embodiment of the invention, and the method in FIG. 2 may be implemented by the components of the image capturing device 100 in FIG. 1 .
- the image capturing module 121 of the image capturing device 100 captures a scene by using the primary lens and the secondary lens to generate a primary image and a secondary image (Step S 202 ).
- the image capturing module 121 would generate the primary image corresponding to the primary lens 10 a and the secondary image corresponding to the secondary lens 10 b .
- the primary image may be used for previewing purposes; it is an image with a lower quality and a larger field of view.
- the secondary image may not be used for previewing purposes; it is an image with a higher quality and a smaller field of view, and may be used as an auxiliary for digital zoom in the follow-up steps. More detailed information on the primary image and the secondary image will be given later on.
- the image preprocessing module 122 performs image rectification on the primary image and the secondary image to respectively generate a primary rectified image and a secondary rectified image (Step S 204 ).
- the image preprocessing module 122 may compensate for the shifts in brightness, color, and geometric position of the primary image and the secondary image caused by the primary lens 10 a and the secondary lens 10 b , respectively.
- the image preprocessing module 122 may obtain a plurality of rectification parameters associated with the primary lens 10 a and the secondary lens 10 b .
- rectification parameters may be intrinsic parameters and extrinsic parameters of a camera for image rectification.
- the intrinsic parameters may be used for describing the transformation between camera coordinates and image coordinates. That is, the camera coordinates may be projected onto a projective plane according to the pinhole imaging principle.
- the intrinsic parameters may be, for example, focal length, image center, principal point, distortion coefficients, and so forth.
- the extrinsic parameters are used for describing the transformation between world coordinates and camera coordinates.
- the extrinsic parameters may be, for example, parameters associated with the position and the viewing angle of the image capturing device 100 in a three-dimensional coordinate system such as a rotation matrix and a translation vector.
- the rectification parameters may also be parameters associated with illumination compensation or color correction.
- the invention is not limited herein.
- the image preprocessing module 122 may rectify the primary image and the secondary image according to the aforesaid rectification parameters.
- after being rectified, the primary image and the secondary image are referred to as a “primary rectified image” and a “secondary rectified image” respectively.
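- As a rough illustration of this rectification step (not the patent's actual implementation), the following sketch uses OpenCV to build rectification maps from assumed calibration data; the intrinsic matrices K_a and K_b, distortion coefficients dist_a and dist_b, and the extrinsic rotation R and translation T between the two lenses are placeholders that would come from a prior calibration of the primary lens 10 a and the secondary lens 10 b, and both sensors are assumed to share one resolution. Only the geometric part of the rectification is shown; illumination compensation and color correction would be applied separately.

```python
import cv2

def rectify_pair(primary, secondary, K_a, dist_a, K_b, dist_b, R, T):
    """Geometrically rectify a primary/secondary image pair from assumed calibration data.

    K_a, K_b       -- 3x3 intrinsic matrices (focal length, principal point).
    dist_a, dist_b -- lens distortion coefficients.
    R, T           -- rotation and translation (extrinsic parameters) between the lenses.
    """
    h, w = primary.shape[:2]
    # Rectification transforms that bring both image planes into alignment.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K_a, dist_a, K_b, dist_b, (w, h), R, T)
    map1a, map2a = cv2.initUndistortRectifyMap(K_a, dist_a, R1, P1, (w, h), cv2.CV_32FC1)
    map1b, map2b = cv2.initUndistortRectifyMap(K_b, dist_b, R2, P2, (w, h), cv2.CV_32FC1)
    primary_rect = cv2.remap(primary, map1a, map2a, cv2.INTER_LINEAR)
    secondary_rect = cv2.remap(secondary, map1b, map2b, cv2.INTER_LINEAR)
    return primary_rect, secondary_rect
```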
- the feature analyzing module 123 performs feature point detection on the primary rectified image and the secondary rectified image so as to detect overlapping regions respectively in the primary rectified image and the secondary rectified image, and further obtains a plurality of pixel displacements and a depth map of the overlapping regions respectively in the primary rectified image and the secondary rectified image (Step S 206 ).
- Each of the overlapping regions is an overlapping portion between the field of view of the primary rectified image and that of the secondary rectified image.
- the feature analyzing module 123 may detect a plurality of feature points from the primary rectified image and the secondary rectified image by leveraging a feature detection algorithm such as edge detection, corner detection, blob detection, and so forth. Next, the feature analyzing module 123 may identify a plurality of feature point correspondences and obtain the overlapping regions respectively in the primary rectified image and the secondary rectified image.
- the feature analyzing module 123 may identify a plurality of feature point correspondences according to color information of a plurality of neighboring points of each of the feature points in the primary rectified image and the secondary rectified image so as to calculate a homography matrix.
- the feature analyzing module 123 may not only obtain the pixel displacements of the two overlapping regions via the homography matrix, but may also perform stereo matching due to the similar fields of view so as to estimate the depth map.
- the feature analyzing module 123 may determine the displacement and shift properties of each feature point correspondence to obtain the pixel displacements thereof. On the other hand, the feature analyzing module 123 may perform stereo matching on each of the feature point correspondences to obtain the pixel depth map. In other words, the feature analyzing module 123 may calculate the depth information of each of the feature point correspondences in the overlapping regions respectively in the primary rectified image and the secondary rectified image and store the depth information in a form of a depth map.
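- The feature-analysis step can be pictured with the following sketch, which uses a generic ORB/RANSAC/semi-global-matching pipeline chosen purely for illustration; the patent does not name specific feature detection, matching, or stereo algorithms, and the function name and parameters below are assumptions. Feature points are detected in both rectified images, matched to estimate a homography (and hence the overlapping regions and the pixel displacements), and stereo matching produces a disparity map that stands in for the depth map (depth is proportional to baseline times focal length divided by disparity).

```python
import cv2
import numpy as np

def analyze_features(primary_rect, secondary_rect):
    """Estimate a homography, per-feature pixel displacements, and a disparity map
    for the overlapping regions of the two rectified images (BGR input assumed)."""
    g1 = cv2.cvtColor(primary_rect, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(secondary_rect, cv2.COLOR_BGR2GRAY)

    # Detect feature points and descriptors built from their local neighborhoods.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(g1, None)
    kp2, des2 = orb.detectAndCompute(g2, None)

    # Identify feature point correspondences between the two images.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Homography relating the overlapping regions, estimated robustly with RANSAC.
    H, inlier_mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, 5.0)

    # Pixel displacements of the inlier feature correspondences.
    inliers = inlier_mask.ravel() == 1
    displacements = pts2[inliers] - pts1[inliers]

    # Disparity map from stereo matching; depth ~ baseline * focal_length / disparity.
    stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    disparity = stereo.compute(g1, g2).astype(np.float32) / 16.0
    return H, displacements, disparity
```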
- the image zooming-warping module 124 performs image zooming and image warping on the primary rectified image and the secondary rectified image to generate a primary warped image and a secondary warped image according to the zooming factor, the pixel displacements, and the depth map (Step S 208 ).
- the “primary-secondary image factor” refers to the ratio of the secondary rectified image to the primary rectified image, and is fixed and prior known.
- the “zooming factor” is an enlargement level to be adjusted on the primary rectified image; it may be set by the user or may be a default value of the image capturing device 100 .
- the image zooming-warping module 124 may perform image zooming and image warping on the primary rectified image and the secondary rectified image according to the zooming factor as well as relative displacement, shift, and depth information between the two overlapping regions so as to generate two images with two overlapping regions having similar views and appearances, where the zooming factor of each of the two generated images also meets the user's need. Moreover, a warping level of each of the generated images is associated with the depth map. As the depth value increases, the warping level decreases; as the depth value decreases, the warping level increases. More details on the image zooming and image warping processes will be described in the follow-up embodiments.
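- A deliberately simplified sketch of the zooming-and-warping idea follows; the linear scaling of the two views toward the requested zooming factor and the disparity-weighted residual shift are illustrative assumptions rather than the exact warping model of the patent. Near pixels (small depth, large disparity) receive a larger shift and far pixels a smaller one, mirroring the relationship between warping level and depth described above.

```python
import cv2
import numpy as np

def zoom_and_warp(primary_rect, secondary_rect, zoom, factor, disparity):
    """Scale both rectified images toward the requested zooming factor and apply a
    depth-dependent residual warp. Sketch only: assumes 1 <= zoom <= factor, factor > 1."""
    h, w = primary_rect.shape[:2]

    # Enlarge the primary rectified image by the zooming factor and keep the center crop,
    # so the output frame stays the same size as the original primary image.
    enlarged = cv2.resize(primary_rect, None, fx=zoom, fy=zoom, interpolation=cv2.INTER_LINEAR)
    y0 = (enlarged.shape[0] - h) // 2
    x0 = (enlarged.shape[1] - w) // 2
    primary_zoomed = enlarged[y0:y0 + h, x0:x0 + w]

    # Shrink the secondary rectified image from its native magnification (= factor)
    # down to the requested zoom; it now matches the center overlapping region above.
    shrink = zoom / factor
    secondary_zoomed = cv2.resize(secondary_rect, None, fx=shrink, fy=shrink,
                                  interpolation=cv2.INTER_AREA)

    # Depth-dependent warp of the secondary view toward the primary view: pixels with
    # large disparity (small depth) are shifted more than pixels with small disparity.
    sh, sw = secondary_zoomed.shape[:2]
    disp = cv2.resize(disparity, (sw, sh), interpolation=cv2.INTER_NEAREST)
    alpha = (factor - zoom) / (factor - 1.0)  # 1 near the wide view, 0 at the tele view
    xs, ys = np.meshgrid(np.arange(sw, dtype=np.float32), np.arange(sh, dtype=np.float32))
    map_x = xs + alpha * np.clip(disp, 0.0, None)  # horizontal parallax only, for brevity
    secondary_warped = cv2.remap(secondary_zoomed, map_x, ys, cv2.INTER_LINEAR)
    return primary_zoomed, secondary_warped
```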
- the image fusion module 125 performs image fusion on overlapping regions respectively in the primary warped image and the secondary warped image to generate a digital zoomed image (Step S 210 ).
- the image fusion module 125 may set a first weight and a second weight respectively corresponding to the primary warped image and the secondary warped image according to the zooming factor.
- the image fusion module 125 may perform image fusion on each color pixel of the overlapping regions respectively in the primary warped image and the secondary warped image by a weighted sum based on the first weight and the second weight, where the resulting image is referred to as a “fused overlapping image.” Then, the image fusion module 125 may substitute the overlapping region in the primary warped image by the fused overlapping image so as to generate the digital zoomed image with high quality.
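- A minimal sketch of the fusion step is given below, assuming, as in the sketches above, that the secondary warped image corresponds to the center overlapping region of the primary warped image; the weighted sum is computed with cv2.addWeighted and the fused overlapping image is written back into the primary warped image. The function name and the centered-overlap assumption are illustrative, not part of the patent.

```python
import cv2

def fuse(primary_warped, secondary_warped, w_primary, w_secondary):
    """Blend the overlapping regions with a weighted sum and substitute the result
    back into the primary warped image (sketch; weights come from the zooming factor)."""
    hp, wp = primary_warped.shape[:2]
    hs, ws = secondary_warped.shape[:2]
    # Overlapping region: the center of the primary warped image, assumed to be the
    # same size as the secondary warped image in this simplified setup.
    y0, x0 = (hp - hs) // 2, (wp - ws) // 2
    overlap = primary_warped[y0:y0 + hs, x0:x0 + ws]

    fused_overlap = cv2.addWeighted(overlap, w_primary, secondary_warped, w_secondary, 0)

    digital_zoomed = primary_warped.copy()
    digital_zoomed[y0:y0 + hs, x0:x0 + ws] = fused_overlap
    return digital_zoomed
```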
- when the zooming factor is less than 1, the image zooming-warping module 124 may only shrink the primary rectified image to generate a shrunken primary rectified image. Next, the image zooming-warping module 124 may perform image warping on the shrunken primary rectified image to generate the primary warped image and set the primary warped image as the digital zoomed image.
- when the zooming factor is greater than the primary-secondary image factor, the image zooming-warping module 124 may only enlarge the secondary rectified image to generate an enlarged secondary rectified image according to the zooming factor. Next, the image zooming-warping module 124 may perform image warping on the enlarged secondary rectified image to generate the secondary warped image and set the secondary warped image as the digital zoomed image.
- the image capturing device proposed in the invention may be adapted to different sets of lenses.
- the digital zooming method corresponding to three different sets of lenses will be illustrated hereinafter.
- FIG. 3 is a schematic diagram of a primary rectified image and a secondary rectified image according to an embodiment of the invention.
- the primary lens 10 a and the secondary lens 10 b have different fields of view and same distortion levels, where the field of view of the primary lens is greater than that of the secondary lens.
- the field of view of the primary image is greater than that of the secondary image, and yet an image quality of the secondary image surpasses that of the primary image.
- the primary image and the secondary image are respectively a wide-angle image and a narrow-angle image, and thus the scene content in the secondary image appears larger and clearer.
- the secondary image is used as an auxiliary for digital zoom but not used for previewing purposes.
- the image preprocessing module 122 may perform image rectification on the primary image and the secondary image to generate a primary rectified image 310 a and a secondary rectified image 310 b .
- a region 315 a and a region 315 b are two overlapping regions detected respectively from the primary rectified image 310 a and the secondary rectified image 310 b by the feature analyzing module 123 .
- the secondary rectified image 310 b is a narrow-angle image
- the secondary rectified image 310 b and the overlapping region 315 b in the secondary rectified image 310 b are the same. That is, the entire secondary rectified image 310 b corresponds to the overlapping region 315 a in the primary rectified image 310 a .
- the image zooming-warping module 124 may enlarge the primary rectified image to generate an enlarged primary rectified image according to the zooming factor. Throughout the enlargement process, a center region of the primary rectified image would gradually become similar to the secondary rectified image. The image zooming-warping module 124 may further shrink the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor. Next, the image zooming-warping module 124 may perform image warping on the enlarged primary rectified image and the shrunken secondary rectified image to generate a primary warped image and a secondary warped image according to the pixel displacements and the depth map.
- the image fusion module 125 may set a first weight and a second weight respectively corresponding to the primary warped image and the secondary warped image according to a zooming factor.
- the first weight and the second weight are allocated based on a zooming factor and a factor corresponding to each of the primary image and the secondary image.
- the factors corresponding to the primary image and the secondary image are respectively 1 and 2.
- the zooming factor is 1.5 (i.e., the median of 1 and 2)
- the image fusion module 125 would set the first weight and the second weight respectively to 0.5.
- the zooming factor is 1.2
- the image fusion module 125 would set the first weight and the second weight respectively to 0.8 and 0.2.
- the image fusion module 125 is not restricted to setting the two weights based on a linear relationship. In other embodiments, the image fusion module 125 may set the two weights based on other formulas. The invention is not limited herein. After performing image fusion, the image fusion module 125 may generate a digital zoomed image which is a relatively smooth, clear, and enlarged image.
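- For concreteness, one linear weight allocation consistent with the numbers above (primary factor 1, secondary factor F = 2, zooming factor z) is the following; it is only an example of the linear relationship mentioned here, not a formula recited by the patent:

```latex
w_1 = \frac{F - z}{F - 1}, \qquad w_2 = 1 - w_1, \qquad 1 \le z \le F
```

- With F = 2, a zooming factor of 1.5 gives w_1 = w_2 = 0.5, and a zooming factor of 1.2 gives w_1 = 0.8 and w_2 = 0.2, matching the example above.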
- FIG. 4 is a schematic diagram of a primary rectified image and a secondary rectified image according to another embodiment of the invention.
- the primary lens 10 a and the secondary lens 10 b of the image capturing device 100 have same fields of view and distortion levels. Hence, the field of view in the primary image and that in the secondary image captured by the primary lens 10 a and the secondary lens 10 b are the same.
- the image preprocessing module 122 of the image capturing device 100 may perform image binning and image cropping on a primary rectified image 400 a and a secondary rectified image 400 b to generate two images with different fields of view.
- the embodiment is adapted to a digital photo-shooting equipment with a thumbnail image preview feature.
- the image preprocessing module 122 may perform image binning on the primary rectified image to generate a binned primary rectified image 410 a with a smaller size.
- the size of the binned primary rectified image 410 a may be 1/4 of the size of the primary rectified image 400 a .
- the image preprocessing module 122 may perform 2×2 pixel binning on the primary rectified image 400 a to bin each set of four neighboring pixels of the primary rectified image 400 a into one and thereby generate the binned primary rectified image 410 a .
- the binned primary rectified image 410 a may be transferred faster and yet with a lower resolution.
- the image preprocessing module 122 may perform image cropping on the secondary rectified image to generate a cropped secondary rectified image 410 b .
- the size of the cropped secondary rectified image 410 b may also be 1/4 of the size of the secondary rectified image 400 b .
- the image preprocessing module 122 may crop a center region 405 b with 1/4 of the size of the secondary rectified image 400 b and thereby generate the cropped secondary rectified image 410 b.
- by means of the image binning and image cropping, the image preprocessing module 122 may simulate two images with the same size and different fields of view, namely the binned primary rectified image 410 a and the cropped secondary rectified image 410 b , and further set the binned primary rectified image and the cropped secondary rectified image respectively as the primary rectified image and the secondary rectified image.
- similar image processing steps as illustrated in FIG. 3 may then be performed on the redefined primary rectified image and secondary rectified image to generate a digital zoomed image, where a region 415 a in the binned primary rectified image 410 a and the cropped secondary rectified image 410 b are the two overlapping regions. Details on the image processing steps can be found in the related description in the previous paragraphs and are not repeated herein.
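- The binning-and-cropping preprocessing of this embodiment can be sketched in plain NumPy as follows; the function names, the averaging form of binning, and the divisible-by-two assumption are illustrative choices rather than details specified by the patent.

```python
import numpy as np

def bin_2x2(image):
    """2x2 pixel binning: average each set of four neighboring pixels into one,
    producing an image with 1/4 of the original pixel count."""
    h, w = image.shape[0] // 2 * 2, image.shape[1] // 2 * 2
    img = image[:h, :w].astype(np.float32)
    binned = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
    return binned.astype(image.dtype)

def crop_center(image, frac=0.5):
    """Crop the central region whose side length is `frac` of the original;
    frac=0.5 keeps 1/4 of the area, so its size matches the binned image."""
    h, w = image.shape[:2]
    ch, cw = int(h * frac), int(w * frac)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    return image[y0:y0 + ch, x0:x0 + cw]
```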
- FIG. 5 is a schematic diagram of a primary image and a secondary image according to an embodiment of the invention.
- the primary lens 10 a and the secondary lens 10 b have same fields of view and different distortion levels, where the distortion level of the primary lens is much less than that of the secondary lens.
- the primary lens 10 a is a lens with no special distortion while the secondary lens 10 b is a distorted lens with a special design.
- a primary image 500 a captured by the primary lens 10 a of the image capturing module 121 is a normal image.
- a secondary image 500 b captured by the secondary lens 10 b of the image capturing module 121 is an image with distortion.
- a center region of an original scene (e.g., a region 505 a in the primary image 500 a ) may be projected onto a larger proportion of the sensing elements of the secondary lens 10 b .
- Such center region may result in a region 505 b in the secondary image 500 b having a lower distortion level.
- an outer region of the original scene (e.g., a region 508 a in the primary image 500 a ) may be projected onto a remaining outer region of the sensing elements of the secondary lens 10 b .
- Such outer region may result in a region 508 b in the secondary image 500 b having a higher distortion level.
- the image preprocessing module 122 may perform image rectification on the primary image 500 a and the secondary image 500 b to generate a primary rectified image and a secondary rectified image.
- the image preprocessing module 122 may crop the center region of the secondary rectified image (i.e., the region corresponding to the region 505 b of the secondary image 500 b ) to generate a cropped secondary rectified image, and further set the cropped secondary rectified image as the secondary rectified image.
- the secondary rectified image has a relatively smaller field of view as compared with the primary rectified image, and yet has a higher resolution.
- the image zooming-warping module 124 may enlarge the primary rectified image to generate an enlarged primary rectified image and shrink the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor. Next, the image zooming-warping module 124 may perform image warping on the enlarged primary rectified image and the shrunken secondary rectified image to respectively generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map of the overlapping regions obtained from the feature analyzing module 123 .
- the image capturing device 100 may perform depth estimation on the entire primary rectified image by using the secondary rectified image and the corrected secondary image.
- the aforesaid digital zooming method of the image processing device may be summarized by a functional block diagram as illustrated in FIG. 6 according to an embodiment of the invention.
- the image capturing device 100 may capture a scene by using the primary lens 10 a and the secondary lens 10 b to generate a primary image 601 a and a secondary image 601 b .
- image rectification S 603 may be performed on the primary image 601 a and the secondary image 601 b to respectively generate a primary rectified image 603 a and a secondary rectified image 603 b .
- Feature point detection S 605 may then be performed on the primary rectified image 603 a and the secondary rectified image 603 b to obtain pixel displacements and a depth map of overlapping regions respectively in the two images.
- image zooming and image warping S 607 may be performed on the primary rectified image 603 a and the secondary rectified image 603 b according to a zooming factor 606 as well as the pixel displacements and the pixel depth map obtained from feature point detection S 605 to generate a primary warped image 607 a and a secondary warped image 607 b .
- image fusion may be performed on the primary warped image 607 a and the secondary warped image 607 b , and a smooth and clear digital zoomed image 611 may be output thereafter.
- in the image capturing device and the digital zooming method proposed in the invention, by analyzing different imaging properties and distortion levels of the dual lenses, image zooming and image warping are automatically performed on images captured by the dual lenses according to a zooming factor to generate two warped images with similar focal lengths, sizes, and fields of view.
- the two warped images are fused by their weighted sum, and a digital zoomed image corresponding to the zooming factor would be obtained thereafter.
- the image capturing device and the digital zooming method proposed in the invention may provide a digital zoomed image with high quality throughout a digital zooming process.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Computing Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Studio Devices (AREA)
Abstract
Description
- This application claims the priority benefit of Taiwan application serial no. 103139384, filed on Nov. 13, 2014. The entirety of the above-mentioned patent application is hereby incorporated by reference herein and made a part of this specification.
- 1. Field of the Invention
- The invention generally relates to an image capturing device and a digital zooming method thereof, in particular, to an image capturing device with dual lenses and a digital zooming method thereof.
- 2. Description of Related Art
- With development in technology, various smart image capturing devices, such as tablet computers, personal digital assistants and smart phones, have become indispensable tools for people nowadays. Camera lenses equipped in high-end smart image capturing devices provide same or better specifications than those of traditional consumer cameras, and some even provide three-dimensional image capturing features or near-equivalent pixel qualities to those of digital single lens reflex cameras.
- When such image capturing devices perform digital zoom to enlarge an image, image blur and distortion may occur. In terms of a single lens, digital zoom is performed on a single image via an image processing technique. However, after the image is enlarged, its details may not be preserved. As a zooming factor increases, the image appears more blurred and distorted.
- On the other hand, in terms of dual lenses, the wide-angle lens may be used for capturing a wide-angle image, and the telephoto lens may be used for capturing a narrow-angle image. Either one of the wide-angle image and the narrow-angle image would be set as a target image for digital zoom. However, throughout the digital zooming process, if the target image needs to be switched to the other image, the viewed image may flicker or appear unsmooth.
- Accordingly, presenting an image that meets the user's expectation throughout a digital zooming process is one of the problems to be solved.
- Accordingly, the invention is directed to an image capturing device and a digital zooming method thereof, where a digital zoomed image with high quality would be provided throughout a digital zooming process.
- The invention is directed to a digital zooming method of an image capturing device, adapted to an image capturing device having a primary lens and a secondary lens. The method includes the following steps: capturing a scene by using the primary lens and the secondary lens to generate a primary image and a secondary image; performing image rectification on the primary image and the secondary image to generate a primary rectified image and a secondary rectified image; performing feature point detection on the primary rectified image and the secondary rectified image so as to detect overlapping regions respectively in the primary rectified image and the secondary rectified image, and further obtaining a plurality of pixel displacements and a depth map of the overlapping regions respectively in the primary rectified image and the secondary rectified image; when a zooming factor is between 1 and a primary-secondary image factor, performing image zooming and image warping on the primary rectified image and the secondary rectified image to generate a primary warped image and a secondary warped image according to the zooming factor, the pixel displacements, and the depth map, wherein the primary-secondary image factor is a ratio of the secondary rectified image to the primary rectified image; and performing image fusion on overlapping regions respectively in the primary warped image and the secondary warped image to generate a digital zoomed image.
- According to an embodiment of the invention, the step of performing image rectification on the primary image and the secondary image to generate the primary rectified image and the secondary rectified image includes: obtaining a plurality of rectification parameters associated with the primary lens and the secondary lens; and rectifying the primary image and the secondary image to generate the primary rectified image and the secondary rectified image according to the rectification parameters.
- According to an embodiment of the invention, the step of performing feature point detection on the primary rectified image and the secondary rectified image so as to detect the overlapping regions respectively in the primary rectified image and the secondary rectified image, and further obtaining the pixel displacements and the depth map of the overlapping regions respectively in the primary rectified image and the secondary rectified image includes: detecting a plurality of feature points from the primary rectified image and the secondary rectified image; identifying a plurality of feature point correspondences to calculate a homography matrix according to color information of a plurality of neighboring points of each of the feature points in the primary rectified image and the secondary rectified image; obtaining the overlapping regions respectively in the primary rectified image and the secondary rectified image according to the homography matrix and accordingly obtaining each of the pixel displacements; and performing stereo matching on each of the feature points in the primary rectified image and the secondary rectified image to obtain the depth map corresponding to each of the feature points.
- According to an embodiment of the invention, the primary lens and the secondary lens have different fields of view and same distortion levels. The field of view of the primary lens is greater than the field of view of the secondary lens. The primary-secondary image factor is fixed and prior known. When the zooming factor is between 1 and the primary-secondary image factor, the step of performing image zooming and image warping on the primary rectified image and the secondary rectified image to generate the primary warped image and the secondary warped image according to the zooming factor, the pixel displacements, and the depth map includes: enlarging the primary rectified image to generate an enlarged primary rectified image according to the zooming factor; shrinking the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor; and performing image warping on the enlarged primary rectified image and the shrunken secondary rectified image to generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map. When the zooming factor is less than 1, only the primary rectified image would be shrunken. When the zooming factor is greater than the primary-secondary image factor, only the secondary rectified image would be enlarged.
- According to an embodiment of the invention, the primary lens and the secondary lens have same fields of view and same distortion levels. After the step of performing image rectification on the primary image and the secondary image to generate the primary rectified image and the secondary rectified image, the digital zooming method further includes: performing image binning on the primary rectified image to generate a binned primary rectified image; performing image cropping on the secondary rectified image to generate a cropped secondary rectified image, wherein a size of the binned primary rectified image and a size of the cropped secondary rectified image are the same; and setting the binned primary rectified image and the cropped secondary rectified image respectively as the primary rectified image and the secondary rectified image. When the zooming factor is between 1 and the primary-secondary image factor, the step of performing image zooming and image warping on the primary rectified image and the secondary rectified image to generate the primary warped image and the secondary warped image according to the zooming factor, the pixel displacements, and the depth map includes: enlarging the primary rectified image to generate an enlarged primary rectified image according to the zooming factor; shrinking the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor; and performing image warping on the enlarged primary rectified image and the shrunken secondary rectified image to generate the primary warped image and the secondary warped image according to the pixel displacements and the pixel depth map, wherein a warping level is associated with the depth map. When the zooming factor is less than 1, only the primary rectified image would be shrunken. When the zooming factor is greater than the primary-secondary image factor, only the secondary rectified image would be enlarged.
- According to an embodiment of the invention, the primary lens and the secondary lens have same fields of view and different distortion levels, and the distortion level of the primary lens is much less than the distortion level of the secondary lens. After the step of performing image rectification on the primary image and the secondary image to generate the primary rectified image and the secondary rectified image, the method further includes: performing image cropping on a center region of the secondary rectified image to generate a cropped secondary rectified image; and setting the cropped secondary rectified image as the secondary rectified image. When the zooming factor is between 1 and the primary-secondary image factor, the step of performing image zooming and image warping on the primary rectified image and the secondary rectified image to generate the primary warped image and the secondary warped image according to the zooming factor, the pixel displacements, and the depth map includes: enlarging the primary rectified image to generate an enlarged primary rectified image according to the zooming factor; shrinking the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor; and performing image warping on the enlarged primary rectified image and the shrunken secondary rectified image to generate the primary warped image and the secondary warped image according to the pixel displacements and the pixel depth map, wherein a warping level is associated with the depth map. When the zooming factor is less than 1, only the primary rectified image would be shrunken. When the zooming factor is greater than the primary-secondary image factor, only the secondary rectified image would be enlarged.
- According to an embodiment of the invention, the step of performing image fusion on the overlapping regions respectively in the primary warped image and the secondary warped image to generate the digital zoomed image includes: setting a first weight and a second weight respectively corresponding to the primary warped image and the secondary warped image according to the zooming factor; performing image fusion on the overlapping regions respectively in the primary warped image and the secondary warped image to generate a fused overlapping image based on the first weight and the second weight; and substituting the overlapping regions respectively in the primary warped image and the secondary warped image by the fused overlapping image to generate the digital zoomed image.
- The invention is also directed to an image capturing device including a primary lens, a secondary lens, a storage unit, and at least one processing unit, where the processing unit is coupled to the primary lens, the secondary lens, and the storage unit. The storage unit is configured to record a plurality of modules. The processing unit is configured to access and execute the modules recorded in the storage unit. The modules include an image capturing module, an image preprocessing module, a feature analyzing module, an image zooming-warping module, and an image fusion module. The image capturing module is configured to capture a scene by using the primary lens and the secondary lens to generate a primary image and a secondary image. The image preprocessing module is configured to perform image rectification on the primary image and the secondary image to generate a primary rectified image and a secondary rectified image. The feature analyzing module is configured to perform feature point detection on the primary rectified image and the secondary rectified image so as to detect overlapping regions respectively in the primary rectified image and the secondary rectified image, and further obtain a plurality of pixel displacements and a depth map of the overlapping regions respectively in the primary rectified image and the secondary rectified image. The image zooming-warping module is configured to perform image zooming and image warping on the primary rectified image and the secondary rectified image to generate a primary warped image and a secondary warped image according to the zooming factor, the pixel displacements, and the depth map. The image fusion module is configured to perform image fusion on overlapping regions respectively in the primary warped image and the secondary warped image to generate a digital zoomed image.
- According to an embodiment of the invention, the image preprocessing module obtains a plurality of rectification parameters associated with the primary lens and the secondary lens, and rectifies the primary image and the secondary image to generate the primary rectified image and the secondary rectified image according to the rectification parameters.
- According to an embodiment of the invention, the feature analyzing module detects a plurality of feature points from the primary rectified image and the secondary rectified image, identifies a plurality of feature point correspondences to calculate a homography matrix according to color information of a plurality of neighboring points of each of the feature points in the primary rectified image and the secondary rectified image, obtains the overlapping regions respectively in the primary rectified image and the secondary rectified image according to the homography matrix and accordingly obtains each of the pixel displacements, and performs stereo matching on each of the feature point correspondences in the primary rectified image and the secondary rectified image to obtain the depth map.
- According to an embodiment of the invention, the primary lens and the secondary lens have different fields of view and same distortion levels. The field of view of the primary lens is greater than the field of view of the secondary lens. The primary-secondary image factor is fixed and known in advance. The image zooming-warping module enlarges the primary rectified image to generate an enlarged primary rectified image according to the zooming factor, shrinks the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor, and performs image warping on the enlarged primary rectified image and the shrunken secondary rectified image to respectively generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map.
- According to an embodiment of the invention, the primary lens and the secondary lens have same fields of view and same distortion levels. The image preprocessing module further performs image binning on the primary rectified image to generate a binned primary rectified image, performs image cropping on the secondary rectified image to generate a cropped secondary rectified image, and sets the binned primary rectified image and the cropped secondary rectified image respectively as the primary rectified image and the secondary rectified image, where a size of the binned primary rectified image and a size of the cropped secondary rectified image are the same. The image zooming-warping module enlarges the primary rectified image to generate an enlarged primary rectified image according to the zooming factor, shrinks the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor, and performs image warping on the enlarged primary rectified image and the shrunken secondary rectified image to respectively generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map.
- According to an embodiment of the invention, the primary lens and the secondary lens have same fields of view and different distortion levels, and the distortion level of the primary lens is much less than the distortion level of the secondary lens. A primary-secondary center image factor between a center region of the primary rectified image and a center region of the secondary rectified image is fixed and known in advance. The image preprocessing module further performs image cropping on the center region of the secondary rectified image to generate a cropped secondary rectified image, and sets the cropped secondary rectified image as the secondary rectified image. The image zooming-warping module enlarges the primary rectified image to generate an enlarged primary rectified image according to the zooming factor, shrinks the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor, and performs image warping on the enlarged primary rectified image and the shrunken secondary rectified image to respectively generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map, wherein a warping level is associated with the depth map.
- According to an embodiment of the invention, the image fusion module sets a first weight and a second weight respectively corresponding to the primary warped image and the secondary warped image according to the zooming factor, performs image fusion on the overlapping regions respectively in the primary warped image and the secondary warped image to generate a fused overlapping image based on the first weight and the second weight, and substitutes the overlapping regions respectively in the primary warped image and the secondary warped image by the fused overlapping image to generate the digital zoomed image.
- In summary, in the image capturing device and the digital zooming method proposed in the invention, by analyzing different imaging properties and distortion levels of the dual lenses, image zooming and image warping are automatically performed on images captured by the dual lenses according to a zooming factor to generate two warped images with similar focal lengths, sizes, and fields of view. The two warped images are fused by their weighted sum, and a digital zoomed image corresponding to the zooming factor would be obtained thereafter. As compared with the existing digital zooming techniques, the image capturing device and the digital zooming method proposed in the invention may provide a digital zoomed image with high quality throughout a digital zooming process.
- The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention and, together with the description, serve to explain the principles of the invention.
- FIG. 1 illustrates a block diagram of an image capturing device according to an embodiment of the invention.
- FIG. 2 illustrates a flowchart of a digital zooming method of an image capturing device according to an embodiment of the invention.
- FIG. 3 is a schematic diagram of a primary rectified image and a secondary rectified image according to an embodiment of the invention.
- FIG. 4 is a schematic diagram of a primary rectified image and a secondary rectified image according to another embodiment of the invention.
- FIG. 5 is a schematic diagram of a primary image and a secondary image according to an embodiment of the invention.
- FIG. 6 illustrates a functional block diagram of a digital zooming method of an image capturing device according to an embodiment of the invention.
- Reference will now be made in detail to the present embodiments of the invention, examples of which are illustrated in the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the description to refer to the same or like parts. In addition, the specifications and the like shown in the drawing figures are intended to be illustrative, and not restrictive. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a representative basis for teaching one skilled in the art to variously employ the invention.
- FIG. 1 illustrates a block diagram of an image capturing device according to an embodiment of the invention. It should, however, be noted that this is merely an illustrative example and the invention is not limited in this regard. All components of the image capturing device and their configurations are first introduced in FIG. 1. The detailed functionalities of the components are disclosed along with FIG. 2.
- Referring to FIG. 1, an image capturing device 100 includes a primary lens 10a, a secondary lens 10b, a storage unit 20, and at least one processing unit 30. In the present embodiment, the image capturing device 100 is, for example, a digital camera, a digital camcorder, a digital single lens reflex camera, or another device provided with an image capturing feature such as a smart phone, a tablet computer, a personal digital assistant, and so on. The invention is not limited herein.
- The primary lens 10a and the secondary lens 10b include optical sensing elements for sensing light intensity entering the primary lens 10a and the secondary lens 10b to thereby generate images. The optical sensing elements are, for example, charge-coupled device (CCD) elements or complementary metal-oxide-semiconductor (CMOS) elements, and yet the invention is not limited thereto. Moreover, the focal lengths, sensing sizes, fields of view, and distortion levels of the primary lens 10a and the secondary lens 10b may be the same or different. The invention is not limited herein.
- The storage unit 20 may be one or a combination of a stationary or mobile random access memory (RAM), a read-only memory (ROM), a flash memory, a hard drive, or other similar devices. The storage unit 20 is configured to record a plurality of modules executable by the processing unit 30, where the modules may be loaded into the processing unit 30 for performing digital zoom on an image captured by the image capturing device 100.
- The processing unit 30 may be, for example, a central processing unit (CPU) or another general-purpose or special-purpose programmable device such as a microprocessor, a digital signal processor (DSP), a programmable controller, an application specific integrated circuit (ASIC), a programmable logic device (PLD), other similar devices, or a combination of the above-mentioned devices. The processing unit 30 is coupled to the primary lens 10a, the secondary lens 10b, and the storage unit 20, and is capable of accessing and executing the modules recorded in the storage unit 20.
- The aforesaid modules include an image capturing module 121, an image preprocessing module 122, a feature analyzing module 123, an image zooming-warping module 124, and an image fusion module 125, where the modules may be loaded into the processing unit 30 for digital zoom.
- FIG. 2 illustrates a flowchart of a digital zooming method of an image capturing device according to an embodiment of the invention, and the method in FIG. 2 may be implemented by the components of the image capturing device 100 in FIG. 1.
- Referring to both FIG. 1 and FIG. 2, the image capturing module 121 of the image capturing device 100 captures a scene by using the primary lens and the secondary lens to generate a primary image and a secondary image (Step S202). In other words, when the user desires to capture the scene by using the image capturing device 100, the image capturing module 121 would generate the primary image corresponding to the primary lens 10a and the secondary image corresponding to the secondary lens 10b. In the present embodiment, the primary image may be used for previewing purposes; it is an image with a lower quality and a larger field of view. On the other hand, the secondary image may not be used for previewing purposes; it is an image with a higher quality and a smaller field of view, and may be used as an auxiliary for digital zoom in the follow-up steps. More detailed information on the primary image and the secondary image will be given later on.
- Next, the image preprocessing module 122 performs image rectification on the primary image and the secondary image to respectively generate a primary rectified image and a secondary rectified image (Step S204). To be specific, the image preprocessing module 122 may calibrate the shifts in brightness, color, and geometric position of the primary image and the secondary image caused respectively by the primary lens 10a and the secondary lens 10b.
- In the present embodiment, the image preprocessing module 122 may obtain a plurality of rectification parameters associated with the primary lens 10a and the secondary lens 10b. Such rectification parameters may be intrinsic parameters and extrinsic parameters of a camera for image rectification. The intrinsic parameters may be used for describing the transformation between camera coordinates and image coordinates. That is, the camera coordinates may be projected onto a projective plane according to the pinhole imaging principle. The intrinsic parameters may be, for example, focal length, image center, principal point, distortion coefficients, and so forth. The extrinsic parameters are used for describing the transformation between world coordinates and camera coordinates. The extrinsic parameters may be, for example, parameters associated with the position and the viewing angle of the image capturing device 100 in a three-dimensional coordinate system, such as a rotation matrix and a translation vector. The rectification parameters may also be parameters associated with illumination compensation or color correction. The invention is not limited herein. The image preprocessing module 122 may rectify the primary image and the secondary image according to the aforesaid rectification parameters. The primary image and the secondary image being rectified may be referred to as a "primary rectified image" and a "secondary rectified image" respectively.
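- As an illustration only (not part of the claimed method), the following Python/OpenCV sketch shows how intrinsic and extrinsic rectification parameters of the kind described above might be applied to a primary/secondary image pair; the calibration matrices and vectors (K1, K2, d1, d2, R, T) are hypothetical placeholders that a real device would obtain from calibration.

```python
import cv2
import numpy as np

# Hypothetical rectification parameters (intrinsics, distortion, extrinsics).
K1 = np.array([[1200.0, 0.0, 960.0], [0.0, 1200.0, 540.0], [0.0, 0.0, 1.0]])
K2 = np.array([[1500.0, 0.0, 960.0], [0.0, 1500.0, 540.0], [0.0, 0.0, 1.0]])
d1 = np.zeros(5)                              # primary lens distortion coefficients
d2 = np.array([-0.30, 0.12, 0.0, 0.0, 0.0])   # secondary lens distortion coefficients
R = np.eye(3)                                 # rotation between the two lenses
T = np.array([[25.0], [0.0], [0.0]])          # translation (baseline)

def rectify_pair(primary, secondary):
    """Undistort and rectify a primary/secondary image pair."""
    size = (primary.shape[1], primary.shape[0])
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
    m1a, m1b = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
    m2a, m2b = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
    primary_rect = cv2.remap(primary, m1a, m1b, cv2.INTER_LINEAR)
    secondary_rect = cv2.remap(secondary, m2a, m2b, cv2.INTER_LINEAR)
    return primary_rect, secondary_rect
```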
- Next, the feature analyzing module 123 performs feature point detection on the primary rectified image and the secondary rectified image so as to detect overlapping regions respectively in the primary rectified image and the secondary rectified image, and further obtains a plurality of pixel displacements and a depth map of the overlapping regions respectively in the primary rectified image and the secondary rectified image (Step S206). Each of the overlapping regions is an overlapping portion between the field of view of the primary rectified image and that of the secondary rectified image.
- To be specific, the feature analyzing module 123 may detect a plurality of feature points from the primary rectified image and the secondary rectified image by leveraging a feature detection algorithm such as edge detection, corner detection, blob detection, and so forth. Next, the feature analyzing module 123 may identify a plurality of feature point correspondences and obtain the overlapping regions respectively in the primary rectified image and the secondary rectified image.
- In an embodiment, the feature analyzing module 123 may identify a plurality of feature point correspondences according to color information of a plurality of neighboring points of each of the feature points in the primary rectified image and the secondary rectified image so as to calculate a homography matrix. The feature analyzing module 123 may not only obtain the pixel displacements of the two overlapping regions via the homography matrix, but may also perform stereo matching due to the similar fields of view so as to estimate the depth map.
- To be specific, the feature analyzing module 123 may determine the displacement and shift properties of each feature point correspondence to obtain the pixel displacements thereof. On the other hand, the feature analyzing module 123 may perform stereo matching on each of the feature point correspondences to obtain the depth map. In other words, the feature analyzing module 123 may calculate the depth information of each of the feature point correspondences in the overlapping regions respectively in the primary rectified image and the secondary rectified image and store the depth information in the form of a depth map.
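- A minimal sketch of the feature analysis step, assuming OpenCV is available and using ORB features, brute-force matching, a RANSAC homography, and semi-global stereo matching purely for illustration (the embodiment does not prescribe any particular detector or matcher):

```python
import cv2
import numpy as np

def analyze_features(primary_rect, secondary_rect):
    """Match feature points, estimate a homography over the overlap, and
    compute a coarse disparity map as a proxy for depth (sketch only)."""
    g1 = cv2.cvtColor(primary_rect, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(secondary_rect, cv2.COLOR_BGR2GRAY)

    # Feature point detection and matching.
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(g1, None)
    kp2, des2 = orb.detectAndCompute(g2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Homography relating the two overlapping regions; the per-correspondence
    # pixel displacements follow directly from the matched coordinates.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    displacements = (dst - src).reshape(-1, 2)

    # Dense stereo matching over the rectified pair to estimate depth.
    stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
    disparity = stereo.compute(g1, g2).astype(np.float32) / 16.0
    return H, displacements, disparity
```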
- Next, when a zooming factor is between 1 and a primary-secondary image factor, the image zooming-warping module 124 performs image zooming and image warping on the primary rectified image and the secondary rectified image to generate a primary warped image and a secondary warped image according to the zooming factor, the pixel displacements, and the depth map (Step S208). The "primary-secondary image factor" refers to the ratio of the secondary rectified image to the primary rectified image, and is fixed and known in advance. The "zooming factor" is the enlargement level to be applied to the primary rectified image; it may be set by the user or may be a default value of the image capturing device 100. The image zooming-warping module 124 may perform image zooming and image warping on the primary rectified image and the secondary rectified image according to the zooming factor as well as the relative displacement, shift, and depth information between the two overlapping regions so as to generate two images whose overlapping regions have similar views and appearances, where the zooming factor of each of the two generated images also meets the user's need. Moreover, a warping level of each of the generated images is associated with the depth map. As the depth value increases, the warping level decreases; as the depth value decreases, the warping level increases. More details on the image zooming and image warping processes will be described in the follow-up embodiments.
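- The zooming step, together with a depth-dependent warp, could be approximated very roughly as in the sketch below. It applies a single global translation scaled inversely with the median depth, whereas the described module warps locally according to the per-pixel displacements and depth map, so it is only a coarse stand-in; the function name and argument layout are assumptions.

```python
import cv2
import numpy as np

def zoom_and_warp(primary_rect, secondary_rect, zoom, ps_factor,
                  displacements, depth):
    """Resize both rectified images toward a common view (1 <= zoom <= ps_factor)
    and apply a crude depth-dependent warp to the primary image."""
    h, w = primary_rect.shape[:2]

    # Enlarge the primary rectified image by the zooming factor and crop back.
    enlarged = cv2.resize(primary_rect, None, fx=zoom, fy=zoom,
                          interpolation=cv2.INTER_CUBIC)
    y0 = (enlarged.shape[0] - h) // 2
    x0 = (enlarged.shape[1] - w) // 2
    primary_zoomed = enlarged[y0:y0 + h, x0:x0 + w]

    # Shrink the secondary rectified image so its magnification matches the zoom.
    scale = zoom / ps_factor
    secondary_zoomed = cv2.resize(secondary_rect, None, fx=scale, fy=scale,
                                  interpolation=cv2.INTER_AREA)

    # Warping level decreases as depth increases (nearer content moves more).
    mean_shift = displacements.mean(axis=0)
    warp_level = 1.0 / max(float(np.median(depth)), 1e-3)
    M = np.float32([[1, 0, warp_level * mean_shift[0]],
                    [0, 1, warp_level * mean_shift[1]]])
    primary_warped = cv2.warpAffine(primary_zoomed, M, (w, h))
    return primary_warped, secondary_zoomed
```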
- Next, the image fusion module 125 performs image fusion on overlapping regions respectively in the primary warped image and the secondary warped image to generate a digital zoomed image (Step S210). To be specific, the image fusion module 125 may set a first weight and a second weight respectively corresponding to the primary warped image and the secondary warped image according to the zooming factor. Next, the image fusion module 125 may perform image fusion on each color pixel of the overlapping regions respectively in the primary warped image and the secondary warped image by a weighted sum based on the first weight and the second weight, where the resulting image is referred to as a "fused overlapping image." Then, the image fusion module 125 may substitute the overlapping region in the primary warped image by the fused overlapping image so as to generate the digital zoomed image with high quality.
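- A minimal sketch of the weighted-sum fusion and substitution step, assuming the overlapping region is given as a (y0, y1, x0, x1) bounding box valid in both warped images and that the two weights sum to 1:

```python
import numpy as np

def fuse_overlap(primary_warped, secondary_warped, overlap, w1, w2):
    """Blend the overlapping regions by a weighted sum and paste the fused
    overlapping image back into the primary warped image."""
    y0, y1, x0, x1 = overlap
    fused = primary_warped.astype(np.float32).copy()
    fused[y0:y1, x0:x1] = (w1 * primary_warped[y0:y1, x0:x1] +
                           w2 * secondary_warped[y0:y1, x0:x1])
    return np.clip(fused, 0, 255).astype(np.uint8)
```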
- In another embodiment, when the zooming factor is less than 1, the image zooming-warping module 124 may only shrink the primary rectified image to generate a shrunken primary rectified image. Next, the image zooming-warping module 124 may perform image warping on the shrunken primary rectified image to generate the primary warped image and set the primary warped image as the digital zoomed image. On the other hand, when the zooming factor is greater than the primary-secondary image factor, the image zooming-warping module 124 may enlarge the secondary rectified image to generate an enlarged secondary rectified image according to the zooming factor. Next, the image zooming-warping module 124 may perform image warping on the enlarged secondary rectified image to generate the secondary warped image and set the secondary warped image as the digital zoomed image.
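- The three zooming-factor branches described above might be organized as in the following sketch, where warp_fn and fuse_fn stand in for the warping and fusion steps outlined earlier (the names and argument layout are assumptions, not the claimed implementation):

```python
import cv2

def digital_zoom(primary_rect, secondary_rect, zoom, ps_factor, warp_fn, fuse_fn):
    """Dispatch on the zooming factor."""
    if zoom < 1.0:
        # Only the primary rectified image is shrunken and warped.
        shrunken = cv2.resize(primary_rect, None, fx=zoom, fy=zoom,
                              interpolation=cv2.INTER_AREA)
        return warp_fn(shrunken)
    if zoom > ps_factor:
        # Only the secondary rectified image is enlarged and warped.
        scale = zoom / ps_factor
        enlarged = cv2.resize(secondary_rect, None, fx=scale, fy=scale,
                              interpolation=cv2.INTER_CUBIC)
        return warp_fn(enlarged)
    # Otherwise both images are zoomed, warped, and fused into one output.
    return fuse_fn(primary_rect, secondary_rect, zoom)
```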
- The image capturing device proposed in the invention may be adapted to different sets of lenses. The digital zooming method corresponding to three different sets of lenses will be illustrated hereinafter.
- FIG. 3 is a schematic diagram of a primary rectified image and a secondary rectified image according to an embodiment of the invention. It should be noted that the primary lens 10a and the secondary lens 10b have different fields of view and same distortion levels, where the field of view of the primary lens is greater than that of the secondary lens. Herein, the field of view of the primary image is greater than that of the secondary image, and yet the image quality of the secondary image surpasses that of the primary image. In other words, the primary image and the secondary image are respectively a wide-angle image and a narrow-angle image, and thus objects in the secondary image appear larger and are presented with clarity. However, the secondary image is used as an auxiliary for digital zoom and not for previewing purposes.
- Referring to FIG. 3, after the image capturing module 121 captures a primary image and a secondary image by using the primary lens 10a and the secondary lens 10b, the image preprocessing module 122 may perform image rectification on the primary image and the secondary image to generate a primary rectified image 310a and a secondary rectified image 310b. A region 315a and a region 315b are two overlapping regions detected respectively from the primary rectified image 310a and the secondary rectified image 310b by the feature analyzing module 123. In the present embodiment, since the secondary rectified image 310b is a narrow-angle image, the secondary rectified image 310b and the overlapping region 315b in the secondary rectified image 310b are the same. That is, the entire secondary rectified image 310b and the overlapping region 315a in the primary rectified image 310a would cover the same portion of the scene.
- In the present embodiment, the image zooming-warping module 124 may enlarge the primary rectified image to generate an enlarged primary rectified image according to the zooming factor. Throughout the enlargement process, a center region of the primary rectified image would gradually become similar to the secondary rectified image. The image zooming-warping module 124 may further shrink the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor. Next, the image zooming-warping module 124 may perform image warping on the enlarged primary rectified image and the shrunken secondary rectified image to generate a primary warped image and a secondary warped image according to the pixel displacements and the depth map.
- The image fusion module 125 may set a first weight and a second weight respectively corresponding to the primary warped image and the secondary warped image according to the zooming factor. The first weight and the second weight are allocated based on the zooming factor and a factor corresponding to each of the primary image and the secondary image. In the present embodiment, assume that the factors corresponding to the primary image and the secondary image are respectively 1 and 2. When the zooming factor is 1.5 (i.e., the median of 1 and 2), the image fusion module 125 would set the first weight and the second weight both to 0.5. In another embodiment, when the zooming factor is 1.2, the image fusion module 125 would set the first weight and the second weight respectively to 0.8 and 0.2. However, the image fusion module 125 is not restricted to setting the two weights based on a linear relationship. In other embodiments, the image fusion module 125 may set the two weights based on other formulas. The invention is not limited herein. After performing image fusion, the image fusion module 125 may generate a digital zoomed image which is a relatively smooth, clear, and enlarged image.
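- The linear weight allocation in the example above (image factors 1 and 2; zooming factor 1.2 giving weights 0.8 and 0.2, zooming factor 1.5 giving 0.5 and 0.5) can be expressed as a simple interpolation. The helper below is hypothetical and only one of the possible formulas the embodiment allows:

```python
def allocate_weights(zoom, primary_factor=1.0, secondary_factor=2.0):
    """Linearly interpolate the fusion weights between the two image factors."""
    t = (zoom - primary_factor) / (secondary_factor - primary_factor)
    t = min(max(t, 0.0), 1.0)      # clamp outside the transition range
    return 1.0 - t, t              # (first weight, second weight)
```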
- FIG. 4 is a schematic diagram of a primary rectified image and a secondary rectified image according to another embodiment of the invention.
- Referring to FIG. 4, in the present embodiment, the primary lens 10a and the secondary lens 10b of the image capturing device 100 have same fields of view and distortion levels. Hence, the field of view in the primary image and that in the secondary image captured by the primary lens 10a and the secondary lens 10b are the same. In the present embodiment, the image preprocessing module 122 of the image capturing device 100 may perform image binning and image cropping on a primary rectified image 400a and a secondary rectified image 400b to generate two images with different fields of view. Additionally, the embodiment is adapted to digital photo-shooting equipment with a thumbnail image preview feature.
- To be specific, the image preprocessing module 122 may perform image binning on the primary rectified image to generate a binned primary rectified image 410a with a smaller size. In an embodiment, the size of the binned primary rectified image 410a may be ¼ of the size of the primary rectified image 400a. In other words, the image preprocessing module 122 may perform 2×2 pixel binning on the primary rectified image 400a to bin each set of four neighboring pixels of the primary rectified image 400a into one and thereby generate the binned primary rectified image 410a. As compared with the primary rectified image 400a, the binned primary rectified image 410a may be transferred faster, yet has a lower resolution.
- On the other hand, the image preprocessing module 122 may perform image cropping on the secondary rectified image to generate a cropped secondary rectified image 410b. In the present embodiment, the size of the cropped secondary rectified image 410b may also be ¼ of the size of the secondary rectified image 400b. In other words, the image preprocessing module 122 may crop a center region 405b with ¼ of the size of the secondary rectified image 400b and thereby generate the cropped secondary rectified image 410b.
- Hence, the image preprocessing module 122 may simulate, with the binned primary rectified image 410a and the cropped secondary rectified image 410b, two images of the same size and different fields of view, and further set the binned primary rectified image and the cropped secondary rectified image respectively as the primary rectified image and the secondary rectified image. Next, image processing steps similar to those illustrated in FIG. 3 may be performed on the redefined primary rectified image and secondary rectified image to generate a digital zoomed image, where a region 415a in the binned primary rectified image 410a and the cropped secondary rectified image 410b are two overlapping regions. Details on the image processing steps may be found in the related description in the previous paragraphs and are not repeated herein.
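- A minimal sketch of the 2×2 binning and ¼ center-crop preprocessing described above, assuming NumPy image arrays in height x width (x channels) layout:

```python
import numpy as np

def bin_2x2(img):
    """Average each 2x2 block of pixels to produce the binned primary image."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w].astype(np.float32)
    binned = (img[0::2, 0::2] + img[0::2, 1::2] +
              img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
    return binned.astype(np.uint8)

def crop_center(img):
    """Crop the center region whose area is 1/4 of the original image."""
    h, w = img.shape[:2]
    y0, x0 = h // 4, w // 4
    return img[y0:y0 + h // 2, x0:x0 + w // 2]
```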
- FIG. 5 is a schematic diagram of a primary image and a secondary image according to an embodiment of the invention. It should be noted that, in the present embodiment, the primary lens 10a and the secondary lens 10b have same fields of view and different distortion levels, where the distortion level of the primary lens is much less than that of the secondary lens. In the present embodiment, the primary lens 10a is a lens with no special distortion, while the secondary lens 10b is a distorted lens with a special design.
- Referring to FIG. 5, in the present embodiment, a primary image 500a captured by the primary lens 10a of the image capturing module 121 is a normal image. A secondary image 500b captured by the secondary lens 10b of the image capturing module 121 is an image with distortion. What is special is that a center region of an original scene (e.g., a region 505a in the primary image 500a) may be projected onto a center region of the sensing elements of the secondary lens 10b with a larger proportion. Such center region may result in a region 505b in the secondary image 500b having a lower distortion level. On the other hand, an outer region of the original scene (e.g., a region 508a in the primary image 500a) may be projected onto a remaining outer region of the sensing elements of the secondary lens 10b. Such outer region may result in a region 508b in the secondary image 500b having a higher distortion level.
- Next, the image preprocessing module 122 may perform image rectification on the primary image 500a and the secondary image 500b to generate a primary rectified image and a secondary rectified image. The image preprocessing module 122 may crop the larger-proportion center region of the secondary rectified image (i.e., the region corresponding to the region 505b of the secondary image) to generate a cropped secondary rectified image, and further set the cropped secondary rectified image as the secondary rectified image. The secondary rectified image has a relatively smaller field of view as compared with the primary rectified image, and yet has a higher resolution. The image zooming-warping module 124 may enlarge the primary rectified image to generate an enlarged primary rectified image and shrink the secondary rectified image to generate a shrunken secondary rectified image according to the zooming factor. Next, the image zooming-warping module 124 may perform image warping on the enlarged primary rectified image and the shrunken secondary rectified image to respectively generate the primary warped image and the secondary warped image according to the pixel displacements and the depth map of the overlapping regions obtained from the feature analyzing module 123.
- It should be noted that, since the secondary rectified image and the primary rectified image have the same fields of view, after lens distortion correction is performed on the outer region 508b in the secondary image 500b, the image capturing device 100 may perform depth estimation on the entire primary rectified image by using the secondary rectified image and the corrected secondary image.
- The aforesaid digital zooming method of the image capturing device may be summarized by the functional block diagram illustrated in FIG. 6 according to an embodiment of the invention.
- Referring to FIG. 6, in the proposed method, the image capturing device 100 may capture a scene by using the primary lens 10a and the secondary lens 10b to generate a primary image 601a and a secondary image 601b. Next, image rectification S603 may be performed on the primary image 601a and the secondary image 601b to respectively generate a primary rectified image 603a and a secondary rectified image 603b. Feature point detection S605 may then be performed on the primary rectified image 603a and the secondary rectified image 603b to obtain pixel displacements and a depth map of the overlapping regions respectively in the two images. Next, image zooming and image warping S607 may be performed on the primary rectified image 603a and the secondary rectified image 603b according to a zooming factor 606 as well as the pixel displacements and the depth map obtained from feature point detection S605 to generate a primary warped image 607a and a secondary warped image 607b. Finally, image fusion may be performed on the primary warped image 607a and the secondary warped image 607b, and a smooth and clear digital zoomed image 611 may be output thereafter.
- In summary, in the image capturing device and the digital zooming method proposed in the invention, by analyzing different imaging properties and distortion levels of the dual lenses, image zooming and image warping are automatically performed on images captured by the dual lenses according to a zooming factor to generate two warped images with similar focal lengths, sizes, and fields of view. The two warped images are fused by their weighted sum, and a digital zoomed image corresponding to the zooming factor would be obtained thereafter. As compared with the existing digital zooming techniques, the image capturing device and the digital zooming method proposed in the invention may provide a digital zoomed image with high quality throughout a digital zooming process.
- It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present invention without departing from the scope or spirit of the invention. In view of the foregoing, it is intended that the present invention cover modifications and variations of this invention provided they fall within the scope of the following claims and their equivalents.
Claims (22)
Applications Claiming Priority (3)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| TW103139384A | 2014-11-13 | ||
| TW103139384A TWI554103B (en) | 2014-11-13 | 2014-11-13 | Image capturing device and digital zoom method thereof |
| TW103139384 | 2014-11-13 |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US9325899B1 US9325899B1 (en) | 2016-04-26 |
| US20160142627A1 true US20160142627A1 (en) | 2016-05-19 |
Family
ID=55754849
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US14/571,021 Active US9325899B1 (en) | 2014-11-13 | 2014-12-15 | Image capturing device and digital zooming method thereof |
Country Status (2)
| Country | Link |
|---|---|
| US (1) | US9325899B1 (en) |
| TW (1) | TWI554103B (en) |
Cited By (12)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160150211A1 (en) * | 2014-11-20 | 2016-05-26 | Samsung Electronics Co., Ltd. | Method and apparatus for calibrating image |
| US20160360183A1 (en) * | 2015-06-02 | 2016-12-08 | Etron Technology, Inc. | Monitor system and operation method thereof |
| US10317646B2 (en) | 2016-10-14 | 2019-06-11 | Largan Precision Co., Ltd. | Optical imaging module, image capturing apparatus and electronic device |
| US10321112B2 (en) * | 2016-07-18 | 2019-06-11 | Samsung Electronics Co., Ltd. | Stereo matching system and method of operating thereof |
| WO2019148996A1 (en) * | 2018-01-31 | 2019-08-08 | Oppo广东移动通信有限公司 | Image processing method and device, storage medium, and electronic apparatus |
| CN111641775A (en) * | 2020-04-14 | 2020-09-08 | 北京迈格威科技有限公司 | Multi-shooting zoom control method, device and electronic system |
| US10848746B2 (en) | 2018-12-14 | 2020-11-24 | Samsung Electronics Co., Ltd. | Apparatus including multiple cameras and image processing method |
| US10957029B2 (en) * | 2016-11-17 | 2021-03-23 | Sony Corporation | Image processing device and image processing method |
| CN113055592A (en) * | 2021-03-11 | 2021-06-29 | Oppo广东移动通信有限公司 | Image display method and device, electronic equipment and computer readable storage medium |
| US11050915B2 (en) * | 2017-07-17 | 2021-06-29 | Huizhou Tcl Mobile Communication Co., Ltd. | Method for zooming by switching between dual cameras, mobile terminal, and storage apparatus |
| US11430089B2 (en) | 2019-07-10 | 2022-08-30 | Samsung Electronics Co., Ltd. | Image processing method and image processing system for generating a corrected image |
| US12452536B2 (en) | 2022-01-28 | 2025-10-21 | Samsung Electronics Co., Ltd. | Electronic device and electronic device control method |
Families Citing this family (45)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US10275935B2 (en) | 2014-10-31 | 2019-04-30 | Fyusion, Inc. | System and method for infinite synthetic image generation from multi-directional structured image array |
| US9940541B2 (en) | 2015-07-15 | 2018-04-10 | Fyusion, Inc. | Artificially rendering images using interpolation of tracked control points |
| US10262426B2 (en) | 2014-10-31 | 2019-04-16 | Fyusion, Inc. | System and method for infinite smoothing of image sequences |
| US10726593B2 (en) | 2015-09-22 | 2020-07-28 | Fyusion, Inc. | Artificially rendering images using viewpoint interpolation and extrapolation |
| US10176592B2 (en) | 2014-10-31 | 2019-01-08 | Fyusion, Inc. | Multi-directional structured image array capture on a 2D graph |
| US12261990B2 (en) | 2015-07-15 | 2025-03-25 | Fyusion, Inc. | System and method for generating combined embedded multi-view interactive digital media representations |
| US10852902B2 (en) | 2015-07-15 | 2020-12-01 | Fyusion, Inc. | Automatic tagging of objects on a multi-view interactive digital media representation of a dynamic entity |
| US10242474B2 (en) | 2015-07-15 | 2019-03-26 | Fyusion, Inc. | Artificially rendering images using viewpoint interpolation and extrapolation |
| US11006095B2 (en) | 2015-07-15 | 2021-05-11 | Fyusion, Inc. | Drone based capture of a multi-view interactive digital media |
| US10222932B2 (en) | 2015-07-15 | 2019-03-05 | Fyusion, Inc. | Virtual reality environment based manipulation of multilayered multi-view interactive digital media representations |
| US12495134B2 (en) | 2015-07-15 | 2025-12-09 | Fyusion, Inc. | Drone based capture of multi-view interactive digital media |
| US10147211B2 (en) | 2015-07-15 | 2018-12-04 | Fyusion, Inc. | Artificially rendering images using viewpoint interpolation and extrapolation |
| US11095869B2 (en) | 2015-09-22 | 2021-08-17 | Fyusion, Inc. | System and method for generating combined embedded multi-view interactive digital media representations |
| US11783864B2 (en) | 2015-09-22 | 2023-10-10 | Fyusion, Inc. | Integration of audio into a multi-view interactive digital media representation |
| TWI639338B (en) | 2016-08-15 | 2018-10-21 | 香港商立景創新有限公司 | Image capturing apparatus and image smooth zooming method thereof |
| KR102560780B1 (en) | 2016-10-05 | 2023-07-28 | 삼성전자주식회사 | Image processing system including plurality of image sensors and electronic device including thereof |
| US11202017B2 (en) | 2016-10-06 | 2021-12-14 | Fyusion, Inc. | Live style transfer on a mobile device |
| US10810720B2 (en) | 2016-11-03 | 2020-10-20 | Huawei Technologies Co., Ltd. | Optical imaging method and apparatus |
| US10356300B2 (en) * | 2016-12-23 | 2019-07-16 | Mediatek Inc. | Seamless zooming on dual camera |
| US10437879B2 (en) | 2017-01-18 | 2019-10-08 | Fyusion, Inc. | Visual search using multi-view interactive digital media representations |
| US20180227482A1 (en) | 2017-02-07 | 2018-08-09 | Fyusion, Inc. | Scene-aware selection of filters and effects for visual digital media content |
| CN106713772B (en) * | 2017-03-31 | 2018-08-17 | 维沃移动通信有限公司 | A kind of photographic method and mobile terminal |
| TWI672677B (en) | 2017-03-31 | 2019-09-21 | 鈺立微電子股份有限公司 | Depth map generation device for merging multiple depth maps |
| DE102017205630A1 (en) * | 2017-04-03 | 2018-10-04 | Conti Temic Microelectronic Gmbh | Camera apparatus and method for detecting a surrounding area of a vehicle |
| US10410314B2 (en) * | 2017-04-27 | 2019-09-10 | Apple Inc. | Systems and methods for crossfading image data |
| US10313651B2 (en) | 2017-05-22 | 2019-06-04 | Fyusion, Inc. | Snapshots at predefined intervals or angles |
| US10373290B2 (en) * | 2017-06-05 | 2019-08-06 | Sap Se | Zoomable digital images |
| US11069147B2 (en) | 2017-06-26 | 2021-07-20 | Fyusion, Inc. | Modification of multi-view interactive digital media representation |
| US10834310B2 (en) | 2017-08-16 | 2020-11-10 | Qualcomm Incorporated | Multi-camera post-capture image processing |
| US10404916B2 (en) | 2017-08-30 | 2019-09-03 | Qualcomm Incorporated | Multi-source video stabilization |
| US10257436B1 (en) * | 2017-10-11 | 2019-04-09 | Adobe Systems Incorporated | Method for using deep learning for facilitating real-time view switching and video editing on computing devices |
| US10516830B2 (en) | 2017-10-11 | 2019-12-24 | Adobe Inc. | Guided image composition on mobile devices |
| US10497122B2 (en) | 2017-10-11 | 2019-12-03 | Adobe Inc. | Image crop suggestion and evaluation using deep-learning |
| CN107835372A (en) | 2017-11-30 | 2018-03-23 | 广东欧珀移动通信有限公司 | Imaging method, device, mobile terminal and storage medium based on dual camera |
| US10764512B2 (en) | 2018-03-26 | 2020-09-01 | Mediatek Inc. | Method of image fusion on camera device equipped with multiple cameras |
| US10592747B2 (en) | 2018-04-26 | 2020-03-17 | Fyusion, Inc. | Method and apparatus for 3-D auto tagging |
| TWI680436B (en) * | 2018-12-07 | 2019-12-21 | 財團法人工業技術研究院 | Depth camera calibration device and method thereof |
| CN110072047B (en) | 2019-01-25 | 2020-10-09 | 北京字节跳动网络技术有限公司 | Image deformation control method, device and hardware device |
| WO2020263982A1 (en) * | 2019-06-22 | 2020-12-30 | Trackonomy Systems, Inc. | Image based locationing |
| CN113808510B (en) * | 2020-06-15 | 2024-04-09 | 明基智能科技(上海)有限公司 | Image adjustment method |
| WO2022076483A1 (en) | 2020-10-05 | 2022-04-14 | Trackonomy Systems, Inc. | System and method of utilizing 3d vision for asset management and tracking |
| US11405563B2 (en) * | 2020-11-10 | 2022-08-02 | Qualcomm Incorporated | Spatial alignment transform without FOV loss |
| EP4092572A1 (en) * | 2021-05-20 | 2022-11-23 | Wooptix S.L. | Method for depth estimation for a variable focus camera |
| CN113487484B (en) * | 2021-07-09 | 2022-08-12 | 上海智砹芯半导体科技有限公司 | Image splicing method and device, electronic equipment and computer readable storage medium |
| CN114785969B (en) * | 2022-05-30 | 2025-04-01 | 维沃移动通信有限公司 | Shooting method and device |
Family Cites Families (15)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6650368B1 (en) * | 1999-10-26 | 2003-11-18 | Hewlett-Packard Development Company, Lp. | Digital camera and method of enhancing zoom effects |
| JP2007028283A (en) * | 2005-07-19 | 2007-02-01 | Matsushita Electric Ind Co Ltd | Imaging device |
| JP4573724B2 (en) * | 2005-08-01 | 2010-11-04 | イーストマン コダック カンパニー | Imaging apparatus having a plurality of optical systems |
| JP4624245B2 (en) * | 2005-11-29 | 2011-02-02 | イーストマン コダック カンパニー | Imaging device |
| JP4692770B2 (en) * | 2006-12-27 | 2011-06-01 | 富士フイルム株式会社 | Compound eye digital camera |
| US7683962B2 (en) | 2007-03-09 | 2010-03-23 | Eastman Kodak Company | Camera using multiple lenses and image sensors in a rangefinder configuration to provide a range map |
| US7859588B2 (en) | 2007-03-09 | 2010-12-28 | Eastman Kodak Company | Method and apparatus for operating a dual lens camera to augment an image |
| KR101441586B1 (en) * | 2008-10-06 | 2014-09-23 | 삼성전자 주식회사 | Apparatus and method for capturing image |
| US8553106B2 (en) | 2009-05-04 | 2013-10-08 | Digitaloptics Corporation | Dual lens digital zoom |
| TWI435160B (en) * | 2010-10-29 | 2014-04-21 | Altek Corp | Method for composing three dimensional image with long focal length and three dimensional imaging system |
| WO2012086326A1 (en) * | 2010-12-24 | 2012-06-28 | 富士フイルム株式会社 | 3-d panoramic image creating apparatus, 3-d panoramic image creating method, 3-d panoramic image creating program, 3-d panoramic image replay apparatus, 3-d panoramic image replay method, 3-d panoramic image replay program, and recording medium |
| US8508649B2 (en) * | 2011-02-14 | 2013-08-13 | DigitalOptics Corporation Europe Limited | Compact distorted zoom lens for small angle of view |
| TWI516110B (en) * | 2012-01-02 | 2016-01-01 | 華晶科技股份有限公司 | Image capturing device and image capturing method thereof |
| JP5799194B2 (en) * | 2012-07-10 | 2015-10-21 | パナソニックIpマネジメント株式会社 | Display control device |
| TW201427412A (en) * | 2012-12-19 | 2014-07-01 | Sintai Optical Shenzhen Co Ltd | Image capture device and anti-shake control method thereof |
-
2014
- 2014-11-13 TW TW103139384A patent/TWI554103B/en not_active IP Right Cessation
- 2014-12-15 US US14/571,021 patent/US9325899B1/en active Active
Cited By (19)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20160150211A1 (en) * | 2014-11-20 | 2016-05-26 | Samsung Electronics Co., Ltd. | Method and apparatus for calibrating image |
| US10506213B2 (en) * | 2014-11-20 | 2019-12-10 | Samsung Electronics Co., Ltd. | Method and apparatus for calibrating image |
| US20200053338A1 (en) * | 2014-11-20 | 2020-02-13 | Samsung Electronics Co., Ltd. | Method and apparatus for calibrating image |
| US11140374B2 (en) * | 2014-11-20 | 2021-10-05 | Samsung Electronics Co., Ltd. | Method and apparatus for calibrating image |
| US20160360183A1 (en) * | 2015-06-02 | 2016-12-08 | Etron Technology, Inc. | Monitor system and operation method thereof |
| US10382744B2 (en) * | 2015-06-02 | 2019-08-13 | Eys3D Microelectronics, Co. | Monitor system and operation method thereof |
| US10321112B2 (en) * | 2016-07-18 | 2019-06-11 | Samsung Electronics Co., Ltd. | Stereo matching system and method of operating thereof |
| US10317646B2 (en) | 2016-10-14 | 2019-06-11 | Largan Precision Co., Ltd. | Optical imaging module, image capturing apparatus and electronic device |
| US11226472B2 (en) | 2016-10-14 | 2022-01-18 | Largan Precision Co., Ltd. | Optical imaging module, image capturing apparatus and electronic device |
| DE112017005807B4 (en) * | 2016-11-17 | 2025-05-08 | Sony Corporation | Image processing device and image processing method |
| US10957029B2 (en) * | 2016-11-17 | 2021-03-23 | Sony Corporation | Image processing device and image processing method |
| US11050915B2 (en) * | 2017-07-17 | 2021-06-29 | Huizhou Tcl Mobile Communication Co., Ltd. | Method for zooming by switching between dual cameras, mobile terminal, and storage apparatus |
| WO2019148996A1 (en) * | 2018-01-31 | 2019-08-08 | Oppo广东移动通信有限公司 | Image processing method and device, storage medium, and electronic apparatus |
| US10848746B2 (en) | 2018-12-14 | 2020-11-24 | Samsung Electronics Co., Ltd. | Apparatus including multiple cameras and image processing method |
| US11430089B2 (en) | 2019-07-10 | 2022-08-30 | Samsung Electronics Co., Ltd. | Image processing method and image processing system for generating a corrected image |
| US20230059657A1 (en) * | 2020-04-14 | 2023-02-23 | Megvii (Beijing) Technology Co., Ltd. | Multi-camera zoom control method and apparatus, and electronic system and storage medium |
| CN111641775A (en) * | 2020-04-14 | 2020-09-08 | 北京迈格威科技有限公司 | Multi-shooting zoom control method, device and electronic system |
| CN113055592A (en) * | 2021-03-11 | 2021-06-29 | Oppo广东移动通信有限公司 | Image display method and device, electronic equipment and computer readable storage medium |
| US12452536B2 (en) | 2022-01-28 | 2025-10-21 | Samsung Electronics Co., Ltd. | Electronic device and electronic device control method |
Also Published As
| Publication number | Publication date |
|---|---|
| US9325899B1 (en) | 2016-04-26 |
| TWI554103B (en) | 2016-10-11 |
| TW201618531A (en) | 2016-05-16 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US9325899B1 (en) | Image capturing device and digital zooming method thereof | |
| CN108600576B (en) | Image processing apparatus, method and system, and computer-readable recording medium | |
| CN107977940B (en) | Background blurring processing method, device and equipment | |
| US10306141B2 (en) | Image processing apparatus and method therefor | |
| CN105657237A (en) | Image acquisition device and digital zooming method thereof | |
| TW202236840A (en) | Image fusion for scenes with objects at multiple depths | |
| CN106683071A (en) | Image splicing method and image splicing device | |
| JP5846172B2 (en) | Image processing apparatus, image processing method, program, and imaging system | |
| JP5392198B2 (en) | Ranging device and imaging device | |
| CN112261292B (en) | Image acquisition method, terminal, chip and storage medium | |
| CN103888663B (en) | image processing apparatus, image pickup apparatus and image processing method | |
| US20120057747A1 (en) | Image processing system and image processing method | |
| JP2017157043A (en) | Image processing device, imaging device, and image processing method | |
| WO2018196854A1 (en) | Photographing method, photographing apparatus and mobile terminal | |
| TWI599809B (en) | Lens module array, image sensing device and fusing method for digital zoomed images | |
| CN108513057A (en) | Image processing method and device | |
| US8908012B2 (en) | Electronic device and method for creating three-dimensional image | |
| JP2016053978A (en) | Image processor | |
| CN105472263A (en) | Image capturing method and image capturing device using same | |
| JP5796611B2 (en) | Image processing apparatus, image processing method, program, and imaging system | |
| US9743007B2 (en) | Lens module array, image sensing device and fusing method for digital zoomed images | |
| JP7458769B2 (en) | Image processing device, imaging device, image processing method, program and recording medium | |
| CN110784642B (en) | Image processing apparatus, control method thereof, storage medium, and imaging apparatus | |
| JP6579764B2 (en) | Image processing apparatus, image processing method, and program | |
| JP6079838B2 (en) | Image processing apparatus, program, image processing method, and imaging system |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| AS | Assignment |
Owner name: ALTEK SEMICONDUCTOR CORP., TAIWAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:CHOU, HONG-LONG;TSENG, YI-HONG;CHANG, WEN-YAN;AND OTHERS;REEL/FRAME:034510/0924 Effective date: 20141127 |
|
| STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 4 |
|
| MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2552); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY Year of fee payment: 8 |