US20160065862A1 - Image Enhancement Based on Combining Images from a Single Camera - Google Patents
Image Enhancement Based on Combining Images from a Single Camera
- Publication number
- US20160065862A1 (application US14/860,481)
- Authority
- US
- United States
- Prior art keywords
- initial images
- images
- image
- fading
- cross
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/2621—Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
-
- G06K9/00228—
-
- G06K9/00234—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G06T5/006—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/80—Geometric correction
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/162—Detection; Localisation; Normalisation using pixel segmentation or colour matching
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/45—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
-
- H04N5/2258—
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/74—Circuits for processing colour signals for obtaining special effects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Definitions
- This application relates generally to image enhancement and more specifically to computer-implemented systems and methods for image enhancement based on combining images from multiple cameras.
- Initial images may be captured using a simple camera, such as those having short focal length lenses typically used in camera phones, tablets, and laptops.
- An object or, more specifically, a center line of the object is identified in each image.
- the object is typically present on the foreground of the initial images.
- detecting the foreground portion of each image may be performed before the center line identification.
- the initial images may be aligned and cross-faded.
- the foreground portion may be separated from the background portion.
- the background portion may be blurred or, more generally, processed separately from the foreground portions.
- a method of combining multiple related images to enhance image quality involves receiving at least two initial images captured using a single camera provided on one device; each of the at least two initial images comprising an object representation of an object, the object representation provided on a foreground portion of each of the at least two initial images; each of the at least two initial images corresponding to a different imaging angle relative to the object; detecting the object in each of the at least two initial images; determining an object center line of the object in each of the at least two initial images; and cross-fading the at least two initial images along the object center line, wherein the cross-fading yields a combined image.
- FIG. 1 illustrates a schematic top view of an object and different image capturing devices positioned at different distances and angles relative to the object, in accordance with some embodiments.
- FIG. 2A illustrates an image of an object captured from far away using a long focal length lens, in accordance with some embodiments.
- FIG. 2B illustrates an image of the same object (as in FIG. 2A ) captured at a short distance away from the object using a short focal length lens, in accordance with some embodiments.
- FIG. 3 illustrates a top view of a device equipped with two cameras and an equivalent single camera device showing relative positions of the devices to an object, in accordance with some embodiments.
- FIGS. 4A and 4B illustrate two initial images prior to combining these images, in accordance with some embodiments.
- FIG. 5 illustrates a combined image resulting from cross-fading of the two initial images shown in FIGS. 4A and 4B, in accordance with some embodiments.
- FIG. 6 is a process flowchart of a method for processing an image, in accordance with some embodiments.
- FIG. 7A is a schematic representation of various modules of an image capturing and processing device, in accordance with some embodiments.
- FIG. 7B is a schematic process flow utilizing stereo disparity of two images, in accordance with some embodiments.
- FIG. 7C is a schematic process flow that does not utilize stereo disparity, in accordance with some embodiments.
- FIG. 8 is a diagrammatic representation of an example machine in the form of a computer system, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed.
- a camera phone is a mobile phone, which is able to capture images, such as still photographs and/or video.
- the camera phones generally have lenses and sensors that are simpler than those of dedicated digital cameras, in particular high-end digital cameras such as DSLR cameras.
- the camera phones are typically equipped with shorter focal length and fixed focus lenses and smaller sensors, which limit their performance.
- FIG. 1 shows the difference in viewing angles for a close camera and a far camera, illustrating a schematic top view of an object 102 and different image capturing devices 110 and 114 positioned at different distances relative to the object 102 , in accordance with some embodiments. For clarity, a few features of object 102 are identified, such as a right ear 104 a , a left ear 104 b , and a nose 106 .
- Although device 114 is shifted to the left from the object 102, it is still able to capture both ears 104 a and 104 b while not being turned too much with respect to the nose.
- As such, device 114 (which needs to be equipped with a longer focal length lens, e.g., a telephoto lens, relative to device 110) will take a high-quality and undistorted image of object 102.
- When a short focal length camera/device 110, which is similarly shifted to the left from the object 102, attempts to take a similar image, it will only be able to capture left ear 104 b.
- Furthermore, nose 106 is captured at a sharp angle, which may result in distortions of its proportions relative to other parts.
- Actual results of using long and short focal length lenses are presented in FIGS. 2A and 2B, respectively.
- FIG. 2A illustrates an image of an object captured from far away using a long focal length (telephoto) lens (similar to device 114 in FIG. 1 ), in accordance with some embodiments
- FIG. 2B illustrates an image of the same object captured at a short distance away from the object using a short focal length (wide angle) lens (similar to device 110 in FIG. 1 ), in accordance with different embodiments.
- Some embodiments may include cameras that may be operated at short camera-to-subject distances, with short lenses, and may produce images that look as though the camera were further away with a long lens, thus minimizing such perspective distortion effect and creating a flattering image of the subject.
- Initial images may be captured using simple cameras, such as short focal length cameras and cameras with short lenses, typically used on camera phones, tablets, and laptops.
- the initial images may be taken using two different cameras positioned at a certain distance from each other.
- An object or, more specifically, a center line of the object is identified in each image.
- the object is typically present on the foreground of the initial images. As such, detecting the foreground portion of each image may be performed before the center line identification.
- the initial images may be aligned and cross-faded.
- the foreground portion may be separated from the background portion.
- the background portion may be blurred or, more generally, processed separately from the foreground portions.
- the steps in the above-described process need not all be done in the order specified, but may be done in a different order for convenience or efficiency depending on the particular application and its specific requirements.
- FIG. 3 illustrates a top view 300 of a device 310 equipped with two cameras 312 a and 312 b and an equivalent single camera device 314 showing relative positions of devices 310 and 314 to an object (head) 302 , in accordance with some embodiments.
- Cameras 312 a and 312 b, taken together, can see both sides of object 302, similar to the nearly-equivalent distant camera device 314, whereas each of cameras 312 a and 312 b in isolation may not be able to see both sides of the head 302.
- left camera 312 a may have a better view of left ear 304 a and insufficient view of right ear 304 b
- right camera 312 b may have a better view of right ear 304 b and insufficient view of left ear 304 a
- when the two images taken by left camera 312 a and right camera 312 b are combined, the combined image includes adequate representations of right and left ears 304 a and 304 b.
- a method of combining the images from the left and right cameras into a composite image involves detecting the foreground object (i.e., subject) in two camera images. This may be done, for example, using stereo disparity and/or face detection on the two images. The method may proceed with aligning and, in some embodiments, scaling the two images at the center of the foreground object. The two images are then cross-faded into a combined (or composite) image, such that the left side of the image comes from the left camera, while the right side of the image comes from the right camera. The cross-fade region may be narrow enough that the images have good alignment within it. The method optionally involves blurring the background in the composite image.
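- The following is a minimal sketch of this align-and-cross-fade step, assuming the object center line has already been located in each image (for example by the face detection discussed further below). OpenCV and NumPy are assumed; the function name, the band width, and the simple horizontal-shift alignment are illustrative choices, not the patented implementation.

```python
import cv2
import numpy as np

def align_and_cross_fade(left_img, right_img, cx_left, cx_right, band=40):
    """Shift the right image so both center lines coincide, then cross-fade
    across a narrow vertical band so the left half comes from the left camera
    and the right half from the right camera.
    Assumes two equally sized 3-channel (BGR) images."""
    h, w = left_img.shape[:2]
    # Horizontal shift that brings the right image's center line onto cx_left.
    shift = cx_left - cx_right
    M = np.float32([[1, 0, shift], [0, 1, 0]])
    right_aligned = cv2.warpAffine(right_img, M, (w, h))
    # Alpha ramp: 1.0 (use left image) to the left of the band,
    # 0.0 (use right image) to the right of it, linear inside the band.
    xs = np.arange(w, dtype=np.float32)
    alpha = np.clip((cx_left + band / 2.0 - xs) / band, 0.0, 1.0)
    alpha = alpha[np.newaxis, :, np.newaxis]  # broadcast over rows and channels
    blended = alpha * left_img.astype(np.float32) + \
              (1.0 - alpha) * right_aligned.astype(np.float32)
    return blended.astype(np.uint8)
```

- Keeping the band narrow confines the blend to the region where the two views agree, which is the property the cross-fade region is said to need.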
- two-camera systems that may be used for capturing initial images are different from stereo 3D cameras, which present both images to the eyes of the viewer and create a full 3D experience for the viewer. Instead, only one combined image is provided in the described methods and systems, and the initially captured stereo images are not shown to the viewer. The initial images are combined so as to create the appearance of a single higher-quality image shot from further away.
- Some applications of these methods may include, for example, a video-conferencing system running on a laptop or desktop computer, stand-alone video-conferencing system, video-conferencing system on a mobile device such as a smart-phone, front-facing camera for taking pictures of oneself on a smart-phone/mobile device, a standalone still camera, stand-alone video camera, any camera where an undistorted image is needed but it is impossible or impractical to move the camera back far enough from the subject, and the like.
- two or more cameras may be used.
- the composite image may be composed of the left portion of the left image, center portion of the center image, and right portion of right image, resulting in reduced perspective distortion compared to the image obtained from a single distant camera.
- FIGS. 4A and 4B illustrate an example of two initial images 400 and 410 that are combined to enhance quality of the resulting image, in accordance with some embodiments.
- initial image 400 will be referred to as a left image
- initial image 410 will be referred to as a right image.
- the left and right images may be obtained using two cameras or lenses provided on the same device (e.g., devices described above with reference to FIG. 3 ) and captured at substantially the same time, such that the object maintains the same orientation (i.e., does not move) in both images.
- the same camera or lens may be used to capture the left and right images by moving the object or the camera with respect to each other.
- Each initial image includes slightly different representations of the same object, i.e., left image 400 includes object representation 402 , while right image 410 includes object representation 412 .
- object representation 402 has a more visible left ear, while the right ear is barely visible.
- object representation 412 has a more visible right ear, while the left ear is only slightly visible.
- object representation 402 shows the actual object (person) turned (e.g., looking) slightly to the right, while object representation 412 shows the actual object looking straight and perhaps turned slightly to the left.
- when two initial images are used, the difference between the object representations is called stereo disparity.
- Differences in the representations of the objects in two or more initial images may be used in order to enhance these object representations and yield a combined image with the enhanced representation.
- too much difference due to the spacing of the cameras may cause problems with alignment and cross-fading, resulting in lower quality representations than even in the initial images.
- too much difference in imaging angles may cause such problems.
- the cameras are positioned at a distance of between about 30 millimeters and 150 millimeters from each other.
- the difference between object representations 402 and 412 caused by different imaging angles with respect to the object is described above with reference to FIG. 3.
- the representations may vary depending on proximity of the object to the camera.
- the main object may be present on a foreground, while some additional objects may be present on a background.
- the images 400 and 410 include object representations 402 and 412 that appear on the foreground and, for example, window edge representations 404 and 414 that appear on the background. While both sets of representations are of the same two actual objects (i.e., the person and the window edge) that maintained the same relative positions while capturing these images, the positions of their representations are different.
- window edge representation 404 is positioned around the left portion of the head in left image 400
- window edge representation 414 is positioned around the right portion of the head in right image 410.
- relative positions of object representations depend on their distances from the image capturing lenses.
- the initial images may be decomposed into foreground portions and background portions and each type may be processed independently from each other as further described below.
- the process may involve determining an object center line in each of the initial images.
- the object center line may represent a center of the object representation or correspond to some other features of the object representation (e.g., a nose, separation between eyes).
- Object center lines generally do not correspond to centers of initial images and portions of the initial images divided by the center lines may be different.
- object center line 406 divides image 400 into left portion 408 and right portion 409.
- object center line 416 divides image 410 into left portion 418 and right portion 419. Both center lines 406 and 416 extend vertically through the centers of the noses of the object representations 402 and 412, respectively.
- FIG. 5 illustrates a combined image 500 generated from initial images 400 and 410 illustrated in FIGS. 4A and 4B , in accordance with some embodiments.
- object center line 506 generally corresponds to center lines 406 and 416 of initial images 400 and 410 .
- Left portion 508 of combined image 500 represents a modified version of left portion 408 of left image 400
- right portion 509 represents a modified version of right portion 419 of right image 410 .
- These modifications may come from cross-fading to provide a more uniform combined image and transition between two portions 508 and 509 .
- left portion 408 of left image 400 may be cross-faded with left portion 418 of right image 410 to form left portion 508 of the combined image.
- only a part of left portion 418, in particular the part extending along center line 416, may be used for cross-fading.
- right portion 419 of right image 410 may be cross-faded with right portion 409 of left image 400 or, more specifically, with a part of right portion 409 extending along center line 406 to form right portion 509 .
- the quality of combined image 500 depends on how well center lines 406 and 416 are identified and how well the cross-fading is performed.
- Object representation 502 on combined image 500 includes a clear view of both ears, which is missing in both initial images 400 and 410.
- the object in object representation 502 appears to be looking straight and not to the left or right as it appears in initial images 400 and 410.
- representations of background objects in combined image 500 may not be as successful.
- window edge representations 404 and 414 of the same actual window edge appear as two different representations 504 a and 504 b .
- Such problems may be confusing and distracting.
- the background may be blurred or completely replaced (e.g., with an alternate background image).
- processing of foreground and background portions of initial images may be performed separately to address the above referenced problems.
- separate object center lines may be identified for different objects, e.g., objects on the foreground and objects on the background.
- the cross-fading may be performed independently along these different object center lines.
- objects may move and may change their distances to cameras.
- separation between background objects and foreground objects may be performed dynamically.
- more than two (i.e., the background and foreground) depth zones may be identified for initial images and portions of images falling into each depth zone may be processed independently. While this approach creates additional computational complexity, it creates more enhanced combined images and may be particularly suitable for still images.
- techniques described herein can be used for both still and moving images (e.g., video conferencing on smart-phones or on personal computers or video conferencing terminals).
- FIG. 6 is a process flowchart of a method 600 for processing an image, in accordance with some embodiments.
- Method 600 may commence with capturing one or more images during operation 601 .
- multiple cameras are used to capture different images.
- image capturing devices having multiple cameras are described above.
- the same camera may be used to capture multiple images, for example, with different imaging angles.
- Multiple images from multiple cameras used in the same processing should be distinguished from multiple images processed sequentially as, for example, during processing of video images.
- an image capturing device may be physically separated from an image processing device. These devices may be connected using a network, a cable, or some other means. In some embodiments, the image capturing device and the image processing device may operate independently and may have no direct connection. For example, an image may be captured and stored for a period of time. At some later time, the image may be processed when it is so desired by a user. In a specific example, image processing functions may be provided as a part of a graphic software package.
- two images may be captured during operation 601 by different cameras or, more specifically, different optical lenses provided on the same device. These images may be referred to as stereo images.
- the two cameras are separated by between about 30 millimeters and 150 millimeters. As described above, this distance is the most suitable when the object is between 300 millimeters and 900 millimeters from the camera.
- One or more images captured during operation 601 may be captured using a camera having a relatively small aperture, which increases the depth of field. In other words, this camera may provide very little depth separation, and both background and foreground portions of the image may have similar sharpness.
- Method 600 may proceed with detecting at least the foreground portion in the one or more images during operation 602 .
- This detecting operation may be based on one or more of the following techniques: stereo disparity, motion parallax, local focus, color grouping, and face detection. These techniques will now be described in more detail.
- the motion parallax may be used for video images. It is a depth cue that results from relative motion between objects captured in the image and the capturing device.
- a parallax is a displacement or difference in the apparent position of an object viewed along two different lines of sight. It may be represented by the angle or semi-angle of inclination between those two lines. Nearby objects have a larger parallax than more distant objects when observed from different positions, which allows using the parallax values to determine distances and separate foreground and background portions of an image.
- the face detection technique determines the locations and sizes of human faces in arbitrary images. Face detection techniques are well known in the art, see, e.g., G. Bradski, A. Kaehler, "Learning OpenCV", September 2008, incorporated by reference herein. The Open Source Computer Vision Library (OpenCV) provides an open source library of programming functions mainly directed to real-time computer vision, covering various application areas including face recognition (including face detection) and stereopsis (including stereo disparity); such well-known programming functions and techniques will therefore not be described in full detail here. As a non-limiting example, a classifier may be used, following various approaches, to classify portions of an image as either face or non-face.
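- As one concrete, non-authoritative illustration, OpenCV's bundled Haar cascade classifier can supply both a face region and a center line estimate; the cascade file and parameters below are common defaults, not values taken from this application.

```python
import cv2

def detect_face_center_line(image_bgr):
    """Return (face_box, center_x) for the largest detected face, or (None, None)."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # keep the largest face
    return (x, y, w, h), x + w // 2                     # vertical center line at the face middle
```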
- the image processed during operation 602 has stereo disparity.
- Stereo disparity is the difference between corresponding points on left and right images and is well known in the art, see, e.g., M. Okutomi, T. Kanade, "A Multiple-Baseline Stereo", IEEE Transactions on Pattern Analysis and Machine Intelligence, April 1993, Vol. 15, No. 4, incorporated by reference herein, and will therefore not be described in full detail here.
- the OpenCV library provides programming functions directed to stereo disparity.
- the stereo disparity may be used during detecting operation 602 to determine proximity of each pixel or patch in the stereo images to the camera and therefore to identify at least the background portion of the image.
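- A rough sketch of that idea using OpenCV's block-matching stereo follows. The disparity threshold and matcher parameters are arbitrary example values, since the application does not specify them; for a rectified pair, disparity scales roughly as f·B/Z, so a nearby subject produces larger disparities than the background.

```python
import cv2
import numpy as np

def foreground_mask_from_disparity(left_bgr, right_bgr, min_disparity_px=40):
    """Label as foreground the pixels whose disparity exceeds a threshold.
    For a rectified pair, disparity d ~ f*B/Z, so near pixels (small Z,
    e.g. a subject at 300-900 mm) have larger d than the background."""
    left = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoBM_create(numDisparities=128, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels
    mask = (disparity > min_disparity_px).astype(np.uint8) * 255
    # Clean up speckle with a morphological open/close.
    kernel = np.ones((9, 9), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```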
- Operation 603 involves detecting the object in each initial image. This operation may involve one or more techniques described above that are used for detecting the foreground portion. Generally, the object is positioned on the foreground of the image. In the context of video conferences, the object may be a person and face recognition techniques may be used to detect the object.
- Operation 604 involves determining an object center line of the object in each initial image as described above with reference to FIGS. 4A and 4B . In some embodiments, other alignment and/or scaling techniques may be used during operation 604 . The method continues with cross-fading the two initial images along the object center line thereby yielding a combined image during operation 605 . A few aspects of this operation are described above with reference to FIG. 5 .
- the foreground portion may be separated from the background portion.
- the background may be processed separately from the foreground portion in operation 607 .
- Other image portion types, such as a face portion or an intermediate portion (i.e., a portion between the foreground and background portions), may be identified in some embodiments. The purpose of separating the original image into multiple portions is so that at least one of these portions can be processed independently from the other portions.
- the processing in operation 607 may involve one or more of the following techniques: defocussing (i.e., blurring), changing sharpness, changing colors, suppressing, and changing saturation.
- Blurring may be based on different techniques, such as a circular blur or a Gaussian blur. Blurring techniques are well known in the art, see e.g., G. Bradski, A. Kaehler, “ Learning Open CV ”, September 2008, incorporated by reference herein, wherein blurring is also called smoothing, and Potmesil, M.; Chakravarty, I. (1982), “Synthetic Image Generation with a Lens and Aperture Camera Model”, ACM Transactions on Graphics, 1, ACM, pp.
- an elliptical or box blur may be used.
- the Gaussian blur, which is sometimes referred to as Gaussian smoothing, uses a Gaussian function to blur the image.
- the Gaussian blur is known in the art, see e.g., “ Learning OpenCV ”, ibid.
- the image is processed such that sharpness is changed for the foreground or background portion of the image.
- Changing sharpness of the image may involve changing the edge contrast of the image.
- the sharpness changes may involve low-pass filtering and resampling.
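- For instance, one generic way to reduce sharpness by low-pass filtering and resampling (a standard technique, not one attributed specifically to this application) is to downscale with an area filter and then upscale back:

```python
import cv2

def soften_by_resampling(image, factor=0.25):
    """Lower sharpness by resampling: INTER_AREA downscaling acts as a
    low-pass filter, and upscaling restores the original dimensions."""
    h, w = image.shape[:2]
    small = cv2.resize(image, (max(1, int(w * factor)), max(1, int(h * factor))),
                       interpolation=cv2.INTER_AREA)
    return cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)
```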
- the image is processed such that the background portion of the image is blurred. This reduces distraction and focuses attention on the foreground.
- the foreground portion may remain unchanged.
- the foreground portion of the image may be sharpened.
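- Combining those two choices, the following is a hedged sketch of blurring the background while lightly sharpening the foreground, given a foreground mask such as the disparity-based mask sketched earlier; the Gaussian kernel sizes and unsharp-mask weights are arbitrary example values using standard OpenCV operations.

```python
import cv2
import numpy as np

def suppress_background(image, foreground_mask, blur_ksize=21):
    """Blur the background and lightly sharpen the foreground, then recombine."""
    blurred = cv2.GaussianBlur(image, (blur_ksize, blur_ksize), 0)
    # Unsharp mask: weight the original up and subtract a softened copy.
    soft = cv2.GaussianBlur(image, (0, 0), sigmaX=3)
    sharpened = cv2.addWeighted(image, 1.5, soft, -0.5, 0)
    # Feather the mask so the foreground/background transition is gradual.
    alpha = cv2.GaussianBlur(foreground_mask, (31, 31), 0).astype(np.float32) / 255.0
    alpha = alpha[..., np.newaxis]  # broadcast over color channels
    out = alpha * sharpened.astype(np.float32) + (1.0 - alpha) * blurred.astype(np.float32)
    return out.astype(np.uint8)
```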
- the processed image is displayed to a user as reflected by optional operation 608 .
- the user may choose to perform additional adjustments by, for example, changing the settings used during operation 606 . These settings may be used for future processing of other images.
- the processed image may be displayed on the device used to capture the original image (during operation 601) or some other device. For example, the processed image may be transmitted to another computer system as a part of teleconferencing.
- the image is a frame of a video (e.g., a real time video used in the context of video conferencing).
- Some or all of operations 602 - 608 may be repeated for each frame of the video as reflected by decision block 610 . In this case, the same settings may be used for most frames in the video.
- results of certain processes (e.g., face detection) may be reused for subsequent frames.
- FIG. 7A is a schematic representation of various modules of an image capturing and processing device 700 , in accordance with some embodiments.
- device 700 includes a first camera 702 , a processing module 706 , and a storage module 708 .
- Device 700 may also include an optional second camera 704 (and may have a third camera, not shown).
- One or both cameras 702 and 704 may be equipped with lenses having relatively small lens apertures that result in a large depth of field. As such, the background of the resulting image can be very distracting, competing for the viewer's attention.
- Various details of camera positions are described above with reference to FIGS. 3-5.
- processing module 706 is configured for detecting at least one of a foreground portion or a background portion of the stereo image.
- Processing module 706 may also be configured for detecting an object in each of the two initial images, determining an object center line of the object in each of the two initial images, aligning the two initial images along the object center line, and cross-fading the two initial images along the object center line yielding a combined image.
- the detecting operation separates the stereo image into at least the foreground portion and the background portion.
- Storage module 708 is configured for storing initial images as well as combined images, and one or more settings used for the detecting and processing operations.
- Storage module 708 may include a tangible computer memory, such as flash memory or other types of memory.
- FIG. 7B is a schematic process flow 710 utilizing a device with two cameras 712 and 714 , in accordance with some embodiments.
- Camera 712 may be a left camera, while camera 714 may be a right camera.
- Cameras 712 and 714 generate a stereo image from which stereo disparity may be determined (block 715 ). This stereo disparity may be used for detection of at least the foreground portion of the stereo image (block 716 ). Face detection may also be used along with stereo disparity for the detection.
- operation 718 involves aligning and crossfading the images captured by cameras 712 and 714 .
- This operation yields a combined image, which may be further processed by separating the foreground and background portions and processing the background portion separately from the foreground portion, e.g., detecting and suppressing the background portion and/or enhancing the detected foreground portion (block 719 ).
- the foreground and background portions may both be detected in block 716 , obviating the need to detect the foreground portion in block 719 .
- FIG. 7C is another schematic process flow 720 utilizing a device with two cameras 722 and 724 , in accordance with some embodiments.
- camera 722 may be a left camera
- camera 724 may be a right camera.
- images captured with cameras 722 and 724 may not be stereo images from which stereo disparity may be determined.
- Still, detection of at least the foreground portion of the images may be performed during operation 726.
- Various techniques that do not require stereo disparity may be used, such as motion parallax, local focus, color grouping, and face detection.
- Operation 728 involves aligning and crossfading the images captured by cameras 722 and 724 .
- This operation yields a combined image, which may be further processed by separating the foreground and background portions and processing the background portion separately from the foreground portion, e.g., detecting and suppressing the background portion and/or enhancing the detected foreground portion (block 729 ).
- the foreground and background portions may both be detected in operation 726 , obviating the need to detect the background in block 729 .
- FIG. 8 is a diagrammatic representation of an example machine in the form of a computer system 800 , within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed.
- the machine operates as a standalone device or may be connected (e.g., networked) to other machines.
- the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment.
- the machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a portable music player (e.g., a portable hard drive audio device such as a Moving Picture Experts Group Audio Layer 3 (MP3) player), a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.
- machine shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
- the example computer system 800 includes a processor or multiple processors 802 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), and a main memory 805 and static memory 814 , which communicate with each other via a bus 825 .
- the computer system 800 may further include a video display unit 806 (e.g., a liquid crystal display (LCD)).
- the computer system 800 may also include an alpha-numeric input device 812 (e.g., a keyboard), a cursor control device 816 (e.g., a mouse), a voice recognition or biometric verification unit, a drive unit 820 (also referred to as disk drive unit 820 herein), a signal generation device 826 (e.g., a speaker), and a network interface device 815 .
- the computer system 800 may further include a data encryption module (not shown) to encrypt data.
- the disk drive unit 820 includes a computer-readable medium 822 on which is stored one or more sets of instructions and data structures (e.g., instructions 810 ) embodying or utilizing any one or more of the methodologies or functions described herein.
- the instructions 810 may also reside, completely or at least partially, within the main memory 805 and/or within the processors 802 during execution thereof by the computer system 800 .
- the main memory 805 and the processors 802 may also constitute machine-readable media.
- the instructions 810 may further be transmitted or received over a network 824 via the network interface device 815 utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP)).
- While the computer-readable medium 822 is shown in an example embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database and/or associated caches and servers) that store the one or more sets of instructions.
- the term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions.
- computer-readable medium shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals. Such media may also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memory (RAM), read only memory (ROM), and the like.
- the example embodiments described herein may be implemented in an operating environment comprising software installed on a computer, in hardware, or in a combination of software and hardware.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Studio Devices (AREA)
- Image Processing (AREA)
Abstract
Provided are systems and methods for image enhancement based on combining multiple related images, such as images of the same object taken from different imaging angles. This approach allows simulating images captured from longer distances using telephoto lenses. Initial images may be captured using a simple camera equipped with shorter focal length lenses, typically used on camera phones, tablets, and laptops. The initial images may be taken using a single camera. An object or, more specifically, a center line of the object is identified in each image. The object is typically present in the foreground portion of the initial images. The initial images may be cross-faded along the object center line to yield a combined image. The foreground and background portions of each image may be separated and processed separately, such as by blurring the background portion and sharpening the foreground portion.
Description
- This application is a continuation of application Ser. No. 13/738,874, filed Jan. 10, 2013, which is a continuation-in-part of application Ser. No. 13/719,079, filed Dec. 18, 2012, which claims the benefit of U.S. Provisional Patent Application No. 61/583,144, filed Jan. 4, 2012, and U.S. Provisional Patent Application No. 61/590,656, filed Jan. 25, 2012; and application Ser. No. 13/738,874, filed Jan. 10, 2013 claims the benefit of U.S. Provisional Patent Application No. 61/590,656, filed Jan. 25, 2012; all applications are incorporated herein by reference in their entirety.
- This application relates generally to image enhancement and more specifically to computer-implemented systems and methods for image enhancement based on combining images from multiple cameras.
- Many modern electronic devices, such as smart phones and laptops, are equipped with cameras. However, the quality of photo and video images produced by these cameras is often less than desirable. One problem is that these electronic devices use relatively inexpensive cameras and lenses in comparison with, for example, professional cameras. Another problem is that the relatively small size of mobile devices (their thickness, in particular) requires the optical lenses to be small as well. Furthermore, mobile devices are often operated in close proximity to the object, e.g., between 300 mm and 900 mm, and are equipped with a short focal length lens. As such, the produced images often suffer from perspective distortion resulting from using short focal length cameras at a close distance to the subject.
- Provided are computer-implemented systems and methods for image enhancements based on combining multiple related images, such as images of the same object taken from different angles and/or distances. According to various embodiments, this approach allows multiple images from a camera to be combined to simulate a single image from a more distant camera. Initial images may be captured using a simple camera, such as those having short focal length lenses typically used in camera phones, tablets, and laptops. An object or, more specifically, a center line of the object is identified in each image. The object is typically present on the foreground of the initial images. As such, detecting the foreground portion of each image may be performed before the center line identification. The initial images may be aligned and cross-faded. The foreground portion may be separated from the background portion. The background portion may be blurred or, more generally, processed separately from the foreground portions. The above-described steps in the process need not all be done in the order specified, but may be done in a different order for convenience or efficiency depending on the particular application and its specific requirements.
- In some embodiments, a method of combining multiple related images to enhance image quality involves receiving at least two initial images captured using a single camera provided on one device; each of the at least two initial images comprising an object representation of an object, the object representation provided on a foreground portion of each of the at least two initial images; each of the at least two initial images corresponding to a different imaging angle relative to the object; detecting the object in each of the at least two initial images; determining an object center line of the object in each of the at least two initial images; and cross-fading the at least two initial images along the object center line, wherein the cross-fading yields a combined image.
- FIG. 1 illustrates a schematic top view of an object and different image capturing devices positioned at different distances and angles relative to the object, in accordance with some embodiments.
- FIG. 2A illustrates an image of an object captured from far away using a long focal length lens, in accordance with some embodiments.
- FIG. 2B illustrates an image of the same object (as in FIG. 2A) captured at a short distance away from the object using a short focal length lens, in accordance with some embodiments.
- FIG. 3 illustrates a top view of a device equipped with two cameras and an equivalent single camera device showing relative positions of the devices to an object, in accordance with some embodiments.
- FIGS. 4A and 4B illustrate two initial images prior to combining these images, in accordance with some embodiments.
- FIG. 5 illustrates a combined image resulting from cross-fading of the two initial images shown in FIGS. 4A and 4B, in accordance with some embodiments.
- FIG. 6 is a process flowchart of a method for processing an image, in accordance with some embodiments.
- FIG. 7A is a schematic representation of various modules of an image capturing and processing device, in accordance with some embodiments.
- FIG. 7B is a schematic process flow utilizing stereo disparity of two images, in accordance with some embodiments.
- FIG. 7C is a schematic process flow that does not utilize stereo disparity, in accordance with some embodiments.
- FIG. 8 is a diagrammatic representation of an example machine in the form of a computer system, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed.
- In the following description, numerous specific details are set forth in order to provide a thorough understanding of the presented concepts. The presented concepts may be practiced without some or all of these specific details. In other instances, well known process operations have not been described in detail so as to not unnecessarily obscure the described concepts. While some concepts will be described in conjunction with the specific embodiments, it will be understood that these embodiments are not intended to be limiting.
- Many modern devices are equipped with cameras, which provide additional functionality to these devices. At the same time, the devices are getting progressively smaller to make their use more convenient. Examples include camera phones, tablet computers, laptop computers, digital cameras, and other like devices. A camera phone example will now be briefly described to provide some context to this disclosure. A camera phone is a mobile phone which is able to capture images, such as still photographs and/or video. Currently, the majority of mobile phones in use are camera phones. The camera phones generally have lenses and sensors that are simpler than those of dedicated digital cameras, in particular high-end digital cameras such as DSLR cameras. The camera phones are typically equipped with shorter focal length and fixed focus lenses and smaller sensors, which limit their performance.
- Cost and size constraints limit optical features that can be implemented on the above referenced devices. Specifically, the thin form factors of many devices make it very difficult to use long lenses (with wide apertures for capturing high-quality limited-depth-of-field effects (i.e. sharp subject, blurry background)). For this reason, close-up pictures shot with camera phones are usually taken too close to the subject, leading to strong perspective distortion.
- Provided are computer-implemented systems and methods combining multiple low quality images into one higher quality image thereby producing image enhancement. This approach allows simulating images captured from longer distances by combining multiple images captured from short distances.
FIG. 1 shows the difference in viewing angles for a close camera and a far camera, illustrating a schematic top view of an object 102 and different image capturing devices 110 and 114 positioned at different distances relative to the object 102, in accordance with some embodiments. For clarity, a few features of object 102 are identified, such as a right ear 104 a, a left ear 104 b, and a nose 106. Despite the fact that device 114 is shifted to the left from the object 102, it is still able to capture both ears 104 a and 104 b while not being turned too much with respect to the nose. As such, device 114 (which needs to be equipped with a longer focal length lens, e.g., a telephoto lens, relative to device 110) will take a high-quality and undistorted image of object 102. However, when a short focal length camera/device 110, which is similarly shifted to the left from the object 102, attempts to take a similar image, it will only be able to capture the left ear 104 b. Furthermore, nose 106 is captured at a sharp angle, which may result in distortions of its proportions relative to other parts. - Actual results of using long and short focal length lenses are presented in
FIGS. 2A and 2B , respectively. Specifically,FIG. 2A illustrates an image of an object captured from far away using a long focal length (telephoto) lens (similar todevice 114 inFIG. 1 ), in accordance with some embodiments, whileFIG. 2B illustrates an image of the same object captured at a short distance away from the object using a short focal length (wide angle) lens (similar todevice 110 inFIG. 1 ), in accordance with different embodiments. - It is common to take pictures of subjects from short distances, for example, on the order of two feet away or less. This may occur, for example, when using a camera mounted on the bezel of a laptop computer screen during a video-conference, when taking a hand-held picture of oneself using a cell-phone camera, and similar photography with a portable device. When the lens-to-subject distance is short, there may be an unflattering perspective distortion of the subject (e.g., usually the face of the subject) which has the appearance of, for example, making the nose look large, ears recede behind the head, and face and neck to look unnaturally thin.
- Some embodiments may include cameras that may be operated at short camera-to-subject distances, with short lenses, and may produce images that look as though the camera were further away with a long lens, thus minimizing such perspective distortion effect and creating a flattering image of the subject. Initial images may be captured using simple cameras, such as short focal length cameras and cameras with short lenses, typically used on camera phones, tablets, and laptops. The initial images may be taken using two different cameras positioned at a certain distance from each other. An object or, more specifically, a center line of the object is identified in each image. The object is typically present on the foreground of the initial images. As such, detecting the foreground portion of each image may be performed before the center line identification. The initial images may be aligned and cross-faded. The foreground portion may be separated from the background portion. The background portion may be blurred or, more generally, processed separately from the foreground portions. The steps in the above-described process need not all be done in the order specified, but may be done in a different order for convenience or efficiency depending on the particular application and its specific requirements.
-
FIG. 3 illustrates a top view 300 of a device 310 equipped with two cameras 312 a and 312 b and an equivalent single camera device 314, showing relative positions of devices 310 and 314 to an object (head) 302, in accordance with some embodiments. Cameras 312 a and 312 b, taken together, can see both sides of object 302, similar to the nearly-equivalent distant camera device 314, whereas each of cameras 312 a and 312 b in isolation may not be able to see both sides of the head 302. Specifically, left camera 312 a may have a better view of left ear 304 a and insufficient view of right ear 304 b, while right camera 312 b may have a better view of right ear 304 b and insufficient view of left ear 304 a. When two images taken by both left camera 312 a and right camera 312 b are combined, the combined image includes adequate representations of the right and left ears 304 a and 304 b.
- It should be noted that two camera systems that may be used for capturing initial images are different from stereo 3D camera, which present both images to the eyes of the viewer and create a full 3D experience for the viewer. Instead, only one combined image is provided in the described methods and systems and initially captured stereo images are not shown to the viewer. The initial images are combined so as to create the appearance of a single higher-quality image shot from further away.
- Some applications of these methods may include, for example, a video-conferencing system running on a laptop or desktop computer, stand-alone video-conferencing system, video-conferencing system on a mobile device such as a smart-phone, front-facing camera for taking pictures of oneself on a smart-phone/mobile device, a standalone still camera, stand-alone video camera, any camera where an undistorted image is needed but it is impossible or impractical to move the camera back far enough from the subject, and the like.
- In some embodiments two or more cameras may be used. For example, with three cameras (e.g., left, center, and right) the composite image may be composed of the left portion of the left image, center portion of the center image, and right portion of right image, resulting in reduced perspective distortion compared to the image obtained from a single distant camera.
-
FIGS. 4A and 4B illustrate an example of two initial images 400 and 410 that are combined to enhance the quality of the resulting image, in accordance with some embodiments. For simplicity, initial image 400 will be referred to as a left image, while initial image 410 will be referred to as a right image. The left and right images may be obtained using two cameras or lenses provided on the same device (e.g., devices described above with reference to FIG. 3) and captured at substantially the same time, such that the object maintains the same orientation (i.e., does not move) in both images. In some embodiments, the same camera or lens may be used to capture the left and right images by moving the object or the camera with respect to each other.
left image 400 includesobject representation 402, whileright image 410 includesobject representation 412. There are slight differences in these object representations. For example, objectrepresentation 402 has a more visible left ear, while the right ear is barely visible. It should be noted that all special orientations are referred to the images; the actual object orientations may be different. On the other hand, objectrepresentation 412 has a more visible right ear, while the left ear is only slightly visible. Furthermore, objectrepresentation 402 shows the actual object (person) being turned (e.g., looking) slightly to the right, while object representation shows the actual object looking straight and may be turned slightly to the left. When two initial images are used, the difference of object representations is called stereo disparity. - Differences in the representations of the objects of two or more initial images may be used in order to enhance these object representations and yield a combined imaged with the enhanced representation. However, too much difference due to the spacing of the cameras may cause problems with alignment and cross-fading, resulting in lower quality representations than even in the initial images. For example, too much difference in imaging angles may cause such problems. In some embodiments, the cameras are positioned at a distance of between about between about 30 millimeters and 150 millimeters from each other.
- The difference between
402 and 412 caused by different imaging angles with respect to the object is described above with reference toobject representations FIG. 3 . It should be noted that when representations of multiple objects are present in two or more initial images, the representations may vary depending on proximity of the object to the camera. For example, the main object may be present on a foreground, while some additional objects may be present on a background. The 400 and 410 includeimages 402 and 412 that appear on the foreground and, for example,object representations 404 and 414 that appear on the background. While both sets of representations are of the same two actual objects (i.e., the person and the window edge) that maintained the same relative positions while capturing these images, the positions of their representations are different. For example,window edge representations window edge representations 404 is positioned around the left portion of the head inleft image 400, whilewindow edge representations 414 is positioned around the right portion of the head inright image 410. In other words, relative positions of object representations depend on their distances from the image capturing lenses. To address this discrepancy, the initial images may be decomposed into foreground portions and background portions and each type may be processed independently from each other as further described below. - The process may involve determining an object center line in each of the initial image. The object center line may represent a center of the object representation or correspond to some other features of the object representation (e.g., a nose, separation between eyes). Object center lines generally do not correspond to centers of initial images and portions of the initial images divided by the center lines may be different. For example,
object center line 406 divides image 400 into left portion 408 and right portion 409. In a similar manner, object center line 416 divides image 410 into left portion 418 and right portion 419. Both center lines 406 and 416 extend vertically through the centers of the noses of object representations 402 and 412, respectively.
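By way of a non-limiting illustration only, the following Python/OpenCV sketch shows one possible way to estimate such a vertical object center line from a detected face; the helper name, the Haar cascade choice, and the fallback behavior are assumptions of this sketch rather than part of the disclosure.

```python
# Illustrative sketch only: estimate a vertical object center line from the
# largest detected face. Cascade choice and fallback are assumptions.
import cv2

def estimate_center_line_x(image_bgr):
    """Return the x coordinate of a vertical line through the main face center."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return image_bgr.shape[1] // 2   # fall back to the image center
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])  # largest face = main object
    return x + w // 2
```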
FIG. 5 illustrates a combined image 500 generated from initial images 400 and 410 illustrated in FIGS. 4A and 4B, in accordance with some embodiments. Specifically, object center line 506 generally corresponds to center lines 406 and 416 of initial images 400 and 410. Left portion 508 of combined image 500 represents a modified version of left portion 408 of left image 400, while right portion 509 represents a modified version of right portion 419 of right image 410. These modifications may come from cross-fading to provide a more uniform combined image and a smoother transition between the two portions 508 and 509. For example, left portion 408 of left image 400 may be cross-faded with left portion 418 of right image 410 to form left portion 508 of the combined image. Only a part of left portion 418, in particular the part extending along center line 416, may be used for cross-fading. In a similar manner, right portion 419 of right image 410 may be cross-faded with right portion 409 of left image 400 or, more specifically, with a part of right portion 409 extending along center line 406, to form right portion 509.
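As an illustration of the cross-fading idea (not the patented implementation), the sketch below blends two already-aligned, equally sized images column by column: the left image dominates to the left of the center line, the right image dominates to the right of it, and the two are mixed linearly inside a narrow band straddling the line. The 40-pixel band width is an arbitrary assumption.

```python
import numpy as np

def cross_fade(left_img, right_img, center_x, band=40):
    """Blend left_img (used left of center_x) with right_img (used right of it)
    over a `band`-pixel-wide transition straddling the center line.
    Assumes both images are aligned, the same size, and 3-channel."""
    h, w = left_img.shape[:2]
    x = np.arange(w, dtype=np.float32)
    # Per-column weight for the left image: 1 left of the band, 0 right of it.
    alpha = np.clip((center_x + band / 2.0 - x) / band, 0.0, 1.0).reshape(1, w, 1)
    blended = alpha * left_img + (1.0 - alpha) * right_img
    return blended.astype(left_img.dtype)
```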
The quality of combined image 500 depends on how well center lines 406 and 416 are identified and how well the cross-fading is performed. Object representation 502 in combined image 500 includes a clear view of both ears, which is missing from each of initial images 400 and 410. The object in object representation 502 appears to be looking straight ahead rather than to the left or right, as it appears in initial images 400 and 410. However, representations of background objects in combined image 500 may not be as successful. For example, window edge representations 404 and 414 of the same actual window edge appear as two different representations 504a and 504b. Such problems may be confusing and distracting. To address these problems, the background may be blurred or completely replaced (e.g., with an alternate background image). Furthermore, processing of the foreground and background portions of the initial images may be performed separately to address the above referenced problems. For example, separate object center lines may be identified for different objects, e.g., objects in the foreground and objects in the background. The cross-fading may be performed independently along these different object center lines. It should be noted that when processing video, objects may move and may change their distances to the cameras. As such, the separation between background objects and foreground objects may be performed dynamically. Furthermore, more than two (i.e., the background and foreground) depth zones may be identified for the initial images, and the portions of the images falling into each depth zone may be processed independently. While this approach adds computational complexity, it produces more enhanced combined images and may be particularly suitable for still images. It should be noted that the techniques described herein can be used for both still and moving images (e.g., video conferencing on smart-phones, personal computers, or video conferencing terminals).
FIG. 6 is a process flowchart of a method 600 for processing an image, in accordance with some embodiments. Method 600 may commence with capturing one or more images during operation 601. In some embodiments, multiple cameras are used to capture different images. Various examples of image capturing devices having multiple cameras are described above. In other embodiments, the same camera may be used to capture multiple images, for example, with different imaging angles. Multiple images from multiple cameras used in the same processing should be distinguished from multiple images processed sequentially, as, for example, during processing of video images. - It should be noted that an image capturing device may be physically separated from an image processing device. These devices may be connected using a network, a cable, or some other means. In some embodiments, the image capturing device and the image processing device may operate independently and may have no direct connection. For example, an image may be captured and stored for a period of time. At some later time, the image may be processed when it is so desired by a user. In a specific example, image processing functions may be provided as a part of a graphics software package.
- In some embodiments, two images may be captured during
operation 601 by different cameras or, more specifically, different optical lenses provided on the same device. These images may be referred to as stereo images. In some embodiments, the two cameras are separated by between about 30 millimeters and 150 millimeters. As described above, this distance is the most suitable when the object is between 300 millimeters and 900 millimeters from the camera. One or more images captured during operation 601 may be captured using a camera having a relatively small aperture, which increases the depth of field. In other words, this camera may provide very little depth separation, and both the background and foreground portions of the image may have similar sharpness.
Method 600 may proceed with detecting at least the foreground portion in the one or more images during operation 602. This detecting operation may be based on one or more of the following techniques: stereo disparity, motion parallax, local focus, color grouping, and face detection. These techniques will now be described in more detail. - The motion parallax may be used for video images. It is a depth cue that results from relative motion between the objects captured in the image and the capturing device. In general, a parallax is a displacement or difference in the apparent position of an object viewed along two different lines of sight. It may be represented by the angle or semi-angle of inclination between those two lines. Nearby objects have a larger parallax than more distant objects when observed from different positions, which allows using the parallax values to determine distances and to separate the foreground and background portions of an image.
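For illustration only, a motion-parallax cue of this kind might be approximated with dense optical flow, as in the sketch below; the Farneback parameters and the percentile threshold are arbitrary assumptions, and the heuristic presumes the camera (or scene) is actually moving so that nearer objects show larger apparent motion.

```python
import cv2
import numpy as np

def parallax_foreground_mask(prev_gray, curr_gray, percentile=75):
    """Rough foreground mask from motion parallax: pixels with the largest
    apparent motion between two frames are treated as closest to the camera."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)      # per-pixel motion magnitude
    threshold = np.percentile(magnitude, percentile)
    return (magnitude > threshold).astype(np.uint8) * 255
```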
- The face detection technique determines the locations and sizes of human faces in arbitrary images. Face detection techniques are well known in the art, see e.g., G. Bradski, A. Kaehler, “Learning OpenCV”, September 2008, incorporated by reference herein. The Open Source Computer Vision Library (OpenCV) provides an open source library of programming functions mainly directed to real-time computer vision and covers various application areas, including face recognition (including face detection) and stereopsis (including stereo disparity), and therefore such well known programming functions and techniques will not be described in detail here. In a non-limiting example, a classifier may be used, according to various approaches, to classify portions of an image as either face or non-face.
- In some embodiments, the image processed during
operation 602 has stereo disparity. Stereo disparity is the difference between corresponding points in the left and right images and is well known in the art, see e.g., M. Okutomi, T. Kanade, “A Multiple-Baseline Stereo”, IEEE Transactions on Pattern Analysis and Machine Intelligence, April 1993, Vol. 15, No. 4, incorporated by reference herein, and will therefore not be described in detail here. As described above, the OpenCV library provides programming functions directed to stereo disparity.
The stereo disparity may be used during detecting operation 602 to determine the proximity of each pixel or patch in the stereo images to the camera and therefore to identify at least the background portion of the image.
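As a non-limiting sketch of how such a disparity-based separation might look with OpenCV's block matcher, consider the following; the disparity threshold, block size, and morphological clean-up are assumptions of this example, not values taken from the disclosure.

```python
import cv2
import numpy as np

def disparity_foreground_mask(left_gray, right_gray, min_disparity_px=20):
    """Foreground mask from stereo disparity: larger disparity means a point
    is closer to the cameras. Expects rectified 8-bit grayscale images."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    # StereoBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    mask = (disparity > min_disparity_px).astype(np.uint8) * 255
    kernel = np.ones((5, 5), np.uint8)                    # remove speckle noise
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```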
Operation 603 involves detecting the object in each initial image. This operation may involve one or more of the techniques described above for detecting the foreground portion. Generally, the object is positioned in the foreground of the image. In the context of video conferencing, the object may be a person, and face recognition techniques may be used to detect the object.
Operation 604 involves determining an object center line of the object in each initial image, as described above with reference to FIGS. 4A and 4B. In some embodiments, other alignment and/or scaling techniques may be used during operation 604. The method continues with cross-fading the two initial images along the object center line, thereby yielding a combined image, during operation 605. A few aspects of this operation are described above with reference to FIG. 5.
In operation 606, the foreground portion may be separated from the background portion. In various embodiments, the background may be processed separately from the foreground portion in operation 607. In some embodiments, other image portion types may be identified, such as a face portion or an intermediate portion (i.e., a portion between the foreground and background portions). The purpose of separating the original image into multiple portions is so that at least one of these portions can be processed independently of the other portions.
The processing in operation 607 may involve one or more of the following techniques: defocusing (i.e., blurring), changing sharpness, changing colors, suppressing, and changing saturation. Blurring may be based on different techniques, such as a circular blur or a Gaussian blur. Blurring techniques are well known in the art, see e.g., G. Bradski, A. Kaehler, “Learning OpenCV”, September 2008, incorporated by reference herein, wherein blurring is also called smoothing, and Potmesil, M.; Chakravarty, I. (1982), “Synthetic Image Generation with a Lens and Aperture Camera Model”, ACM Transactions on Graphics, 1, ACM, pp. 85-108, incorporated by reference herein, which also describes various blur generation techniques. In some embodiments, an elliptical or box blur may be used. The Gaussian blur, which is sometimes referred to as Gaussian smoothing, uses a Gaussian function to blur the image. The Gaussian blur is known in the art, see e.g., “Learning OpenCV”, ibid.
- In some embodiments, the image is processed such that sharpness is changed for the foreground or background portion of the image. Changing the sharpness of the image may involve changing the edge contrast of the image. The sharpness changes may involve low-pass filtering and resampling.
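Purely as an illustration of this kind of background processing, the sketch below Gaussian-blurs the whole frame and then restores the sharp foreground through a soft-edged mask such as the ones produced by the detection sketches above; the kernel size and the soft-edge treatment are assumptions of the example.

```python
import cv2
import numpy as np

def blur_background(image_bgr, foreground_mask, ksize=21):
    """Blur everything, then restore the sharp foreground using the mask
    (255 = foreground). Softening the mask edge hides the cut-out boundary."""
    blurred = cv2.GaussianBlur(image_bgr, (ksize, ksize), 0)
    soft = cv2.GaussianBlur(foreground_mask, (ksize, ksize), 0).astype(np.float32) / 255.0
    soft = soft[..., None]                                # broadcast over channels
    out = soft * image_bgr + (1.0 - soft) * blurred
    return out.astype(np.uint8)
```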
- In some embodiments, the image is processed such that the background portion of the image is blurred. This reduces distraction and focuses attention on the foreground. The foreground portion may remain unchanged. Alternatively, the foreground portion of the image may be sharpened.
- In some embodiments, the processed image is displayed to a user as reflected by
optional operation 608. The user may choose to perform additional adjustments by, for example, changing the settings used during operation 606. These settings may be used for future processing of other images. The processed image may be displayed on the device used to capture the original image (during operation 601) or on some other device. For example, the processed image may be transmitted to another computer system as a part of teleconferencing. - In some embodiments, the image is a frame of a video (e.g., a real time video used in the context of video conferencing). Some or all of operations 602-608 may be repeated for each frame of the video, as reflected by
decision block 610. In this case, the same settings may be used for most frames in the video. Furthermore, results of certain processes (e.g., face detection) may be adapted for other frames.
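By way of illustration, repeating the processing for each frame might look like the loop below, which reuses the same settings across frames; the file paths, the window name, and the process_stereo_pair helper (sketched after the FIG. 7B description below) are assumptions of this example.

```python
import cv2

def process_video(path_left, path_right):
    """Apply the same per-frame processing to a stereo video pair, reusing the
    same settings for every frame (in the spirit of decision block 610)."""
    cap_l, cap_r = cv2.VideoCapture(path_left), cv2.VideoCapture(path_right)
    while True:
        ok_l, left = cap_l.read()
        ok_r, right = cap_r.read()
        if not (ok_l and ok_r):
            break
        combined = process_stereo_pair(left, right)   # helper sketched below
        cv2.imshow("combined", combined)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap_l.release()
    cap_r.release()
    cv2.destroyAllWindows()
```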
FIG. 7A is a schematic representation of various modules of an image capturing and processing device 700, in accordance with some embodiments. Specifically, device 700 includes a first camera 702, a processing module 706, and a storage module 708. Device 700 may also include an optional second camera 704 (and may have a third camera, not shown). One or both of cameras 702 and 704 may be equipped with lenses having relatively small apertures that result in a large depth of field. As such, the background of the resulting image can be very distracting, competing for the viewer's attention. Various details of camera positions are described above with reference to FIGS. 3-5.
In various embodiments, processing module 706 is configured for detecting at least one of a foreground portion or a background portion of the stereo image. Processing module 706 may also be configured for detecting an object in each of the two initial images, determining an object center line of the object in each of the two initial images, aligning the two initial images along the object center line, and cross-fading the two initial images along the object center line, yielding a combined image. As noted above, the detecting operation separates the stereo image into at least the foreground portion and the background portion.
Storage module 708 is configured for storing the initial images as well as the combined images, and one or more settings used for the detecting and processing operations. Storage module 708 may include a tangible computer memory, such as flash memory or other types of memory.
FIG. 7B is a schematic process flow 710 utilizing a device with two cameras 712 and 714, in accordance with some embodiments. Camera 712 may be a left camera, while camera 714 may be a right camera. Cameras 712 and 714 generate a stereo image from which stereo disparity may be determined (block 715). This stereo disparity may be used for detection of at least the foreground portion of the stereo image (block 716). Face detection may also be used along with stereo disparity for the detection. Specifically, operation 718 involves aligning and cross-fading the images captured by cameras 712 and 714. This operation yields a combined image, which may be further processed by separating the foreground and background portions and processing the background portion separately from the foreground portion, e.g., detecting and suppressing the background portion and/or enhancing the detected foreground portion (block 719). In some embodiments, the foreground and background portions may both be detected in block 716, obviating the need to detect the foreground portion in block 719.
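Tying the earlier sketches together, one hypothetical reading of this flow is shown below; it is only an illustration built from the assumed helper functions sketched above (disparity_foreground_mask, estimate_center_line_x, cross_fade, blur_background) and is not the actual implementation of the device.

```python
import cv2

def process_stereo_pair(left_bgr, right_bgr):
    """Illustrative pipeline in the spirit of FIG. 7B, assembled from the
    helper sketches above (assumptions, not the patented implementation)."""
    left_gray = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    mask = disparity_foreground_mask(left_gray, right_gray)   # detection (block 716)
    center_x = estimate_center_line_x(left_bgr)               # object center line
    combined = cross_fade(left_bgr, right_bgr, center_x)      # align/cross-fade (block 718)
    return blur_background(combined, mask)                    # background suppression (block 719)
```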
FIG. 7C is another schematic process flow 720 utilizing a device with two cameras 722 and 724, in accordance with some embodiments. Likewise, camera 722 may be a left camera, while camera 724 may be a right camera. However, images captured with cameras 722 and 724 may not be stereo images from which stereo disparity may be determined. Still, detection of at least the foreground portion of the images may be performed during operation 726. Various techniques that do not require stereo disparity may be used, such as motion parallax, local focus, color grouping, and face detection. Operation 728 involves aligning and cross-fading the images captured by cameras 722 and 724. This operation yields a combined image, which may be further processed by separating the foreground and background portions and processing the background portion separately from the foreground portion, e.g., detecting and suppressing the background portion and/or enhancing the detected foreground portion (block 729). In some embodiments, the foreground and background portions may both be detected in operation 726, obviating the need to detect the background in block 729.
FIG. 8 is a diagrammatic representation of an example machine in the form of a computer system 800, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed. In various example embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a portable music player (e.g., a portable hard drive audio device such as a Moving Picture Experts Group Audio Layer 3 (MP3) player), a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The example computer system 800 includes a processor or multiple processors 802 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), and a main memory 805 and static memory 814, which communicate with each other via a bus 825. The computer system 800 may further include a video display unit 806 (e.g., a liquid crystal display (LCD)). The computer system 800 may also include an alpha-numeric input device 812 (e.g., a keyboard), a cursor control device 816 (e.g., a mouse), a voice recognition or biometric verification unit, a drive unit 820 (also referred to as disk drive unit 820 herein), a signal generation device 826 (e.g., a speaker), and a network interface device 815. The computer system 800 may further include a data encryption module (not shown) to encrypt data.
The disk drive unit 820 includes a computer-readable medium 822 on which is stored one or more sets of instructions and data structures (e.g., instructions 810) embodying or utilizing any one or more of the methodologies or functions described herein. The instructions 810 may also reside, completely or at least partially, within the main memory 805 and/or within the processors 802 during execution thereof by the computer system 800. The main memory 805 and the processors 802 may also constitute machine-readable media.
The instructions 810 may further be transmitted or received over a network 824 via the network interface device 815 utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP)). - While the computer-
readable medium 822 is shown in an example embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. The term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals. Such media may also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memory (RAM), read only memory (ROM), and the like. - The example embodiments described herein may be implemented in an operating environment comprising software installed on a computer, in hardware, or in a combination of software and hardware.
- Although embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the system and method described herein. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
Claims (20)
1. A method of combining multiple related images to enhance image quality, the method comprising:
receiving at least two initial images captured using a single camera provided on one device;
each of the at least two initial images comprising an object representation of an object,
the object representation provided on a foreground portion of each of the at least two initial images; and
each of the at least two initial images corresponding to a different imaging angle relative to the object;
detecting the object in each of the at least two initial images;
determining an object center line of the object in each of the at least two initial images; and
cross-fading the at least two initial images along the object center line,
wherein the cross-fading yields a combined image.
2. The method of claim 1 , wherein the object comprises at least a face of a user.
3. The method of claim 2 , wherein, for each of the at least two initial images, the single camera is positioned at at least one of a different distance and a different angle relative to the face of the user.
4. The method of claim 3 , wherein the device is one of a camera phone or a tablet computer system.
5. The method of claim 1 , wherein the at least two initial images are stereo images, and the detecting comprises analyzing the stereo disparity of the stereo images.
6. The method of claim 1 , wherein the detecting further comprises face detection.
7. The method of claim 1 , wherein the detecting the object comprises one or more techniques selected from the group consisting of motion parallax, local focus, color grouping, and face detection.
8. The method of claim 1 , wherein the detecting the object comprises face detection.
9. The method of claim 1 , wherein the combined image comprises a combined foreground portion and a combined background portion, the combined foreground portion comprises a combined object created by cross-fading the objects of the at least two initial images.
10. The method of claim 9 , further comprising changing one or more properties of the combined foreground portion, the one or more properties are selected from the group consisting of changing sharpness, changing color, suppressing, and changing saturation.
11. The method of claim 10 , further comprising changing one or more properties of the combined background portion, the one or more properties are selected from the group consisting of changing sharpness, changing color, suppressing, and changing saturation.
12. The method of claim 11 , wherein the combined background portion is blurred using one or more blurring techniques including at least one of circular blurring and Gaussian blurring.
13. The method of claim 12 , wherein the combined background portion is blurred adaptively.
14. The method of claim 9 , wherein the combined background portion is replaced with a new background image.
15. The method of claim 1 , further comprising:
determining the foreground portion of each of the at least two initial images;
separating the foreground portion from a background portion of each of the at least two initial images;
wherein the cross-fading the at least two initial images comprises:
cross-fading the foreground portions of the at least two initial images; and
independently, cross-fading the background portions of the at least two initial images.
16. The method of claim 15 , wherein the cross-fading the background portions of the two initial images comprises shifting at least some of the background portions in a direction towards the object center line.
17. The method of claim 1 , further comprising repeating the receiving, determining, aligning, and cross-fading operations at least once.
18. The method of claim 1 , wherein the at least two initial images represent at least two different frames of a video.
19. A system for combining multiple related images to enhance image quality, the system comprising:
at least one processor; and
a memory communicatively coupled with the at least one processor, the memory storing instructions, which when executed by the at least one processor perform a method comprising:
receiving at least two initial images captured using a single camera provided on one device;
each of the at least two initial images comprising an object representation of an object,
the object representation provided on a foreground portion of each of the at least two initial images; and
each of the at least two initial images corresponding to a different imaging angle relative to the object;
detecting the object in each of the at least two initial images;
determining an object center line of the object in each of the at least two initial images; and
cross-fading the at least two initial images along the object center line, wherein the cross-fading yields a combined image.
20. A non-transitory computer-readable storage medium having embodied thereon instructions, which when executed by at least one processor, perform steps of a method, the method comprising:
receiving at least two initial images captured using a single camera provided on one device;
each of the at least two initial images comprising an object representation of an object,
the object representation provided on a foreground portion of each of the at least two initial images; and
each of the at least two initial images corresponding to a different imaging angle relative to the object;
detecting the object in each of the at least two initial images;
determining an object center line of the object in each of the at least two initial images; and
cross-fading the at least two initial images along the object center line, wherein the cross-fading yields a combined image.
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US14/860,481 US20160065862A1 (en) | 2012-01-04 | 2015-09-21 | Image Enhancement Based on Combining Images from a Single Camera |
Applications Claiming Priority (5)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| US201261583144P | 2012-01-04 | 2012-01-04 | |
| US201261590656P | 2012-01-25 | 2012-01-25 | |
| US13/719,079 US20130169760A1 (en) | 2012-01-04 | 2012-12-18 | Image Enhancement Methods And Systems |
| US13/738,874 US9142010B2 (en) | 2012-01-04 | 2013-01-10 | Image enhancement based on combining images from multiple cameras |
| US14/860,481 US20160065862A1 (en) | 2012-01-04 | 2015-09-21 | Image Enhancement Based on Combining Images from a Single Camera |
Related Parent Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/738,874 Continuation US9142010B2 (en) | 2012-01-04 | 2013-01-10 | Image enhancement based on combining images from multiple cameras |
Publications (1)
| Publication Number | Publication Date |
|---|---|
| US20160065862A1 true US20160065862A1 (en) | 2016-03-03 |
Family
ID=48694539
Family Applications (2)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/738,874 Expired - Fee Related US9142010B2 (en) | 2012-01-04 | 2013-01-10 | Image enhancement based on combining images from multiple cameras |
| US14/860,481 Abandoned US20160065862A1 (en) | 2012-01-04 | 2015-09-21 | Image Enhancement Based on Combining Images from a Single Camera |
Family Applications Before (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US13/738,874 Expired - Fee Related US9142010B2 (en) | 2012-01-04 | 2013-01-10 | Image enhancement based on combining images from multiple cameras |
Country Status (1)
| Country | Link |
|---|---|
| US (2) | US9142010B2 (en) |
Families Citing this family (18)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9743016B2 (en) * | 2012-12-10 | 2017-08-22 | Intel Corporation | Techniques for improved focusing of camera arrays |
| US10119809B2 (en) * | 2015-02-16 | 2018-11-06 | Intel Corporation | Simulating multi-camera imaging systems |
| CN105100615B (en) * | 2015-07-24 | 2019-02-26 | 青岛海信移动通信技术股份有限公司 | Image preview method, device and terminal |
| KR101751039B1 (en) * | 2016-02-17 | 2017-06-26 | 네이버 주식회사 | Device and method for displaying image, and computer program for executing the method |
| CN107230187B (en) | 2016-03-25 | 2022-05-24 | 北京三星通信技术研究有限公司 | Method and device for processing multimedia information |
| KR102672599B1 (en) | 2016-12-30 | 2024-06-07 | 삼성전자주식회사 | Method and electronic device for auto focus |
| US10438322B2 (en) | 2017-05-26 | 2019-10-08 | Microsoft Technology Licensing, Llc | Image resolution enhancement |
| US10721419B2 (en) * | 2017-11-30 | 2020-07-21 | International Business Machines Corporation | Ortho-selfie distortion correction using multiple image sensors to synthesize a virtual image |
| CN107948519B (en) * | 2017-11-30 | 2020-03-27 | Oppo广东移动通信有限公司 | Image processing method, device and equipment |
| CN108182412A (en) * | 2017-12-29 | 2018-06-19 | 百度在线网络技术(北京)有限公司 | For the method and device of detection image type |
| US10691968B2 (en) | 2018-02-08 | 2020-06-23 | Genetec Inc. | Systems and methods for locating a retroreflective object in a digital image |
| CN108377342B (en) * | 2018-05-22 | 2021-04-20 | Oppo广东移动通信有限公司 | Double-camera shooting method and device, storage medium and terminal |
| EP4648403A2 (en) | 2018-10-04 | 2025-11-12 | Seadronix Corp. | Ship and harbor monitoring device and method |
| EP3903278B1 (en) * | 2019-04-10 | 2024-04-03 | Huawei Technologies Co., Ltd. | Device and method for enhancing images |
| EP4022590A4 (en) | 2019-10-26 | 2022-12-28 | Genetec Inc. | AUTOMATIC NUMBER PLATE RECOGNITION SYSTEM AND RELATED PROCEDURE |
| EP4172935A4 (en) * | 2020-07-23 | 2024-01-03 | Samsung Electronics Co., Ltd. | METHOD AND ELECTRONIC DEVICE FOR DETERMINING THE BOUNDARY OF A REGION OF INTEREST |
| US12081907B2 (en) * | 2021-08-11 | 2024-09-03 | Motorola Mobility Llc | Electronic device with non-participant image blocking during video communication |
| CN113660531B (en) * | 2021-08-20 | 2024-05-17 | 北京市商汤科技开发有限公司 | Video processing method and device, electronic equipment and storage medium |
Citations (3)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090122082A1 (en) * | 2007-11-09 | 2009-05-14 | Imacor, Llc | Superimposed display of image contours |
| WO2011096251A1 (en) * | 2010-02-02 | 2011-08-11 | コニカミノルタホールディングス株式会社 | Stereo camera |
| US20120219180A1 (en) * | 2011-02-25 | 2012-08-30 | DigitalOptics Corporation Europe Limited | Automatic Detection of Vertical Gaze Using an Embedded Imaging Device |
Family Cites Families (2)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US8922625B2 (en) * | 2009-11-19 | 2014-12-30 | Lg Electronics Inc. | Mobile terminal and controlling method thereof |
| WO2011084331A1 (en) * | 2009-12-16 | 2011-07-14 | Dolby Laboratories Licensing Corporation | 3d display systems |
-
2013
- 2013-01-10 US US13/738,874 patent/US9142010B2/en not_active Expired - Fee Related
-
2015
- 2015-09-21 US US14/860,481 patent/US20160065862A1/en not_active Abandoned
Patent Citations (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20090122082A1 (en) * | 2007-11-09 | 2009-05-14 | Imacor, Llc | Superimposed display of image contours |
| WO2011096251A1 (en) * | 2010-02-02 | 2011-08-11 | コニカミノルタホールディングス株式会社 | Stereo camera |
| US20120293633A1 (en) * | 2010-02-02 | 2012-11-22 | Hiroshi Yamato | Stereo camera |
| US20120219180A1 (en) * | 2011-02-25 | 2012-08-30 | DigitalOptics Corporation Europe Limited | Automatic Detection of Vertical Gaze Using an Embedded Imaging Device |
Also Published As
| Publication number | Publication date |
|---|---|
| US20130169844A1 (en) | 2013-07-04 |
| US9142010B2 (en) | 2015-09-22 |
Similar Documents
| Publication | Publication Date | Title |
|---|---|---|
| US9142010B2 (en) | Image enhancement based on combining images from multiple cameras | |
| US8619148B1 (en) | Image correction after combining images from multiple cameras | |
| US20130169760A1 (en) | Image Enhancement Methods And Systems | |
| US20230377183A1 (en) | Depth-Aware Photo Editing | |
| US12022227B2 (en) | Apparatus and methods for the storage of overlapping regions of imaging data for the generation of optimized stitched images | |
| US8749607B2 (en) | Face equalization in video conferencing | |
| CN103258316B (en) | Method and device for picture processing | |
| CN110636276B (en) | Video shooting method and device, storage medium and electronic equipment | |
| US9384384B1 (en) | Adjusting faces displayed in images | |
| CN105794202B (en) | Deep bonded compositing for video and holographic projections | |
| US20230152883A1 (en) | Scene processing for holographic displays | |
| CN113973190A (en) | Video virtual background image processing method and device and computer equipment | |
| JP7101269B2 (en) | Pose correction | |
| CN109982036A (en) | A kind of method, terminal and the storage medium of panoramic video data processing | |
| EP3681144A1 (en) | Video processing method and apparatus based on augmented reality, and electronic device | |
| US20230122149A1 (en) | Asymmetric communication system with viewer position indications | |
| US20190208124A1 (en) | Methods and apparatus for overcapture storytelling | |
| TW201701051A (en) | Panoramic stereoscopic image synthesis method, apparatus and mobile terminal | |
| WO2013112295A1 (en) | Image enhancement based on combining images from multiple cameras | |
| CN106888333A (en) | A kind of image pickup method and device | |
| CN118115399A (en) | Image processing method, system and non-transient computer-readable storage medium | |
| Weigel et al. | Establishing eye contact for home video communication using stereo analysis and free viewpoint synthesis | |
| Huang et al. | Learning stereoscopic visual attention model for 3D video |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |